Hi all
It would be nice to have a discussion around which design / framework we choose for the initial CLI integration testing / validation.
This decision has big ramifications down the line, and it gets harder to change the more we attach ourselves to it and add things on top of the tech debt / status quo we create.
I feel we may be jumping to rolling our own when somebody has already solved many of these problems for us.
Problems with hardcoded tests:
1) They are not maintainable
2) The harness that runs the tests steals the show from the tests themselves
3) They may or may not pass from one run to the next, i.e. flaky tests
4) They add to tech debt; all code is a liability
5) Nobody else understands them; I doubt we ourselves would genuinely even fully understand them :)
Could we please instead use some stable CLI / UI test framework we know works, like bats, as the very first step?
That way we don't end up debugging the testing framework itself; ideally it stays out of our way.
Nor should we be writing non-portable Rust code for handling ptys and low-level stuff like that,
when there is plenty of existing tooling around us that we can use to test our Rust code in integration.
rust-lang/rust also has really nice UI (CLI) tests, used to test rustc, rustdoc, etc., that I would vouch for.
There is also a crate called insta (https://crates.io/crates/insta), used e.g. in cargo-geiger, which runs it on test crates with various data-driven scenarios and compares snapshots.
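To make that concrete, a snapshot test over our CLI output could look roughly like the sketch below. It assumes insta as a dev-dependency; the binary name "mycli" is just a placeholder, not our actual target:

    use std::process::Command;

    #[test]
    fn cli_help_snapshot() {
        // Cargo exposes the path of a compiled binary target to
        // integration tests via the CARGO_BIN_EXE_<name> env var.
        let output = Command::new(env!("CARGO_BIN_EXE_mycli"))
            .arg("--help")
            .output()
            .expect("failed to run the CLI");
        // On the first run insta records the output into a snapshot
        // file; on later runs it diffs against that file.
        insta::assert_snapshot!(String::from_utf8_lossy(&output.stdout));
    }

No pty handling or custom harness is needed just to assert on output.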
All of these are pretty fast to set up once you have the test scenarios laid out.
The very first step now would be to outline the test scenarios we want to cover; then we can select the test framework and build its use around those scenarios holistically.
And if we do indeed need to roll our own then we can, but I am fairly positive that we don't.
This is why I previously asked that we design all the different UX-driven flows for how we are supposed to use everything together after we have built the thing.
Ideally this is done before building anything at all: validating the usage flow against the intended design, as opposed to shipping user-hostile tooling, is an important mental model for design.
Ideally we cover both failure and success conditions, and we should also introduce failure points, e.g. filesystem / network failures.
This is also why it is even more essential to first develop usage-based flows, so that we can inject the failure scenarios into them.
Which in turn influences which test framework we should choose from the beginning, coming back to this topic.
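As a rough illustration of injecting a filesystem failure into such a flow (again with a placeholder binary name and subcommand, and assuming the tempfile crate as a dev-dependency):

    use std::{fs, process::Command};

    #[test]
    fn init_fails_cleanly_in_readonly_dir() {
        // Failure point: a working directory the CLI cannot write to.
        let dir = tempfile::tempdir().expect("create temp dir");
        let mut perms = fs::metadata(dir.path()).unwrap().permissions();
        perms.set_readonly(true);
        fs::set_permissions(dir.path(), perms).unwrap();

        // "mycli" and "init" are placeholders for our real CLI.
        let output = Command::new(env!("CARGO_BIN_EXE_mycli"))
            .arg("init")
            .current_dir(dir.path())
            .output()
            .expect("failed to run the CLI");

        // The success condition of this failure scenario: a non-zero
        // exit code and a readable error, not a panic or a hang.
        assert!(!output.status.success());
        assert!(!output.stderr.is_empty());

        // Restore write permission so the temp dir can be cleaned up.
        let mut perms = fs::metadata(dir.path()).unwrap().permissions();
        perms.set_readonly(false);
        fs::set_permissions(dir.path(), perms).unwrap();
    }

The point is that each failure scenario stays a plain, readable integration test over a usage flow.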
This is also why I created those different roles in our Notion: they are like actors in the radicle project that can then be plugged into usage flows and collaborated on across teams.
Best
Miss M
I think this probably belongs in the dev mailing list as a response to
Han's patch to introduce some integration testing? If so, can you resend
it as a response to that patch so we can reply in the appropriate place?
Alternatively it might be its own topic in `dev`, but either way,
`discuss` is for things which might affect users of link whereas `dev`
feels more appropriate for discussing testing approaches.
So this mailing list is dedicated to users only?
Ok, in that case I'll reframe the discussion under the same subject.
Developer-users usually discover usability from tests, and I think our users would have something to say, from a usability perspective, if they had to read low-level code instead of tests.
At least that's what I do when figuring out how to use some upstream library:
the first thing I look for is the tests, and then the examples folder if the tests are not easily digestible.
All these integration tests show how we see things working in Link, so it is probably best to keep them expressive.
Ideally all examples would be expressed solely as integration tests; perhaps this is also something our users could contribute to, to help us develop use cases?
I've been scribbling in the Product Notion on how the usability scenarios work and how these should be expressed in integration tests, to make sure we design something usable from a product PoV.
I further expanded that to Entities and Identities, which should be tested as a whole; testing them in isolated scenarios without the full context leads to rework.
Cheers