assert_cmd experience report #63
Comments
I assume this was recent, after we did a major doc revamp to try to highlight all the pieces of assert_cmd / predicates. Was that helpful at all? Where did you feel the remaining gap was? I'd love for us to be able to have …

btw, one of the motivations for this approach was how limiting the old API was. There was talk of people wanting to use …
Docs are awesome, yeah, but I prefer to familiarize myself with an API by looking at the types and methods, rather than by reading the docs. That's why, for me, APIs that export a few concrete types on which you can call inherent methods work better.
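To make the contrast concrete, here is a minimal sketch of the two styles (all names here are made up; this is not any real crate's API):

```rust
use std::process::Command;

// Extension-trait style: the method lives on a trait you must discover
// and import (usually via a prelude) before it shows up on `Command`.
trait CommandExt {
    fn assert_success(&mut self);
}

impl CommandExt for Command {
    fn assert_success(&mut self) {
        let status = self.status().expect("failed to spawn process");
        assert!(status.success(), "process exited with {}", status);
    }
}

// Concrete-type style: one wrapper struct with inherent methods, so the
// whole API surface is visible on a single rustdoc page.
struct TestCommand(Command);

impl TestCommand {
    fn new(program: &str) -> TestCommand {
        TestCommand(Command::new(program))
    }

    fn assert_success(mut self) {
        let status = self.0.status().expect("failed to spawn process");
        assert!(status.success(), "process exited with {}", status);
    }
}
```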
Yeah, I totally understand that! Another possible design here is to have a …
Do you have any thoughts on how we currently fall short of what pytest does? I had pytest in mind when designing … For example, … For …
I feel there are some concrete concerns with your three iterations of test implementations that could be elaborated on, so I can understand why …

btw, one of the big motivations for switching from built-in predicates to …
Heh, I bet we can now write a procedural macro which handles the "show values of sub-expressions" feature. I am not sure we can do the "pluggable renderers" thing without specialization: the way pytest works is that it has …
They are not exactly OK: one of the reasons why I lean towards a "do it yourself" solution is that I'll be able to write my own assertions, …
Yeah, if you want open-world extensibility, then such a predicate-based solution is more or less the sole design point. It's just that with my current style of testing, I prefer a closed world where I can write …
So … The assertions are on a concrete type (…). Any thoughts on how this design is a step back from your ideal?
Understandable. It is a downside to our current approach. My hope was that, through examples and documentation, we could avoid people having to worry about the details, which is where I assume most of the cognitive load is.
This was a trade-off between giving people the information they need and not overloading them. I am open to changing this.

I do like having something like

> `// An optional debug helper. If streaming is enabled, all output from subprocess goes to the terminal as well. Super-valuable for eprintln! debugging`
> `// .stream()`

I tend to try to design my tests so a failure can give me the context I need without having to go back and change them. This is one reason I prefer macros over predicates: the failure points to the test rather than the library, avoiding the need to re-run with … For controlling how much information gets output, an environment variable seems better than modifying the code. Any thoughts on …
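A minimal sketch of the environment-variable approach (the `TEST_DEBUG` variable name is made up):

```rust
use std::process::{Command, Output};

// Run the command, dumping the child's stderr only when the (hypothetical)
// TEST_DEBUG environment variable is set, so the default test run stays quiet.
fn run_checked(cmd: &mut Command) -> Output {
    let output = cmd.output().expect("failed to spawn process");
    if std::env::var_os("TEST_DEBUG").is_some() {
        eprintln!("stderr: {}", String::from_utf8_lossy(&output.stderr));
    }
    output
}
```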
So let me see if I understand where the gap is with … It isn't that …
A concrete example of why this would be useful for me, from today's battle with Travis. I run …, which was rather unhelpful. When I added an additional …
I think both won't harm? When running a single test from an IDE, sticking …
I forgot about the IDE use case :), thanks!

And in general, thanks for taking the time to provide all this feedback. This has been a very helpful discussion!
Looks like in …, on test failure we … So I guess we don't need a way to dump more information.
Exactly! An API which would work for me would be something like this:
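Roughly along these lines (a sketch only; the `run` helper and the binary name are invented placeholders):

```rust
use std::process::{Command, Output};

// Low-level helper: run the command and hand back the raw output,
// panicking with full context on a non-zero exit code.
fn run(cmd: &mut Command) -> Output {
    let output = cmd.output().expect("failed to spawn process");
    assert!(
        output.status.success(),
        "command failed\nstdout:\n{}\nstderr:\n{}",
        String::from_utf8_lossy(&output.stdout),
        String::from_utf8_lossy(&output.stderr),
    );
    output
}

#[test]
fn smoke() {
    // Dead-simple layer on top: extract the result and match by hand.
    let output = run(Command::new("my-cli").arg("--version"));
    let stdout = String::from_utf8_lossy(&output.stdout);
    assert!(stdout.contains("my-cli"), "unexpected stdout: {}", stdout);
}
```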
That is, I'd love low-level helpers which I can use to build my own API (so, extract the result and do the matching by hand, rather than plugging into the predicates infra), plus a dead-simple, non-extensible API for writing the first couple of tests.
While I've not used it, …
I've not yet dug into these parts of cargo. Where should I start?
Here's the entry point in the test harness: https://github.com/rust-lang/cargo/blob/485670b3983b52289a2f353d589c57fae2f60f82/tests/testsuite/support/mod.rs#L770-L774

Here's the impl: …

And here's the low-level thing which underpins the impl: …
As an update, we switched from extension traits back to wrapping …

So, a quick summary: …
I'm not actually quite sure what the value-add is of all the complex streaming logic. We did recently change to write to …
I was trying this library out today and I was basically looking for something like this. It would really improve the experience.
Hi! Today, I've written a bunch of tests for https://github.com/ferrous-systems/cargo-review-deps/blob/3ba1523f3b2c2cfe807bc76f42bf972d0efdc113/tests/test_cli.rs.
Initially, I started with `assert_cmd`, then I switched to `assert_cli`, and now I have "write my own minimal testing harness on top of `std::process::Command`" on the todo list. I'd like to document my reasoning behind these decisions. Note that these are just my personal preferences though!

The first (but very minor) issue with `assert_cmd` was its prelude/extension-traits-based design. It makes for an API which reads well, but it is definitely a speed bump for new users who want to understand what API surface is available to them. I think that long-term such a design might be an overall win, but it is a slight disadvantage when you are learning the library.

The issue which fundamentally made me think "I'll stick with assert_cli for now" was the predicate-based design of assertions for stdout. It has two things which don't fit my style of testing. First, the same issue with the prelude. The first example for `.stdout` starts with roughly the following.
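(Reconstructed from memory of the then-current docs, so the details may be off.)

```rust
extern crate assert_cmd;
extern crate predicates;

use std::process::Command;
use assert_cmd::prelude::*;   // extension traits that add .assert() and friends
use predicates::prelude::*;   // predicate combinators fed to .stdout()

// ...followed by a predicate-based assertion on the command's output.
```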
That's an uncomfortable (for me) amount of things I need to import to do "quick & dirty smoke testing".
The second thing is extremely personal: I just hate assertion libraries with a passion :) Especially fluent assertion ones :-) I've tried them many times (b/c they are extremely popular), but every time the conclusion was that they make me less productive.
When I write tests, I usually follow the following stages.

1. Initially, it's just `assert!(a == b)`, without a custom message. This is the lowest-friction thing you can write, and making adding tests easy is super important.
2. When a test fails, I see an unhelpful `assert!(false)` in the console, and go to the test and write an elaborate hand-crafted error message with all kinds of useful information, to help me debug the test.
3. Eventually, I write a `fn do_test(input: Input, expected: Expected)` which turns the `arrange` and `assert` phases of a test into data (some JSON-like PODs, a builder DSL, or just a string with some regex-based DSL). Internally, `do_test` does all kinds of fancy validation of the input and custom comparisons of expected and actual outcomes.

I feel that assertion libraries sit somewhere between 1 and 2 here: they are significantly more difficult to use than plain assertions, but are not as good, error-message-quality-wise, as hand-crafted messages. And they don't really help with 3, where you need some kind of domain-specific diffing. EDIT: what works great as a midpoint between 1 & 3 is the pytest / better_assertions style of thing, which generates a reasonable diff from a plain assert.
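For concreteness, stage 3 tends to look like this (every name here is invented for illustration):

```rust
// Stage 3: the arrange and assert phases of a test become plain data,
// and do_test owns all the validation and comparison logic.
struct Input {
    args: Vec<String>,
}

struct Expected {
    stdout: String,
}

fn do_test(input: Input, expected: Expected) {
    // Fancy validation of the input up front...
    assert!(!input.args.is_empty(), "input must name a command");
    // ...run the system under test...
    let actual = run_system_under_test(&input.args);
    // ...then a custom, domain-specific comparison with a useful message.
    assert_eq!(expected.stdout, actual, "mismatch for args {:?}", input.args);
}

// Stand-in for the real system under test.
fn run_system_under_test(args: &[String]) -> String {
    args.join(" ")
}
```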
So, TL;DR, the reason I've switched to `assert_cli` is to be able to do `.stdout().contains("substring")`.
The reason why I want to write my own harness instead of `assert_cli` mostly has to do with technical issues:

- it doesn't handle `current_dir` nicely, so I have to write custom code anyway
- on failure it doesn't show `stderr`, which usually contains the clue as to why stdout is wrong :)

I also want to import a couple of niceties from Cargo's harness. The API I'd love to have will look like this:
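Something in this direction (a sketch: the `cmd` helper, method names, and arguments are placeholders, and the commented-out `.stream()` is the optional debug helper quoted earlier in the thread):

```rust
use std::process::{Command, Output};

// Hypothetical thin wrapper; the names are guesses at the wished-for API.
struct TestCmd(Command);
struct TestOutput(Output);

fn cmd(program: &str) -> TestCmd {
    TestCmd(Command::new(program))
}

impl TestCmd {
    fn args(mut self, args: &[&str]) -> TestCmd {
        self.0.args(args);
        self
    }

    // An optional debug helper. If streaming is enabled, all output from
    // the subprocess goes to the terminal as well. Super-valuable for
    // eprintln! debugging:
    // fn stream(self) -> TestCmd { ... }

    fn run(mut self) -> TestOutput {
        TestOutput(self.0.output().expect("failed to spawn process"))
    }
}

impl TestOutput {
    fn is_success(&self) -> bool {
        self.0.status.success()
    }

    fn stdout(&self) -> String {
        String::from_utf8_lossy(&self.0.stdout).into_owned()
    }
}

#[test]
fn review_deps_smoke() {
    // Placeholder arguments; extract the result and match by hand.
    let output = cmd("cargo-review-deps").args(&["--help"]).run();
    assert!(output.is_success());
    assert!(output.stdout().contains("USAGE"));
}
```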