Restructure tests to eliminate need for code to add new ones #655
Conversation
(force-pushed 8d0942b to 9643cdc)
The variable names are intentional. I'll add a comment to disable the error if they're OK with you, otherwise, I'll take naming suggestions. ("There are only two hard things in computer science…")
Great work Vince, thanks. Some notes:
I used the same naming conventions as the existing test code. It was not the same as the SE code itself, but I figured if you were OK with it then … :) I'll do some work to try to conform to the rest of the SE code as best as I understand it. Both the instructions in general, and your test in particular, should work, and as I said in my posts above, do work here locally, albeit without pipx, which I don't use. I can run all of the tests (with …). In looking at my fork, though, those two directories are indeed missing. But I added the entire tree in the PR commit, and when I add them manually now (e.g. …) …
And … it turns out git does have a problem with empty directories. Who knew? (Not me.) That's … ridiculous. The commands in the README should still work, the above excepted. That's how I run the tests all the time, with the caveat that I don't use pipx. But I run "pytest tests" from the tools directory and everything works.
(force-pushed 9643cdc to a8985bf)
I updated the .py files to use more SE-like variable names, and similarly adjusted the top-level directory names. As before, everything runs fine here with the changes. However, we again run into the empty directory problem, because for some of these modules (ebook, file) we don't have any tests yet, as well as for one of the lint subtypes (filesystem). Until I can get a test in every one of those directories within each module (which is a lot of tests), it's going to error. I'm still pretty flabbergasted at that limitation of git.
I think it's more important not to crash by default, even if Git makes it hard. You can put a file called …
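Since Git only tracks files, the standard workaround is to commit a placeholder file inside each otherwise-empty directory. A minimal sketch, assuming a placeholder named `.gitkeep` (just a widespread convention; Git attaches no meaning to the name, and any tracked file works) and a hypothetical test directory path:

```shell
# Git cannot record an empty directory, so drop a placeholder file into each
# otherwise-empty test directory. ".gitkeep" is only a convention; any tracked
# file (even an empty .gitignore) works. The path below is hypothetical.
mkdir -p tests/filecmds/create-draft
touch tests/filecmds/create-draft/.gitkeep
# git add tests/filecmds/create-draft/.gitkeep   # the directory now survives a clone
```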
Also note that running the pytest instructions in the README crashes, possibly due to missing files again. This is the output: …
(force-pushed a8985bf to 60fbfc4)
Well, of course, we wouldn't commit the PR until this is resolved, one way or the other. The errors running the instructions are the same errors that happen in CI, and yes, they are caused by the missing directories. If CI works, the manual instructions should work, but if CI errors, then obviously running them manually will error as well. I've removed the directories locally, and with these small changes everything passes locally. We'll see in a minute how CI does.
(force-pushed 60fbfc4 to 5ac018d)
The three tests that failed all pass locally, so I'll have to dig in further to find out what's going on. That will have to be later, though; I have work and other things to do today.
It might help to try cloning this repo into a different local directory to see what it looks like fresh from Git.
I didn't need to—I suspected the problem with the stdout tests, and those are fixed. But the problem with f-003 is a different kind of problem. That error reports a difference in LICENSE.MD, and in doing so, it reports the full path of the source, i.e. SE's copy in the templates directory. That is, of course, in a different place on everyone's local directory structure than it is on CI. We'll never get a match. A similar thing will occur with the other static files, e.g. core.css, logo.svg, etc. Do we need the full path in the error message, or might it be enough to say "standard LICENSE.MD" or somesuch?
On a related note, I've long thought we should have an … command. I can put together a command if you think that's reasonable.
If you're referring to lint's output, then we want both full paths so the user can inspect the issue without having to guess where one or the other file is. For unit tests it doesn't matter. You can try to do a regex replace to normalize the output so the tests can work.
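As a sketch of that regex approach — the helper name, message text, path, and file list here are all made up for illustration, not actual lint output:

```python
import re

def normalize_paths(output: str) -> str:
    """Strip machine-specific absolute path prefixes from command output so it
    can be compared against golden files on any machine or in CI."""
    # Reduce any absolute path ending in one of the known static files to just
    # the filename; the file list is illustrative, not exhaustive.
    return re.sub(r"/\S+/(LICENSE\.md|core\.css|logo\.svg)", r"\1", output)

# A made-up f-003-style message containing a full local path:
actual = "f-003 File does not match /home/alice/.local/share/se/templates/LICENSE.md"
assert normalize_paths(actual) == "f-003 File does not match LICENSE.md"
```

Running the normalization over both the captured output and the saved golden file (or normalizing once, at save time) keeps the comparison stable across directory layouts.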
Lint gives the user the full paths of files that don't match so it's easy to copy them over. I don't think building and supporting a whole new tool is necessary, this is basically just a …
The whole point here is not to have code in tests. If we have to have conditionals for the things that tests do in their output, we're right back where we started, and you'll just have to add code every time you add a test. We don't want people to have to think "do I have to add code here?" when they add a test.
Well, the point of tests is not to dictate the output of the codebase. The codebase has output for valid reasons; the tests must work around that output somehow. How they do that is up to you.
Right, which is why tests are code. You can't deal with variable output without code. You can have tests without code when the output is fixed, or tests with code when the output is variable. You asked for tests without code, which I tried to accommodate. I'm just telling you that's not happening with variable output.
I asked for the minimum amount of code, and ideally none if possible. Sometimes "none" may not be possible, so it's OK if the situation demands.
(force-pushed f19413b to ef11103)
All right, everything passes, so this at least gets the new infrastructure in place, and I can get back to adding tests.
OK great. I see that in … Is the readme current? As I mentioned, I think a step-by-step how-to for understanding how the framework works and adding new tests would be very helpful. Maybe you can work on that next because that will help me get up to speed more quickly as I review. A new, separate README file in the tests directory would be the place to put that, and we can direct people there from the main README.
Grrr. No, when I expanded the names in the code per your request, I neglected to do the same in the files. Give me a little bit, I'm in the middle of something for work.
I believe things are now in sync. One of the dangers of the --save-golden-files command is that if your environment isn't correct, you'll save the wrong thing and not realize it. Or at least that's theoretically possible…

Since the current readme doesn't really go into any detail about the tests, it's still substantially correct. I'll take the write-up I did at the top of this issue and put it in a README for the test directory and maybe add some additional information. Do you want to leave any of the test documentation in the top-level readme at all, or just have a one-liner there that refers to the test readme?

Note that everything about testing is not feature-complete yet, but I believe (hope!) the structure will hold. We already know about the PATH issues with some of the lint tests, and the ebook_command tests are taking files as input but updating different files, and I haven't figured out exactly how that's going to work yet. (The testing itself will work; it's the --save-golden-files that is going to be tricky.) Just noting it in case anyone else decides to dive in and start creating tests.
In the main readme, just a line on how to run the tests and then point to the detailed tests readme for more information on installing/modifying tests. Let's get the readme/step-by-step done first so it's easier for me to review, then we can merge this in and work on other stuff. Nobody is going to jump on writing tests, believe me :)
I added a first pass at a testing README, along with the update to the main README to point to it for testing information.
Excellent work, thanks Vince!
TLDR: I restructured the test code to eliminate the need for additional code to add a new test, per your wish. Now it's just a matter of adding a directory and files. Details below, which I'll put into a "how to" guide eventually. Still no new tests (I have some locally, but I didn't include them here), just restructuring. All the existing tests pass locally.
There are a lot of words to describe everything, but the structure itself is pretty obvious when looking at the directory tree underneath `tests`.

DETAILS
I divided the `se` commands into different categories, according to what the command needs as input and what it produces as output. Each of the categories has a `test_{category}.py` module file, and each has its own directory that contains the tests. There is also a data directory that contains two ebook directories: `draftbook`, which has a draft ebook as existed previously, and `testbook`, which has a feature-complete test ebook, i.e. it passes lint and should build without errors.

In each of the category module files, I've listed in the documentation at the top which commands the module covers, and I have created a subdirectory underneath the category directory for each of those commands. Tests don't exist for each one yet, but I created all of the directories so it would be obvious where everything goes.
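As a concrete sketch of the layout being described — category and fixture names follow this description, but the command name and test number are illustrative, not actual repository contents:

```shell
# One test_{category}.py module per category, one directory per category,
# and a subdirectory per command; shared ebook fixtures live under data/.
# "build-manifest" and "test-1" are illustrative names.
mkdir -p tests/stringcmds
mkdir -p tests/stdoutcmds/build-manifest/test-1/in
mkdir -p tests/stdoutcmds/build-manifest/test-1/golden
mkdir -p tests/data/draftbook tests/data/testbook
```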
The categories are:

- `stringcmds`—These take a string as input and output a string as well. There's a file for each command, and each line contains a comma-delimited input and (golden) output. At least one test exists for each of these commands. This is the only category with this structure (files for tests instead of directories), just because the commands are so simple.
- `stdoutcmds`—These take a draft (i.e. incomplete) ebook as input and output text to stdout. Each command has a subdirectory, and each test for that command will be in a subdirectory beneath that one. I have just called them test-X, e.g. test-1, test-2, etc.
- `draftcmds`—These take a draft ebook as input and update the ebook files in some way. Same structure as stdoutcmds.
- `ebookcmds`—These take a test ebook as input and update the ebook files in some way (or, in the case of build, produce ebooks). Same structure as stdoutcmds.
- `filecmds`—These take some kind of file(s) as input, or in the case of create-draft, nothing, and produce files (possibly in a directory tree) as output. Same structure as stdoutcmds.
- `lint`—I kept lint as a separate category just because of how many tests there are going to be, and I wanted it to be easy to either exclude them or run just them.

In order to simplify everything, I made it so all input files have to be in an ebook directory structure, i.e. `src/epub/[css,text]`. Trying to special-case the single-directory instances just got too complicated. The structure can be created with a single command (or two, if both text and css are needed), and knowing that it has to be done every time actually makes the instructions easier as well.

Any test directory underneath a command directory is a test for that command, and that command is what's called for the test by the module file. If arguments are needed, a `{cmd}-cmd` file, e.g. `help-cmd`, is created in the top level of the test-X directory. That file consists of a single line containing the command with whatever arguments are needed. But, again, it only needs to be created to pass arguments; otherwise, the command directory itself is sufficient.

In each test directory, there is an `in` directory that contains the input files, and a `golden` directory that contains the golden files, i.e. the proper results for the command, against which the test outputs are measured.

Populating the golden files for a test is still a matter of just adding the `--save-golden-files` option to the pytest command when it's called.

Adding a new test is simply a matter of:

1. Create a `test-X` directory underneath the command directory, where X should be one more than the biggest number that already exists. (That's just convention; they should be unique, but otherwise it doesn't really matter what they're called.)
2. Create `in` and `golden` subdirectories within the test directory.
3. Populate the `in` directory with an epub directory tree and whatever files are needed to test the command.
4. Populate the `golden` directory by running that individual test with the `--save-golden-files` option.

RUNNING TESTS
(None of this is new, it's all part of pytest, I'm just documenting it while I'm here.)
If running tests manually (vs. via CI), a single module (e.g. stdoutcmds) can be tested by calling pytest with the module .py file, e.g. `pytest tests/test_stdoutcmds.py` from the tools directory.

An individual test can be run by appending the test ID to the module file, e.g. `pytest tests/test_stdoutcmds.py::test_stdoutcmds[build-manifest-test-1]`.

Test IDs can be seen either by running a test with the `--collect-only` option, or by using the `-v[v]` option on a test run.
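The four steps for adding a test can be sketched as a shell session. The command name, test number, and input file name below are hypothetical, and the final pytest invocation is shown as a comment since it needs a working SE checkout:

```shell
# Steps 1-3: new test directory with in/ and golden/ subdirectories, plus an
# epub tree inside in/. All names here are illustrative.
mkdir -p tests/stdoutcmds/build-manifest/test-2/in/src/epub/text
mkdir -p tests/stdoutcmds/build-manifest/test-2/golden
touch tests/stdoutcmds/build-manifest/test-2/in/src/epub/text/chapter-1.xhtml
# Step 4: generate the golden files from the current command output.
# pytest "tests/test_stdoutcmds.py::test_stdoutcmds[build-manifest-test-2]" --save-golden-files
```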