
script to show test coverage for files that changed in the last month #535

Open · wants to merge 19 commits into base: master

Conversation

@NortySpock (Contributor) commented Dec 4, 2024

Admittedly, I like writing tests, and I think writing or fixing a few tests is a good way to get to know a codebase. So I've been poking around the failing tests in Teiserver to learn my way around.

I realized, though, that another way to think about the problem of improving test coverage (without boiling the ocean) is to focus on the test coverage of recently changed files.

So, as an experiment, I wrote this script to filter the per-file test coverage report down to only the files that changed in the last month. I think I will find it useful, and maybe it will be useful to others.

Example output:

...(end of actual test run output)...
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] memory supervisor port (memsup): Erlang has closed
COV    FILE                                                                                                    LINES RELEVANT   MISSED
  2.4% lib/teiserver_web/live/battles/match/show.ex                                                              428      125      122
 48.6% lib/teiserver/autohost/tachyon_handler.ex                                                                 192       37       19
 69.6% lib/teiserver/tachyon/transport.ex                                                                        238       69       21
 86.6% lib/teiserver/player/tachyon_handler.ex                                                                   249       67        9
100.0% lib/teiserver/tachyon/handler.ex                                                                           76        0        0
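The gist of the approach fits in a few lines of shell. This is a simplified sketch, not the exact script in this PR; the real script handles the base commit and output formatting differently:

#!/usr/bin/env bash
# Simplified sketch of the approach (not the exact script in this PR).
mkdir -p ./cover

# 1. List the files changed in the last month (resolved via the local reflog).
git diff '@{last month}' --name-only | tee ./cover/files_changed_in_last_month.txt

# 2. Also keep the header row of the coverage table when filtering below.
echo "LINES RELEVANT MISSED" >> ./cover/files_changed_in_last_month.txt

# 3. Run the test suite with per-file coverage enabled.
mix test --cover > ./cover/test_coverage_by_file.txt

# 4. Show only the coverage lines for recently changed files, sorted by coverage.
grep -F -f ./cover/files_changed_in_last_month.txt ./cover/test_coverage_by_file.txt | sort -n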

@geekingfrog (Contributor) left a comment


I think this script could be used in place of just these lines in the CI. Maybe in another PR if you can't be bothered, but I'm a bit worried that this kind of script is rarely run, so having it run automatically would be nice.

Bonus points if the GitHub action could report/comment on the PR, but I'm really not too fussed about that; it may be too noisy.

# so the header is also emitted and highlighted
echo "LINES RELEVANT MISSED" >> ./cover/files_changed_in_last_month.txt

mix test --cover > ./cover/test_coverage_by_file.txt
@geekingfrog (Contributor)

I would add the flag --exclude needs_attention, as is done in the CI.
A bunch of tests have been marked as such, and many of them are far from trivial to fix. Many are also about the spring protocol, and I don't want to spend too much time on them since we want to deprecate that protocol. So, to keep the CI useful, they are now skipped.
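In the script that would be a one-line change along these lines (sketch):

mix test --cover --exclude needs_attention > ./cover/test_coverage_by_file.txt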

@NortySpock (Contributor Author)

Added --exclude needs_attention

@NortySpock (Contributor Author) commented Dec 8, 2024

> I think this script could be used in place of just these lines in the CI. Maybe in another PR if you can't be bothered, but I'm a bit worried that this kind of script is rarely run, so having it run automatically would be nice.
>
> Bonus points if the GitHub action could report/comment on the PR, but I'm really not too fussed about that; it may be too noisy.

I just tried it out (setting set -euxo pipefail so the script fails rather than swallowing errors). However, with that mode set, the script also seemed to fail on warnings, so perhaps we'd have to expand the GitHub Actions workflow to do all the same things this script does.

@NortySpock marked this pull request as draft December 10, 2024 03:09
@geekingfrog (Contributor)

No, mix test doesn't fail on warning. The error you got was likely insufficient coverage:

FAILED: Expected minimum coverage of 80%, got 18.5%.

which isn't shown because of the grep. I don't know how to configure that threshold, but it should be possible. Also, we could simply not care about this return code; we already run the test suite in another GitHub action anyway.

@NortySpock (Contributor Author)

> No, mix test doesn't fail on warning. The error you got was likely insufficient coverage:
>
> FAILED: Expected minimum coverage of 80%, got 18.5%.
>
> which isn't shown because of the grep. I don't know how to configure that threshold, but it should be possible. Also, we could simply not care about this return code; we already run the test suite in another GitHub action anyway.

Yeah, that was my conclusion as well. (Maybe I should just figure out how to set the test coverage requirement instead of the following hack: I've re-arranged the sequence so that it runs the tests twice, once failing on errors, and a second time to generate the coverage report...)
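Roughly, the re-arranged sequence looks like this (a sketch; the actual script has more steps):

# Pass 1: fail loudly if any test fails.
mix test --exclude needs_attention
# Pass 2: generate the coverage report; don't let the coverage threshold abort the script.
mix test --cover --exclude needs_attention > ./cover/test_coverage_by_file.txt || true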

However, I have not yet figured out why the GitHub action only goes back a few seconds -- I thought I had changed the action to do a deep clone rather than a shallow clone...

Run git diff b35d2d5fec7bfec473fe988937bb67672c7c124b '@{last month}' --name-only | tee ./cover/files_changed_in_last_month.txt
warning: log for 'HEAD' only goes back to Wed, 11 Dec 2024 03:56:03 +0000
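One guess as to the cause: '@{last month}' is resolved through HEAD's reflog, and on a fresh CI checkout that reflog only goes back to the moment of the clone, regardless of fetch depth. Something along these lines would avoid the reflog entirely (just a sketch, not what the script currently does; it still needs a full-history checkout):

# Diff against the newest commit older than one month, found by commit date rather than the reflog.
git diff "$(git rev-list -n 1 --before='1 month ago' HEAD)" --name-only | tee ./cover/files_changed_in_last_month.txt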

I'm a little unsure if this simple report will even be used. It seemed possibly useful as an ad-hoc script, but how often will this convince a developer to go back and actually improve test coverage?

@geekingfrog (Contributor)

I didn't find a way to set a custom minimum coverage, although I admit I haven't searched for very long.
What you could do is set +e before running the coverage test, so you effectively ignore the non-zero exit code?
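Something like this (untested, just to illustrate):

set -euxo pipefail
# ...earlier steps...
set +e   # tolerate a non-zero exit from the coverage threshold check
mix test --cover --exclude needs_attention > ./cover/test_coverage_by_file.txt
set -e   # restore fail-fast for the rest of the script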

I quite like the output of your script; I think it would be a nice addition to the CI. But if that proves too much of a hassle, having the script readily available plus a mention in the README would already be quite nice.

@NortySpock (Contributor Author) commented Dec 17, 2024

> a way to set a custom minimum coverage

I did find it: it's in coveralls.json. I adjusted it down to match the current test coverage level (33.4% as of this writing; obviously, adjust it whenever you need to).
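For reference, the relevant bit of coveralls.json looks roughly like this (a sketch of the ExCoveralls coverage_options format; the file in the repo may contain other options too):

{
  "coverage_options": {
    "minimum_coverage": 33.4
  }
}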

I added a blurb about it in the documentation.

I think this PR is as far as I'm going to be able to get it. I'm stumped on the last leg of the GitHub action, but I've left it commented out for now in case someone has a flash of insight that I missed and can fix it.

Let me know if you need me to polish anything else or rebase or squash commits, but I think this is a reasonable amount of progress forward.

@NortySpock marked this pull request as ready for review December 17, 2024 13:11
@NortySpock (Contributor Author)

I think I got it working. Of course, the tests failed, but had they succeeded, we might have seen a separate coverage report get generated.
