tracking: testing #146
Comments
I think we could get something similar to what specsheet currently provides from https://github.com/charmbracelet/vhs, which has a few advantages over it. This is the current option I'm tentatively exploring.
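For reference, vhs is driven by a plain-text "tape" file that scripts a terminal session and renders it to a gif. A minimal sketch (command names are from vhs's README; the eza invocation, output name, and settings are purely illustrative):

```
# demo.tape — hypothetical recording script for vhs
Output demo.gif
Set FontSize 14
Set Width 1200
Set Height 600
Type "eza -la"
Enter
Sleep 2s
```

Running `vhs demo.tape` would then produce `demo.gif`, which is what makes CI-generated output previews plausible.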
Also, see this example of vhs being used on a PR to generate gifs of the output: antlr/antlr4#4383. That's so, so cool, especially if we add automation.
They are so awesome and make some really cool stuff. That is a use case I never considered. With automated output? Great idea 👍
OK, so I never had any luck with that Docker container from ogham/exa#1230. What do we think about assigning different areas and writing a bunch of regular old unit tests? Maybe start by making a subdirectory in the xtests folder, moving some of the tests out of the files themselves, and expanding on them?
I'm honestly not sure. We might pick it apart for inspiration if we go that route, but then the current solution I'm working on could trivially be put in a Docker container, and we'd not be using most of it. Probably something I'll explore more once #147 doesn't feel as experimental.
I'm not sure what you mean by assigning different areas, like saying someone takes care of testing some part of the codebase?
I think maybe this effort would be better spent in #147. Also, I'm not sure of the value of actually migrating this, because my goal is to be able to generate something equivalent automatically (while also allowing manual edits). I also opened an issue in VHS; I'm not sure if I'm just not aware of how to do something or if we'd need them to change their codebase ever so slightly. If I have some time next week I'll probably send them a PR for what I have in mind, and then we can build VHS from that PR's branch until/if they merge it. edit: I've also already found a hacky workaround for this, which isn't ideal, but makes sure this is not blocking. The prototype currently kinda works, but without more detailed instructions on how to use it it's probably not that approachable; I'll also get to that as soon as possible.
Like opening a Unit Test issue with a checklist of areas in the codebase that more traditional tests should be written for; we could submit PRs for each area and check them off after removing the tests from the bottom of the files and rewriting more complete tests in the tests directory. I guess I'm just kinda lost as far as the format of all the existing ones. My thoughts were to just start replacing them with regular unit tests one chunk at a time.
I was wondering how to transition from the Vagrant tests to a new solution. My idea at the time was to continue using […]. I'm also asking because #162 should have been caught by the […]. Anyway, I'm looking forward to contributing to tests when something's in place. In the meantime, I'll try to finish the MR I started instead of finding new/old bugs and fixing them. ;)
Yea, my plan was essentially to use Docker if I couldn't find something better, and also to create a subset of tests that could be run outside of Docker, which would probably mean using […]. The main argument against Docker is that it's kind of slow and heavy. I think I'd like to see if the nix sandbox would be a good solution, as it sets the time on files to the Unix epoch, uses a consistent user name, and just generally seems very fit for purpose.
Well, mainly I thought about using Docker/Podman, but I don't have a preference; I just want it to be easy to use. I think running some tests by selectively removing information that depends on the host system is a good idea. I've tried using […]
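As a sketch of that idea, host-dependent fields could be masked before comparing output against a stored snapshot. The regex below assumes an ls-style long listing with dates like `12 Jan 10:30`; the exact date layout and the sample lines are assumptions for illustration, not eza's actual format:

```python
import re

# Matches dates like "12 Jan 10:30" or "3 Feb 2021" in long-listing output.
# The field layout is an assumption about the default date format.
DATE_RE = re.compile(
    r"\b\d{1,2} (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) +[\d:]{4,5}\b"
)

def normalize(output: str) -> str:
    """Replace host-dependent dates with a stable placeholder."""
    return DATE_RE.sub("DATE", output)

print(normalize("-rw-r--r-- 1 alice staff 0 12 Jan 10:30 notes.txt"))
# prints: -rw-r--r-- 1 alice staff 0 DATE notes.txt
```

The same approach extends to user and group names or inode numbers: normalize both the recorded snapshot and the fresh output, then diff.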
It shouldn't be? Depends how you approach it I guess 🤷‍♂️ I maintain a project built around Docker, so I'm biased there. Our tests run via shell scripts with BATS (although I'd rather something else), and despite our large image the tests can run rather quickly. The bulk of the project is just bash scripts running in a container that provides all the dependencies needed (which get configured and tested against). There will be situations where it's notably faster to run on the host directly (e.g. using […]).

Our CI with GitHub Actions builds the AMD64 image (over 500MB) very quickly, due to caching earlier build layers from prior runs that would otherwise take much longer. The ARM64 image doesn't utilize the caching at present and takes approx 2 mins (it relies on QEMU emulation). The test suite originally took over an hour to build the images and run all the tests we have, but we've brought that down to around 5 minutes or less. A smaller binary without the more complicated test environment that the Docker project I work on requires should do much better?

I haven't looked over […]. Nix has its perks, and I like its immutability benefits, but I also found it was not as good as a […]
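The kind of container-based test environment described above could be sketched like this; the base image, paths, and commands are assumptions for illustration, not the project's actual setup:

```dockerfile
# Hypothetical reproducible test image for a Rust CLI project.
FROM rust:slim

WORKDIR /src
# Copy the sources and build with a locked dependency graph for reproducibility.
COPY . .
RUN cargo build --locked

# Run the test suite by default; caching the build layers above
# keeps CI rebuilds fast when only the test inputs change.
CMD ["cargo", "test", "--locked"]
```

The container gives every run the same user, filesystem, and dependency versions, which sidesteps most of the host-dependent output problems discussed earlier, at the cost of image build time.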
#323 will add nix flake check tests. They ensure proper formatting and check whether the nix package builds on all targets it supports.
#591 will start the transition to using the generated test_dir.
Status
Current testing is based on https://github.com/ogham/exa, from which eza was forked.
Here, Vagrant, as well as https://github.com/ogham/specsheet, is a huge pain.
Fixing something we don't like, instead of finding a more permanent solution, would be short-sighted. So instead, I'm looking into various options.
Notice that generating releases will at some point be done through nix (via cross), as it allows for docker-tier build isolation and could later easily be done on a build server, which would be very nice. We use cross now. See: #137 (comment)
Progress
trycmd seems like it will be the option we go for.
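For context, trycmd drives snapshot tests from per-command case files. A minimal sketch of one TOML case, with the file name, binary name, and flags chosen purely for illustration (field names follow trycmd's documented schema):

```toml
# tests/cmd/long_listing.toml — hypothetical case file
bin.name = "eza"
args = ["-la"]
```

The expected output would then live in a matching `.stdout` file next to the case, so adding a test is mostly a matter of recording a command and its output.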
Todo:
And also just writing a lot of tests when this gets merged.
If there is anything else we should do, comment and I'll add it.