
Automated tests and CI #20

Open
pfirsich opened this issue Aug 27, 2021 · 3 comments
Labels: enhancement, help wanted

Comments

@pfirsich
Owner

A big reason why I tend to procrastinate on some PRs is that I simply don't want to go through the hassle of testing. I use Linux, and even if something works there, that means nothing for Mac and Windows. I don't even have a Mac. I often ask myself whether I should even push something, because I won't know if it's actually broken.

One idea was to create tests around minimal headless love games that set t.window = nil and t.modules.* = false in conf.lua and simply try to load some files from an asset directory and assert those files' content, so they error out if something went wrong. I think that would cover a lot of games already. You could also have them load shared libraries.
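Very rough sketch of what I mean, written as a pytest fixture that generates such a game into a temporary directory (all names, the directory layout and the exact conf.lua contents are just placeholders, not anything that exists in the repo; note I keep love.event enabled so the game can report success or failure via its exit code):

```python
# Sketch of a pytest fixture that writes a minimal headless LÖVE game into a
# temporary directory. The Lua snippets and all names are hypothetical.
import pytest

CONF_LUA = """
function love.conf(t)
    t.window = nil
    for name in pairs(t.modules) do
        t.modules[name] = false
    end
    -- keep love.event enabled so the game can quit with an explicit exit code
    t.modules.event = true
end
"""

MAIN_LUA = """
function love.load()
    local ok, err = pcall(function()
        local content = assert(love.filesystem.read("assets/hello.txt"))
        assert(content == "hello", "unexpected asset content")
    end)
    if not ok then print(err) end
    love.event.quit(ok and 0 or 1)
end
"""

@pytest.fixture
def headless_game(tmp_path):
    (tmp_path / "conf.lua").write_text(CONF_LUA)
    (tmp_path / "main.lua").write_text(MAIN_LUA)
    assets = tmp_path / "assets"
    assets.mkdir()
    (assets / "hello.txt").write_text("hello")
    return tmp_path
```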

Then you would put some pytest code around it and put it in a GitHub action, so I can simply merge stuff if it seems sensible and nothing turned red.
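The pytest side could then be little more than running love on that directory and checking the exit code (this assumes a love binary on PATH and the headless_game fixture from the snippet above; a fuller version would first run makelove and then launch the packaged build instead, and the GitHub action would just install love and run pytest on each OS):

```python
# Sketch of the actual test. love exits with the status passed to
# love.event.quit(), so a non-zero return code means one of the asserts in
# main.lua failed.
import subprocess

def test_headless_game_loads_assets(headless_game):
    result = subprocess.run(
        ["love", str(headless_game)],
        capture_output=True,
        timeout=60,
    )
    assert result.returncode == 0, result.stderr.decode()
```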

I would love for people to contribute some ideas (or PRs even) before I go forward with this.

pfirsich added the enhancement and help wanted labels on Aug 27, 2021
@pfirsich
Owner Author

Because of stuff like #15, it would be nice to have test cases that check which targets would be rebuilt, given a build directory and command line arguments, but the code is badly structured for testing this well. Filesystem IO is sprinkled everywhere, which makes it really hard to test properly. Ideally the code would be split into reading from disk, preparation/decision making, and then actually writing to disk, or the filesystem would be abstracted away entirely. I am not sure how to make this work without a major code overhaul, which I am apprehensive about doing, because the tool is not well tested (and I pretty much don't use it, since I rarely use löve anymore).
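Just to sketch what I mean by that split (none of these names or fields exist in the code today, they are made up for illustration): the rebuild decision could live in a pure function that takes plain data and returns a plan, so it can be unit tested without ever touching the disk.

```python
# Hypothetical structure: only gather_state() (and some apply step) touch the
# filesystem, while plan_rebuild() is pure and testable with hand-written data.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class BuildState:
    existing_targets: set[str]  # targets already present in the build dir
    config_mtime: float         # when the config was last changed
    last_build_time: float      # timestamp of the previous build, 0 if none

def gather_state(build_dir: Path, config_path: Path) -> BuildState:
    """The only place that reads from disk."""
    existing = {p.name for p in build_dir.iterdir()} if build_dir.is_dir() else set()
    return BuildState(
        existing_targets=existing,
        config_mtime=config_path.stat().st_mtime,
        last_build_time=0.0,  # would be read from some build metadata file
    )

def plan_rebuild(state: BuildState, requested: list[str], force: bool) -> list[str]:
    """Pure decision logic: which targets need to be (re)built?"""
    if force or state.config_mtime > state.last_build_time:
        return list(requested)
    return [t for t in requested if t not in state.existing_targets]
```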

@pfirsich
Owner Author

Also there is a complication with testing in GitHub Actions: I want to build on three platforms (Linux, Mac, Windows) and then test those results on those same three platforms again, so I need to store the outputs somewhere. I bet GitHub Actions provides facilities for this in some way, but I have not used them yet. I think testing lovejs builds automatically is almost out of the question, but if anyone has ideas, let me know.

@idbrii
Contributor

idbrii commented Sep 4, 2021

Instead of trying to unit test the code, maybe it's more viable to test the application as a whole: test what users do and roughly verify the outputs.

  • test that auto-incrementing builds creates builds with different version numbers
  • test that accidentally overwriting a versioned build is prevented
  • test that forcing successfully overwrites a versioned build
  • test making a build, unzipping it, and ensuring it has the platform-appropriate executable; repeat for various combinations of build targets and clean/dirty build folders (a rough sketch of this one follows after the list)
  • test pre/postbuild commands -- correct interpolation of variables, can run python.exe commands, ...
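Rough sketch of that fourth point (the love_project fixture, the win64 target name, and the output layout are all guesses about makelove that would need checking against the real tool; the fixture would have to set up a small game plus a makelove.toml in a temp dir):

```python
# Sketch of the "build, unzip, check the executable" case. Everything about
# the makelove invocation and output layout here is an assumption.
import subprocess
import zipfile
from pathlib import Path

def test_win64_build_contains_exe(love_project: Path):
    subprocess.run(["makelove", "win64"], cwd=love_project, check=True)

    zips = list(love_project.rglob("*.zip"))
    assert zips, "expected the win64 target to produce a zip"

    with zipfile.ZipFile(zips[0]) as z:
        names = z.namelist()
    assert any(n.endswith(".exe") for n in names), names
```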

since I want to build on three platforms (Linux, Mac, Windows) and then test those results on those same three platforms again

I guess you'd want to build for all three platforms from all three platforms and test the resulting 9 builds on their respective platforms? I think runs-on and artifacts are what you want, but I haven't gone down those roads.

I think testing lovejs builds automatically is almost out of the question

Testing that they build without exceptions is definitely good, but yeah I don't know how you'd test that they work.
