ci: Enable caching #702
Conversation
@@ -20,6 +20,8 @@ jobs:
      - name: Checkout sources
        uses: actions/checkout@v3

      - uses: Swatinem/rust-cache@v2
I think we should use an approach like the one in https://github.com/rust-osdev/multiboot2/blob/main/.github/workflows/_build-rust.yml#L52 instead.
The solution you picked only caches dependencies, and it only works reliably with a Cargo.lock file, as its README says: https://github.com/Swatinem/rust-cache
Hence, I'm in favor of a solution like the one in the Multiboot2 crate.
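For reference, a manual alternative to rust-cache is to drive the generic actions/cache action directly, keyed on the manifests. A minimal sketch of such a step (the paths and the key expression are illustrative assumptions, not copied from the linked multiboot2 workflow):

      - name: Cache cargo artifacts
        uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.toml') }}
          restore-keys: |
            ${{ runner.os }}-cargo-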
I'd like to use a prebuilt solution if possible (easier to read the CI config, and hopefully simpler to maintain), so I'd like to carefully check that Swatinem/rust-cache really doesn't meet our needs.
That's a good point about Cargo.lock, but we could probably just check in Cargo.lock. The Cargo FAQ recommends against it, but I've often seen that recommendation debated; see rust-lang/cargo#8728 for example. Alternatively, we could skip checking in Cargo.lock and see how much difference it makes for us in practice. We have a nightly CI run, and I think we might find that it does a decent job of initializing our caches so that PR runs are fast (which is probably when we will actually notice and care about CI time).
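If we do end up checking it in, the change itself is small; a rough sketch of the steps (this assumes Cargo.lock is currently listed in .gitignore, which is an assumption about this repo rather than something verified here):

    # Generate the lockfile without doing a full build.
    cargo generate-lockfile
    # Stop ignoring the lockfile (only needed if .gitignore lists it).
    sed -i '/^Cargo\.lock$/d' .gitignore
    git add Cargo.lock .gitignore
    git commit -m "Check in Cargo.lock"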
As to only caching dependencies, I think that's fine. rust-cache makes the claim that caching incremental builds of the crate itself is generally not effective, and I find that convincing. Caching the deps used by xtask in particular is what will really save a lot of CI time, I think.
Regarding total space used by the cache:
gh api \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
/repos/rust-osdev/uefi-rs/actions/cache/usage
{
"full_name": "rust-osdev/uefi-rs",
"active_caches_size_in_bytes": 5848673391,
"active_caches_count": 19
}
So about 5.5 GiB currently in use for this project. That seems fine, right?
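As a side note, if the usage ever gets close to the 10 GB per-repository limit, stale entries can be deleted through the same REST API by cache key; a hedged example (the key shown is hypothetical):

gh api \
  --method DELETE \
  -H "Accept: application/vnd.github+json" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  "/repos/rust-osdev/uefi-rs/actions/caches?key=some-stale-cache-key"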
Lastly, I tried running the CI on this PR, then running it a second time. All the jobs with the cache enabled ran faster the second time. That seems like pretty strong evidence to me that the cache is working as intended. [Screenshot: first run without an up-to-date cache on the left, run with a good cache on the right.]
I have a few ideas I'd like to try out, ok? I think we can have a neat and simpler solution for everything.. let's see.
> I have a few ideas I'd like to try out, ok? I think we can have a neat and simpler solution for everything.. let's see.
Hm, I tested something like the multiboot2 repo has: a single big workflow with a lot of configuration options that covers all the building, style checks, unit tests, and integration tests.
The outcome is a big, feature-creepy, semi-working variant.
@@ -36,6 +38,8 @@ jobs:
      - name: Checkout sources
        uses: actions/checkout@v3

      - uses: Swatinem/rust-cache@v2
Otherwise, caching is ineffective and produces many GBs of artifacts on GitHub, while we only have a 10 GB limit.
My concern here is only that the same cache is uploaded multiple times, simultaneously, by every job: whenever the dependencies change, they all perform the cargo install step in parallel.
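If those duplicate uploads turn out to matter, one prebuilt knob that might help is rust-cache's shared-key input, which makes multiple jobs restore and save a single common cache entry instead of each job keeping a per-job one. A sketch (the key name is arbitrary, and whether this fully avoids simultaneous saves depends on the action's internals):

      - uses: Swatinem/rust-cache@v2
        with:
          # Jobs configured with the same shared-key use one shared cache
          # entry for dependencies rather than a per-job copy.
          shared-key: "uefi-deps"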
Now that most jobs are on the stable channel, enabling caching should be more effective than it would have been on nightly.
(Branch force-pushed from e419f0f to 1d46338.)
Could you please check the times with and without the cache, now that Cargo.lock is part of the tree, and present the results in this PR? If it works, sure, why not. A "pretty good" solution with low effort is probably better than an over-engineered solution (as done in the Multiboot2 crate) that takes a lot of our time. :)
Very similar timings to before: https://github.com/rust-osdev/uefi-rs/actions/runs/4519831136. I think the addition of Cargo.lock doesn't make an immediate difference; it's more of an over-time thing, as dependency crates release semver-compatible updates that invalidate the cache.
The … I'm unsure why the … Overall, it seems like the cache is working well: our PR runs are faster (except often for the Windows job, which seems stubbornly slow). Are there specific runs of the action you can link to that seem slower than expected?