Add benchmarks, part 1 #304
Conversation
Thanks! Benchmarking is very important. I think we would want to consolidate these in FluxBench.jl (in this org), so that we can reliably track and reproduce our benchmarks. So maybe moving this over there would be better too.
Thanks, I didn't know about FluxBench! What's the intended usage for it? I thought about running the benchmarks on every PR automatically to notify the author about possible performance regressions before the PR gets merged. Do you have something similar in mind for FluxBench?
Can this be triggered only per request, instead of on every commit of every PR? I'm thinking of something similar to
That's the job of the package: to have things in one place and be called on by the keeper bot.
@DhairyaLGandhi can you point @dfdx in the right direction for contributing? I didn't know about FluxBench and keeper bot either; there are no comments about them and no documentation anywhere. Is keeper bot working already?
@tkf provides a useful tool, BenchmarkCI, for this very specific purpose. It also supports optionally running the benchmarks only on a specific label, e.g., a no-op if
Flux Bot* BenchmarkCI is definitely a good shout. It is also important not to have to deal with shared workers or different system setups adding noise, but rather to work on dedicated benchmarking machines.
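For context, BenchmarkCI.jl is usually driven by a short Julia script invoked from a CI workflow. A minimal sketch (assuming the standard BenchmarkCI.jl entry points and a benchmark/ directory with a PkgBenchmark-style suite; not part of this PR) might look like:

```julia
# Minimal sketch of a BenchmarkCI driver script (e.g. a hypothetical benchmark/runjudge.jl),
# assuming the standard BenchmarkCI.jl workflow; not part of this PR.
using BenchmarkCI

# Run the benchmark suite for the PR branch and for the base branch,
# then compare the two sets of results.
BenchmarkCI.judge()

# Print the comparison to the CI log (posting it as a PR comment
# is done in a separate step where a token is available).
BenchmarkCI.displayjudgement()
```

Restricting this to PRs with a specific label would then be handled in the CI workflow configuration rather than in the script itself.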
BenchmarkCI looks great! A couple of things I didn't understand from the discussion (perhaps I'm missing some context):
I guess some of the previous comments answer exactly these questions, but I can't connect the dots.
I think it is fine to add microbenchmarks here until we sort out a more general benchmarking infrastructure. |
Given the increasing importance of NNlib in the ML ecosystem, I believe it's time to add automatic benchmarks. This PR is based on the amazing PkgBenchmark.jl and on the config from ProximalOperators.jl. It is only a partial improvement: the benchmark code compares a change against a baseline on master, but the current master doesn't have any benchmarks at all, so the benchmark job will fail by design for this PR. All following PRs (with or without changes to the benchmarks) should work fine.
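For illustration, comparing the current checkout against the master baseline with PkgBenchmark.jl looks roughly like this (a sketch of standard PkgBenchmark usage, not necessarily the exact code in this PR):

```julia
# Sketch only: compare the current working tree against the `master` baseline
# using PkgBenchmark.jl. Assumes benchmark/benchmarks.jl defines a `SUITE`.
using PkgBenchmark

# Run the suite for the current state and for master, then compare them.
results = judge("NNlib", "master")

# Render the comparison as Markdown, e.g. to post as a PR comment.
export_markdown(stdout, results)
```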
To be precise, here's what this PR does:
julia --project=benchmark benchmark/runbenchmarks.jl
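As a rough illustration of what the suite can look like (a sketch following the PkgBenchmark.jl convention of a top-level SUITE; the actual file in this PR may differ):

```julia
# benchmark/benchmarks.jl -- sketch only, not necessarily the file added in this PR.
# PkgBenchmark.jl expects this file to define a top-level `SUITE`.
using BenchmarkTools
using NNlib

const SUITE = BenchmarkGroup()

# Activation functions on a moderately sized array.
x = randn(Float32, 64, 64, 32, 16)
SUITE["activations"] = BenchmarkGroup()
SUITE["activations"]["relu"] = @benchmarkable relu.($x)
SUITE["activations"]["softmax"] = @benchmarkable softmax($x; dims=1)

# A small 2-D convolution: input is (W, H, C_in, N), kernel is (kW, kH, C_in, C_out).
w = randn(Float32, 3, 3, 32, 64)
SUITE["conv"] = BenchmarkGroup()
SUITE["conv"]["3x3"] = @benchmarkable conv($x, $w)
```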
What this PR does not do: