Replies: 3 comments 2 replies
-
A very good idea! A few further existing issues related to performance that contain benchmarks:
-
Would such a suite also be the place for space leaks?
-
Here's a rough implementation proposal. The most straightforward way to run the benchmarks may be at the process level: run each benchmark as its own interpreter process and measure that process from the outside. As a sample benchmark for discussion, we can use edges.pl from #207.
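For discussion's sake, a process-level run might look like the sketch below. `$PROLOG` is a placeholder for the interpreter binary under test, not a confirmed CLI:

```sh
# One benchmark = one process: start the interpreter, run the benchmark
# file to completion, and exit. $PROLOG is a placeholder path.
PROLOG=./path/to/interpreter
"$PROLOG" bench/edges.pl
```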
**Memory**

To get memory use we could use
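e.g. GNU `time` (an assumption on my part: `/usr/bin/time` is GNU time, whose `-v` report includes peak resident set size; `$PROLOG` is the placeholder from above):

```sh
# Peak memory (maximum resident set size, in kB) of one benchmark run.
# GNU time prints its report on stderr, hence the 2>&1.
/usr/bin/time -v "$PROLOG" bench/edges.pl 2>&1 \
  | grep 'Maximum resident set size'
```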
**Instructions**

For other metrics we could use perf. See: https://perf.wiki.kernel.org/index.php/Tutorial#Benchmarking_with_perf_bench

We can use
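`perf stat`, for instance (a sketch: `instructions` is a standard perf event, `-x,` switches to CSV output for easy parsing, and the benchmark invocation reuses the `$PROLOG` placeholder):

```sh
# Retired-instruction count for one benchmark run. perf stat writes its
# counters to stderr; -x, makes that output CSV.
perf stat -x, -e instructions "$PROLOG" bench/edges.pl 2> edges.perf.csv
```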
**Inferences?**

As far as I know, `time/1` doesn't output a ~deterministic counter like 'instructions'/'inferences', so it's not super useful here. Is there some way to get that metric?

As a start for CI, I'm imagining a simple script that runs these commands for each benchmark, perhaps with something like https://github.com/benchmark-action/github-action-benchmark / https://github.com/marketplace/actions/continuous-benchmark (see the sketch below).

Thoughts?
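A minimal sketch of such a script, assuming GNU time and perf as above, plus the `customSmallerIsBetter` JSON format that github-action-benchmark accepts from custom tools; every path below is a placeholder:

```sh
#!/usr/bin/env bash
# For each benchmark file, collect an instruction count (perf) and peak
# RSS (GNU time), then emit JSON in github-action-benchmark's
# "customSmallerIsBetter" format. All paths below are placeholders.
set -euo pipefail
PROLOG=./path/to/interpreter
entries=()
for bench in bench/*.pl; do
  name=$(basename "$bench" .pl)
  # perf stat -x, writes CSV to stderr; the first field is the value.
  insns=$(perf stat -x, -e instructions "$PROLOG" "$bench" \
            2>&1 >/dev/null | awk -F, '/instructions/ {print $1}')
  # GNU time -v reports "Maximum resident set size (kbytes): N".
  rss=$(/usr/bin/time -v "$PROLOG" "$bench" \
            2>&1 >/dev/null | awk '/Maximum resident set size/ {print $NF}')
  entries+=("{\"name\":\"$name: instructions\",\"unit\":\"count\",\"value\":$insns}")
  entries+=("{\"name\":\"$name: peak RSS\",\"unit\":\"kB\",\"value\":$rss}")
done
# Join the entries with commas into a single JSON array.
printf '[%s]\n' "$(IFS=,; echo "${entries[*]}")" > benchmarks.json
```

The action can then track `benchmarks.json` from commit to commit, chart the results, and alert when a value regresses past a configured threshold.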
-
I think it would be helpful to run a collection of benchmarks as part of CI so we can immediately see how changes impact performance. As humans, our immediate decision-making is better guided by comparing our performance to our past selves rather than to others (though occasional reality checks can be good too); adding benchmarks to CI is a good analogue for programs.
This breaks down into a few questions:
Some considerations:
As far as which benchmarks would be appropriate, I have no basis for an opinion. Instead, I'll reference some past discussions that mention performance, many of which contain minimized reproductions that may be good candidates for benchmarks:
Thoughts?