Add benchmarks to CI #48

Open
jwoertink opened this issue Mar 23, 2021 · 3 comments

Comments

@jwoertink
Member

I'm not sure how we can do this, but there's a src/benchmark.cr file. We should, first, make sure that file is updated to fully benchmark all the different parts of the shard, and second, figure out a way to build a release version and run it. My thought is that by adding that in, we can get a quick sense of a PR killing performance if the benchmark runs longer than X or something... (half-baked idea)
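
To make that second part concrete, here is a minimal sketch of what a fuller src/benchmark.cr could look like. The `LuckyRouter::Matcher` calls (`add`/`match`) and the `:id`-style route shapes are assumptions based on the shard's README, not the current file; the point is just to exercise several route types and print an average that CI can parse.

```crystal
require "./lucky_router"

# Rough sketch only: the Matcher#add/#match calls and the ":id"-style dynamic
# segments are assumptions from the shard's README, not the real benchmark file.
matcher = LuckyRouter::Matcher(Symbol).new
matcher.add("get", "/users", :index)
matcher.add("get", "/users/:id", :show)
matcher.add("get", "/users/:id/posts/:post_id", :nested)

# Time several full passes and report the average, mirroring the
# "Average time: XXXms" output shown in the comments below.
runs = 10
timings = Array(Float64).new(runs) do
  Time.measure do
    100_000.times do
      matcher.match("get", "/users")
      matcher.match("get", "/users/123")
      matcher.match("get", "/users/123/posts/456")
    end
  end.total_milliseconds
end

puts "Average time: #{(timings.sum / runs).round(2)}ms"
```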

@jwoertink
Member Author

Just thinking about this again... What if the benchmark looked at some specific number, like "300ms", as an example? We say the router should ALWAYS be less than this number; if it is, then we exit with 0 or whatever the 👍 code is. If it's more, then we exit with an error code.

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ~/Development/crystal/crystal-1.0.0-1/bin/crystal build --release src/benchmark.cr

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 17s
❯ ./benchmark
Average time: 292.01ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 313.3ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 293.92ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 293.03ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ crystal build --release src/benchmark.cr

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 13s
❯ ./benchmark
Average time: 291.31ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 3s
❯ ./benchmark
Average time: 278.86ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 287.62ms

lucky_router on  releases/0.4.2 [!?] via 🔮 v0.36.1 took 2s
❯ ./benchmark
Average time: 278.3ms

See, in this case, Crystal 1.0 seems to make the router just a little slower, and in one case it would have failed the check, telling us "Hey, maybe we should see what's going on", whereas Crystal 0.36.1 is about 10ms faster on average...

(NOTE: macOS does a check on a binary the first time you run it, so you need to run it a few times to get a better sense of the time.)
(Also NOTE: the file in src/benchmark.cr doesn't account for all the features the router supports, and may not even be the most efficient.)
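
Following the threshold idea above, here is a minimal sketch of how the compiled benchmark could turn "always under 300ms" into an exit code CI understands. `run_benchmark` is a hypothetical stand-in for whatever src/benchmark.cr actually measures, and 300ms is just the example number from the comment.

```crystal
# Hedged sketch: `run_benchmark` is a placeholder, and 300ms is only the
# example number from the comment above, not an agreed budget.
THRESHOLD = 300.milliseconds

def run_benchmark : Time::Span
  # Replace with the real router benchmark from src/benchmark.cr
  Time.measure { 100_000.times { } }
end

average = run_benchmark
puts "Average time: #{average.total_milliseconds.round(2)}ms"

if average <= THRESHOLD
  exit 0 # 👍 -- the CI step passes
else
  STDERR.puts "Benchmark exceeded the #{THRESHOLD.total_milliseconds}ms budget"
  exit 1 # non-zero exit fails the job
end
```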

@matthewmcgarvey
Member

My ideal workflow is that a GitHub Action would run the benchmark and comment on the PR: "The benchmark ran in X, it's X% faster/slower than the master branch."
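
One way to sketch the comparison piece: if the workflow built and ran the benchmark on both the PR branch and master, writing each "Average time: …ms" line to a file, a small Crystal step could produce the comment body. The file names and output format here are assumptions, not an existing setup; the action would then post this output with something like actions/github-script.

```crystal
# Hypothetical comment-body generator; assumes earlier CI steps wrote the
# benchmark output for each branch to these files.
def read_ms(path : String) : Float64
  # Expects a line like "Average time: 292.01ms"
  File.read(path).match(/([\d.]+)ms/).not_nil![1].to_f
end

pr_ms     = read_ms("pr_result.txt")
master_ms = read_ms("master_result.txt")

percent   = ((pr_ms - master_ms) / master_ms * 100).abs.round(1)
direction = pr_ms <= master_ms ? "faster" : "slower"

puts "The benchmark ran in #{pr_ms.round(2)}ms, it's #{percent}% #{direction} than the master branch"
```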

@jwoertink
Member Author

I like that idea too!
