
ENH: integrate automated benchmarking tool #41

Open
btel opened this issue Jun 17, 2015 · 2 comments
Labels
enhancement Editing an existing module, improving something

Comments


btel commented Jun 17, 2015

pandas uses vbench to run benchmarks on each new version of the code, so that performance regressions are detected early. Something similar could be useful for Elephant.

More information:
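The core idea behind tools like vbench can be sketched in a few lines: time a function, compare against a stored baseline, and flag a regression if the new timing exceeds the baseline by some tolerance. This is a minimal stdlib-only sketch; the `workload` function and the `tolerance` value are hypothetical stand-ins, not part of any real tool.

```python
import json
import timeit
from pathlib import Path

# Hypothetical workload standing in for a library function under test.
def workload():
    sum(i * i for i in range(10_000))

def run_benchmark(name, func, baseline_file="baseline.json", tolerance=1.5):
    """Time `func` and compare against a stored baseline.

    Returns True if the timing is within `tolerance` times the stored
    baseline (no regression). The first run establishes the baseline.
    """
    # Take the best of several repeats to reduce scheduler noise.
    elapsed = min(timeit.repeat(func, number=10, repeat=5))

    path = Path(baseline_file)
    baselines = json.loads(path.read_text()) if path.exists() else {}

    if name in baselines:
        ok = elapsed <= tolerance * baselines[name]
    else:
        baselines[name] = elapsed  # first run: record the baseline
        path.write_text(json.dumps(baselines))
        ok = True
    return ok

print(run_benchmark("sum_of_squares", workload))
```

A real tool would additionally run this against every commit and plot the history, but the compare-against-baseline step is the essential part.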

alperyeg added the enhancement (Editing an existing module, improving something) label on Jun 17, 2015
Member

dizcza commented Dec 7, 2020

I've looked into CircleCI's automated unit-test benchmarks - there are none available (Jan 2021). Developers have to write such benchmarks themselves. Typically, this is done as a group of tests covering the 7 to 10 most used functionalities in a package, with the timings printed to stdout or to an artifact file that can be downloaded from each build. There is no software that parses such output timings automatically and makes nice plots with month-year on the X axis and test duration on the Y axis.
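The "group of tests with timings printed to stdout" approach described above can be sketched with the stdlib alone. The benchmark names and workloads here are hypothetical placeholders; in CI, the printed lines would be redirected into an artifact file collected at each build.

```python
import time
from datetime import date

# Hypothetical stand-ins for the "7 to 10 most used functionalities".
def make_suite():
    return {
        "sort_large_list": lambda: sorted(range(100_000, 0, -1)),
        "join_strings": lambda: ",".join(map(str, range(50_000))),
    }

def run_suite(suite):
    """Run each benchmark once and return CSV lines: date,name,seconds."""
    lines = []
    for name, func in suite.items():
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        lines.append(f"{date.today().isoformat()},{name},{elapsed:.6f}")
    return lines

for line in run_suite(make_suite()):
    print(line)
```

Because each line carries the date, the downloaded artifacts from successive builds could later be concatenated and plotted by hand, even without a dedicated parsing tool.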

Member

Moritz-Alexander-Kern commented Dec 14, 2023

Firstly, I'd like to mention that there's now a GitHub Action available for continuous benchmarking, see here: https://github.com/benchmark-action/github-action-benchmark

This action could be a useful addition to our testing pipeline, automating the process of running benchmarks with each new version of the code.
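If I read the action's README correctly, github-action-benchmark accepts, among other formats, a `customSmallerIsBetter` JSON list with one `name`/`unit`/`value` object per benchmark. A hedged sketch of producing that file from plain `timeit` results (the `workload` function is a hypothetical placeholder, not an Elephant API):

```python
import json
import timeit

# Hypothetical workload; in practice this would call an Elephant function.
def workload():
    sum(range(100_000))

def to_benchmark_json(results):
    """Convert {name: seconds} timings into the JSON list that
    github-action-benchmark documents for its `customSmallerIsBetter`
    tool option: one object per benchmark with name, unit, and value."""
    return json.dumps(
        [{"name": n, "unit": "s", "value": v} for n, v in results.items()],
        indent=2,
    )

elapsed = min(timeit.repeat(workload, number=5, repeat=3))
print(to_benchmark_json({"workload": elapsed}))
```

The resulting file would be passed to the action in a workflow step, which then tracks the values across commits and can comment on PRs when a value grows beyond a threshold.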

However, as of now, there hasn't been a specific focus on developing benchmarks for Elephant within the automated testing suite. Developing benchmarks requires a thoughtful approach; ideally, we would need tests that provide meaningful insight into the performance of Elephant's functionalities.

While there isn't a dedicated effort towards this yet, it's an open invitation to anyone. If you have the time and interest, contributing benchmarks for Elephant, or perhaps benchmarks derived from real-world use cases, could kick off this project.

Moreover, if any of you have encountered performance regressions in the past, please take a moment to report them. Having real-world cases as precedents will also serve as a good starting point.

If you would like to develop a benchmark, please reach out to us; you are most welcome!
