Add continuous performance benchmarking to Cosmos #868
Labels: area:performance, dbt:parse, execution:local, parsing:custom
After performance integration tests were added in #827, a reasonable follow-up would be to integrate a tool like github-action-benchmark to benchmark Cosmos performance continuously, so that potential performance improvements or regressions can be detected by comparing benchmark results across commits.
The work would involve using pytest-benchmark to output the performance benchmark results and then storing them with the GitHub Action, as in the example here. We could also set up alerts for PRs.
A follow-up to this could also involve benchmarking the DAG parsing times for the various load methods, e.g. LoadMode.DBT_LS and LoadMode.DBT_MANIFEST, to track parsing performance over time.