name: bench-show
version: 0.3.2
license: BSD3
author: Harendra Kumar
maintainer: [email protected]
bug-reports: https://github.com/composewell/bench-show/issues
synopsis: Show, plot and compare benchmark results
description:
Generate text reports and graphical charts from the benchmark results generated
by @gauge@ or @criterion@ and stored in a CSV file. This tool is especially
useful when you have many benchmarks or if you want to compare benchmarks
across multiple packages. You can generate many interesting reports
including:
.
* Show individual reports for each measured field, e.g. @time taken@, @peak
memory usage@, @allocations@, and many other fields reported by @gauge@
* Sort benchmark results on a specified criterion, e.g. to see the biggest CPU
or memory hoggers on top
* Across two benchmark runs (e.g. before and after a change), show all the
operations that resulted in a regression of more than x% in descending
order, so that we can quickly identify and fix performance problems in our
application.
* Across two (or more) packages providing similar functionality, show all the
operations where the performance differs by more than 10%, so that we can
critically analyze the packages and choose the right one.
.
Quick Start: Use @gauge@ or @criterion@ to generate a @results.csv@ file, and
then use either the @bench-show@ executable or the library APIs to generate
textual or graphical reports.
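.
For example, assuming the benchmarks are built as a cabal benchmark suite, a
@criterion@ or @gauge@ based suite can typically write the CSV via its
@--csv@ option:
.
@
$ cabal bench --benchmark-options "--csv results.csv"
@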
.
@
$ bench-show report results.csv
$ bench-show graph results.csv output
@
.
@
report "results.csv" Nothing defaultConfig
graph "results.csv" "output" defaultConfig
@
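.
For instance, a minimal standalone program using the library could look like
the following sketch (it assumes that the top-level "BenchShow" module exports
the @report@, @graph@ and @defaultConfig@ names used above):
.
@
module Main (main) where
import BenchShow (defaultConfig, graph, report)
main :: IO ()
main = do
    -- print a textual report to stdout
    report "results.csv" Nothing defaultConfig
    -- write graphical charts under the output directory
    graph "results.csv" "output" defaultConfig
@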
.
There are many ways to present the reports; for example, you can show the %
regression from a baseline in descending order textually as follows:
.
@
(time)(Median)(Diff using min estimator)
Benchmark streamly(0)(μs)(base) streamly(1)(%)(-base)
\--------- --------------------- ---------------------
zip 644.33 +23.28
map 653.36 +7.65
fold 639.96 -15.63
@
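.
As a sketch of how such a comparison might be produced with the library API
(the @presentation@ field and the @Groups@ and @PercentDiff@ constructors are
assumptions about the @Config@ type rather than a verbatim API reference):
.
@
report "results.csv" Nothing
    defaultConfig { presentation = Groups PercentDiff }
@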
.
To show the same graphically:
.
<<src/docs/regression-percent-descending-median-time.svg>>
.
See the README and the "BenchShow.Tutorial" module for comprehensive
documentation.
category: Performance, Benchmarking
homepage: https://github.com/composewell/bench-show
license-file: LICENSE
tested-with: GHC==9.2.8
, GHC==9.4.8
, GHC==9.6.4
, GHC==9.8.1
copyright: 2017, 2018 Composewell Technologies
stability: Experimental
build-type: Simple
cabal-version: 1.18
extra-source-files:
Changelog.md
README.md
stack.yaml
test/results.csv
test/results.csvraw
test/results-doc.csv
test/results-doc-multi.csv
appveyor.yml
.github/workflows/packcheck.yml
.gitignore
TODO.md
extra-doc-files:
docs/full-median-time.svg
docs/grouped-median-time.svg
docs/grouped-delta-median-time.svg
docs/grouped-percent-delta-coeff-time.svg
docs/grouped-percent-delta-median-time.svg
docs/grouped-percent-delta-sorted-median-time.svg
docs/grouped-single-estimator-coeff-time.svg
docs/regression-percent-descending-median-time.svg
source-repository head
type: git
location: https://github.com/composewell/bench-show
flag no-charts
description: Don't build the modules that provide charting functionality
manual: True
default: False
flag no-colors
description: Use pretty printing without colors
manual: True
default: False
library
hs-source-dirs: lib
exposed-modules: BenchShow.Internal.Analysis
, BenchShow.Internal.Common
, BenchShow.Internal.Report
, BenchShow.Internal.Pretty
default-language: Haskell2010
default-extensions:
OverloadedStrings
RecordWildCards
ghc-options: -Wall
build-depends:
base >= 4.8 && < 5
, csv >= 0.1 && < 0.2
, filepath >= 1.3 && < 1.6
, mwc-random >= 0.13 && < 0.16
, directory >= 1.2 && < 1.4
, transformers >= 0.4 && < 0.7
, split >= 0.2 && < 0.3
, statistics >= 0.15 && < 0.17
, vector >= 0.10 && < 0.14
if !flag(no-charts)
exposed-modules: BenchShow
BenchShow.Tutorial
BenchShow.Internal.Graph
build-depends:
Chart >= 1.6 && < 2
, Chart-diagrams >= 1.6 && < 2
if !flag(no-colors)
build-depends:
ansi-wl-pprint >= 0.6 && < 1.1
else
cpp-options: -DNO_COLORS
executable bench-show
default-language: Haskell2010
hs-source-dirs: app
main-is: Main.hs
other-modules: Paths_bench_show
ghc-options: -Wall
build-depends:
base >= 4.8 && < 4.21
, optparse-applicative >= 0.14.2 && < 0.19
, optparse-simple >= 0.1.0 && < 0.2
, bench-show
if flag(no-charts)
cpp-options: -DNO_CHARTS
test-suite test
if flag(no-charts)
buildable: False
type: exitcode-stdio-1.0
default-language: Haskell2010
default-extensions:
OverloadedStrings
RecordWildCards
hs-source-dirs: test
main-is: Main.hs
ghc-options: -Wall
build-depends:
bench-show
, base >= 4.8 && < 4.21
, split >= 0.2 && < 0.3
, text >= 1.1.1 && < 2.2
-- , typed-process >= 0.1.0.0 && < 0.3
test-suite doc
if flag(no-charts)
buildable: False
type: exitcode-stdio-1.0
default-language: Haskell2010
default-extensions:
OverloadedStrings
RecordWildCards
hs-source-dirs: test
main-is: Doc.hs
ghc-options: -Wall
build-depends:
bench-show
, base >= 4.8 && < 4.21
, split >= 0.2 && < 0.3