Nonius is a framework for benchmarking small snippets of C++ code. It is very
+heavily inspired by Criterion, a similar Haskell-based tool. It
+runs your code, measures the time it takes to run, and then performs some
+statistical analysis on those measurements.
+
+
+How do I use it?
+
+
The library itself is header-only, so you don't have to build anything. It
+requires a C++11 capable compiler; it was tested with GCC 4.8.1, clang 3.4, and
+VC++ 18.0. Most development takes place in the devel branch with GCC and
+clang. The msvc branch tracks the latest successfully tested revision on
+VC++ and the stable branch tracks the latest revision that was tested
+successfully on all three compilers.
+
+
It depends on Boost for a few mathematical functions, for some
+string algorithms, and, in VC++, for the timing functions as well. Boost.Chrono
+is not a header-only library, but since it is only used with VC++ everything
+gets linked automatically without intervention.
+
+
In the CI server you can grab a single header file with everything, ready to
+be #included in your files.
+
+
There are examples of both simple and advanced usage in the examples folder.
+
+
If you just want to run a quick benchmark you can put everything in one file,
+as in the examples. If you prefer to separate things into different files, it
+is recommended that you create one small file with the runner code by #defining
+the macro NONIUS_RUNNER and then #including the nonius single header. In other
+files you don't #define that macro; just #include the header and write the
+benchmarks. Then compile and link everything together.
+
+
Nonius standard runner has several command-line options for configuring a run.
+Pass the --help flag to the compiled runner to see the various flags and a
+short description of each. The standard runner includes all your benchmarks and
+four reporters: plain text, CSV with raw timings, JUnit-compatible XML, and a
+nice HTML file with a scatter plot of the timings.
+
+
+Woah, what do all these numbers mean?
+
+
If you execute the standard runner without requesting a particular reporter,
+nonius will use plain text to report the results.
+
+
The first thing that nonius does when benchmarking is to find out where it is
+running. It estimates the resolution and the cost of using the clock. It will
+print out the mean of the samples it took, and also some information about the
+spread of those values, namely any outliers seen.
+
+
Outliers are classified as "low" or "high" depending on whether they are below
+or above the mean. They can be "mild" or "severe" if they are relatively far
+from the rest of the measurements. If you request verbose output the default
+reporter will provide outlier classification.
+
+
After ascertaining the characteristics of the environment, the benchmarks are
+run in sequence. Each one consists of taking a number of samples determined by
+the configuration (defaults to 100). Each sample consists of running the code
+being measured enough times that each sample takes long enough for the clock
+resolution not to affect the measurement.
+
+
After the measurements are performed, a statistical bootstrapping is performed
+on the data. The number of resamples is configurable but defaults to 100000.
+After the bootstrapping is done, the mean and standard deviation estimates are
+printed out, along with their confidence interval, followed by information about
+the outliers. The very last information tells us if the outliers might be
+important: if they affect the variance greatly, our measurements might not be
+very trustworthy. It could be that there is another factor affecting our
+measurements (say, some other application that was doing some heavy task at the
+same time), or maybe the code being measured varies wildly in performance.
+Nonius will provide the data; it's up to you to make sense of it.
+
+
+Are there any restrictions on the use of nonius?
+
+
Nonius is released under the CC0 license, which is essentially a public
+domain dedication with legalese to emulate the public domain as much as
+possible under jurisdictions that do not have such a concept. That means you
+can really do whatever you want with the code in nonius, because I waived as
+many of my rights on it as I am allowed.
+
+
However, currently nonius makes use of some code distributed under the
+CC-BY-NC and the MIT licenses. The html reporter uses the Highcharts JS
+and jQuery libraries for the interactive charts and the cpptemplate library
+for generating HTML from a template.
+
+
+What does "nonius" mean?
+
+
Nonius is a device created in 1542 by the Portuguese inventor
+Pedro Nunes (Petrus Nonius in Latin) that improved the accuracy of the
+astrolabe. It was adapted in 1631 by the French mathematician Pierre Vernier to
+create the vernier scale.
Nonius is a framework for benchmarking small snippets of C++ code. It is very
heavily inspired by Criterion, a similar Haskell-based tool. It
runs your code, measures the time it takes to run, and then performs some
statistical analysis on those measurements.
-
-How do I use it?
+
How do I use it?
The library itself is header-only, so you don't have to build anything. It
-requires a C++11 capable compiler; it was tested with GCC 4.8.1, clang 3.4, and
-VC++ 18.0. Most development takes place in the devel branch with GCC with
+requires a C++11 capable compiler; it was tested with GCC 4.8.3, clang 3.5, and
+VC++ 19.0. Most development takes place in the devel branch with GCC and
clang. The msvc branch tracks the latest successfully tested revision on
VC++ and the stable branch tracks the latest revision that was tested
successfully on all three compilers.
@@ -49,8 +47,9 @@
is not a header-only library, but since it is only used with VC++ everything
gets linked automatically without intervention.
-
In the CI server you can grab a single header file with everything, ready to
-be #included in your files.
+
In the releases
+page you can grab a single header file with everything, ready to be #included
+in your files.
There are examples of both simple and advanced usage in the examples folder.
@@ -67,8 +66,7 @@
four reporters: plain text, CSV with raw timings, JUnit-compatible XML, and a
nice HTML file with a scatter plot of the timings.
-
-Woah, what do all these numbers mean?
+
Woah, what do all these numbers mean?
If you execute the standard runner without requesting a particular reporter,
nonius will use plain text to report the results.
@@ -100,8 +98,7 @@
same time), or maybe the code being measured varies wildly in performance.
Nonius will provide the data; it's up to you to make sense of it.
-
-Are there any restrictions on the use of nonius?
+
Are there any restrictions on the use of nonius?
Nonius is released under the CC0 license, which is essentially a public
domain dedication with legalese to emulate the public domain as much as
@@ -114,8 +111,7 @@
and jQuery libraries for the interactive charts and the cpptemplate library
for generating HTML from a template.
-
-What does "nonius" mean?
+
What does "nonius" mean?
Nonius is a device created in 1542 by the Portuguese inventor
Pedro Nunes (Petrus Nonius in Latin) that improved the accuracy of the
diff --git a/test.md b/test.md
new file mode 100644
index 0000000..8ed5a88
--- /dev/null
+++ b/test.md
@@ -0,0 +1,83 @@
+---
+title: foo
+layout: test
+---
+Thingies!
+
+{% highlight console %}
+$ bin/examples/example5 -h
+Usage: bin/examples/example5 [OPTIONS]
+
+--help -h show this help message
+--samples=SAMPLES -s SAMPLES number of samples to collect (default: 100)
+--resamples=RESAMPLES -rs RESAMPLES number of resamples for the bootstrap (default: 100000)
+--confidence-interval=INTERVAL -ci INTERVAL confidence interval for the bootstrap (between 0 and 1, default: 0.95)
+--output=FILE -o FILE output file (default: )
+--reporter=REPORTER -r REPORTER reporter to use (default: standard)
+--title=TITLE -t TITLE set report title
+--no-analysis -A perform only measurements; do not perform any analysis
+--list -l list benchmarks
+--list-reporters -lr list available reporters
+--verbose -v show verbose output (mutually exclusive with -q)
+--summary -q show summary output (mutually exclusive with -v)
+$
+{% endhighlight %}
+
+{% highlight console %}
+$ bin/examples/example5
+clock resolution: mean is 24.4967 ns (20480002 iterations)
+
+benchmarking construct small
+collecting 100 samples, 403 iterations each, in estimated 2.418 ms
+mean: 40.3394 ns, lb 40.3318 ns, ub 40.365 ns, ci 0.95
+std dev: 0.0629761 ns, lb 0.0186515 ns, ub 0.139918 ns, ci 0.95
+found 3 outliers among 100 samples (3%)
+variance is unaffected by outliers
+
+benchmarking construct large
+collecting 100 samples, 272 iterations each, in estimated 2.448 ms
+mean: 51.5876 ns, lb 51.5589 ns, ub 51.7001 ns, ci 0.95
+std dev: 0.251572 ns, lb 0.0545789 ns, ub 0.581709 ns, ci 0.95
+found 5 outliers among 100 samples (5%)
+variance is unaffected by outliers
+
+benchmarking destroy small
+collecting 100 samples, 324 iterations each, in estimated 2.43 ms
+mean: 27.4421 ns, lb 27.4391 ns, ub 27.456 ns, ci 0.95
+std dev: 0.028566 ns, lb 0.00233815 ns, ub 0.0679242 ns, ci 0.95
+found 1 outliers among 100 samples (1%)
+variance is unaffected by outliers
+
+benchmarking destroy large
+collecting 100 samples, 269 iterations each, in estimated 2.4479 ms
+mean: 31.5494 ns, lb 31.5042 ns, ub 31.7417 ns, ci 0.95
+std dev: 0.406734 ns, lb 0.0355798 ns, ub 0.954117 ns, ci 0.95
+found 23 outliers among 100 samples (23%)
+variance is slightly inflated by outliers
+$
+{% endhighlight %}
+
+{% highlight console %}
+$ bin/examples/example5 -q
+
+construct small
+mean: 40.3974 ns
+std dev: 0.493846 ns
+variance is unaffected by outliers
+
+construct large
+mean: 51.6843 ns
+std dev: 0.619526 ns
+variance is unaffected by outliers
+
+destroy small
+mean: 27.4425 ns
+std dev: 0.0689035 ns
+variance is unaffected by outliers
+
+destroy large
+mean: 31.1184 ns
+std dev: 0.140576 ns
+variance is unaffected by outliers
+{% endhighlight %}
+
From 4db560dcae4aa5fb4433c896522699ff2a1ac7c4 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Wed, 18 Feb 2015 19:59:07 +0100
Subject: [PATCH 03/21] Basic docs
---
_layouts/{test.html => default.html} | 0
index.html | 130 ---------------------
index.md | 167 +++++++++++++++++++++++++++
test.md | 83 -------------
4 files changed, 167 insertions(+), 213 deletions(-)
rename _layouts/{test.html => default.html} (100%)
delete mode 100644 index.html
create mode 100644 index.md
delete mode 100644 test.md
diff --git a/_layouts/test.html b/_layouts/default.html
similarity index 100%
rename from _layouts/test.html
rename to _layouts/default.html
diff --git a/index.html b/index.html
deleted file mode 100644
index f401cf1..0000000
--- a/index.html
+++ /dev/null
@@ -1,130 +0,0 @@
-
-
-
-
-
-
-
-
-
-
- Nonius
-
-
-
-
-
-
-
-
Nonius
-
A C++ benchmarking framework
-
-
-
-
-
-
-
What is nonius?
-
-
Nonius is a framework for benchmarking small snippets of C++ code. It is very
-heavily inspired by Criterion, a similar Haskell-based tool. It
-runs your code, measures the time it takes to run, and then performs some
-statistical analysis on those measurements.
-
-
How do I use it?
-
-
The library itself is header-only, so you don't have to build anything. It
-requires a C++11 capable compiler; it was tested with GCC 4.8.3, clang 3.5, and
-VC++ 19.0. Most development takes place in the devel branch with GCC with
-clang. The msvc branch tracks the latest successfully tested revision on
-VC++ and the stable branch tracks the latest revision that was tested
-successfully on all three compilers.
-
-
It depends on Boost for a few mathematical functions, for some
-string algorithms, and, in VC++, for the timing functions as well. Boost.Chrono
-is not a header-only library, but since it is only used with VC++ everything
-gets linked automatically without intervention.
-
-
In the releases
-page you can grab a single header file with everything, ready to be #included
-in your files.
-
-
There are examples of both simple and advanced usage in the examples folder.
-
-
If you just want to run a quick benchmark you can put everything in one file,
-as in the examples. If you prefer to separate things into different files, it
-is recommended that you create one small file with the runner code by #defining
-the macro NONIUS_RUNNER and then #including the nonius single header. In other
-files you don't #define that macro; just #include the header and write the
-benchmarks. Then compile and link everything together.
-
-
Nonius standard runner has several command-line options for configuring a run.
-Pass the --help flag to the compiled runner to see the various flags and a
-short description of each. The standard runner includes all your benchmarks and
-four reporters: plain text, CSV with raw timings, JUnit-compatible XML, and a
-nice HTML file with a scatter plot of the timings.
-
-
Woah, what do all these numbers mean?
-
-
If you execute the standard runner without requesting a particular reporter,
-nonius will use plain text to report the results.
-
-
The first thing that nonius does when benchmarking is to find out where it is
-running. It estimates the resolution and the cost of using the clock. It will
-print out the mean of the samples it took, and also some information about the
-spread of those values, namely any outliers seen.
-
-
Outliers are classified as "low" or "high" depending on whether they are above
-or below the mean. They can be "mild" or "severe" if they are relatively far
-from the rest of the measurements. If you request verbose output the default
-reporter will provide outlier classification.
-
-
After ascertaining the characteristics of the environment, the benchmarks are
-run in sequence. Each one consists of taking a number of samples determined by
-the configuration (defaults to 100). Each sample consists of running the code
-being measured for a number of times that makes sure it takes enough time that
-the clock resolution does not affect the measurement.
-
-
After the measurements are performed, a statistical bootstrapping is performed
-on the data. The number of resamples is configurable but defaults to 100000.
-After the bootstrapping is done, the mean and standard deviation estimates are
-printed out, along with their confidence interval, followed by information about
-the outliers. The very last information tells us if the outliers might be
-important: if they affect the variance greatly, our measurements might not be
-very trustworthy. It could be that there is another factor affecting our
-measurements (say, some other application that was doing some heavy task at the
-same time), or maybe the code being measured varies wildly in performance.
-Nonius will provide the data; it's up to you to make sense of it.
-
-
Are there any restrictions on the use of nonius?
-
-
Nonius is released under the CC0 license, which is essentially a public
-domain dedication with legalese to emulate the public domain as much as
-possible under jurisdictions that do not have such a concept. That means you
-can really do whatever you want with the code in nonius, because I waived as
-many of my rights on it as I am allowed.
-
-
However, currently nonius makes use of some code distributed under the
-CC-BY-NC and the MIT licenses. The html reporter uses the Highcharts JS
-and jQuery libraries for the interactive charts and the cpptemplate library
-for generating HTML from a template.
-
-
What does "nonius" mean?
-
-
Nonius is a device created in 1542 by the Portuguese inventor
-Pedro Nunes (Petrus Nonius in Latin) that improved the accuracy of the
-astrolabe. It was adapted in 1631 by the French mathematician Pierre Vernier to
-create the vernier scale.
-
-
-
-
-
-
-
-
diff --git a/index.md b/index.md
new file mode 100644
index 0000000..c194893
--- /dev/null
+++ b/index.md
@@ -0,0 +1,167 @@
+---
+title: Nonius
+layout: default
+---
+## What is nonius?
+
+Nonius is a framework for benchmarking small snippets of C++ code. It is very
+heavily inspired by [Criterion], a similar Haskell-based tool. It runs your
+code, measures the time it takes to run, and then performs some statistical
+analysis on those measurements.
+
+ [Criterion]: http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell/
+
+## How do I use it?
+
+### Installation and dependencies
+
+The library itself is header-only so you don't have to build it. It comes as a
+single header that you can drop the header somewhere and #include it in your
+code. You can grab the header from the [releases] page.
+
+ [releases]: https://github.com/rmartinho/nonius/releases
+
+
+You will need a C++11 capable compiler; it has been tested with GCC 4.8.3,
+clang 3.5, and VC++ 18.0. Older versions of these compilers may work, but there
+are no guarantees. Newer versions of these compilers are also supported.
+
+The library depends on [Booost] for a few mathematical functions, for some
+string algorithms, and, in some versions of VC++, for the timing functions as
+well. Boost.Chrono is not a header-only library, but since it is only used with
+VC++ everything gets linked automatically without intervention.
+
+ [Boost]: http://www.boost.org
+
+### Authoring benchmarks
+
+There are examples of both simple and advanced usage in the `examples` folder.
+For now that is the primary documentation. Once I am content with a stable
+interface there will be more detailed explanations.
+
+If you just want to run a quick benchmark you can put everything in one file, as
+in the examples. If you have something more complicated and prefer to separate
+things into different files, it is recommended that you create one small file
+with the runner infrastructure by #defining the macro `NONIUS_RUNNER` and
+then #including the nonius header.
+
+{% highlight cpp %}
+// runner file contents
+#define NONIUS_RUNNER
+#include "nonius.h++"
+{% endhighlight %}
+
+In other files you don't #define that macro; just #include the header and write
+the benchmarks.
+
+{% highlight cpp %}
+// other files
+#include "nonius.h++"
+
+// everything else goes here
+{% endhighlight %}
+
+Then you compile and link everything together as normal. Keep in mind that the
+statistical analysis is multithreaded so you may need to pass extra flags to
+your compiler (like `-pthread` in GCC). That gives you an executable with your
+benchmarks and with the nonius standard benchmark runner.
+
+### Running benchmarks
+
+The standard runner has several command-line options for configuring a run.
+Pass the `--help` flag to the compiled runner to see the various flags and a
+short description of each. The runner includes all your benchmarks and it comes
+equipped with four reporters: plain text, CSV with raw timings, JUnit-compatible
+XML, and an HTML file with a scatter plot of the timings.
+
+If you execute the runner without requesting a particular reporter, nonius will
+use plain text to report the results.
+
+The first thing that nonius does when running is testing the clock. By default
+it uses the clock provided by `std::chrono::high_resolution_clock`. The runner
+estimates the resolution and the cost of using the clock and then prints out
+that estimate.
+
+{% highlight console %}
+clock resolution: mean is 28.1296 ns (20480002 iterations)
+{% endhighlight %}
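As a rough illustration of what such a probe involves, here is a minimal sketch (hypothetical logic, not nonius's actual implementation) that estimates the clock resolution by timing back-to-back reads of `std::chrono::high_resolution_clock` until the clock visibly advances:

```cpp
#include <chrono>
#include <ratio>

// Simplified clock-resolution probe (illustrative only): spin until two
// consecutive clock reads differ, record that smallest observable step,
// and average it over many iterations.
double estimate_resolution_ns(int iterations = 1000) {
    using clock = std::chrono::high_resolution_clock;
    double total = 0;
    for (int i = 0; i < iterations; ++i) {
        auto t0 = clock::now();
        auto t1 = clock::now();
        while (t1 == t0) t1 = clock::now(); // wait for the clock to tick
        total += std::chrono::duration<double, std::nano>(t1 - t0).count();
    }
    return total / iterations;
}
```

The exact number is platform-dependent; the point is only the shape of the measurement behind a line like the one above.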
+
+After ascertaining the characteristics of the clock, the benchmarks are run in
+sequence. Each benchmark consists of taking a number of samples determined by
+the command-line flags (defaults to 100). Each of those samples consists of
+running the code being measured enough times that each sample takes long enough
+for the clock resolution not to affect the measurement. If you're
+benchmarking code that takes significantly more than the clock resolution to
+run, it will probably run it once for each sample. However, if one run of that
+code is too fast, nonius will scale it by running the code more than once per
+sample. This obviously implies that your benchmarks should be completely
+reentrant. There is also the underlying assumption that the time it takes to run
+the code does not vary wildly.
+
+{% highlight console %}
+benchmarking construct small
+collecting 100 samples, 438 iterations each, in estimated 2.8032 ms
+{% endhighlight %}
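The scaling rule can be sketched like this (the name `iterations_per_sample` and the safety factor are illustrative assumptions, not nonius's actual estimator):

```cpp
#include <cmath>

// Assumed logic for illustration: run the user code enough times per sample
// that one sample dwarfs the clock resolution.
int iterations_per_sample(double run_time_ns, double resolution_ns,
                          double safety_factor = 100.0) {
    // one sample should span at least `safety_factor` clock ticks
    double target_ns = resolution_ns * safety_factor;
    int iters = static_cast<int>(std::ceil(target_ns / run_time_ns));
    return iters < 1 ? 1 : iters;
}
```

With a ~25 ns clock and a ~40 ns body this yields a few dozen iterations per sample, while a body that takes milliseconds gets a single run per sample, matching the behaviour described above.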
+
+After the measurements are performed, a statistical [bootstrapping] is performed
+on the samples. The number of resamples for that bootstrapping is configurable
+but defaults to 100000. After the bootstrapping is done, the runner will print
+estimates for the mean and standard deviation. The estimates come with a lower
+bound and an upper bound, and the confidence interval (which is configurable but
+defaults to 95%).
+
+ [bootstrapping]: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
+
+{% highlight console %}
+mean: 41.3622 ns, lb 41.3479 ns, ub 41.4251 ns, ci 0.95
+std dev: 0.130953 ns, lb 0.0209896 ns, ub 0.309054 ns, ci 0.95
+{% endhighlight %}
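The bootstrapping step can be sketched as follows (a simplified illustration; `bootstrap_mean` is a hypothetical name and nonius's real analysis is more careful): resample the measurements with replacement, take the mean of each resample, and read the confidence interval off the percentiles of those resampled means.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

struct estimate { double point, lower, upper; };

// Simplified bootstrap of the mean (illustrative only).
estimate bootstrap_mean(const std::vector<double>& samples,
                        int resamples = 100000, double confidence = 0.95) {
    std::mt19937 rng(42); // fixed seed so this sketch is reproducible
    std::uniform_int_distribution<std::size_t> pick(0, samples.size() - 1);
    std::vector<double> means(resamples);
    for (double& m : means) {
        double sum = 0;
        for (std::size_t i = 0; i < samples.size(); ++i) sum += samples[pick(rng)];
        m = sum / samples.size();
    }
    std::sort(means.begin(), means.end());
    double alpha = (1.0 - confidence) / 2.0;
    double point = std::accumulate(samples.begin(), samples.end(), 0.0)
                   / samples.size();
    return { point,
             means[static_cast<std::size_t>(alpha * (resamples - 1))],
             means[static_cast<std::size_t>((1.0 - alpha) * (resamples - 1))] };
}
```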
+
+After all that, the runner will tell you about any samples that are outliers
+and whether those might be important: if they affect the variance greatly, our
+measurements might not be very trustworthy. It could be that there is another
+factor affecting our measurements (say, some other application that was doing
+some heavy task at the same time), or maybe the code being measured varies wildly
+in performance. Nonius will provide the data; it's up to you to make sense of
+it.
+
+{% highlight console %}
+found 19 outliers among 100 samples (19%)
+variance is unaffected by outliers
+{% endhighlight %}
+
+Outliers are classified as "low" or "high" depending on whether they are below
+or above the mean. They can be "mild" or "severe" if they are relatively far
+from the rest of the measurements. If you request verbose output the default
+reporter will give you outlier classification.
+
+## Licensing
+
+Nonius is released under the [CC0] license, which is essentially a public domain
+dedication with legalese to emulate the public domain as much as possible under
+jurisdictions that do not have such a concept. That means you can really do
+whatever you want with the code in nonius, because I waived as many of my rights
+on it as I am allowed.
+
+ [CC0]: http://creativecommons.org/publicdomain/zero/1.0/
+
+However, currently nonius makes use of some code distributed under the
+[CC-BY-NC] and the [MIT] licenses. The `html` reporter uses the [Highcharts JS]
+and [jQuery] libraries for the interactive charts and the [cpptemplate] library
+for generating HTML from a template.
+
+ [CC-BY-NC]: http://creativecommons.org/licenses/by-nc/3.0/
+ [MIT]: https://bitbucket.org/ginstrom/cpptemplate/raw/d4263ca998038f7ae18aeb9d2358f0c11f00552d/LICENSE.txt
+ [Highcharts JS]: http://www.highcharts.com/
+ [jQuery]: http://jquery.org/
+ [cpptemplate]: https://bitbucket.org/ginstrom/cpptemplate
+
+## Trivia
+
+A [nonius] is a device created in 1542 by the Portuguese inventor Pedro Nunes
+(Petrus Nonius in Latin) that improved the accuracy of the astrolabe. It was
+adapted in 1631 by the French mathematician Pierre Vernier to create the vernier
+scale.
+
+ [Nonius]: http://en.wikipedia.org/wiki/Nonius_%28device%29
+
diff --git a/test.md b/test.md
deleted file mode 100644
index 8ed5a88..0000000
--- a/test.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title: foo
-layout: test
----
-Thingies!
-
-{% highlight console %}
-$ bin/examples/example5 -h
-Usage: bin/examples/example5 [OPTIONS]
-
---help -h show this help message
---samples=SAMPLES -s SAMPLES number of samples to collect (default: 100)
---resamples=RESAMPLES -rs RESAMPLES number of resamples for the bootstrap (default: 100000)
---confidence-interval=INTERVAL -ci INTERVAL confidence interval for the bootstrap (between 0 and 1, default: 0.95)
---output=FILE -o FILE output file (default: )
---reporter=REPORTER -r REPORTER reporter to use (default: standard)
---title=TITLE -t TITLE set report title
---no-analysis -A perform only measurements; do not perform any analysis
---list -l list benchmarks
---list-reporters -lr list available reporters
---verbose -v show verbose output (mutually exclusive with -q)
---summary -q show summary output (mutually exclusive with -v)
-$
-{% endhighlight %}
-
-{% highlight console %}
-$ bin/examples/example5
-clock resolution: mean is 24.4967 ns (20480002 iterations)
-
-benchmarking construct small
-collecting 100 samples, 403 iterations each, in estimated 2.418 ms
-mean: 40.3394 ns, lb 40.3318 ns, ub 40.365 ns, ci 0.95
-std dev: 0.0629761 ns, lb 0.0186515 ns, ub 0.139918 ns, ci 0.95
-found 3 outliers among 100 samples (3%)
-variance is unaffected by outliers
-
-benchmarking construct large
-collecting 100 samples, 272 iterations each, in estimated 2.448 ms
-mean: 51.5876 ns, lb 51.5589 ns, ub 51.7001 ns, ci 0.95
-std dev: 0.251572 ns, lb 0.0545789 ns, ub 0.581709 ns, ci 0.95
-found 5 outliers among 100 samples (5%)
-variance is unaffected by outliers
-
-benchmarking destroy small
-collecting 100 samples, 324 iterations each, in estimated 2.43 ms
-mean: 27.4421 ns, lb 27.4391 ns, ub 27.456 ns, ci 0.95
-std dev: 0.028566 ns, lb 0.00233815 ns, ub 0.0679242 ns, ci 0.95
-found 1 outliers among 100 samples (1%)
-variance is unaffected by outliers
-
-benchmarking destroy large
-collecting 100 samples, 269 iterations each, in estimated 2.4479 ms
-mean: 31.5494 ns, lb 31.5042 ns, ub 31.7417 ns, ci 0.95
-std dev: 0.406734 ns, lb 0.0355798 ns, ub 0.954117 ns, ci 0.95
-found 23 outliers among 100 samples (23%)
-variance is slightly inflated by outliers
-$
-{% endhighlight %}
-
-{% highlight console %}
-$ bin/examples/example5 -q
-
-construct small
-mean: 40.3974 ns
-std dev: 0.493846 ns
-variance is unaffected by outliers
-
-construct large
-mean: 51.6843 ns
-std dev: 0.619526 ns
-variance is unaffected by outliers
-
-destroy small
-mean: 27.4425 ns
-std dev: 0.0689035 ns
-variance is unaffected by outliers
-
-destroy large
-mean: 31.1184 ns
-std dev: 0.140576 ns
-variance is unaffected by outliers
-{% endhighlight %}
-
From a7a4efda9a3db6a527e14c6d5525097a163ac6b7 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 12:39:51 +0100
Subject: [PATCH 04/21] Fixes minor doc issues.
---
index.md | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/index.md b/index.md
index c194893..265d8ee 100644
--- a/index.md
+++ b/index.md
@@ -2,22 +2,24 @@
title: Nonius
layout: default
---
+
## What is nonius?
Nonius is a framework for benchmarking small snippets of C++ code. It is very
heavily inspired by [Criterion], a similar Haskell-based tool. It runs your
code, measures the time it takes to run, and then performs some statistical
-analysis on those measurements.
+analysis on those measurements. The [source code] can be found on GitHub.
[Criterion]: http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell/
+ [source code]: https://github.com/rmartinho/nonius
## How do I use it?
### Installation and dependencies
The library itself is header-only so you don't have to build it. It comes as a
-single header that you can drop the header somewhere and #include it in your
-code. You can grab the header from the [releases] page.
+single header that you can drop somewhere and #include it in your code. You can
+grab the header from the [releases] page.
[releases]: https://github.com/rmartinho/nonius/releases
@@ -26,7 +28,7 @@ You will need a C++11 capable compiler; it has been tested with GCC 4.8.3,
clang 3.5, and VC++ 18.0. Older versions of these compilers may work, but there
are no guarantees. Newer versions of these compilers are also supported.
-The library depends on [Booost] for a few mathematical functions, for some
+The library depends on [Boost] for a few mathematical functions, for some
string algorithms, and, in some versions of VC++, for the timing functions as
well. Boost.Chrono is not a header-only library, but since it is only used with
VC++ everything gets linked automatically without intervention.
@@ -35,10 +37,12 @@ VC++ everything gets linked automatically without intervention.
### Authoring benchmarks
-There are examples of both simple and advanced usage in the `examples` folder.
+There are examples of both simple and advanced usage in the [examples] folder.
For now that is the primary documentation. Once I am content with a stable
interface there will be more detailed explanations.
+ [examples]: https://github.com/rmartinho/nonius/tree/devel/examples
+
If you just want to run a quick benchmark you can put everything in one file, as
in the examples. If you have something more complicated and prefer to separate
things into different files, it is recommended that you create one small file
@@ -135,6 +139,15 @@ or below the mean. They can be "mild" or "severe" if they are relatively far
from the rest of the measurements. If you request verbose output the default
reporter will give you outlier classification.
+{% highlight console %}
+found 19 outliers among 100 samples (19%)
+ 2 (2%) low mild
+ 3 (3%) high mild
+ 14 (14%) high severe
+variance introduced by outliers: 0.99%
+variance is unaffected by outliers
+{% endhighlight %}
+
## Licensing
Nonius is released under the [CC0] license, which is essentially a public domain
From f00098d0db5cca099f4ed6f22201cdb013d61ea3 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:27:07 +0100
Subject: [PATCH 05/21] Adds benchmark authoring guide.
---
authoring-benchmarks.md | 225 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 225 insertions(+)
create mode 100644 authoring-benchmarks.md
diff --git a/authoring-benchmarks.md b/authoring-benchmarks.md
new file mode 100644
index 0000000..83dd663
--- /dev/null
+++ b/authoring-benchmarks.md
@@ -0,0 +1,225 @@
+---
+title: Nonius - Authoring benchmarks
+layout: default
+---
+
+Writing benchmarks is not easy. Nonius simplifies certain aspects but you'll
+always need to take care of the rest yourself. Understanding a few things about
+the way nonius runs your code will be very helpful when writing your benchmarks.
+
+First off, let's go over some terminology that will be used throughout this
+guide.
+
+- *User code*: user code is the code that the user provides to be measured.
+- *Run*: one run is one execution of the user code.
+- *Sample*: one sample is one data point obtained by measuring the time it takes
+ to perform a certain number of runs. One sample can consist of more than one
+ run if the clock available does not have enough resolution to accurately
+ measure a single run. All samples for a given benchmark execution are obtained
+ with the same number of runs.
+
+## Execution procedure
+
+Now I can explain how a benchmark is executed in nonius. There are three main
+steps, though the first does not need to be repeated for every benchmark.
+
+1. *Environmental probe*: before any benchmarks can be executed, the clock's
+resolution is estimated. A few other environmental artifacts are also estimated
+at this point, like the cost of calling the clock function, but they almost
+never have any impact on the results.
+
+2. *Estimation*: the user code is executed a few times to obtain an estimate of
+the amount of runs that should be in each sample. This also has the potential
+effect of bringing relevant code and data into the caches before the actual
+measurement starts.
+
+3. *Measurement*: all the samples are collected sequentially by performing the
+number of runs estimated in the previous step for each sample.
+
+This already gives us one important rule for writing benchmarks for nonius: the
+benchmarks must be repeatable. The user code will be executed several times, and
+the number of times it will be executed during the estimation step cannot be
+known beforehand since it depends on the time it takes to execute the code.
+User code that cannot be executed repeatedly will lead to bogus results or
+crashes.
+
+## The optimizer
+
+Sometimes the optimizer will optimize away the very code that you want to
+measure. There are several ways to use results that will prevent the optimizer
+from removing them. You can use the `volatile` keyword, or you can output the
+value to standard output or to a file, both of which force the program to
+actually generate the value somehow.
+
+Nonius adds a third option. The values returned by any function provided as user
+code are guaranteed to be evaluated and not optimized out. This means that if
+your user code consists of computing a certain value, you don't need to bother
+with using `volatile` or forcing output. Just `return` it from the function.
+That helps keep the code natural.
+
+Here's an example:
+
+ // may measure nothing at all by skipping the long calculation since its
+ // result is not used
+ NONIUS_BENCHMARK("no return", [] { long_calculation(); })
+
+ // the result of long_calculation() is guaranteed to be computed somehow
+ NONIUS_BENCHMARK("with return", [] { return long_calculation(); })
+
+However, there's no other form of control over the optimizer whatsoever. It is
+up to you to write a benchmark that actually measures what you want and doesn't
+just measure the time to do a whole bunch of nothing.
+
+To sum up, there are two simple rules: whatever you would do in handwritten code
+to control optimization still works in nonius; and nonius makes return values
+from user code into observable effects that can't be optimized away.
+
+## Interface
+
+The recommended way to use nonius is with the single header form. You can just
+`#include` that one header and everything is available.
+
+There are two distinct parts of the nonius interface: specifying benchmarks, and
+running benchmarks.
+
+### Specification
+
+Nonius includes an imperative interface to specify benchmarks for execution, but
+the declarative interface is much simpler. As of this writing the imperative
+interface is still subject to change, so it won't be documented.
+
+The declarative interface consists of the `NONIUS_BENCHMARK` macro. This macro
+expands to some machinery that registers the benchmark in a global registry that
+can be accessed by the standard runner.
+
+`NONIUS_BENCHMARK` takes two parameters: a string literal with a unique name to
+identify the benchmark, and a callable object with the actual code. This
+callable object is usually provided as a lambda expression.
+
+There are two types of callable objects that can be provided. The simplest ones
+take no arguments and just run the user code that needs to be measured. However,
+if the callable can be called with a `nonius::chronometer` argument, some
+advanced features are available. The simple callables are invoked once per run,
+while the advanced callables are invoked exactly twice: once during the
+estimation step, and once during the measurement step.
+
+ NONIUS_BENCHMARK("simple", [] { return long_computation(); });
+
+ NONIUS_BENCHMARK("advanced", [](nonius::chronometer meter) {
+ set_up();
+ meter.measure([] { return long_computation(); });
+ });
+
+These advanced callables no longer consist entirely of user code to be measured.
+In these cases, the code to be measured is provided via the
+`nonius::chronometer::measure` member function. This allows you to set up any
+kind of state that might be required for the benchmark but is not to be included
+in the measurements, like making a vector of random integers to feed to a
+sorting algorithm.
+
+A single call to `nonius::chronometer::measure` performs the actual measurements
+by invoking the callable object passed in as many times as necessary. Anything
+that needs to be done outside the measurement can be done outside the call to
+`measure`.
+
+The callable object passed in to `measure` can optionally accept an `int`
+parameter.
+
+ meter.measure([](int i) { return long_computation(i); });
+
+If it accepts an `int` parameter, the sequence number of each run will be passed
+in, starting with 0. This is useful if you want to measure some mutating code,
+for example. The number of runs can be known beforehand by calling
+`nonius::chronometer::runs`; with this one can set up a different instance to be
+mutated by each run.
+
+    std::vector<std::string> v(meter.runs());
+    std::fill(v.begin(), v.end(), test_string());
+    meter.measure([&v](int i) { in_place_escape(v[i]); });
+
+Note that it is not possible to simply use the same instance for different runs
+and reset it between runs, since that would pollute the measurements with the
+resetting code.
+
+All of these tools give you a lot of mileage, but there are two things that
+still need special handling: constructors and destructors. The problem is that
+if you use automatic objects they get destroyed at the end of the scope, so you
+end up measuring the time for construction and destruction together. And if you
+use dynamic allocation instead, you end up including the time to allocate
+memory in the measurements.
+
+To solve this conundrum, nonius provides class templates that let you manually
+construct and destroy objects without dynamic allocation and in a way that lets
+you measure construction and destruction separately.
+
+{% highlight cpp %}
+NONIUS_BENCHMARK("construct", [](nonius::chronometer meter)
+{
+    std::vector<nonius::storage_for<std::string>> storage(meter.runs());
+    meter.measure([&](int i) { storage[i].construct("thing"); });
+})
+
+NONIUS_BENCHMARK("destroy", [](nonius::chronometer meter)
+{
+    std::vector<nonius::destructable_object<std::string>> storage(meter.runs());
+    for(auto&& o : storage)
+        o.construct("thing");
+    meter.measure([&](int i) { storage[i].destruct(); });
+})
+{% endhighlight %}
+
+`nonius::storage_for` objects are just pieces of raw storage suitable for `T`
+objects. You can use the `nonius::storage_for::construct` member function to
+call a constructor and create an object in that storage. So if you want to
+measure the time it takes for a certain constructor to run, you can just
+measure the time it takes to run this function.
+
+When the lifetime of a `nonius::storage_for` object ends, if an actual object was
+constructed there it will be automatically destroyed, so nothing leaks.
+
+If you want to measure a destructor, though, you need to use
+`nonius::destructable_object`. These objects are similar to
+`nonius::storage_for` in that construction of the `T` object is manual, but
+they do not destroy anything automatically. Instead, you are required to call
+the `nonius::destructable_object::destruct` member function, which is what you
+can use to measure the destruction time.
+
+### Execution
+
+Nonius includes an implementation of `main()` that provides a command-line
+runner. This means you can just make your benchmarks into an executable and
+you're good to go. If you want that default implementation of `main`, just
+`#define NONIUS_RUNNER` before #including the nonius header.
+
+You can also write your own `main` if you need something fancy, but for now that
+API is subject to change and not documented.
+
+Invoking the standard runner with the `--help` flag provides information about
+the options available. Here are some examples of common choices:
+
+> Run all benchmarks and provide a simple textual report
+>
+> $ runner
+>
+> Run all benchmarks and provide extra details
+>
+> $ runner -v
+>
+> Run all benchmarks collecting 500 samples instead of the default 100, and
+> report extra details
+>
+> $ runner -v -s 500
+>
+> Run all benchmarks and output all samples to a CSV file named `results.csv`
+>
+> $ runner -r csv -o results.csv
+>
+> Run all benchmarks and output a JUnit compatible report named `results.xml`
+>
+> $ runner -r junit -o results.xml
+>
+> Run all benchmarks and output an HTML report named `results.html` with the
+> title "Some benchmarks", using 250 samples per benchmark
+>
+> $ runner -r html -o results.html -t "Some benchmarks" -s 250
+>
From c373e25f570f2c3435e727a3578e5cefff9e5377 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:32:45 +0100
Subject: [PATCH 06/21] Links to authoring guide in index
---
index.md | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/index.md b/index.md
index 265d8ee..b1fbcd4 100644
--- a/index.md
+++ b/index.md
@@ -37,12 +37,16 @@ VC++ everything gets linked automatically without intervention.
### Authoring benchmarks
-There are examples of both simple and advanced usage in the [examples] folder.
-For now that is the primary documentation. Once I am content with a stable
-interface there will be more detailed explanations.
+Writing benchmarks with nonius is not complicated, but there are several things
+to keep in mind when doing so. There is a separate [guide] about the subject,
+and there are examples of both simple and advanced usage in the [examples]
+folder.
+ [guide]: authoring-benchmarks
[examples]: https://github.com/rmartinho/nonius/tree/devel/examples
+### Compiling benchmarks
+
If you just want to run a quick benchmark you can put everything in one file, as
in the examples. If you have something more complicated and prefer to separate
things into different files, it is recommended that you create one small file
@@ -68,7 +72,8 @@ the benchmarks.
Then you compile and link everything together as normal. Keep in mind that the
statistical analysis is multithreaded so you may need to pass extra flags to
your compiler (like `-pthread` in GCC). That gives you an executable with your
-benchmarks and with the nonius standard benchmark runner.
+benchmarks and with the nonius standard benchmark runner. And don't forget to
+enable optimisations!
### Running benchmarks
@@ -99,7 +104,7 @@ benchmarking code that takes significantly more than the clock resolution to
run, it will probably run it once for each sample. However, if one run of that
code is too fast, nonius will scale it by running the code more than once per
sample. This obviously implies that your benchmarks should be completely
-reentrant. There is also the underlying assumption that the time it takes to run
+repeatable. There is also the underlying assumption that the time it takes to run
the code does not vary wildly.
{% highlight console %}
From 20a8a9602a64494472a199f411f0032227e4fd19 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:43:12 +0100
Subject: [PATCH 07/21] Fixes ordered list style
---
css/stylesheet.css | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/css/stylesheet.css b/css/stylesheet.css
index 8d7bac3..8069fe9 100644
--- a/css/stylesheet.css
+++ b/css/stylesheet.css
@@ -284,9 +284,13 @@ ul li {
padding-left: 20px;
}
+ol {
+ padding: 10px;
+}
+
ol li {
- list-style: decimal inside;
- padding-left: 3px;
+ list-style: decimal;
+ padding-left: 10px;
}
dl dt {
From dd910a1032f5a44fe929cfb93b2cee7f78d74273 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:52:53 +0100
Subject: [PATCH 08/21] Moves execution guide out of authoring guide
---
authoring-benchmarks.md | 67 +++++++++--------------------------------
css/stylesheet.css | 5 ++-
index.md | 45 ++++++++++++++++++++++-----
3 files changed, 53 insertions(+), 64 deletions(-)
diff --git a/authoring-benchmarks.md b/authoring-benchmarks.md
index 83dd663..c966b84 100644
--- a/authoring-benchmarks.md
+++ b/authoring-benchmarks.md
@@ -59,12 +59,14 @@ That helps with keeping the code in a natural fashion.
Here's an example:
- // may measure nothing at all by skipping the long calculation since its
- // result is not used
- NONIUS_BENCHMARK("no return", [] { long_calculation(); })
+{% highlight cpp %}
+// may measure nothing at all by skipping the long calculation since its
+// result is not used
+NONIUS_BENCHMARK("no return", [] { long_calculation(); })
- // the result of long_calculation() is guaranteed to be computed somehow
- NONIUS_BENCHMARK("with return", [] { return long_calculation(); })
+// the result of long_calculation() is guaranteed to be computed somehow
+NONIUS_BENCHMARK("with return", [] { return long_calculation(); })
+{% endhighlight %}
However, there's no other form of control over the optimizer whatsoever. It is
up to you to write a benchmark that actually measures what you want and doesn't
@@ -74,15 +76,7 @@ To sum up, there are two simple rules: whatever you would do in handwritten code
to control optimization still works in nonius; and nonius makes return values
from user code into observable effects that can't be optimized away.
-## Interface
-
-The recommended way to use nonius is with the single header form. You can just
-`#include ` and everything is available.
-
-There are two distinct parts of the nonius interface: specifying benchmarks, and
-running benchmarks.
-
-### Specification
+## Benchmark specification
Nonius includes an imperative interface to specify benchmarks for execution, but
the declarative interface is much simpler. As of this writing the imperative
@@ -103,12 +97,14 @@ advanced features are available. The simple callables are invoked once per run,
while the advanced callables are invoked exactly twice: once during the
estimation phase, and another time during the execution phase.
+{% highlight cpp %}
NONIUS_BENCHMARK("simple", [] { return long_computation(); });
NONIUS_BENCHMARK("advanced", [](nonius::chronometer meter) {
set_up();
meter.measure([] { return long_computation(); });
});
+{% endhighlight %}
These advanced callables no longer consist entirely of user code to be measured.
In these cases, the code to be measured is provided via the
@@ -125,7 +121,9 @@ that needs to be done outside the measurement can be done outside the call to
The callable object passed in to `measure` can optionally accept an `int`
parameter.
+{% highlight cpp %}
meter.measure([](int i) { return long_computation(i); });
+{% endhighlight %}
If it accepts an `int` parameter, the sequence number of each run will be passed
in, starting with 0. This is useful if you want to measure some mutating code,
@@ -133,9 +131,11 @@ for example. The number of runs can be known beforehand by calling
`nonius::chronometer::runs`; with this one can set up a different instance to be
mutated by each run.
+{% highlight cpp %}
std::vector v(meter.runs());
std::fill(v.begin(), v.end(), test_string());
meter.measure([&v](int i) { in_place_escape(v[i]); });
+{% endhighlight %}
Note that it is not possible to simply use the same instance for different runs
and resetting it between each run since that would pollute the measurements with
@@ -184,42 +184,3 @@ it does not destroy anything automatically. Instead, you are required to call
the `nonius::destructable_object::destruct` member function, which is what you
can use to measure the destruction time.
-### Execution
-
-Nonius includes an implementation of `main()` that provides a command-line
-runner. This means you can just make your benchmarks into an executable and
-you're good to go. If you want that default implementation of `main`, just
-`#define NONIUS_RUNNER` before #including the nonius header.
-
-You can also write your own main if you need something fancy, but for now that
-API is subject to change and not documented.
-
-Invoking the standard runner with the `--help` flag provides information about
-the options available. Here are some examples of common choices:
-
-> Run all benchmarks and provide a simple textual report
->
-> $ runner
->
-> Run all benchmarks and provide extra details
->
-> $ runner -v
->
-> Run all benchmarks collecting 500 samples instead of the default 100, and
-> report extra details
->
-> $ runner -v -s 500
->
-> Run all benchmarks and output all samples to a CSV file named `results.csv`
->
-> $ runner -r csv -o results.csv
->
-> Run all benchmarks and output a JUnit compatible report named `results.xml`
->
-> $ runner -r junit -o results.xml
->
-> Run all benchmarks and output an HTML report named `results.html` with the
-> title "Some benchmarks", using 250 samples per benchmark
->
-> $ runner -r html -o results.html -t "Some benchmarks" -s 250
->
diff --git a/css/stylesheet.css b/css/stylesheet.css
index 8069fe9..b69b94e 100644
--- a/css/stylesheet.css
+++ b/css/stylesheet.css
@@ -285,12 +285,11 @@ ul li {
}
ol {
- padding: 10px;
+ padding-left: 10px;
}
-
ol li {
list-style: decimal;
- padding-left: 10px;
+ padding-left: 3px;
}
dl dt {
diff --git a/index.md b/index.md
index b1fbcd4..b236708 100644
--- a/index.md
+++ b/index.md
@@ -53,6 +53,9 @@ things into different files, it is recommended that you create one small file
with the runner infrastructure by #defining the macro `NONIUS_RUNNER` and
then #including the nonius header.
+You can also write your own `main` function instead, if you need something
+fancy, but for now that API is subject to change and not documented.
+
{% highlight cpp %}
// runner file contents
#define NONIUS_RUNNER
@@ -77,14 +80,40 @@ enable optimisations!
### Running benchmarks
-The standard runner has several command-line options for configuring a run.
-Pass the `--help` flag to the compiled runner to see the various flags and a
-short description of each. The runner includes all your benchmarks and it comes
-equipped with four reporters: plain text, CSV with raw timings, JUnit-compatible
-XML, and an HTML file with a scatter plot of the timings.
-
-If you execute the runner without requesting a particular reporter, nonius will
-use plain text to report the results.
+Invoking the standard runner with the `--help` flag provides information about
+the options available. Here are some examples of common choices:
+
+> Run all benchmarks and provide a simple textual report
+>
+> $ runner
+>
+> Run all benchmarks and provide extra details
+>
+> $ runner -v
+>
+> Run all benchmarks collecting 500 samples instead of the default 100, and
+> report extra details
+>
+> $ runner -v -s 500
+>
+> Run all benchmarks and output all samples to a CSV file named `results.csv`
+>
+> $ runner -r csv -o results.csv
+>
+> Run all benchmarks and output a JUnit compatible report named `results.xml`
+>
+> $ runner -r junit -o results.xml
+>
+> Run all benchmarks and output an HTML report named `results.html` with the
+> title "Some benchmarks", using 250 samples per benchmark
+>
+> $ runner -r html -o results.html -t "Some benchmarks" -s 250
+>
+
+The runner includes all your benchmarks and it comes equipped with four
+reporters: plain text, CSV with raw timings, JUnit-compatible XML, and an HTML
+file with a scatter plot of the timings. If you execute the runner without
+requesting a particular reporter, it will use plain text to report the results.
The first thing that nonius does when running is testing the clock. By default
it uses the clock provided by `std::chrono::high_resolution_clock`. The runner
From 8af5579f3a689e084dd8022cc490697b4c927aa6 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:53:51 +0100
Subject: [PATCH 09/21] Fixes grammar
---
index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/index.md b/index.md
index b236708..e9f26df 100644
--- a/index.md
+++ b/index.md
@@ -155,7 +155,7 @@ mean: 41.3622 ns, lb 41.3479 ns, ub 41.4251 ns, ci 0.95
std dev: 0.130953 ns, lb 0.0209896 ns, ub 0.309054 ns, ci 0.95
{% endhighlight %}
-After all that, the runner will tell you if about any samples that are outliers
+After all that, the runner will tell you about any samples that are outliers
and whether those might be important: if they affect the variance greatly, our
measurements might not be very trustworthy. It could be that there is another
factor affecting our measurements (say, some other application that was doing
From 13795518ec2bc26650f64d0388cf2558cff488ea Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:55:00 +0100
Subject: [PATCH 10/21] Fixes quotes
---
css/stylesheet.css | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/css/stylesheet.css b/css/stylesheet.css
index b69b94e..9e39673 100644
--- a/css/stylesheet.css
+++ b/css/stylesheet.css
@@ -273,10 +273,9 @@ p a {
}
blockquote {
- font-size: 1.6em;
border-left: 10px solid #e9e9e9;
- margin-bottom: 20px;
- padding: 0 0 0 30px;
+ margin-bottom: 10px;
+ padding: 0 0 0 10px;
}
ul li {
From 87335d0b00e43706ada1c9b4f71514d3f6d8e732 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 19 Feb 2015 14:58:36 +0100
Subject: [PATCH 11/21] Fixes indentation in examples
---
authoring-benchmarks.md | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/authoring-benchmarks.md b/authoring-benchmarks.md
index c966b84..6e76b49 100644
--- a/authoring-benchmarks.md
+++ b/authoring-benchmarks.md
@@ -98,12 +98,12 @@ while the advanced callables are invoked exactly twice: once during the
estimation phase, and another time during the execution phase.
{% highlight cpp %}
- NONIUS_BENCHMARK("simple", [] { return long_computation(); });
+NONIUS_BENCHMARK("simple", [] { return long_computation(); });
- NONIUS_BENCHMARK("advanced", [](nonius::chronometer meter) {
- set_up();
- meter.measure([] { return long_computation(); });
- });
+NONIUS_BENCHMARK("advanced", [](nonius::chronometer meter) {
+ set_up();
+ meter.measure([] { return long_computation(); });
+});
{% endhighlight %}
These advanced callables no longer consist entirely of user code to be measured.
@@ -122,7 +122,7 @@ The callable object passed in to `measure` can optionally accept an `int`
parameter.
{% highlight cpp %}
- meter.measure([](int i) { return long_computation(i); });
+meter.measure([](int i) { return long_computation(i); });
{% endhighlight %}
If it accepts an `int` parameter, the sequence number of each run will be passed
@@ -132,9 +132,9 @@ for example. The number of runs can be known beforehand by calling
mutated by each run.
{% highlight cpp %}
- std::vector v(meter.runs());
- std::fill(v.begin(), v.end(), test_string());
- meter.measure([&v](int i) { in_place_escape(v[i]); });
+std::vector v(meter.runs());
+std::fill(v.begin(), v.end(), test_string());
+meter.measure([&v](int i) { in_place_escape(v[i]); });
{% endhighlight %}
Note that it is not possible to simply use the same instance for different runs
From 2f8fb94ee86d0b7b360f46cdd0ae12f025f5fd84 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Sat, 21 Feb 2015 18:32:17 +0100
Subject: [PATCH 12/21] Ignores deps/ folder
---
.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/.gitignore b/.gitignore
index bb44123..56e57dd 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,4 @@ _site/
*.swp
.*.swp
+deps/
From 6c72500ca9c3e3594571642b7ce601a2a972a83b Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Sat, 21 Feb 2015 18:58:58 +0100
Subject: [PATCH 13/21] Adds title to authoring guide
---
authoring-benchmarks.md | 72 +++++++++++++++++++++--------------------
1 file changed, 37 insertions(+), 35 deletions(-)
diff --git a/authoring-benchmarks.md b/authoring-benchmarks.md
index 6e76b49..7e22dc6 100644
--- a/authoring-benchmarks.md
+++ b/authoring-benchmarks.md
@@ -3,6 +3,8 @@ title: Nonius - Authoring benchmarks
layout: default
---
+## Authoring benchmarks
+
Writing benchmarks is not easy. Nonius simplifies certain aspects but you'll
always need to take care about various aspects. Understanding a few things about
the way nonius runs your code will be very helpful when writing your benchmarks.
@@ -18,7 +20,7 @@ guide.
measure a single run. All samples for a given benchmark execution are obtained
with the same number of runs.
-## Execution procedure
+### Execution procedure
Now I can explain how a benchmark is executed in nonius. There are three main
steps, though the first does not need to be repeated for every benchmark.
@@ -43,40 +45,7 @@ known beforehand since it depends on the time it takes to execute the code.
User code that cannot be executed repeatedly will lead to bogus results or
crashes.
-## The optimizer
-
-Sometimes the optimizer will optimize away the very code that you want to
-measure. There are several ways to use results that will prevent the optimiser
-from removing them. You can use the `volatile` keyword, or you can output the
-value to standard output or to a file, both of which force the program to
-actually generate the value somehow.
-
-Nonius adds a third option. The values returned by any function provided as user
-code are guaranteed to be evaluated and not optimised out. This means that if
-your user code consists of computing a certain value, you don't need to bother
-with using `volatile` or forcing output. Just `return` it from the function.
-That helps with keeping the code in a natural fashion.
-
-Here's an example:
-
-{% highlight cpp %}
-// may measure nothing at all by skipping the long calculation since its
-// result is not used
-NONIUS_BENCHMARK("no return", [] { long_calculation(); })
-
-// the result of long_calculation() is guaranteed to be computed somehow
-NONIUS_BENCHMARK("with return", [] { return long_calculation(); })
-{% endhighlight %}
-
-However, there's no other form of control over the optimizer whatsoever. It is
-up to you to write a benchmark that actually measures what you want and doesn't
-just measure the time to do a whole bunch of nothing.
-
-To sum up, there are two simple rules: whatever you would do in handwritten code
-to control optimization still works in nonius; and nonius makes return values
-from user code into observable effects that can't be optimized away.
-
-## Benchmark specification
+### Benchmark specification
Nonius includes an imperative interface to specify benchmarks for execution, but
the declarative interface is much simpler. As of this writing the imperative
@@ -184,3 +153,36 @@ it does not destroy anything automatically. Instead, you are required to call
the `nonius::destructable_object::destruct` member function, which is what you
can use to measure the destruction time.
+### The optimizer
+
+Sometimes the optimizer will optimize away the very code that you want to
+measure. There are several ways to use results that will prevent the optimiser
+from removing them. You can use the `volatile` keyword, or you can output the
+value to standard output or to a file, both of which force the program to
+actually generate the value somehow.
+
+Nonius adds a third option. The values returned by any function provided as user
+code are guaranteed to be evaluated and not optimised out. This means that if
+your user code consists of computing a certain value, you don't need to bother
+with using `volatile` or forcing output. Just `return` it from the function.
+That helps with keeping the code in a natural fashion.
+
+Here's an example:
+
+{% highlight cpp %}
+// may measure nothing at all by skipping the long calculation since its
+// result is not used
+NONIUS_BENCHMARK("no return", [] { long_calculation(); })
+
+// the result of long_calculation() is guaranteed to be computed somehow
+NONIUS_BENCHMARK("with return", [] { return long_calculation(); })
+{% endhighlight %}
+
+However, there's no other form of control over the optimizer whatsoever. It is
+up to you to write a benchmark that actually measures what you want and doesn't
+just measure the time to do a whole bunch of nothing.
+
+To sum up, there are two simple rules: whatever you would do in handwritten code
+to control optimization still works in nonius; and nonius makes return values
+from user code into observable effects that can't be optimized away.
+
From 65f2220e6cebce366aa7822743f3f1faae9d24a2 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Sat, 28 Mar 2015 16:38:39 +0100
Subject: [PATCH 14/21] No mention of imperative for now
---
authoring-benchmarks.md | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/authoring-benchmarks.md b/authoring-benchmarks.md
index 7e22dc6..ec53665 100644
--- a/authoring-benchmarks.md
+++ b/authoring-benchmarks.md
@@ -47,11 +47,8 @@ crashes.
### Benchmark specification
-Nonius includes an imperative interface to specify benchmarks for execution, but
-the declarative interface is much simpler. As of this writing the imperative
-interface is still subject to change, so it won't be documented.
-
-The declarative interface consists of the `NONIUS_BENCHMARK` macro. This macro
+Nonius includes a simple declarative interface to specify benchmarks for
+execution. This declarative interface consists of the `NONIUS_BENCHMARK` macro. This macro
expands to some machinery that registers the benchmark in a global registry that
can be accessed by the standard runner.
From a5d0eb33170e706e57e4d0cdabc559a7b8dd58a7 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Wed, 3 Jun 2015 16:59:04 +0200
Subject: [PATCH 15/21] Add information about new feature macros
---
index.md | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/index.md b/index.md
index e9f26df..acad332 100644
--- a/index.md
+++ b/index.md
@@ -31,7 +31,9 @@ are no guarantees. Newer versions of these compilers are also supported.
The library depends on [Boost] for a few mathematical functions, for some
string algorithms, and, in some versions of VC++, for the timing functions as
well. Boost.Chrono is not a header-only library, but since it is only used with
-VC++ everything gets linked automatically without intervention.
+VC++ everything gets linked automatically without intervention. If desired,
+usage of Boost.Chrono can be forced by #defining the macro
+`NONIUS_USE_BOOST_CHRONO`.
[Boost]: http://www.boost.org
@@ -114,6 +116,11 @@ The runner includes all your benchmarks and it comes equipped with four
reporters: plain text, CSV with raw timings, JUnit-compatible XML, and an HTML
file with a scatter plot of the timings. If you execute the runner without
requesting a particular reporter, it will use plain text to report the results.
+When compiling you can selectively disable any or all of the extra reporters
+by #defining some macros before #including the runner.
+`NONIUS_DISABLE_EXTRA_REPORTERS` disables everything but plain text;
+`NONIUS_DISABLE_X_REPORTER`, where `X` is one of `CSV`, `JUNIT`, or `HTML`
+disables a particular reporter.
The first thing that nonius does when running is testing the clock. By default
it uses the clock provided by `std::chrono::high_resolution_clock`. The runner
From e0d320f442f5a776889e2f76f4fdfbed8edc0aba Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 4 Jun 2015 13:10:33 +0200
Subject: [PATCH 16/21] Minor doc tweaks
---
_layouts/default.html | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/_layouts/default.html b/_layouts/default.html
index aa3faa9..f8da822 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -10,7 +10,7 @@
- Nonius by rmartinho
+ Nonius
@@ -19,7 +19,7 @@
Nonius
-
A C++ benchmarking framework
+
A C++ micro-benchmarking framework
@@ -29,7 +29,7 @@
A C++ benchmarking framework
From d12bcb6d3f03aa182a010363e36f190caecc4608 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 4 Jun 2015 13:12:13 +0200
Subject: [PATCH 17/21] Docs title change
---
_layouts/default.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/_layouts/default.html b/_layouts/default.html
index f8da822..cb286bf 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -10,7 +10,7 @@
- Nonius
+ Nonius: statistics-powered micro-benchmarking framework
From 2ae3deff066748b0c08760e447521a228e4f3d84 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 4 Jun 2015 13:15:00 +0200
Subject: [PATCH 18/21] Mention disabling of html reporter in license section
---
index.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/index.md b/index.md
index acad332..7367398 100644
--- a/index.md
+++ b/index.md
@@ -202,7 +202,8 @@ on it as I am allowed.
However, currently nonius makes use of some code distributed under the
[CC-BY-NC] and the [MIT] licenses. The `html` reporter uses the [Highcharts JS]
and [jQuery] libraries for the interactive charts and the [cpptemplate] library
-for generating HTML from a template.
+for generating HTML from a template. If you want to use only the public domain
+code for whatever reason, you can disable the `html` reporter easily.
[CC-BY-NC]: http://creativecommons.org/licenses/by-nc/3.0/
[MIT]: https://bitbucket.org/ginstrom/cpptemplate/raw/d4263ca998038f7ae18aeb9d2358f0c11f00552d/LICENSE.txt
From 5cc62c338bf928819113d600a4322fc396382446 Mon Sep 17 00:00:00 2001
From: Martinho Fernandes
Date: Thu, 25 Jun 2015 16:36:37 +0200
Subject: [PATCH 19/21] Add favicon to docs
---
_layouts/default.html | 1 +
favicon.png | Bin 0 -> 35150 bytes
2 files changed, 1 insertion(+)
create mode 100644 favicon.png
diff --git a/_layouts/default.html b/_layouts/default.html
index cb286bf..5be426e 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -4,6 +4,7 @@
+
diff --git a/favicon.png b/favicon.png
new file mode 100644
index 0000000000000000000000000000000000000000..165e30c011ecc57755937167b1c34206bb4ee041
GIT binary patch
literal 35150