Commit 3c62649

Merge pull request #6 from jcmgray/jcmgray-benchmarks

add some benchmarks to the readme

jcmgray authored Sep 27, 2024
2 parents da16bcb + 13cf165

Showing 1 changed file with 45 additions and 0 deletions.
45 changes: 45 additions & 0 deletions README.md
@@ -64,6 +64,51 @@ tree.plot_rubberband()
![optimal-8x8-order](https://github.com/jcmgray/cotengrust/assets/8982598/f8e18ff2-5ace-4e46-81e1-06bffaef5e45)


## Benchmarks

The following benchmarks illustrate performance and may be a useful comparison point for other implementations.

---

First, the runtime of the optimal algorithm on random 3-regular graphs,
with all bond sizes set to 2, for different `minimize` targets:

<img src="https://github.com/user-attachments/assets/e1b906a8-e234-4558-b183-7141c41beb24" width="400">

Data is taken over 20 instances; lines show the mean and bands the standard error on the mean.
Note how much easier it is to find optimal paths when minimizing only the *maximum* intermediate
size or cost (vs. the *total* over all contractions). While the runtime generally scales
exponentially, for some specific geometries it may reduce to polynomial.
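As a rough sketch of this setup (using a fixed 3-regular graph, the cube graph, rather than a random instance; the `optimize_optimal` call and its `minimize` keyword are assumed from the library's interface):

```python
import string

# one tensor per node, one size-2 bond per edge of the 3-regular cube graph
edges = [(0, 1), (1, 2), (2, 3), (3, 0),
         (4, 5), (5, 6), (6, 7), (7, 4),
         (0, 4), (1, 5), (2, 6), (3, 7)]

symbols = string.ascii_lowercase
inputs = [[] for _ in range(8)]
for sym, (u, v) in zip(symbols, edges):
    inputs[u].append(sym)
    inputs[v].append(sym)

output = []                                     # contract fully, to a scalar
size_dict = {sym: 2 for sym, _ in zip(symbols, edges)}

try:
    from cotengrust import optimize_optimal
    # 'flops' targets total cost; other minimize targets such as maximum
    # intermediate size are what the benchmark above sweeps over
    path = optimize_optimal(inputs, output, size_dict, minimize="flops")
except ImportError:
    path = None  # cotengrust not installed
```

Each tensor here has exactly three indices (the node degree), which is what makes the 3-regular family a convenient scaling benchmark.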

---

For very large graphs, the `random_greedy` optimizer is appropriate, and there is a tradeoff between how
long one lets it run (`ntrials`) and the best cost it achieves. Here this tradeoff is plotted for various
$N=L\times L$ square grid graphs, with all bond sizes set to 2, for different `ntrials`
(labelled on each marker):

<img src="https://github.com/user-attachments/assets/e319b5cf-25ea-4273-aa0f-4f3acb3beaa6" width="400">

Again, data is taken over 20 runs, with lines and bands showing the mean and standard error on the mean.
In most cases 32-64 trials are sufficient to get close to convergence, but larger or harder
graphs may need more. The empirical scaling of the random-greedy algorithm is very roughly
$\mathcal{O}(N^{1.5})$ here.
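A sketch of the grid setup, with the optimizer call hedged (the `optimize_random_greedy_track_flops` name, its `ntrials` keyword, and the `(path, flops)` return convention are assumptions based on the library's interface):

```python
import string

L = 4  # grid side length; the benchmark sweeps this to much larger N = L * L
symbols = iter(string.ascii_lowercase)
inputs = [[] for _ in range(L * L)]

def node(i, j):
    return i * L + j

# one size-2 bond per horizontal and vertical edge of the L x L grid
for i in range(L):
    for j in range(L):
        if j + 1 < L:
            s = next(symbols)
            inputs[node(i, j)].append(s)
            inputs[node(i, j + 1)].append(s)
        if i + 1 < L:
            s = next(symbols)
            inputs[node(i, j)].append(s)
            inputs[node(i + 1, j)].append(s)

output = []
size_dict = {s: 2 for term in inputs for s in term}

try:
    from cotengrust import optimize_random_greedy_track_flops
    # more trials -> better best-found cost, at linearly more runtime
    path, flops = optimize_random_greedy_track_flops(
        inputs, output, size_dict, ntrials=32
    )
except ImportError:
    path = flops = None  # cotengrust not installed
```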

---

The amplitude of the depth-20 sycamore quantum circuit is a standard benchmark nowadays; it is generally
a harder contraction graph than the 2D lattice. Still, the random-greedy approach can do quite well due to its
sampling of both `temperature` and `costmod`:

<img src="https://github.com/user-attachments/assets/17b34e11-a755-4ed7-b88b-39b9e5424cfc" width="400">

Again, each point is an `ntrials` setting, and the lines and bands show the mean and standard error
on the mean respectively, across 20 repeats. The dashed line shows roughly the best known cost from
other, more advanced methods.
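The per-trial sampling can be sketched as follows, on a tiny stand-in network (a ring of 6 tensors rather than the sycamore circuit's network; the `costmod` and `temperature` keywords and their `(low, high)` range convention are assumptions based on the optimizer's signature):

```python
import string

# a ring of 6 matrices sharing size-2 bonds, as a stand-in tensor network
n = 6
inputs = [[string.ascii_lowercase[i], string.ascii_lowercase[(i + 1) % n]]
          for i in range(n)]
output = []
size_dict = {s: 2 for term in inputs for s in term}

try:
    from cotengrust import optimize_random_greedy_track_flops
    # each trial samples its own costmod and temperature from these ranges,
    # which is what helps on harder graphs like the sycamore circuit
    path, flops = optimize_random_greedy_track_flops(
        inputs, output, size_dict,
        ntrials=64,
        costmod=(0.1, 4.0),
        temperature=(0.001, 1.0),
    )
except ImportError:
    path = flops = None  # cotengrust not installed
```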

---


## API

The optimize functions follow the API of the Python implementations in `cotengra.pathfinders.path_basic.py`.
