from primate.trace import hutch, hutchpp, xtrace
from primate.random import symmetric
A = symmetric(150) ## random symmetric matrix

print(f"Actual trace: {A.trace():6f}") ## Exact trace
print(f"Hutch: {hutch(A):6f}")         ## Monte-Carlo estimator
print(f"Hutch++: {hutchpp(A):6f}")     ## Monte-Carlo estimator w/ deflation
print(f"XTrace: {xtrace(A):6f}")       ## Epperly's algorithm
Actual trace: 72.536677
Hutch: 74.240444
Hutch++: 71.987032
XTrace: 72.536677
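All three estimators build on the Girard–Hutchinson identity \mathbb{E}[v^T A v] = \mathrm{tr}(A) for isotropic random v. The following is a minimal numpy sketch of the plain (deflation-free) estimator — the helper `hutch_naive` is hypothetical and not part of primate's API:

```python
import numpy as np

def hutch_naive(S: np.ndarray, n_samples: int = 512, seed: int = 0) -> float:
    """Plain Girard-Hutchinson estimator: average of v^T S v over Rademacher v."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    samples = np.empty(n_samples)
    for i in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)  # E[v v^T] = I, so E[v^T S v] = tr(S)
        samples[i] = v @ (S @ v)
    return float(samples.mean())

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
S = Q @ np.diag(rng.uniform(size=50)) @ Q.T  # small random symmetric matrix
print(abs(hutch_naive(S) - np.trace(S)))     # absolute error of the estimate
```

Hutch++ and XTrace reduce the variance of this basic scheme via low-rank deflation, which is why their estimates above sit much closer to the true trace for the same sample budget.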
Here, A can be a numpy matrix, a sparse matrix, or a LinearOperator. If the spectral sum of interest is not just the sum of the eigenvalues but rather the sum under composition with some spectral function f(\lambda), i.e. the quantity of interest is of the form:
\mathrm{tr}(f(A)) = \mathrm{tr}(U f(\Lambda) U^T) = \sum_{i=1}^n f(\lambda_i), \quad \text{where } A = U \Lambda U^T
For example, one might compute the log-determinant of a positive definite matrix as follows:
from primate.operators import matrix_function
A = symmetric(150, pd=True)       # eigenvalues in (0, 1]
M = matrix_function(A, fun="log") # or fun=np.log

print(f"logdet(A) : {np.linalg.slogdet(A)[1]:6f}")
print(f"tr(log(A)): {np.sum(np.log(np.linalg.eigvalsh(A))):6f}")
print(f"Hutch : {hutch(M):6f}")
print(f"Hutch++ : {hutchpp(M):6f}")
print(f"XTrace : {xtrace(M):6f}")
logdet(A) : -184.692737
tr(log(A)): -184.692737
Hutch : -193.367585
Hutch++ : -182.208546
XTrace : -184.621731
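The identity this example relies on, \log\det(A) = \mathrm{tr}(\log(A)) = \sum_i \log \lambda_i for positive definite A, can be checked directly with numpy, independently of primate:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))
ew = rng.uniform(0.05, 1.0, size=100)  # positive eigenvalues
B = Q @ np.diag(ew) @ Q.T              # positive definite matrix

sign, logdet = np.linalg.slogdet(B)             # numerically safe log-determinant
tr_log = np.sum(np.log(np.linalg.eigvalsh(B)))  # tr(log(B)) = sum of log-eigenvalues
print(sign, logdet, tr_log)                     # sign is 1.0; logdet matches tr_log
```

Using `slogdet` (rather than `np.log(np.linalg.det(B))`) avoids underflow when the determinant itself is tiny, as is typical for large matrices with eigenvalues below 1.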
Diagonal estimation
The diagonals of matrices and matrix functions (implicitly or explicitly represented) can also be estimated via a nearly identical API to the one used for the trace.
from primate.diagonal import diag, xdiag
d1 = A.diagonal()
d2 = diag(A)

print(f"Diagonal (true): {d1}")
print(f"Diagonal est : {d2}")
Diagonal (true): [0.43039105 0.48917618 0.48019971 0.47211575 0.47279553 0.38957475
 0.47718335 0.49324081 0.44885169 0.4277269 0.47087562 0.40021082
 0.51547132 0.50355377 0.43224247 0.49904148 0.45830626 0.43908647
 0.49578801 0.41473005 0.4380252 0.49264303 0.45569725 0.45448873
 0.49354691 0.44135131 0.5123524 0.41117827 0.50954408 0.37621029
 0.41117917 0.41648082 0.47618701 0.48732248 0.44938215 0.43333856
 0.46679709 0.49037643 0.45963049 0.44015421 0.47784954 0.43383009
 0.41779562 0.40693143 0.44369976 0.42771624 0.45130083 0.46434983
 0.45069401 0.45504158 0.45437503 0.46827701 0.39472713 0.42624312
 0.4595601 0.5121464 0.45399921 0.43078203 0.46551555 0.44837789
 0.44421661 0.39559263 0.47489302 0.48145368 0.42333797 0.44775148
 0.44606398 0.46929713 0.45426927 0.50505132 0.42504741 0.53282389
 0.42652887 0.41920312 0.43661854 0.48297299 0.48787196 0.47261286
 0.42495842 0.50271214 0.46050455 0.42619204 0.44340256 0.52068483
 0.48418565 0.42781388 0.48397916 0.46672302 0.4538436 0.4041382
 0.45380718 0.42478812 0.4503791 0.4129976 0.45673551 0.4088903
 0.5177648 0.47017075 0.50772493 0.4640203 0.44584979 0.52458109
 0.46171403 0.39565151 0.45915515 0.4831754 0.47053768 0.44984129
 0.49918333 0.4513592 0.43904331 0.46739028 0.40243068 0.45461889
 0.48136249 0.37343887 0.47656894 0.44833618 0.4642718 0.46425367
 0.4396633 0.45419251 0.43057853 0.42926385 0.51863643 0.42451582
 0.46106745 0.46384575 0.46696267 0.50305207 0.47860004 0.51036545
 0.44803493 0.43002539 0.41305307 0.44538271 0.443284 0.41770939
 0.4444684 0.48169309 0.4389208 0.45038395 0.43559241 0.50012206
 0.54438985 0.41679314 0.44648339 0.4874597 0.49814334 0.39021321]
Diagonal est : [0.4128793 0.46909619 0.46910046 0.5084607 0.4290224 0.39593945
 0.45256075 0.47055374 0.45511556 0.43204271 0.43587872 0.42170364
 0.51325711 0.49237296 0.43155544 0.47099311 0.44998781 0.43656665
 0.50922468 0.41706541 0.41830947 0.5028076 0.46750451 0.48369103
 0.52780259 0.43014709 0.52558716 0.37352875 0.53748579 0.39841121
 0.38251536 0.44768673 0.44863359 0.49694396 0.45209826 0.45974035
 0.46906822 0.4836988 0.43171736 0.46365202 0.46533754 0.40974034
 0.38970675 0.41829271 0.45045844 0.40360921 0.46909385 0.45468882
 0.42620199 0.43833108 0.46759341 0.47787056 0.36319605 0.40968787
 0.45839489 0.48724121 0.44554116 0.43381381 0.46668268 0.43166525
 0.4367249 0.40380281 0.47487089 0.47503138 0.42141051 0.40831931
 0.41111861 0.46392608 0.48468158 0.50221113 0.4267977 0.50683815
 0.44670193 0.42080076 0.39849286 0.45631293 0.50799767 0.49588713
 0.41904142 0.501057 0.43516668 0.42627724 0.41813385 0.45841907
 0.48692434 0.43427885 0.45806324 0.47426466 0.49838729 0.37467643
 0.43432659 0.41503367 0.42361309 0.4534481 0.45442027 0.39641301
 0.49577302 0.46029685 0.48177843 0.49812826 0.47003316 0.46957324
 0.4705284 0.39502745 0.44140719 0.47528234 0.50620268 0.4368425
 0.49147006 0.46268273 0.46241059 0.46602402 0.39172531 0.42443054
 0.44625376 0.37060657 0.48127276 0.42414685 0.48129331 0.48512297
 0.47771264 0.45382653 0.45525822 0.44734659 0.53069052 0.42277241
 0.47087112 0.49024026 0.45307882 0.50674378 0.47194383 0.52304702
 0.41989139 0.45417754 0.36090709 0.44793991 0.44182605 0.39300832
 0.45818277 0.49261867 0.45057148 0.49345532 0.44785415 0.48727228
 0.56923258 0.4351688 0.42631992 0.43998622 0.46885881 0.39889632]
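Diagonal estimation rests on a sibling of the Hutchinson identity: \mathbb{E}[v \odot Av] = \mathrm{diag}(A) when v has i.i.d. Rademacher entries, since the off-diagonal contributions average out. A self-contained numpy sketch (the helper `diag_naive` is hypothetical, not primate's implementation):

```python
import numpy as np

def diag_naive(S: np.ndarray, n_samples: int = 2000, seed: int = 0) -> np.ndarray:
    """Stochastic diagonal estimator: E[v * (S @ v)] = diag(S) for Rademacher v."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    acc = np.zeros(n)
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * (S @ v)  # elementwise product: off-diagonal terms average out
    return acc / n_samples

rng = np.random.default_rng(1)
S = rng.normal(size=(30, 30))
S = (S + S.T) / 2                                   # small random symmetric matrix
err = np.max(np.abs(diag_naive(S) - np.diagonal(S)))
print(err)                                          # worst-case entrywise error
```

As with the trace, the variance of each entry scales with the off-diagonal mass of the corresponding row, which is what deflation-based variants like xdiag improve on.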
Matrix function approximation
In primate, the matrix function f(A) is not constructed explicitly; instead, the action v \mapsto f(A)v is approximated with a fixed-degree Krylov expansion. This is useful when, for example, the matrix A itself is so large that the corresponding (typically dense) matrix function f(A) \in \mathbb{R}^{n \times n} is simply too large to be represented explicitly. If you just want to approximate the action of a matrix function for a single vector v \in \mathbb{R}^n, simply supply the vector and the matrix alongside the matrix_function call:
from primate.operators import matrix_function
v = np.random.uniform(size=A.shape[0])
y = matrix_function(A, fun=np.exp, v=v)
print(f"f(A)v = {y.T}")
f(A)v = [[ 0.50444509 1.24373753 0.66257527 1.87426017 1.35483644 0.32319258
 0.11222462 1.51720659 1.32246605 1.27418307 0.47376965 0.687332
 0.74285256 1.30279225 0.65919571 1.27426128 0.8169443 0.09650747
 1.06342659 1.15395045 0.28096554 1.25793983 -0.31636388 0.31830014
 0.92254213 0.76316112 0.11851581 1.4088218 0.34046194 0.58564573
 1.56324821 1.03526213 1.51792923 0.28736624 1.50519537 0.82929802
 0.36866717 1.23992286 1.01278857 1.36708799 0.49677659 0.45406084
 -0.15705239 1.24516567 0.75535214 0.94982429 0.64904663 0.59256738
 0.34900934 0.66340444 0.34038333 0.94300462 1.07683855 -0.10514469
 0.93518472 1.48962457 1.13893832 2.15776093 1.74258434 1.4426845
 0.38961754 1.28768415 1.20289649 0.28788456 0.53010087 -0.00470542
 1.00184611 1.11400853 -0.04588343 0.23290699 1.29196692 1.26617789
 1.11329847 0.05133848 0.28510027 1.42144544 0.21635319 0.88236445
 0.07212113 0.63383665 1.03432736 1.33325086 1.33007781 1.45767937
 1.29312312 0.12189215 0.99541253 1.38529962 0.6154369 0.70890881
 0.47060941 1.22446674 1.29472167 1.40862791 0.98609169 0.98835942
 0.91832746 1.72927251 0.0933377 1.11561648 1.13518724 1.15862124
 0.9525651 0.90107864 1.6088772 -0.152054 -0.25593814 0.82607775
 -0.14119817 1.61814449 0.47211876 0.23345096 0.57629949 1.62210853
 1.40725064 0.28228201 0.05393534 0.82527403 2.32822925 1.08710937
 1.60075676 1.9139616 1.73962763 0.97570428 0.05606992 0.27434433
 0.37690065 0.994164 0.29030612 0.3907946 0.49984998 0.98740855
 1.62383136 0.67884834 0.03748605 0.27472411 1.26595132 -0.00699944
 0.45757148 1.18968276 0.44377037 1.20139756 1.7741653 0.48424526
 -0.14270435 0.65139214 1.27905184 -0.09324766 1.20138143 -0.01421768]]
Alternatively, if you prefer an object-oriented approach (or you plan on doing multiple matvecs), you can construct a MatrixFunction instance and use it like any other LinearOperator:
from primate.operators import MatrixFunction
ExpA = MatrixFunction(A, fun=np.exp)
y = ExpA @ v
print(f"exp(A)v = {y}")
exp(A)v = [ 0.50444509 1.24373753 0.66257527 1.87426017 1.35483644 0.32319258
 0.11222462 1.51720659 1.32246605 1.27418307 0.47376965 0.687332
 0.74285256 1.30279225 0.65919571 1.27426128 0.8169443 0.09650747
 1.06342659 1.15395045 0.28096554 1.25793983 -0.31636388 0.31830014
 0.92254213 0.76316112 0.11851581 1.4088218 0.34046194 0.58564573
 1.56324821 1.03526213 1.51792923 0.28736624 1.50519537 0.82929802
 0.36866717 1.23992286 1.01278857 1.36708799 0.49677659 0.45406084
 -0.15705239 1.24516567 0.75535214 0.94982429 0.64904663 0.59256738
 0.34900934 0.66340444 0.34038333 0.94300462 1.07683855 -0.10514469
 0.93518472 1.48962457 1.13893832 2.15776093 1.74258434 1.4426845
 0.38961754 1.28768415 1.20289649 0.28788456 0.53010087 -0.00470542
 1.00184611 1.11400853 -0.04588343 0.23290699 1.29196692 1.26617789
 1.11329847 0.05133848 0.28510027 1.42144544 0.21635319 0.88236445
 0.07212113 0.63383665 1.03432736 1.33325086 1.33007781 1.45767937
 1.29312312 0.12189215 0.99541253 1.38529962 0.6154369 0.70890881
 0.47060941 1.22446674 1.29472167 1.40862791 0.98609169 0.98835942
 0.91832746 1.72927251 0.0933377 1.11561648 1.13518724 1.15862124
 0.9525651 0.90107864 1.6088772 -0.152054 -0.25593814 0.82607775
 -0.14119817 1.61814449 0.47211876 0.23345096 0.57629949 1.62210853
 1.40725064 0.28228201 0.05393534 0.82527403 2.32822925 1.08710937
 1.60075676 1.9139616 1.73962763 0.97570428 0.05606992 0.27434433
 0.37690065 0.994164 0.29030612 0.3907946 0.49984998 0.98740855
 1.62383136 0.67884834 0.03748605 0.27472411 1.26595132 -0.00699944
 0.45757148 1.18968276 0.44377037 1.20139756 1.7741653 0.48424526
 -0.14270435 0.65139214 1.27905184 -0.09324766 1.20138143 -0.01421768]
If you don’t supply a vector v to the matrix_function call, a MatrixFunction instance is constructed (using whatever additional arguments are passed in) and returned. By default, the action of the matrix function is approximated over a fixed-degree Krylov subspace (deg) using the Lanczos method (without reorthogonalization). Some function specializations are inherently more difficult to approximate than others, depending on the smoothness of f and the conditioning of the corresponding operator f(A); in general, the Lanczos method up to degree k approximates the action v \mapsto f(A)v as well as the operator p(A) can, where p is the best-fitting polynomial approximation of f of degree 2k-1.
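A bare-bones version of this scheme is easy to state: run k Lanczos steps on (A, v) to obtain an orthonormal basis Q and a small tridiagonal matrix T, then approximate f(A)v \approx \lVert v \rVert \, Q f(T) e_1. The sketch below (a hypothetical `lanczos_fAv` with full reorthogonalization for simplicity; primate's internals differ) checks this against a dense eigendecomposition:

```python
import numpy as np

def lanczos_fAv(H: np.ndarray, u: np.ndarray, f, deg: int = 20) -> np.ndarray:
    """Approximate f(H) @ u via a deg-step Lanczos expansion: ||u|| * Q f(T) e_1."""
    n = u.shape[0]
    Q = np.zeros((n, deg))
    alpha, beta = np.zeros(deg), np.zeros(deg - 1)
    Q[:, 0] = u / np.linalg.norm(u)
    for j in range(deg):
        w = H @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # full reorthogonalization
        if j < deg - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    ew, ev = np.linalg.eigh(T)       # f(T) via its (small) eigendecomposition
    fT_e1 = ev @ (f(ew) * ev[0, :])  # f(T) @ e_1, since ev.T @ e_1 = ev[0, :]
    return np.linalg.norm(u) * (Q @ fT_e1)

rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.normal(size=(80, 80)))
H = V @ np.diag(rng.uniform(-1, 1, size=80)) @ V.T  # symmetric, spectrum in (-1, 1)
u = rng.normal(size=80)
ew_H, ev_H = np.linalg.eigh(H)
exact = ev_H @ (np.exp(ew_H) * (ev_H.T @ u))        # dense ground truth exp(H) @ u
err = np.linalg.norm(lanczos_fAv(H, u, np.exp, deg=20) - exact)
print(err)
```

Note that only the first column of f(T) is ever needed, so the per-vector cost is k matvecs with A plus an O(k^2) tridiagonal eigensolve, never an n × n dense function.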
from scipy.linalg import expm
ExpA = expm(A)                                        # exact exp(A), for reference
ExpA0 = MatrixFunction(A, fun=np.exp, deg=5, orth=0)
ExpA1 = MatrixFunction(A, fun=np.exp, deg=20, orth=0)
ExpA2 = MatrixFunction(A, fun=np.exp, deg=50, orth=50)
w = ExpA @ v
x = ExpA0 @ v
y = ExpA1 @ v
z = ExpA2 @ v

print(f"Deg-5 approx error (no reorth.) : {np.linalg.norm(w - x)}")
print(f"Deg-20 approx error (no reorth.) : {np.linalg.norm(w - y)}")
print(f"Deg-50 approx error (full reorth.) : {np.linalg.norm(w - z)}")
Deg-5 approx error (no reorth.) : 0.00012614623767986385
Deg-20 approx error (no reorth.) : 4.282655257652451e-14
Deg-50 approx error (full reorth.) : 1.232476649260152e-13
As you can see, for smoother matrix functions (like \mathrm{exp}(A)), even a low-degree Krylov expansion can be more than sufficient for many applications, without any re-orthogonalization.
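The degree-2k-1 polynomial bound above explains why degree 20 already reaches machine precision for \mathrm{exp}: for analytic functions, near-best (Chebyshev) polynomial approximation error on an interval decays geometrically with the degree. This can be checked directly with numpy's Chebyshev tools, independently of primate:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

x = np.linspace(-1, 1, 2001)  # dense grid on the approximation interval
errs = {}
for deg in (5, 10, 20):
    p = Chebyshev.interpolate(np.exp, deg)  # Chebyshev interpolant of exp
    errs[deg] = float(np.max(np.abs(p(x) - np.exp(x))))
    print(f"deg {deg:2d}: max error {errs[deg]:.2e}")
```

For functions with singularities near the spectrum (e.g. \log near 0, or x^{-1} for ill-conditioned A), the decay is much slower, which is when higher degrees and reorthogonalization start to pay off.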
Configuration
By default, the various estimators offered by primate simply return the estimated quantity under reasonable default parameter settings. However, in many applications one would like to have greater control over both the computation itself and the type of information collected during execution.
To get all the information about the computation, simply pass full=True to the corresponding estimator call. This yields an EstimatorResult along with the estimate itself, which contains information about the execution, convergence information for the estimator, and other status messages, such as the margin of error of a default-constructed confidence interval:
est, info = hutch(A, full=True)
print(info.message)
Est: 68.180 +/- 1.133 (95% CI, #S:64)
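The reported margin of error has the standard CLT form: for n sample values with mean \bar{x} and sample standard deviation s, the 95% interval is \bar{x} \pm z \cdot s/\sqrt{n} with z \approx 1.96. A plain numpy sketch (the `mean_ci` helper is hypothetical; primate's stopping criteria are more elaborate):

```python
import numpy as np

def mean_ci(samples: np.ndarray, z: float = 1.959964) -> tuple[float, float]:
    """Sample mean and CLT-based 95% margin of error."""
    n = samples.size
    moe = z * samples.std(ddof=1) / np.sqrt(n)  # half-width of the 95% interval
    return float(samples.mean()), float(moe)

rng = np.random.default_rng(0)
quad_samples = rng.normal(loc=68.0, scale=4.0, size=64)  # stand-in quadratic samples
est, moe = mean_ci(quad_samples)
print(f"Est: {est:.3f} +/- {moe:.3f} (95% CI, #S:{quad_samples.size})")
```

Because the margin shrinks like 1/\sqrt{n}, halving the interval width requires roughly four times as many samples; variance-reduced estimators like Hutch++ attack the s factor instead.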
To get a better idea of what the sample values and the corresponding estimate looked like as the sample size increased, you can plot the sequence with the figure_sequence function:
from primate.plotting import figure_sequence
est, info = hutch(A, full=True, record=True)
p = figure_sequence(info.estimator.values)
show(p)