[NDTensors] Array storage combiner contraction refactor #1237
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff             @@
##             main    #1237       +/-   ##
===========================================
- Coverage   85.36%   54.56%   -30.81%
===========================================
  Files          89       88        -1
  Lines        8445     8392       -53
===========================================
- Hits         7209     4579     -2630
- Misses       1236     3813     +2577
```

☔ View full report in Codecov by Sentry.
With this PR, combining and uncombining of dense and block sparse tensors using the new array storage design works on a wide range of tests. I also refactored the code into two layers: one layer that takes a tensor and unwraps the indices, and another that performs the combiner contraction on either `Array` or `BlockSparseArray`. I think it is a lot more readable now, and the tensor layer is very minimal; a sketch of the layering is shown below.
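To make the layering concrete, here is a minimal sketch under assumed names (the toy `Tensor` struct and the functions `combine` and `contract_combine` are illustrative stand-ins, not the actual NDTensors code in this PR): the tensor layer only unwraps indices and storage, while the array layer does the permute-and-reshape that fuses the combined dimensions.

```julia
# A minimal sketch of the two-layer combiner design, using a toy Tensor
# wrapper. All names here are illustrative, not the NDTensors API.

struct Tensor{A<:AbstractArray}
    storage::A
    inds::Vector{Symbol}  # index labels, standing in for real Index objects
end

# Array layer: combining dense dimensions is a permute followed by a
# reshape that fuses the combined dimensions into one.
function combine(a::AbstractArray, dims::Tuple{Vararg{Int}})
    rest = Tuple(d for d in 1:ndims(a) if d ∉ dims)
    p = permutedims(a, (dims..., rest...))
    return reshape(p, prod(size(a, d) for d in dims), (size(a, d) for d in rest)...)
end

# Tensor layer: unwrap indices and storage, call the array-level function,
# and rewrap the result with the fused index.
function contract_combine(t::Tensor, dims::Tuple{Vararg{Int}}, fused::Symbol)
    rest = [t.inds[d] for d in 1:length(t.inds) if d ∉ dims]
    return Tensor(combine(t.storage, dims), [fused; rest])
end

# Usage: fuse dimensions 1 and 3 of a 2×3×4 tensor into one index of size 8.
t = Tensor(randn(2, 3, 4), [:i, :j, :k])
tc = contract_combine(t, (1, 3), :c)
size(tc.storage)  # (8, 3)
```

The point of the split is that `combine` only ever sees a plain array, so the same tensor-level code can dispatch to a dense `Array` method or a `BlockSparseArray` method.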
The main limitation is that information about the QNs is not available at the `BlockSparseArray` level, so the autofermion code had to be disabled for now. I'm sure we can find a nice solution to that, either with a callback function storing information about the QNs or with an `AbstractBlockSparseArray` subtype that stores QN information; a sketch of what such a subtype could look like follows.
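To illustrate the second option, here is a hypothetical sketch only (the type name, field layout, and `blockqn` accessor are assumptions for illustration, not a proposal from this PR): a block sparse array subtype that carries the QN sector of each stored block, so the combiner or autofermion sign logic could query QNs without any tensor-level information.

```julia
# Hypothetical sketch: a block sparse array that stores a QN per block.
# All names and the field layout are illustrative assumptions.

abstract type AbstractBlockSparseArray{T,N} end

struct QNBlockSparseArray{T,N} <: AbstractBlockSparseArray{T,N}
    blocks::Dict{NTuple{N,Int},Array{T,N}}  # nonzero blocks, keyed by block position
    qns::Dict{NTuple{N,Int},Int}            # QN sector of each stored block
    size::NTuple{N,Int}                     # overall array dimensions
end

# Query the QN of a block; fermionic sign factors for combining could
# be computed from this at the array level.
blockqn(a::QNBlockSparseArray{T,N}, b::NTuple{N,Int}) where {T,N} = a.qns[b]

# Example: two diagonal blocks with QN sectors 0 and 1.
a = QNBlockSparseArray(
    Dict((1, 1) => ones(2, 2), (2, 2) => ones(3, 3)),
    Dict((1, 1) => 0, (2, 2) => 1),
    (5, 5),
)
blockqn(a, (2, 2))  # returns 1
```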