[ITensors] Optimize directsum again #1221

Conversation
@LHerviou I'm not sure why, but this change may have fixed the issue we were discussing in ITensor/ITensorInfiniteMPS.jl#77? At least the …
From what you mention above, you are no longer calling one(EmptyNumber) because you initialize an empty tensor with a type, so that probably solves the issue.
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage Diff:
##             main    #1221       +/-   ##
===========================================
- Coverage   85.39%   67.46%   -17.94%
===========================================
  Files          89       88        -1
  Lines        8445     8401       -44
===========================================
- Hits         7212     5668     -1544
- Misses       1233     2733     +1500
This should fix a performance issue in directsum that was raised in https://itensor.discourse.group/t/directsum-with-qn-seems-very-inefficient-compared-to-the-c-version/1106/8.

In directsum, we construct projectors that project the tensors being direct summed into the correct subspaces of the new direct-sum index. The previous code, introduced in #1185, constructed those projectors inefficiently: it created zero-flux tensors by first filling in all blocks consistent with that flux and then setting the appropriate elements to 1. In this PR, in the block sparse case the projectors start out as tensors with no blocks, and blocks are only allocated as they are needed to form the projectors (a rough sketch of the idea is shown below).

I hacked together some specialized constructors for making QN ITensors without any blocks for this purpose (or, in the case where there aren't QNs, it makes a zero tensor). There is probably a simpler way to do that, but @kmp5VT is working on a number of improvements to the ITensor storage types, constructors, and operations on unallocated tensors, so I think it is fine to leave it for now and replace that code with the better constructors that will be introduced soon.