
[ITensors] Optimize directsum again #1221

Merged: 6 commits merged into main on Oct 27, 2023
Conversation

@mtfishman (Member) commented on Oct 27, 2023

This should fix a performance issue in directsum that was raised in https://itensor.discourse.group/t/directsum-with-qn-seems-very-inefficient-compared-to-the-c-version/1106/8.

In directsum, we construct projectors that project the tensors being direct summed into the correct subspace. The previous code, introduced in #1185, constructed those projectors inefficiently: it created zero-flux tensors by first filling in all blocks consistent with that flux, and then set the appropriate elements to 1 to make the projectors. In this PR, in the block sparse case I instead make tensors that initially have no blocks, and allocate blocks only as needed to build the projectors.
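The difference can be illustrated with a plain-array analogy (a hypothetical toy sketch using stdlib SparseArrays, not the actual ITensors internals): the old approach materializes a full zero tensor before writing the projector entries, while the new approach stores only the nonzero entries.

```julia
using SparseArrays

# Toy analogy of a projector that embeds a length-m space into the
# first m slots of a length-(m + n) direct-sum space.
m, n = 3, 4

# Old approach (dense-fill analogy): allocate every entry, then set ones.
P_dense = zeros(m + n, m)
for k in 1:m
    P_dense[k, k] = 1.0
end

# New approach (sparse analogy): only the m nonzero entries are stored.
P_sparse = sparse(1:m, 1:m, ones(m), m + n, m)

# Both represent the same projector.
@assert Matrix(P_sparse) == P_dense
```

In the real block sparse case the savings are larger still, since entire blocks consistent with the flux were being allocated and then mostly left as zeros.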

I hacked together some specialized constructors for making QN ITensors without any blocks for this purpose (or, in the case where there aren't QNs, making a zero tensor). There is probably a simpler way to do that, but @kmp5VT is working on a number of improvements to the ITensor storage types, constructors, and operations on unallocated tensors, so I think it is ok to leave it for now and replace that code with the better constructors that will be introduced soon.
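A minimal sketch of the "no blocks until needed" idea (a hypothetical toy structure with made-up names, not the actual ITensors storage types or constructors):

```julia
# Toy block-sparse vector: blocks are absent until first written.
struct BlockVec{T}
    blocklengths::Vector{Int}
    blocks::Dict{Int,Vector{T}}
end
BlockVec{T}(blocklengths::Vector{Int}) where {T} =
    BlockVec{T}(blocklengths, Dict{Int,Vector{T}}())

# Allocate a block lazily, then set one element, as the projectors do.
function setblockelement!(v::BlockVec{T}, b::Int, i::Int, x) where {T}
    blk = get!(() -> zeros(T, v.blocklengths[b]), v.blocks, b)
    blk[i] = x
    return v
end

v = BlockVec{Float64}([2, 3])
@assert isempty(v.blocks)          # starts with no blocks allocated
setblockelement!(v, 2, 1, 1.0)     # only block 2 gets allocated
@assert collect(keys(v.blocks)) == [2]
```

Carrying an explicit element type in the constructor also means no placeholder element values are needed before any block exists, which is relevant to the comment below.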

mtfishman and others added 5 commits October 27, 2023 10:01
@mtfishman (Member, Author) commented:

@LHerviou I'm not sure why, but this change may have fixed the issue we were discussing in ITensor/ITensorInfiniteMPS.jl#77? At least the test_iMPOConversions.jl tests pass for me locally with this branch.

@LHerviou commented on Oct 27, 2023

From what you mention above, you are no longer calling one(EmptyNumber), because you now initialize an empty tensor with an element type, so that probably solves the issue.
In any case, that is great.

@codecov-commenter commented on Oct 27, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparing base (bde7da7, 85.39%) to head (2b60c1f, 67.46%).
Report is 2 commits behind head on main.

❗ Current head 2b60c1f differs from the pull request's most recent head 92815e9. Consider uploading reports for commit 92815e9 to get more accurate results.


Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1221       +/-   ##
===========================================
- Coverage   85.39%   67.46%   -17.94%     
===========================================
  Files          89       88        -1     
  Lines        8445     8401       -44     
===========================================
- Hits         7212     5668     -1544     
- Misses       1233     2733     +1500     
Files                                       Coverage Δ
src/itensor.jl                              81.06% <100.00%> (-1.23%)  ⬇️
src/qn/qnitensor.jl                         68.75% <100.00%> (-16.31%) ⬇️
src/tensor_operations/tensor_algebra.jl     87.27% <100.00%> (-2.73%)  ⬇️

... and 33 files with indirect coverage changes


@mtfishman mtfishman merged commit 6fc5f6f into main Oct 27, 2023
7 checks passed
@mtfishman deleted the ITensors_optimize_directsum_again branch on October 27, 2023 at 16:41
3 participants