
Add randomized SVD. #137

Draft: wants to merge 3 commits into main
Conversation

b-kloss (Contributor) commented Feb 12, 2024

This PR adds randomized SVD routines in both a non-iterative version (rsvd) and an iterative version (rsvd_iterative), which can be applied to single ITensors and to collections thereof (currently limited to Vector{ITensor}).

Ultimately, the code design should have separate layers of specialization:

  1. An rsvd function independent of ITensors.jl, which just accepts a function or generic map representing the action of a matrix-like object (similar to the interface of the KrylovKit.jl solvers).
  2. A specialized version of rsvd in ITensors.jl that handles ITensor-specific features such as QNs, while remaining mostly agnostic about the underlying linear map.
  3. A version of rsvd in ITensorNetworks.jl that makes use of contraction sequence optimization for larger networks.

Currently, the design relies on ITensors.jl functionality (layer 2 above) and is quite verbose in parts. The purpose of this draft is to discuss how to move towards the goals outlined above and how to streamline the implementation; a rough sketch of the layer-1 interface is given below.
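As a rough illustration of the layer-1 interface (a minimal sketch with hypothetical names and signatures, not the code in this PR), a matrix-free randomized SVD in the style of Halko, Martinsson, and Tropp might look like:

```julia
using LinearAlgebra

# Sketch of the ITensor-independent layer: `apply` and `applyadjoint` provide
# the action of a matrix-like object A (with n columns) on blocks of vectors,
# mirroring the function-based interface of the KrylovKit.jl solvers.
function rsvd(apply, applyadjoint, n::Int, k::Int; oversample::Int=10)
  l = min(n, k + oversample)
  Ω = randn(n, l)             # random test matrix
  Y = apply(Ω)                # Y = A*Ω samples the range of A
  Q = Matrix(qr(Y).Q)         # orthonormal basis for the sampled range
  B = applyadjoint(Q)'        # B = Q'*A, a small l×n projected problem
  F = svd(B)                  # dense SVD of the projected problem
  return Q * F.U[:, 1:k], F.S[1:k], F.V[:, 1:k]
end

# Usage, with an explicit matrix standing in for the generic map:
A = randn(100, 80)
U, S, V = rsvd(X -> A * X, Q -> A' * Q, 80, 10)
@show norm(A - U * Diagonal(S) * V')   # error of the rank-10 approximation
```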

codecov-commenter commented

Codecov Report

Attention: 50 lines in your changes are missing coverage. Please review.

Comparison is base (da7636e) 73.24% compared to head (1930c8c) 73.50%.

Files                    Patch %   Lines
src/linalg/rsvd_aux.jl   70.93%    50 Missing ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##             main     #137      +/-   ##
==========================================
+ Coverage   73.24%   73.50%   +0.25%     
==========================================
  Files          68       70       +2     
  Lines        4033     4261     +228     
==========================================
+ Hits         2954     3132     +178     
- Misses       1079     1129      +50     


emstoudenmire (Contributor) commented

Good to see this – it will be great to have for many purposes!

It does seem like a good goal to make the is_converged! function shorter. In some sense, isn't convergence just defined by whether the product of the returned factors (meaning U*S*V) is sufficiently close to the original tensor? So then, is the is_converged! method trying to measure that in a less expensive way?

I also see quite a few ways the code could be compressed further by using generic code patterns. One key example is that the code body for increment_guess_sizes is identical in the two cases, so you can just make one definition of this function and simply leave off the type constraint on ndict (and the resulting code will still be just as fast, as I believe you know); see the reduced example below.
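A minimal, hypothetical reduction of that pattern (the actual signatures and bodies in the PR differ):

```julia
# Instead of two identical method bodies constrained to, e.g., the dense and
# QN dictionary types, a single unconstrained definition suffices; Julia still
# compiles a specialized method for each concrete type of `ndict`, so dropping
# the annotation costs nothing at runtime.
function increment_guess_sizes(ndict, n_inc::Int)
  for k in keys(ndict)
    ndict[k] += n_inc
  end
  return ndict
end
```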

Overall, a lot of the code length seems to be (understandably) dealing with the differences between the dense and QN cases. So while the code patterns are rather similar, the data structures and some extra logic are just different enough that the two cases don't quite merge together completely yet, thus leading to many extra lines of code and two versions of various helper functions. It would be good to think through whether there is a kind of common concept or data structure the two cases could share. (Easy for me to say, harder to do!)

It would be good to remove the ITensors. prefix from as many places as possible. I don't think you need it in any of the code here?

b-kloss (Contributor, Author) commented Feb 12, 2024

Yes, is_converged could in principle be replaced by comparing the contracted factorization against the contraction of the vector of input tensors, but then the scaling is determined by the cost of contracting the input vector; see the sketch below.
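To make the cost concrete, the direct check would look something like this (a hypothetical helper, not code from this PR):

```julia
using ITensors

# Sketch of the direct convergence check under discussion: contract the
# factorization U*S*V and the full input network, then compare. The cost is
# dominated by contract(tensors), which is precisely what the randomized,
# iterative approach is trying to avoid.
function converged_exactly(tensors::Vector{ITensor}, U::ITensor, S::ITensor,
                           V::ITensor; rtol=1e-12)
  T = contract(tensors)              # full contraction of the input tensors
  return norm(T - U * S * V) <= rtol * norm(T)
end
```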

I'll merge the two increment_guess_sizes definitions.

Yes, there is probably a better way to handle QN versus non-QN ITensors, and more generally there are more compact code patterns for most of the routines.

mtfishman (Member) commented

Can this be closed in favor of #160?

b-kloss (Contributor, Author) commented Apr 26, 2024

> Can this be closed in favor of #160?

I suggest keeping them separate and removing the randomized-SVD-related code from #160, since they are conceptually unrelated (although the randomized SVD is crucial for reducing the scaling of the subspace expansion).

mtfishman (Member) commented

> Can this be closed in favor of #160?

> I suggest keeping them separate and removing the randomized-SVD-related code from #160, since they are conceptually unrelated (although the randomized SVD is crucial for reducing the scaling of the subspace expansion).

Sounds good. I would prefer to keep it separate as well; it will be easier to review.

mtfishman mentioned this pull request Apr 26, 2024