[BlockSparseArrays] Towards block merging #1512

Merged: @mtfishman merged 2 commits into main from BlockSparseArrays_towards_merging on Jun 26, 2024

Conversation

@mtfishman (Member) commented on Jun 26, 2024

This adds some functionality that will be needed for slicing operations that merge blocks, one of the feature requests listed in ITensor/BlockSparseArrays.jl#2. Such operations will be useful for fusion operations on symmetric tensors and for removing symmetries from symmetric tensors, among other things.

Given this block sparse array:

using BlockArrays: Block, BlockedVector, blockedrange
using NDTensors.BlockSparseArrays: BlockSparseArray
a = BlockSparseArray{Float64}([2, 2, 2, 2], [2, 2, 2, 2])
@views for I in [
  Block(1, 1),
  Block(2, 2),
  Block(3, 3),
  Block(4, 4),
]
  a[I] = randn(size(a[I]))
end

this PR enables the following operations:

julia> a
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

4×4-blocked 8×8 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
 0.765348  -1.15712   │  0.0        0.0       │  0.0       0.0       │  0.0        0.0
 0.176078  -0.0421353 │  0.0        0.0       │  0.0       0.0       │  0.0        0.0
 ──────────────────────┼────────────────────────┼───────────────────────┼──────────────────────
 0.0        0.0       │ -1.48538   -1.40037    │  0.0       0.0       │  0.0        0.0
 0.0        0.0       │ -0.718883  -0.086352   │  0.0       0.0       │  0.0        0.0
 ──────────────────────┼────────────────────────┼───────────────────────┼──────────────────────
 0.0        0.0       │  0.0        0.0        │ -0.671303  0.773009  │  0.0        0.0
 0.0        0.0       │  0.0        0.0        │  0.367588  0.379923  │  0.0        0.0
 ──────────────────────┼────────────────────────┼───────────────────────┼──────────────────────
 0.0        0.0       │  0.0        0.0        │  0.0       0.0       │ -0.610198  -0.397977
 0.0        0.0       │  0.0        0.0        │  0.0       0.0       │  0.887972  -0.850992
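
Only the diagonal blocks are stored explicitly; the rest are structurally zero. As a quick sanity check, here is a minimal sketch using block indexing on the parent array (existing BlockSparseArrays functionality, not something added in this PR):

a[Block(1, 1)]  # a stored block: the 2×2 matrix of values shown above
a[Block(1, 2)]  # an unstored block: expected to come back as a 2×2 block of zeros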

julia> I = blockedrange([4, 4])
2-blocked 8-element BlockArrays.BlockedOneTo{Int64, Vector{Int64}}:
 1
 2
 3
 4
 ─
 5
 6
 7
 8

julia> @view a[I, I]
8×8 view(::BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}, BlockArrays.BlockedOneTo([4, 8]), BlockArrays.BlockedOneTo([4, 8])) with eltype Float64 with indices BlockArrays.BlockedOneTo([4, 8])×BlockArrays.BlockedOneTo([4, 8]):
 0.765348  -1.15712     0.0        0.0       │  0.0        0.0        0.0        0.0
 0.176078  -0.0421353   0.0        0.0       │  0.0        0.0        0.0        0.0
 0.0        0.0        -1.48538   -1.40037   │  0.0        0.0        0.0        0.0
 0.0        0.0        -0.718883  -0.086352  │  0.0        0.0        0.0        0.0
 ────────────────────────────────────────────┼───────────────────────────────────────────
 0.0        0.0         0.0        0.0       │ -0.671303   0.773009   0.0        0.0
 0.0        0.0         0.0        0.0       │  0.367588   0.379923   0.0        0.0
 0.0        0.0         0.0        0.0       │  0.0        0.0       -0.610198  -0.397977
 0.0        0.0         0.0        0.0       │  0.0        0.0        0.887972  -0.850992
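
The printed indices BlockedOneTo([4, 8]) show that the view takes its block structure from I rather than from a: each group of four 2×2 blocks is merged into a single 4×4 block. A minimal sketch of how to check this, using the standard blocklengths function from BlockArrays.jl (not part of this PR):

using BlockArrays: blocklengths
b = @view a[I, I]
blocklengths(axes(a, 1))  # [2, 2, 2, 2]: the original block structure of a
blocklengths(axes(b, 1))  # [4, 4]: the merged block structure of the view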

julia> I = BlockedVector(Block.(1:4), [2, 2])
2-blocked 4-element BlockedVector{Block{1, Int64}, BlockArrays.BlockRange{1, Tuple{UnitRange{Int64}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
 Block(1)
 Block(2)
 ────────
 Block(3)
 Block(4)

julia> @view a[I, I]
8×8 view(::BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}, BlockArrays.BlockedOneTo([4, 8]), BlockArrays.BlockedOneTo([4, 8])) with eltype Float64 with indices BlockArrays.BlockedOneTo([4, 8])×BlockArrays.BlockedOneTo([4, 8]):
 0.765348  -1.15712     0.0        0.0       │  0.0        0.0        0.0        0.0
 0.176078  -0.0421353   0.0        0.0       │  0.0        0.0        0.0        0.0
 0.0        0.0        -1.48538   -1.40037   │  0.0        0.0        0.0        0.0
 0.0        0.0        -0.718883  -0.086352  │  0.0        0.0        0.0        0.0
 ────────────────────────────────────────────┼───────────────────────────────────────────
 0.0        0.0         0.0        0.0       │ -0.671303   0.773009   0.0        0.0
 0.0        0.0         0.0        0.0       │  0.367588   0.379923   0.0        0.0
 0.0        0.0         0.0        0.0       │  0.0        0.0       -0.610198  -0.397977
 0.0        0.0         0.0        0.0       │  0.0        0.0        0.887972  -0.850992
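
In other words, the blocking of I determines which blocks of a are merged: its first block groups Block(1) and Block(2), and its second groups Block(3) and Block(4). Continuing the session above, a small sketch using standard BlockedVector block indexing from BlockArrays.jl:

I[Block(1)]  # [Block(1), Block(2)]: merged into the first block of each axis of the view
I[Block(2)]  # [Block(3), Block(4)]: merged into the second block of each axis of the view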

julia> I = BlockedVector([Block(4), Block(3), Block(2), Block(1)], [2, 2])
2-blocked 4-element BlockedVector{Block{1, Int64}}:
 Block(4)
 Block(3)
 ────────
 Block(2)
 Block(1)

julia> @view a[I, I]
8×8 view(::BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}, [7, 8, 5, 6, 3, 4, 1, 2], [7, 8, 5, 6, 3, 4, 1, 2]) with eltype Float64 with indices BlockArrays.BlockedOneTo([4, 8])×BlockArrays.BlockedOneTo([4, 8]):
 -0.610198  -0.397977   0.0       0.0       │  0.0        0.0        0.0        0.0
  0.887972  -0.850992   0.0       0.0       │  0.0        0.0        0.0        0.0
  0.0        0.0       -0.671303  0.773009  │  0.0        0.0        0.0        0.0
  0.0        0.0        0.367588  0.379923  │  0.0        0.0        0.0        0.0
 ───────────────────────────────────────────┼────────────────────────────────────────────
  0.0        0.0        0.0       0.0       │ -1.48538   -1.40037    0.0        0.0
  0.0        0.0        0.0       0.0       │ -0.718883  -0.086352   0.0        0.0
  0.0        0.0        0.0       0.0       │  0.0        0.0        0.765348  -1.15712
  0.0        0.0        0.0       0.0       │  0.0        0.0        0.176078  -0.0421353
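
The parent indices [7, 8, 5, 6, 3, 4, 1, 2] in the view summary make the block permutation explicit: for example, the first row and column of the view come from row and column 7 of a. A minimal consistency check, assuming scalar indexing into the view (which the display above already relies on):

b = @view a[I, I]
b[1, 1] == a[7, 7]  # expected: true, view entry (1, 1) maps to parent entry (7, 7)
b[8, 8] == a[2, 2]  # expected: true, the last merged block comes from Block(1, 1) of a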

However, the implementation isn't complete yet: basic operations on these merged views, such as copy(b) and b[Block(1, 1)], don't work right now. I'll leave that for future PRs.

@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 43.51%. Comparing base (82cfd76) to head (8da6a14).
Report is 22 commits behind head on main.

Current head 8da6a14 differs from pull request most recent head 398ce48

Please upload reports for the commit 398ce48 to get more accurate results.

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

❗ There is a different number of reports uploaded between BASE (82cfd76) and HEAD (8da6a14): HEAD has 2 fewer uploads than BASE.

  Flag | BASE (82cfd76) | HEAD (8da6a14)
  -----|----------------|---------------
       | 3              | 1
Additional details and impacted files
@@             Coverage Diff             @@
##             main    ITensor/ITensors.jl#1512       +/-   ##
===========================================
- Coverage   78.05%   43.51%   -34.54%     
===========================================
  Files         148      136       -12     
  Lines        9679     8783      -896     
===========================================
- Hits         7555     3822     -3733     
- Misses       2124     4961     +2837     

☔ View full report in Codecov by Sentry.

@mtfishman merged commit 9c22961 into main on Jun 26, 2024
16 checks passed
@mtfishman deleted the BlockSparseArrays_towards_merging branch on June 26, 2024 at 18:18