[BlockSparseArrays] Permute and merge blocks #1514
This PR now supports permuting and merging blocks in a single slicing operation:

julia> using BlockArrays: Block, BlockedVector
julia> using NDTensors.BlockSparseArrays: BlockSparseArray, block_stored_indices
julia> a = BlockSparseArray{Float64}([2, 2, 2, 2], [2, 2, 2, 2]);
julia> @views for I in [Block(1, 1), Block(2, 2), Block(3, 3), Block(4, 4)]
a[I] = randn(size(a[I]))
end
julia> a
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}
Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.
4×4-blocked 8×8 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
0.35991 -0.363998 │ 0.0 0.0 │ 0.0 0.0 │ 0.0 0.0
-1.0964 0.478964 │ 0.0 0.0 │ 0.0 0.0 │ 0.0 0.0
─────────────────────┼──────────────────────┼────────────────────────┼─────────────────────
0.0 0.0 │ 1.36241 1.66477 │ 0.0 0.0 │ 0.0 0.0
0.0 0.0 │ -1.44313 -1.43454 │ 0.0 0.0 │ 0.0 0.0
─────────────────────┼──────────────────────┼────────────────────────┼─────────────────────
0.0 0.0 │ 0.0 0.0 │ 0.253089 1.26423 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0 │ -0.581412 0.0754013 │ 0.0 0.0
─────────────────────┼──────────────────────┼────────────────────────┼─────────────────────
0.0 0.0 │ 0.0 0.0 │ 0.0 0.0 │ 0.459771 0.227871
0.0 0.0 │ 0.0 0.0 │ 0.0 0.0 │ -0.966115 0.223543
julia> I = BlockedVector(Block.(reverse(1:4)), [2, 2])
2-blocked 4-element BlockedVector{Block{1, Int64}}:
Block(4)
Block(3)
────────
Block(2)
Block(1)
julia> a[I, I]
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}
Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.
2×2-blocked 8×8 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
0.459771 0.227871 0.0 0.0 │ 0.0 0.0 0.0 0.0
-0.966115 0.223543 0.0 0.0 │ 0.0 0.0 0.0 0.0
0.0 0.0 0.253089 1.26423 │ 0.0 0.0 0.0 0.0
0.0 0.0 -0.581412 0.0754013 │ 0.0 0.0 0.0 0.0
───────────────────────────────────────────┼─────────────────────────────────────────
0.0 0.0 0.0 0.0 │ 1.36241 1.66477 0.0 0.0
0.0 0.0 0.0 0.0 │ -1.44313 -1.43454 0.0 0.0
0.0 0.0 0.0 0.0 │ 0.0 0.0 0.35991 -0.363998
0.0 0.0 │ 0.0 0.0 │ 0.0 0.0 │ -1.0964 0.478964

This is useful for abelian fusion; it is a key step in fusing multiple dimensions/axes/indices into a single one. I'll demonstrate that in the next post.
Here is a demonstration of abelian fusion, specifically matricizing an n-dimensional abelian symmetric tensor, as of this PR (previously the blocks were permuted, but adjacent blocks in the same symmetry sectors were not merged):

julia> using BlockArrays: Block
julia> using NDTensors.BlockSparseArrays: BlockSparseArray
julia> using NDTensors.GradedAxes: GradedAxes, gradedrange
julia> using NDTensors.TensorAlgebra: fusedims, splitdims
julia> struct U1
n::Int
end
julia> GradedAxes.dual(c::U1) = U1(-c.n)
julia> GradedAxes.fuse_labels(c1::U1, c2::U1) = U1(c1.n + c2.n)
julia> Base.isless(c1::U1, c2::U1) = isless(c1.n, c2.n)
julia> r = gradedrange([U1(0) => 1, U1(1) => 1])
2-blocked 2-element BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}:
NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}(1, U1(0))
──────────────────────────────────────────────────────────────
NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}(2, U1(1))
julia> a = BlockSparseArray{Float64}(r, r, r, r);
julia> @views for I in [
Block(1, 1, 1, 1),
Block(2, 1, 2, 1),
Block(2, 1, 1, 2),
Block(1, 2, 2, 1),
Block(1, 2, 1, 2),
Block(2, 2, 2, 2),
]
a[I] = randn(size(a[I]))
end
julia> a
typeof(axes) = NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}
Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.
2×2×2×2-blocked 2×2×2×2 BlockSparseArray{Float64, 4, Array{Float64, 4}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Array{Float64, 4}, 4, NDTensors.BlockSparseArrays.BlockZero{NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}}, NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}:
[:, :, 1, 1] =
0.0958877 0.0
0.0 0.0
[:, :, 2, 1] =
0.0 -0.111154
1.75521 0.0
[:, :, 1, 2] =
0.0 -0.440967
-0.550559 0.0
[:, :, 2, 2] =
0.0 0.0
0.0 -1.13633
julia> m = fusedims(a, (1, 2), (3, 4))
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}
Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.
3×3-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}}, Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}:
0.0958877 │ 0.0 0.0 │ 0.0
───────────┼────────────────────────┼──────────
0.0 │ 1.75521 -0.550559 │ 0.0
0.0 │ -0.111154 -0.440967 │ 0.0
───────────┼────────────────────────┼──────────
0.0 │ 0.0 0.0 │ -1.13633

We can also split the dimensions to recover the original n-dimensional tensor with:

julia> splitdims(m, (r, r), (r, r))
typeof(axes) = NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}
Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.
2×2×2×2-blocked 2×2×2×2 BlockSparseArray{Float64, 4, Array{Float64, 4}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Array{Float64, 4}, 4, NDTensors.BlockSparseArrays.BlockZero{NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}}, NTuple{4, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1}}}}}:
[:, :, 1, 1] =
0.0958877 0.0
0.0 0.0
[:, :, 2, 1] =
0.0 -0.111154
1.75521 0.0
[:, :, 1, 2] =
0.0 -0.440967
-0.550559 0.0
[:, :, 2, 2] =
0.0 0.0
0.0 -1.13633

@emstoudenmire To translate this back to the existing NDTensors.jl/ITensors.jl code, this is performing two combiner operations (one combining the first two indices and one combining the last two indices) in a single operation. The code implementing …

This PR brings us one step closer to replacing the current …
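For comparison, the two-combiner version of this operation in ITensors.jl might look roughly like the following sketch. This is a hedged illustration, not code from the PR: the indices here are plain (non-symmetric) and their sizes and names are made up for the example.

```julia
# Rough sketch of the equivalent two-combiner operation in ITensors.jl.
# Index names and sizes are illustrative, not taken from the example above.
using ITensors

i, j, k, l = Index(2, "i"), Index(2, "j"), Index(2, "k"), Index(2, "l")
T = randomITensor(i, j, k, l)

# Combine (i, j) into one index and (k, l) into another: two separate
# combiner contractions, which fusedims performs as a single operation.
Cij = combiner(i, j)
Ckl = combiner(k, l)
M = (T * Cij) * Ckl  # a matrix-like ITensor with two combined indices
```

The point of the comparison is that `fusedims(a, (1, 2), (3, 4))` achieves both contractions in one slicing/merging step.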
@ogauthe This involves a lot of changes to the design of slicing of BlockSparseArrays, but it doesn't change the interface (it just adds support for new slicing operations). Some of the output types of slicing operations are different after this PR, but in general they should be simpler and easier to work with. Once tests pass I'll merge and register this; let me know if you have any issues with it in your code. Note that …
Nice! I did not observe any change in … I confirm that I get the same data matrix as obtained from:

ft = FusionTensor(a, GradedAxes.dual.((r, r)), (r, r))
data_matrix(ft) ≈ m  # true
This PR adds this functionality for merging blocks while preserving the block sparse structure:
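For instance, a `BlockedVector` of `Block`s groups the listed blocks into merged blocks. This is a hedged sketch based on the first example in this thread (the printed output is omitted here):

```julia
using BlockArrays: Block, BlockedVector
using NDTensors.BlockSparseArrays: BlockSparseArray

# Build the same 4×4-blocked 8×8 array as in the first example.
a = BlockSparseArray{Float64}([2, 2, 2, 2], [2, 2, 2, 2])
@views for I in [Block(1, 1), Block(2, 2), Block(3, 3), Block(4, 4)]
  a[I] = randn(size(a[I]))
end

# Group blocks (1, 2) and (3, 4) on each dimension: the result is still an
# 8×8 block sparse array, but with a coarser 2×2 block structure.
I = BlockedVector(Block.(1:4), [2, 2])
b = a[I, I]
```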
It also adds support for setting a new blocking at a more fine-grained level while preserving the block sparse structure:
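A sketch of the finer blocking (again hedged; the exact slicing syntax is my reading of the PR, using a `BlockedVector` of element indices to define new block boundaries):

```julia
using BlockArrays: BlockedVector
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 2], [2, 2])  # 2×2-blocked 4×4 array

# Re-slice with element indices grouped as [1, 1, 2]: the first original
# block is split into two 1×1 blocks, giving a finer 3×3 blocking while
# preserving the block sparse structure.
J = BlockedVector(collect(1:4), [1, 1, 2])
b = a[J, J]
```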