
[NDTensors] [BUG] S and V do not contract after SVD #1199

Closed

ArtemStrashko opened this issue Sep 21, 2023 · 8 comments · Fixed by #1242
Labels
bug Something isn't working NDTensors Requires changes to the NDTensors.jl library.

Comments

@ArtemStrashko

Description of bug

Contractions of some NDTensors do not appear to be implemented: after an SVD, the resulting S and V tensors cannot be contracted with each other.

Minimal code demonstrating the bug or unexpected behavior

Minimal runnable code

using ITensors

# Build a random rank-3 dense tensor and take its SVD, splitting the
# first index from the last two.
indices = (Index(2), Index(3), Index(4))
a = NDTensors.randomTensor(indices)
U, S, V, spec = svd(a, (1,), (2,3))

# Contracting U with the diagonal tensor S works:
labels_u, labels_s = ITensors.compute_contraction_labels(inds(U), inds(S))
contract(U, labels_u, S, labels_s) # works well

# Contracting S with V throws an error:
labels_s, labels_v = ITensors.compute_contraction_labels(inds(S), inds(V))
contract(S, labels_s, V, labels_v) # does not work
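
For reference, the storage types behind S and V differ (inspected here with the NDTensors data accessor, which also appears in the discussion below):

typeof(NDTensors.data(S)) # Vector{Float64}
typeof(NDTensors.data(V)) # Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}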

Expected output or behavior

A working contraction of S and V, just as for U and S.

Actual output or behavior

An error (full output below).

Output of minimal runnable code

Setting the type parameter of the type `AbstractVector{Float64}` at position `NDTensors.SetParameters.Position{1}()` to `Float64` is not currently defined. Either that type parameter position doesn't exist in the type, or `set_parameter` has not been overloaded for this type.

Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] set_parameter(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pUJ6b/src/SetParameters/src/interface.jl:10
  [3] set_parameters(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pUJ6b/src/SetParameters/src/set_parameters.jl:20
  [4] set_eltype(arraytype::Type{AbstractVector{Float64}}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/abstractarray/set_types.jl:8
  [5] similartype(::Type{SimpleTraits.Not{NDTensors.IsWrappedArray{AbstractVector{Float64}}}}, arraytype::Type{AbstractVector{Float64}}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/abstractarray/similar.jl:150
  [6] similartype
    @ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
  [7] promote_rule(#unused#::Type{NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, #unused#::Type{NDTensors.Dense{Float64, Vector{Float64}}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/dense/dense.jl:126
  [8] promote_type
    @ ./promotion.jl:307 [inlined]
  [9] promote_rule(#unused#::Type{NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, #unused#::Type{NDTensors.Diag{Float64, Vector{Float64}}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/diag/diag.jl:123
 [10] promote_type
    @ ./promotion.jl:307 [inlined]
 [11] promote_rule(#unused#::Type{NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}}, #unused#::Type{NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/tensor/tensor.jl:255
 [12] promote_type
    @ ./promotion.jl:307 [inlined]
 [13] contraction_output_type(tensortype1::Type{NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}}, tensortype2::Type{NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}}, indsR::Tuple{Index{Int64}, Index{Int64}, Index{Int64}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/diag/tensoralgebra/contract.jl:6
 [14] zero_contraction_output(T1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, T2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, indsR::Tuple{Index{Int64}, Index{Int64}, Index{Int64}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/tensor/tensor.jl:393
 [15] contraction_output(T1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, T2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, indsR::Tuple{Index{Int64}, Index{Int64}, Index{Int64}})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/diag/tensoralgebra/contract.jl:33
 [16] contraction_output(tensor1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, labelstensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, labelstensor2::Tuple{Int64, Int64, Int64}, labelsoutput_tensor::Tuple{Int64, Int64, Int64})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/tensoralgebra/generic_tensor_operations.jl:47
 [17] contract(tensor1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, labelstensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, labelstensor2::Tuple{Int64, Int64, Int64}, labelsoutput_tensor::Tuple{Int64, Int64, Int64})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/tensoralgebra/generic_tensor_operations.jl:93
 [18] contract(::Type{NDTensors.CanContract{NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}}}, tensor1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, labels_tensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, labels_tensor2::Tuple{Int64, Int64, Int64})
    @ NDTensors ~/.julia/packages/NDTensors/pUJ6b/src/tensoralgebra/generic_tensor_operations.jl:76
 [19] contract(tensor1::NDTensors.DiagTensor{Float64, 2, Tuple{Index{Int64}, Index{Int64}}, NDTensors.Diag{Float64, Vector{Float64}}}, labels_tensor1::Tuple{Int64, Int64}, tensor2::NDTensors.DenseTensor{Float64, 3, Tuple{Index{Int64}, Index{Int64}, Index{Int64}}, NDTensors.Dense{Float64, Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}}}, labels_tensor2::Tuple{Int64, Int64, Int64})
    @ NDTensors ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331
 [20] top-level scope
    @ In[9]:2

Version information

  • Output from versioninfo():
julia> versioninfo()
Julia Version 1.9.3
Commit bed2cd540a1 (2023-08-24 14:43 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: macOS (arm64-apple-darwin22.4.0)
  CPU: 10 × Apple M1 Max
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, apple-m1)
  Threads: 2 on 8 virtual cores
Environment:
  JULIA_PKG_USE_CLI_GIT = true
  • Output from using Pkg; Pkg.status("ITensors"):
julia> using Pkg; Pkg.status("ITensors")
Status `~/.julia/environments/v1.9/Project.toml`
  [9136182c] ITensors v0.3.43 `~/Documents/ITensors.jl`
@ArtemStrashko ArtemStrashko added bug Something isn't working NDTensors Requires changes to the NDTensors.jl library. labels Sep 21, 2023
@emstoudenmire
Collaborator

emstoudenmire commented Sep 21, 2023

Thanks, Artem, this is helpful to know about.

@kmp5VT @mtfishman the issue seems to be:
contraction_output is running and, further down the stack, calling set_eltype(arraytype::Type{AbstractVector{Float64}}, eltype::Type) to set the type parameter to Float64. But as the error message says, setting the type parameter of AbstractVector{Float64} is not currently defined: "Either that type parameter position doesn't exist in the type, or set_parameter has not been overloaded for this type."

Perhaps the contraction output type is wrong? I notice that the data type of V is coming out to be:
Base.ReshapedArray{Float64, 1, Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}

@kmp5VT
Collaborator

kmp5VT commented Sep 21, 2023

@emstoudenmire @mtfishman it looks like the error is related to promoting the ReshapedArray data type. What's happening is that NDTensors is trying to create a new tensor type based on the inputs S and V:

DataT1 = typeof(data(V)) 
  Base.ReshapedArray{Float64, 1, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, 
  Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}
DataT2 = typeof(data(S)) 
  Vector{Float64}
promote_type(DataT1, DataT2)
  AbstractVector{Float64}

AbstractVector is not defined for the set_parameter system. We can "solve" the issue by unwrapping the ReshapedArray type, materializing the transposed data with something like V = copy(V); however, I don't know that this is the right answer.
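
Applied to the reproducer from the issue, that workaround would look something like the sketch below (the copy allocates, but it materializes V's wrapped data into a plain Vector-backed tensor):

V = copy(V) # materialize the ReshapedArray/Adjoint-wrapped data
labels_s, labels_v = ITensors.compute_contraction_labels(inds(S), inds(V))
contract(S, labels_s, V, labels_v) # should now succeed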

@ArtemStrashko
Author

@kmp5VT, this is a "solution" we also had in mind, but we hoped for a better one, since this one leads to unnecessary allocations.

@mtfishman
Member

mtfishman commented Sep 21, 2023

Thanks for the report @ArtemStrashko.

A quick workaround could be to unwrap any wrappers (like ReshapedArray and Adjoint) that are wrapping the data type before calling promote_type, for example with similartype or leaf_parenttype (that wouldn't copy any data; it just helps promote_type along in figuring out the data type we want).

The root cause of this is that we are pushing the limits for what promote_type is meant to be used for. It's a Julia function I hijacked as part of our system for determining the output type of tensor contractions and other operations based on the inputs, but it wasn't really meant for that purpose so has slightly different rules than what we need. It was initially designed for coming up with common supertypes when collecting objects into a single container like a Vector where you need a uniform type and want to avoid copying data, which is why it is outputting AbstractVector here. See this discussion for more context: https://discourse.julialang.org/t/type-promototion-of-vectors/30227.
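
To see that behavior in plain Julia (no NDTensors involved): when no promote_rule relates two array types, promote_type falls back to their closest common supertype, which is abstract:

julia> promote_type(Vector{Float64}, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true})
AbstractVector{Float64} (alias for AbstractArray{Float64, 1})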

So a deeper fix would be to design our own version of promote_type with similar behavior but specifically made for our purpose of finding output types of tensor contractions. contraction_output_type is a partial solution but it still makes too much use of promote_type, as we see here.

Another thing this issue points to is a problem with how we store the tensor data. It looks like this wrapper is being created because we are forcing the data to be stored as a vector, while it would be better to have it stored as an array with the same dimensions as the outer tensor. That would avoid the ReshapedArray wrapper, which is only there because we sprinkle vec in a few places to force the data to be an AbstractVector.

Another thing to consider would be making use of functionality in StridedViews.jl (used by Strided.jl) for flattening nested wrapped arrays into a single StridedView that just stores the resulting equivalent strides. We could use that to more aggressively remove layers of wrappers, which can lead to issues like this one, as well as to hitting slow generic code instead of being dispatched properly to backends, like BLAS, that expect certain stride layouts.

A simple example:

julia> using StridedViews

julia> using LinearAlgebra

julia> function main(d)
         x = reshape((@view randn(d, d)[:, :])', d, d)
         @show typeof(x)
         y = reshape((@view randn(d, d)[:, :])', d, d)
         z = @time x * y
         sx = StridedView(x)
         @show typeof(sx)
         sy = StridedView(y)
         sz = @time sx * sy
         @show norm(z - sz)
         return nothing
       end
main (generic function with 1 method)

julia> main(1000);
typeof(x) = Base.ReshapedArray{Float64, 2, Adjoint{Float64, SubArray{Float64, 2, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}}, true}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}
  0.333445 seconds (5 allocations: 7.659 MiB)
typeof(sx) = StridedView{Float64, 2, Matrix{Float64}, typeof(identity)}
  0.009434 seconds (2 allocations: 7.629 MiB)
norm(z - sz) = 1.4808846061464978e-11

I'm not saying this particular code will be slow once we fix this output type inference/promotion issue, but it's a problem we see elsewhere, caused by complicated wrapper types.

@ArtemStrashko
Author

Thanks a lot for the comprehensive response, @mtfishman. For now we will probably see how far we can go with simply creating a copy (V = copy(V)) after the SVD. If I understand correctly, your simple workaround suggests taking something like V.parent after the SVD; I think that would then require fixing up the indices and doing a complex conjugation. @brian-dellabetta

@mtfishman
Member

My suggestion would be to try changing this line:

VecR = promote_type(DataT1, DataT2)

to:

VecR = promote_type(leaf_parenttype(DataT1), leaf_parenttype(DataT2))

since I believe that is the line that is erroneously outputting AbstractVector as the inferred output data type instead of a concrete type. leaf_parenttype unwraps the wrapper types, so hopefully that fixes it, though I haven't tried it.
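
For intuition, here is a minimal sketch of the unwrapping idea (an illustrative stand-in, not the actual NDTensors leaf_parenttype implementation), applied to the data types from this issue:

using LinearAlgebra

# Peel wrapper types until a non-wrapped array type remains.
leaf(::Type{A}) where {A<:AbstractArray} = A # base case: no wrapper left
leaf(::Type{<:Adjoint{T,P}}) where {T,P} = leaf(P) # peel Adjoint
leaf(::Type{<:Base.ReshapedArray{T,N,P}}) where {T,N,P} = leaf(P) # peel ReshapedArray

DataT1 = Base.ReshapedArray{Float64, 1, Adjoint{Float64, Matrix{Float64}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}
leaf(DataT1) # Matrix{Float64}: the wrapper layers are gone
leaf(Vector{Float64}) # Vector{Float64}: already unwrapped

With unwrapped inputs like these, promote_type has concrete Array types to work with instead of the wrapped ones that produced AbstractVector{Float64} above.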

@ArtemStrashko
Author

Thanks, this works indeed! @brian-dellabetta

@mtfishman
Member

Great, could one of you make a PR with that fix?
