[NDTensors] [BUG] S and V do not contract after SVD #1199
Comments
Thanks, Artem, this is helpful to know about. @kmp5VT @mtfishman the issue seems to be: perhaps the contraction output type is wrong? I notice that the data type of V is coming out as an unexpected wrapped type.
@emstoudenmire @mtfishman it looks like the error is related to promoting the storage data types during contraction.
It seems a required method is not defined for `AbstractVector`, which is the storage type that the promotion is producing.
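To illustrate why an abstract storage type like `AbstractVector` is a problem, here is a standalone sketch (not code from NDTensors): an abstract type cannot be instantiated, so any attempt to allocate output storage of that type fails.

```julia
# Sketch: if type promotion yields an abstract type like AbstractVector,
# that type cannot be instantiated, so allocating output storage fails.
@assert isabstracttype(AbstractVector)

# Trying to construct the abstract type throws a MethodError:
err = try
    AbstractVector{Float64}(undef, 3)
    nothing
catch e
    e
end
@assert err isa MethodError
```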
@kmp5VT, this is a "solution" we also had in mind, but we hoped for a better one, since it leads to unnecessary allocations.
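That copy-based workaround might look something like this (a hypothetical sketch, not the actual NDTensors code; the wrapped array below mimics the `ReshapedArray`-of-`Adjoint`-of-`SubArray` type reported later in this thread):

```julia
using LinearAlgebra

d = 100
# Build a wrapped array similar to the problematic type:
# a reshape of an adjoint of a view.
x = reshape((@view randn(d, d)[:, :])', d, d)
y = randn(d, d)

# Workaround: copy materializes the data into a plain Matrix,
# stripping the wrapper at the cost of one extra allocation.
xc = copy(x)
@assert xc isa Matrix{Float64}
@assert xc * y ≈ x * y  # same result; xc * y can dispatch to BLAS
```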
Thanks for the report @ArtemStrashko. A quick workaround could be to unwrap any wrappers (like the `ReshapedArray` shown in the output below) before contracting.

The root cause of this is that we are pushing the limits of what `promote_type` can handle for these nested wrapper types. So a deeper fix would be to design our own version of that promotion logic.

Another thing this issue points to is a problem with how we store the tensor data. It looks like this wrapper is being created because we are forcing the data to be stored as a vector, while it would be better to store it as an array with the same dimensions as the outer tensor. That would avoid the `ReshapedArray` wrapper in the first place.

A simple example:

```julia
julia> using StridedViews

julia> using LinearAlgebra

julia> function main(d)
           x = reshape((@view randn(d, d)[:, :])', d, d)
           @show typeof(x)
           y = reshape((@view randn(d, d)[:, :])', d, d)
           z = @time x * y
           sx = StridedView(x)
           @show typeof(sx)
           sy = StridedView(y)
           sz = @time sx * sy
           @show norm(z - sz)
           return nothing
       end
main (generic function with 1 method)

julia> main(1000);
typeof(x) = Base.ReshapedArray{Float64, 2, Adjoint{Float64, SubArray{Float64, 2, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Base.Slice{Base.OneTo{Int64}}}, true}}, Tuple{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64}}}
  0.333445 seconds (5 allocations: 7.659 MiB)
typeof(sx) = StridedView{Float64, 2, Matrix{Float64}, typeof(identity)}
  0.009434 seconds (2 allocations: 7.629 MiB)
norm(z - sz) = 1.4808846061464978e-11
```

Not saying this particular code will be slow once we fix this output type inference/promotion issue, but it's an issue we see elsewhere caused by complicated wrapper types.
Thanks a lot for the comprehensive response, @mtfishman. For now we will probably see how far we can go with simply creating a copy.
My suggestion would be to try changing this line (ITensors.jl/NDTensors/src/dense/dense.jl, line 125 at commit b04a973) to:

```julia
VecR = promote_type(leaf_parenttype(DataT1), leaf_parenttype(DataT2))
```

since I believe that is the line that is erroneously outputting an abstract storage type.
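The difference the suggested change makes can be sketched with a toy `leaf_parenttype` (a stand-in defined here purely for illustration; NDTensors' actual function of that name lives in the package):

```julia
using LinearAlgebra

# Toy leaf_parenttype: recursively strip wrapper types down to the
# underlying Array type. (Illustrative only; not NDTensors' definition.)
leaf_parenttype(::Type{A}) where {A<:Array} = A
leaf_parenttype(::Type{Adjoint{T,P}}) where {T,P} = leaf_parenttype(P)
leaf_parenttype(::Type{SubArray{T,N,P,I,L}}) where {T,N,P,I,L} = leaf_parenttype(P)
leaf_parenttype(::Type{<:Base.ReshapedArray{T,N,P}}) where {T,N,P} = leaf_parenttype(P)

# The wrapper type from the example earlier in the thread:
W = typeof(reshape((@view randn(2, 2)[:, :])', 2, 2))

# Promoting the wrapper types directly falls back to their type join,
# which is abstract and cannot be used to allocate output storage:
@assert !isconcretetype(promote_type(W, Matrix{Float64}))

# Promoting the leaf parent types instead gives a concrete storage type:
@assert promote_type(leaf_parenttype(W), leaf_parenttype(Matrix{Float64})) == Matrix{Float64}
```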
Thanks, this indeed works! @brian-dellabetta
Great, could one of you make a PR with that fix?
Description of bug
Contractions of some NDTensors appear not to be implemented.
Minimal code demonstrating the bug or unexpected behavior
Minimal runnable code
Expected output or behavior
Working contraction.
Actual output or behavior
Error.
Output of minimal runnable code
Version information
versioninfo()
using Pkg; Pkg.status("ITensors")