Add support for SVD on BlockArray
#426
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff            @@
##           master     #426    +/-  ##
==========================================
+ Coverage   93.67%   93.73%   +0.06%
==========================================
  Files          18       19       +1
  Lines        1643     1661      +18
==========================================
+ Hits         1539     1557      +18
  Misses        104      104
==========================================
```
Co-authored-by: Matt Fishman <[email protected]>
Maybe it would be better to define a
I've come across the issue of `LinearAlgebra` imposing that different components have the same type many times. My solution has been to copy the code with an extra templated variable, see e.g. `LazyBandedMatrices.Tridiagonal` and `LazyBandedMatrices.SymTridiagonal`. If we follow this pattern we could create a `BlockArrays.SVD` (or possibly `MatrixFactorizations.SVD`) with an extra templated variable. To do this is actually pretty easy: copy the code and tests from `LinearAlgebra` and just modify any of the templates.
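The extra-type-parameter pattern described above can be sketched as follows. `BlockSVD` is a hypothetical name for illustration, not an existing API: compared to `LinearAlgebra.SVD`, each factor gets its own type parameter, so `U`, `S` and `Vt` need not share a container type.

```julia
using LinearAlgebra

# Hypothetical generalization of LinearAlgebra.SVD (illustrative only):
# `U`, `S` and `Vt` each get an independent type parameter, so e.g. `U`
# and `Vt` could be block arrays with different block structures.
struct BlockSVD{T,Tr,MU<:AbstractMatrix{T},C<:AbstractVector{Tr},MVt<:AbstractMatrix{T}} <: Factorization{T}
    U::MU
    S::C
    Vt::MVt
end

# Destructuring like LinearAlgebra's SVD: `U, S, V = F`
Base.iterate(F::BlockSVD) = (F.U, Val(:S))
Base.iterate(F::BlockSVD, ::Val{:S}) = (F.S, Val(:V))
Base.iterate(F::BlockSVD, ::Val{:V}) = (F.Vt', Val(:done))
Base.iterate(F::BlockSVD, ::Val{:done}) = nothing

# Build one from a plain svd for demonstration
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]
F0 = svd(A)
F = BlockSVD(F0.U, F0.S, F0.Vt)
U, S, V = F
@assert U * Diagonal(S) * V' ≈ A
```

As in `LinearAlgebra`, the test and iteration code carry over essentially unchanged; only the struct's type parameters are relaxed.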
Regarding this point, I don't think it is a good idea to make `Diagonal` wrap an `AbstractBlockVector` (see BlockArrays.jl/src/blockarrayinterface.jl, lines 14 to 16 in 1e5feaa). For replacing a `Diagonal` wrapping an `AbstractBlockVector` with a `BlockDiagonal`, I raised an issue here: #428. Probably best to fix that in conjunction with this PR.
This PR implements support for `LinearAlgebra.svd` on block arrays, which tries to retain the block structure as much as possible, without making choices for structures that are not canonical. In particular, this means that for `U, S, V = svd(A)`, the rows of `U` should have the same structure as the rows of `A`, which allows `U' * A` to behave as expected, and similarly the rows of `V` should have the same structure as the columns of `A`, to support `A * V`. There is no real block structure that carries over to the rows and columns of `S`, however, resulting in a single block.

As far as I know, there is no real way of implementing this efficiently by making use of the block structure, so the implementation first maps the arrays to a `BlockedArray`, which is then used to perform the decomposition. Similarly, the resulting `U`, `S` and `Vt` are always `BlockedArray`s, to reflect this.

For `BlockDiagonal` matrices, however, these can be considered as linear maps that conserve the block structure, and in particular the SVD can be implemented efficiently block-by-block. For these cases, a specialized implementation is provided, which does carry over the block structure to `S`.
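The block-diagonal specialization can be sketched with plain nested arrays, without any BlockArrays types (`blockdiag_svd` is an illustrative name, not the PR's API): for `A = diag(A₁, …, Aₙ)`, computing the SVD of each diagonal block gives `A = diag(U₁, …, Uₙ) · diag(S₁, …, Sₙ) · diag(V₁ᵗ, …, Vₙᵗ)`, so the block structure carries over to all three factors.

```julia
using LinearAlgebra

# Block-by-block SVD of a block-diagonal matrix, sketched with a plain
# Vector of dense blocks (illustrative, not the PR's actual implementation).
function blockdiag_svd(blocks::Vector{<:AbstractMatrix})
    Fs = svd.(blocks)                 # one small SVD per diagonal block
    U  = [F.U  for F in Fs]
    S  = [F.S  for F in Fs]
    Vt = [F.Vt for F in Fs]
    return U, S, Vt
end

A1 = [2.0 0.0; 1.0 1.0]
A2 = [0.0 3.0; 1.0 0.0]
U, S, Vt = blockdiag_svd([A1, A2])

# Reassemble densely to check; cat(...; dims=(1, 2)) builds a block-diagonal
# matrix. Note that S is only sorted within each block, not globally.
A = cat(A1, A2; dims = (1, 2))
Udense  = cat(U...;  dims = (1, 2))
Sdense  = Diagonal(vcat(S...))
Vtdense = cat(Vt...; dims = (1, 2))
@assert Udense * Sdense * Vtdense ≈ A
```

This is where the efficiency gain comes from: n small decompositions instead of one large one, at the cost of the singular values no longer being globally sorted.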
## Questions and comments
- Because of how `LinearAlgebra` defines the `SVD` struct, `U` and `Vt` must have the same structure. In particular, this means that the original block matrix should have the same type of blocksizes for its rows and columns. There is currently nothing that handles this, or attempts to promote the types when they are not the same. (See also `blocksizes(y, i)`, #425.)
- For `BlockDiagonal` matrices, I think it is quite natural that `U` and `V` are `BlockDiagonal`, but I am not so sure how to define `S`. Currently, it is a `BlockVector`, which is in line with how `LinearAlgebra` requires this to be a vector, but it does make it a bit cumbersome to work with: `U * Diagonal(S) * Vt` does not actually work, as `BlockArray(Diagonal)` is not the same as `Diagonal(BlockVector)`. It might be reasonable to specifically define `Diagonal(::BlockVector)` to return a `BlockDiagonal` of `Diagonal`s.
- Use `eigencopy_oftype` to avoid unnecessary copies, and to improve compatibility with GPU arrays.
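The `Diagonal(::BlockVector) -> BlockDiagonal`-of-`Diagonal`s idea can be sketched with plain nested vectors (`diagonal_blocks` is an illustrative helper, not BlockArrays API): given singular values grouped per block, the diagonal factor becomes a block-diagonal matrix whose diagonal blocks are themselves `Diagonal`s, so `U * Diagonal(S) * Vt` can stay block-diagonal throughout.

```julia
using LinearAlgebra

# Illustrative sketch: assemble a block-diagonal of Diagonals from
# per-block singular values (dense result here; a real BlockDiagonal
# would keep the block structure lazily).
diagonal_blocks(svals::Vector{<:AbstractVector}) = cat(Diagonal.(svals)...; dims = (1, 2))

S = [[3.0, 1.0], [2.0]]   # singular values, grouped per diagonal block
D = diagonal_blocks(S)
@assert D == Diagonal([3.0, 1.0, 2.0])
```

The result agrees with `Diagonal(vcat(S...))` elementwise, which is the compatibility property the `U * Diagonal(S) * Vt` expression would need.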