BigFloat issue #152
You are right: the problem arises when the entries are not `isbits` types, in which case they are initialized as `undef`. In those cases, the implementation of Strided indeed tries to access the data first, instead of just assigning, which throws an error. As a small side note, there will probably be other things that are not fully compatible with BigFloat entries too. I don't think we have any LAPACK fallbacks, so factorisations will probably not work either, and that is less straightforward to fix.
I played around with some fixes (progress here), but it does not seem to be too straightforward. I cannot say I have enough understanding of the inner workings of Strided to completely fix the problem (@Jutho might know more?), and I am not entirely sure this way of fixing it is ideal, as it requires explicitly initializing all of the BigFloat arrays with zeros.
I see, that is fine, there is no reason to hurry; thank you very much! The package works without problems with DoubleFloats.jl, which is a more suitable number type for me anyway. As for the decompositions, I added the following workaround to the MatrixAlgebra module:
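A minimal sketch of what such a workaround could look like (hypothetical: the exact signature and return convention of `TensorKit.MatrixAlgebra.svd!` are assumptions, and GenericLinearAlgebra.jl is used here as the generic, non-LAPACK SVD backend):

```julia
using GenericLinearAlgebra  # extends `svd` to generic element types
using LinearAlgebra: svd
import TensorKit: MatrixAlgebra

# Hypothetical sketch, not the actual snippet from this thread:
# route BigFloat matrices around the LAPACK-only path. The method
# signature and the (U, S, Vt) return convention are assumptions.
function MatrixAlgebra.svd!(A::AbstractMatrix{BigFloat}, alg)
    F = svd(A)  # generic pure-Julia SVD provided by GenericLinearAlgebra
    return F.U, F.S, F.Vt
end
```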
It seems to work, though I have not yet tested it very thoroughly. Could you please tell me if you see any immediate issues with this approach? P.S. If at any moment support for non-isbits types becomes important: I noticed that the same problem persists for addition and, I guess, for any operation that uses `similar`.
Thanks for also looking into it. Indeed, that looks like a good solution (which might automatically get incorporated in the near future, as a similar thing is required for CUDA anyway; see https://github.com/Jutho/TensorKit.jl/tree/ld-cuda). I would guess a similar solution is necessary, or already exists, for QR, LQ, etc., for which you could take inspiration from there. I am definitely interested in the extended-precision things, and have not tried GenericLinearAlgebra.jl myself. If any more issues pop up, feel free to let me know; I would like to keep this issue open and revisit it once I get the CUDA support and the new version up and running.
Hi!
First of all, thank you for this wonderful package. It is a pleasure to use it.
I have noticed that the `@tensor` macro fails to perform a contraction when the tensors contain BigFloat entries. Here is a minimal example:
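The example did not survive this transcript; what follows is a hypothetical reconstruction based on the surrounding description (the same contraction once with Float64 and once with BigFloat entries; `ℂ^2` and the `TensorMap` constructor arguments are illustrative choices):

```julia
using TensorKit

V = ℂ^2
A = TensorMap(rand(Float64, 2, 2), V, V)
B = TensorMap(rand(BigFloat, 2, 2), V, V)

@tensor C[a, c] := A[a, b] * A[b, c]  # first contraction: fine
@tensor D[a, c] := B[a, b] * B[b, c]  # second contraction: throws
```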
In this example, the first contraction goes well and the second throws:
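The error output is missing here; given the diagnosis elsewhere in this thread (undefined BigFloat entries being read), it was presumably Julia's undefined-reference error:

```
ERROR: UndefRefError: access to undefined reference
```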
I also checked if the problem persists for plain arrays of BigFloat.
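That check is also missing from the transcript; presumably it was along these lines (hypothetical reconstruction):

```julia
using TensorOperations

A = rand(BigFloat, 2, 2)
@tensor C[a, c] := A[a, b] * A[b, c]  # plain BigFloat arrays: no error
```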
This works. I use Julia 1.10.5.
Update
Here is what causes the problem. The function `similar`, when applied to tensors with BigFloat entries, gives a tensor with undefined entries. This leads to the following code failing with an analogous error message:
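The snippet itself is missing; a hypothetical reconstruction of the described pattern, with the output allocated via `similar` and then written through Strided:

```julia
using Strided

A = rand(BigFloat, 2, 2)
C = similar(A)        # BigFloat is not isbits, so entries are #undef
@strided C .= A .+ A  # fails: the kernel also reads C's undefined entries
```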
It seems that `_mapreduce_kernel!` tries to use the tensor elements of `C` for something.