I am trying to use float for the preconditioner part. As far as I understand, the preconditioner is constructed on the CPU, so why do we need to specify a backend type for the AMG preconditioner? Or am I doing it the wrong way:
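A minimal sketch of the kind of setup I mean (the coarsening, relaxation, and solver choices below are placeholders, not my exact code): a double-precision CUDA backend for the iterative solver and a single-precision CUDA backend for the AMG preconditioner.

```cpp
#include <amgcl/backend/cuda.hpp>
#include <amgcl/make_solver.hpp>
#include <amgcl/amg.hpp>
#include <amgcl/coarsening/smoothed_aggregation.hpp>
#include <amgcl/relaxation/spai0.hpp>
#include <amgcl/solver/cg.hpp>

// Double precision for the iterative solver, single precision for the
// AMG preconditioner.
typedef amgcl::backend::cuda<double> SBackend; // solver backend
typedef amgcl::backend::cuda<float>  PBackend; // preconditioner backend

typedef amgcl::make_solver<
    amgcl::amg<
        PBackend,
        amgcl::coarsening::smoothed_aggregation,
        amgcl::relaxation::spai0
        >,
    amgcl::solver::cg<SBackend>
    > Solver;

// Instantiating Solver with these types is where the CUDA build fails
// (see the error below).
```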
The above code works fine for the builtin or vexcl backends, but fails with the following error for the CUDA build.
```
/data2/wlan/MLPCG/cxx_src/amgcl/amgcl/backend/interface.hpp(321): error: class "amgcl::backend::spmv_impl<double, amgcl::backend::cuda_matrix<float>, thrust::device_vector<T, thrust::device_allocator<T>>, double, thrust::device_vector<T, thrust::device_allocator<T>>, void>" has no member "apply"
    spmv_impl<Alpha, Matrix, Vector1, Beta, Vector2>::apply(alpha, A, x, beta, y);
```
The CUDA backend does not support mixed precision: it needs things like a mixed-precision matrix-vector product, and cuSPARSE did not have those when the backend was implemented (not sure about now).
You can use the VexCL backend for amgcl, with CUDA as the backend for VexCL.
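For reference, a minimal sketch of that approach, following amgcl's documented mixed-precision pattern (the 1D Poisson system, the smoothed_aggregation/spai0/cg choices, and the environment-based device selection are placeholders; VexCL's CUDA backend is selected at compile time, e.g. by defining VEXCL_BACKEND_CUDA or linking the VexCL::CUDA CMake target):

```cpp
#define VEXCL_BACKEND_CUDA // or link against the VexCL::CUDA CMake target

#include <cstddef>
#include <tuple>
#include <vector>

#include <vexcl/vexcl.hpp>

#include <amgcl/backend/vexcl.hpp>
#include <amgcl/make_solver.hpp>
#include <amgcl/amg.hpp>
#include <amgcl/coarsening/smoothed_aggregation.hpp>
#include <amgcl/relaxation/spai0.hpp>
#include <amgcl/solver/cg.hpp>
#include <amgcl/adapter/crs_tuple.hpp>

// Mixed precision: double for the iterative solver, float for the AMG
// preconditioner, both on the VexCL backend.
typedef amgcl::backend::vexcl<double> SBackend; // solver backend
typedef amgcl::backend::vexcl<float>  PBackend; // preconditioner backend

typedef amgcl::make_solver<
    amgcl::amg<
        PBackend,
        amgcl::coarsening::smoothed_aggregation,
        amgcl::relaxation::spai0
        >,
    amgcl::solver::cg<SBackend>
    > Solver;

int main() {
    // Pick a single compute device (CUDA, with the backend chosen above).
    vex::Context ctx(vex::Filter::Env && vex::Filter::Count(1));

    // Backend parameters: the VexCL backend needs the command queue.
    SBackend::params bprm;
    bprm.q = ctx;

    // Assemble a simple 1D Poisson problem as a stand-in system in CRS format.
    ptrdiff_t n = 1000;
    std::vector<ptrdiff_t> ptr, col;
    std::vector<double> val, rhs(n, 1.0);

    ptr.push_back(0);
    for (ptrdiff_t i = 0; i < n; ++i) {
        if (i > 0)     { col.push_back(i - 1); val.push_back(-1.0); }
                         col.push_back(i);     val.push_back( 2.0);
        if (i + 1 < n) { col.push_back(i + 1); val.push_back(-1.0); }
        ptr.push_back(col.size());
    }

    // Setup happens on the host; the constructed hierarchy is then moved
    // to the GPU and applied there.
    Solver solve(std::tie(n, ptr, col, val), Solver::params(), bprm);

    // Solve on the device in double precision.
    vex::vector<double> f(ctx, rhs);
    vex::vector<double> x(ctx, n);
    x = 0.0;

    size_t iters;
    double error;
    std::tie(iters, error) = solve(f, x);
}
```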