As briefly discussed with Nader at the last meeting, it would be helpful if we could use a single workunit/kernel on both GPU and CPU, i.e., agnostic of execution space.
If we take a simple example workunit that runs just fine on the host:
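Something along these lines (a minimal sketch of the script; the workunit body and array setup below are placeholders rather than the exact original):

```python
import numpy as np
import pykokkos as pk


@pk.workunit
def sin_impl(tid: int, view: pk.View1D[pk.double]):
    # stand-in body; the original presumably applies sin() element-wise
    view[tid] = view[tid] * 2.0


def main():
    arr = np.ones(1, dtype=np.float64)
    v = pk.from_numpy(arr)  # NumPy-backed view
    pk.parallel_for(1, sin_impl, view=v)
    print(v)


if __name__ == "__main__":
    space = pk.ExecutionSpace.OpenMP
    pk.set_default_space(space)
    main()
```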
and then switch the execution space to `Cuda`:

```diff
--- a/test.py
+++ b/test.py
@@ -15,6 +15,6 @@ def main():
 
 if __name__ == "__main__":
-    space = pk.ExecutionSpace.OpenMP
+    space = pk.ExecutionSpace.Cuda
     pk.set_default_space(space)
     main()
```
we get this error:
```
Traceback (most recent call last):
  File "test.py", line 20, in <module>
    main()
  File "test.py", line 14, in main
    pk.parallel_for(1, sin_impl, view=v)
  File "/vast/home/treddy/github_projects/pykokkos/pykokkos/interface/parallel_dispatch.py", line 179, in parallel_for
    func(**args)
RuntimeError: Unable to cast Python instance of type <class 'kokkos.libpykokkos.KokkosView_float64_CudaSpace_LayoutLeft_1'> to C++ type 'Kokkos::View<double*, Kokkos::LayoutLeft, Kokkos::HostSpace, Kokkos::Experimental::EmptyViewHooks>'
```
We can apparently circumvent this by writing a GPU-specific workunit with a distinct decorator, or by not using NumPy to initialize the data per gh-186, but obviously neither is quite in the spirit of "write code once, use everywhere." We could perhaps also use `pk.function` (not sure) to write the code only once and reuse it in workunits targeted at different execution spaces; still, that isn't as pleasant as a single "write once" workunit.
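A rough sketch of the second workaround, assuming a view allocated via `pk.View` lands in a memory space compatible with the default execution space (untested):

```python
import pykokkos as pk


@pk.workunit
def sin_impl(tid: int, view: pk.View1D[pk.double]):
    view[tid] = view[tid] * 2.0  # stand-in body, as above


def main():
    # allocate through pykokkos instead of wrapping a NumPy array, so the
    # view's memory space follows the default (here: Cuda) execution space
    v = pk.View([1], pk.double)
    pk.parallel_for(1, sin_impl, view=v)


if __name__ == "__main__":
    pk.set_default_space(pk.ExecutionSpace.Cuda)
    main()
```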
Apparently `from_numpy` places the data in `Cuda` space while the workunit expects host space, so the cast fails. That particular detail should likely be resolved for consistency in gh-186, but my suggestion here is that both input paths should be handled: one choice might be more efficient in terms of memory copies/zero-copy workflows, but the workunit should be able to move/cast the data around as needed.
Even if the design decision is not to support both types of input, I still think that, at a minimum, the error message should suggest what to do, and perhaps name a suitable function for converting the view back to host space. With a CuPy array you call `arr.get()` to bring it back to the host, etc.; I'm not sure we have that level of convenience for views just yet.
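For comparison, the CuPy round trip looks like this (standard CuPy API, shown only to illustrate the kind of convenience in question):

```python
import cupy as cp
import numpy as np

d_arr = cp.sin(cp.ones(4, dtype=cp.float64))  # lives on the GPU
h_arr = d_arr.get()                           # explicit copy back to host memory
assert isinstance(h_arr, np.ndarray)
```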