-
I looked around but could not find an easy way to take advantage of the GPU in Scala. There is the Parallel Collections library, but that runs on the CPU as far as I know. Is it possible to use Storch for non-deep-learning, non-numerical, more general parallel computation? For example, using the GPU to run a parallel blur filter thousands of times faster than it would run on the CPU. This would be fantastic for educational purposes. Are there any other "general parallel programming on the GPU" libraries for Scala that are easy to use and "teachable"?
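To make the ask concrete, here is roughly what the CPU-only Parallel Collections version of such a blur looks like (a minimal sketch, assuming the scala-parallel-collections module and a flat grayscale array; the function and parameter names are just illustrative). A GPU library would have to turn the same per-pixel loop into a kernel:

```scala
// Naive box blur over a grayscale image, parallelised across CPU cores
// with Scala Parallel Collections (requires the scala-parallel-collections
// module on Scala 2.13+).
import scala.collection.parallel.CollectionConverters._

def boxBlur(img: Array[Double], width: Int, height: Int, radius: Int): Array[Double] = {
  val out = new Array[Double](img.length)
  // Each row is an independent task; every task writes a disjoint slice of `out`.
  (0 until height).par.foreach { y =>
    var x = 0
    while (x < width) {
      var sum = 0.0
      var count = 0
      var dy = -radius
      while (dy <= radius) {
        var dx = -radius
        while (dx <= radius) {
          val xx = x + dx
          val yy = y + dy
          if (xx >= 0 && xx < width && yy >= 0 && yy < height) {
            sum += img(yy * width + xx)
            count += 1
          }
          dx += 1
        }
        dy += 1
      }
      out(y * width + x) = sum / count
      x += 1
    }
  }
  out
}
```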
-
I will defer this to @sbrunk, but to my knowledge I would say no, because Storch "just" acts as a wrapper around the JavaCPP bindings of the PyTorch library. I am assuming here that one would need to program the kernels directly for such an exercise. A quick search also shows that image blur is only available in torchvision, and I did not find a JavaCPP preset for torchvision either, but I may have missed it.
-
I found this: https://github.com/ComputeNode/scalag. Did any of you know about it? Seems promising! It says it compiles to SPIR-V and runs through Vulkan, not OpenCL or CUDA, so hopefully it can work on APUs too. It's pre-alpha, so I'll wait.
-
Not me. Seems interesting. And that image does not seem to be just ray tracing. The soft shadows remind me of [Radiosity](https://en.wikipedia.org/wiki/Radiosity_(computer_graphics)).
TornadoVM doesn't work, unfortunately 😢
Running the examples just stalls with 0% CPU usage.
Needs newer hardware I think!
(Although the guy in the video runs it on a laptop. It's Nvidia... 😠 )
@sbrunk ROCm is only supported on Instinct GPUs.
I think I give up on this dream 😢
GPU compute is reserved for the rich, no low-end peasants allowed.