
How can I use this as a library? #157

Open
ct2034 opened this issue May 14, 2023 · 4 comments

Comments

ct2034 commented May 14, 2023

Firstly: I am SUPER happy to have finally found this. It is absolutely amazing.
Can I use this as a library? I am mostly interested in the Gray-Scott model. How could I use it from another C++ CMake project?

timhutton (Member) commented May 15, 2023 via email

ct2034 (Author) commented May 15, 2023

Hey Tim,
Thanks so much for the detailed instructions.
I actually have my own implementation. One thing I noticed about your implementation, though, is the use of dx. It seems to improve the performance a lot.
For example in https://github.com/GollyGang/ready/blob/gh-pages/Patterns/GrayScott1984/Pearson1993.vti
How is dx used? And why are the D values so much smaller?
https://github.com/GollyGang/ready/blob/gh-pages/src/readybase/GrayScottImageRD.cpp does not seem to use dx anywhere.
Is it some kind of spatial resolution? And how is the Laplacian calculated, then?
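
For context, dx in reaction-diffusion codes usually plays the role of the spatial grid spacing: the discrete Laplacian is divided by dx², so what governs the per-step diffusion is D·dt/dx² rather than D alone. A parameter set defined on a physically small grid spacing can therefore use much smaller D values than an implementation that implicitly assumes dx = 1. Below is a minimal sketch of the standard 5-point finite-difference Laplacian with an explicit dx; the names are illustrative and this is not necessarily how Ready's kernels are structured:

```cpp
#include <vector>

// Standard 5-point finite-difference Laplacian on a flattened nx*ny grid
// with spacing dx. The caller is responsible for staying off the boundary
// (1 <= x < nx-1, 1 <= y < ny-1).
inline float laplacian(const std::vector<float>& u, int nx, int x, int y, float dx)
{
    const float c = u[y * nx + x];
    const float n = u[(y - 1) * nx + x];
    const float s = u[(y + 1) * nx + x];
    const float e = u[y * nx + (x + 1)];
    const float w = u[y * nx + (x - 1)];
    // The division by dx*dx is where the spatial resolution enters:
    // halving dx quadruples the diffusion term for the same D.
    return (n + s + e + w - 4.0f * c) / (dx * dx);
}
```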

timhutton (Member) commented May 15, 2023 via email

danwills (Contributor) commented Sep 16, 2023

While dx/h and diffusion rates can certainly make a difference to how fast things appear to 'evolve' per timestep, the compute amount per frame is mostly down to how many timesteps are done before the result is displayed again (in Ready it's labelled "Timesteps Per Render").

I've generally found that, since OpenCL can go very fast indeed and the VTK rendering framework perhaps has a bit of overhead in it, you can often turn up "Timesteps Per Render" quite a lot without 'feeling it' much in terms of framerate. Of course your mileage there will vary based on how capable/fast your GPU/video RAM is. (Mine are old and crappy and I still find there's a range of timesteps where I hardly feel it!)

The fact that Ready uses OpenCL for computing the next timestep essentially lets it "compute the next values for all cells at once (on the GPU)" instead of "compute them one at a time (on the CPU)", and I'd say that is likely the main reason for the improvement in apparent speed that you're seeing.
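
To make that concrete, here is a rough CPU-side sketch of the work one rendered frame covers: an outer "timesteps per render" loop and an inner per-cell Gray-Scott update, where the inner body is roughly what an OpenCL kernel would run for all cells concurrently. The parameter names (Du, Dv, F, k, timesteps_per_render) and the laplacian() helper from the sketch above are illustrative, not Ready's actual API:

```cpp
// Illustrative CPU version of one rendered frame's compute:
// timesteps_per_render updates of the whole grid, each applying the
// Gray-Scott equations  du/dt = Du*lap(u) - u*v^2 + F*(1-u)
//                       dv/dt = Dv*lap(v) + u*v^2 - (F+k)*v
void advance(std::vector<float>& u, std::vector<float>& v,
             int nx, int ny, float dx, float dt,
             float Du, float Dv, float F, float k,
             int timesteps_per_render)
{
    std::vector<float> u2 = u, v2 = v;  // scratch buffers for the new values
    for (int step = 0; step < timesteps_per_render; ++step) {
        for (int y = 1; y < ny - 1; ++y) {
            for (int x = 1; x < nx - 1; ++x) {
                const int i = y * nx + x;
                const float uvv = u[i] * v[i] * v[i];
                // On a GPU, each work-item would execute this body for one cell.
                u2[i] = u[i] + dt * (Du * laplacian(u, nx, x, y, dx) - uvv + F * (1.0f - u[i]));
                v2[i] = v[i] + dt * (Dv * laplacian(v, nx, x, y, dx) + uvv - (F + k) * v[i]);
            }
        }
        u.swap(u2);
        v.swap(v2);
    }
}
```

Raising timesteps_per_render scales the simulation work per frame linearly while the rendering cost stays fixed, which is why it can often be increased a fair way before the framerate suffers.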

I think Tim's comment above suggesting the 'rdy' command source code is very on-point, and there may be some bits you could dig up and examine from our prior Houdini plugin, where it would run the simulation (specified by a VTI file) using the Ready backend/libs and then retrieve the resulting fields into per-frame voxel data in Houdini. Let me know if you want any pointers to find that code if you're keen on that idea!

The newer incarnation of the plugin doesn't binary-link Ready at all; instead it imports a VTI as Houdini-native nodes (at its core it translates the kernels into something that works in the "Gas OpenCL" DOP). This was working fairly well in Houdini 18.5-ish, but now we're on 19.5, which is Python 3-only, it needed a reasonably hefty update. I've done all the py3 updates, but everything isn't quite working yet, so I haven't committed the current state of things. Let me know if this interests you, though, and I'd be happy to hurry things up a bit :).
