How to customize cpu.py for my own backend? #38
The dynamo torch.compile side needs some work. The end state we're heading to is to have only one "turbine" torch.compile backend and have it get its runtime device and compile flags by looking at the devices that its arguments are on. This is how the other parts of turbine work, but the torch.compile side is old code that was never retrofitted. I have someone who was going to start refactoring this; let me check with them tomorrow and see where it is in the queue. For your reference, the newer code gets a device instance (which includes compile flags) from here: iree-turbine/shark_turbine/runtime/device.py Line 419 in b489d8b
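For illustration only, here is a minimal sketch of that idea, not the actual iree-turbine code: the mapping table and function name below are invented, and the real lookup in shark_turbine/runtime/device.py is more involved.

```python
import torch

# Hypothetical mapping from torch device types to IREE driver names; the real
# table lives in shark_turbine/runtime/device.py and covers more cases.
_TORCH_TO_IREE_DRIVER = {
    "cpu": "local-task",
    "cuda": "cuda",
}

def driver_for_args(args) -> str:
    """Pick an IREE driver by inspecting which device the tensor args live on."""
    for arg in args:
        if isinstance(arg, torch.Tensor):
            return _TORCH_TO_IREE_DRIVER[arg.device.type]
    return "local-task"  # fall back to CPU when no tensor argument is present
```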
There are some hard-coded mappings in there for known device types that would need to be extended. That is the right thing to do if you have an IREE device that is also natively supported by PyTorch. I think you might be asking for something slightly different: supporting offload to a device not otherwise supported by PyTorch. For that, indeed, you would follow the example of cpu.py and create your own. You will need to build IREE Python bindings that have compiled-in support for your device and use those. If this gets past just being a toy, there are ways to support a completely out-of-tree device in the IREE Python API. There are a couple of options, but I don't think anyone outside of projects under active development has crossed that bridge yet.
Many thanks, @stellaraccident.
See the Discord link on the main IREE page for an invite to the server. The devs hang out there and try to answer questions. With the caveat that we will be working in this area again at some point and may reorganize some things: it's just a torch thing done through setup.py entrypoints. For example: Line 103 in 3e678c7
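For reference, registering a dynamo backend through a setuptools entry point looks roughly like the sketch below. The package and module names are placeholders, and while the "torch_dynamo_backends" group is what PyTorch's backend registry scans for out-of-tree backends, verify it against your torch version.

```python
from setuptools import setup

setup(
    name="my-backend-package",  # placeholder package name
    entry_points={
        # PyTorch discovers out-of-tree torch.compile backends in this group.
        "torch_dynamo_backends": [
            # "abm" becomes the backend name; the right-hand side points at a
            # callable with the usual (gm, example_inputs) backend signature.
            "abm = my_package.dynamo_backend:backend",
        ],
    },
)
```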
That is how torch associates a backend name with an implementation.
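Once such an entry point is installed, selecting the backend by name is just this (the "abm" name is carried over from the placeholder above):

```python
import torch

def model(x):
    return x * 2

compiled = torch.compile(model, backend="abm")  # name from the entry point
```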
Thanks a lot. I will give it a try.
Hello again. I have another question: what should I do if I want to execute the runtime on two devices, given that DeviceState currently wraps only a single driver?
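To make the question concrete, the sketch below is roughly what I imagine for opening two devices with the IREE runtime Python bindings. The driver name and the shape of the query results are my assumptions; I have not verified how this would plug into DeviceState.

```python
import iree.runtime as ireert

# Enumerate the devices a driver exposes and open more than one of them.
driver = ireert.get_driver("local-task")  # or a custom driver name like "abm"
available = driver.query_available_devices()  # list of per-device info dicts

# Create a HAL device for each of the first two enumerated devices; the
# "device_id" key is assumed from the bindings I have looked at.
devices = [driver.create_device(info["device_id"]) for info in available[:2]]
```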
I notice a snippet like the one below in dynamo/backends/cpu.py:
```python
# IREE runtime globals. For the CPU right now, there is no device selection,
# so it is easy.
@functools.lru_cache(maxsize=None)
def _get_device_state() -> DeviceState:
    return DeviceState(driver="local-task")
```
I know "local-task" refers to the llvm-cpu backend.
What should I do if I want to use my own customized backend, abm? Create an abm.py like cpu.py?
In the compile stage, I can use the same API as in cpu.py and add some flags for my own backend.
But in the runtime stage, I am not very clear on how to do that,
because I have written a specific custom-module-run to take the place of iree-run-module; its main.cc creates a device named abm and contains some other functions such as instance creation, etc. Should I customize the code there to fit my backend, or is it enough to just imitate cpu.py and construct a DeviceState, as in the sketch below?
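In other words, is a hypothetical abm.py along these lines the right direction? The import path and driver name are my guesses based on cpu.py.

```python
# abm.py -- hypothetical, modeled on dynamo/backends/cpu.py
import functools

from shark_turbine.runtime.device import DeviceState  # path guessed from cpu.py

@functools.lru_cache(maxsize=None)
def _get_device_state() -> DeviceState:
    # "abm" must match the driver name compiled into my custom IREE runtime
    # bindings, the same way cpu.py uses "local-task".
    return DeviceState(driver="abm")
```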
I really need some guidance, thanks.