Error creating vm context with modules... on RX 580 #2152

Open · Bleach665 opened this issue May 31, 2024 · 2 comments

@Bleach665

Win 10, Python 3.10.6, Radeon RX 580. Vulkan SDK and runtime are installed.
nodai_shark_studio_20240430_1250.exe breaks with this error:

RuntimeError: Error creating vm context with modules: <vm>:0: UNAVAILABLE; none of the executable binaries in the module are supported by the runtime;
[ 0] bytecode compiled_vae.__init:1706 f:\StableDiffusion\nod.ai.SHARK\shark_tmp\vae_decode.torch.tempfile:578:12
      at f:\StableDiffusion\nod.ai.SHARK\shark_tmp\vae_decode.torch.tempfile:252:10

Full output:

f:\StableDiffusion\nod.ai.SHARK>nodai_shark_studio_20240430_1250.exe
vulkan devices are available.
metal devices are not available.
cuda devices are not available.
rocm devices are not available.
local-sync devices are available.
local-task devices are available.
Clearing .mlir temporary files from a prior run. This may take some time...
Clearing .mlir temporary files took 0.0010 seconds.
gradio temporary image cache located at f:\StableDiffusion\nod.ai.SHARK\shark_tmp\gradio. You may change this by setting the GRADIO_TEMP_DIR environment variable.
Clearing gradio UI temporary image files from a prior run. This may take some time...
Clearing gradio UI temporary image files took 0.0000 seconds.
gradio temporary image cache located at f:\StableDiffusion\nod.ai.SHARK\shark_tmp\gradio. You may change this by setting the GRADIO_TEMP_DIR environment variable.
No temporary images files to clear.
diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
Running on local URL:  http://0.0.0.0:8080

IMPORTANT: You are using gradio version 4.19.2, however version 4.29.0 is available, please upgrade.

--------

To create a public link, set `share=True` in `launch()`.

[LOG] Performing Stable Diffusion Pipeline setup...


[LOG] Initializing new pipeline...

huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
Configuring for device:AMD Radeon RX 580 2048SP => vulkan://0

Found vulkan device AMD Radeon RX 580 2048SP. Using target triple rdna2-unknown-windows


[LOG] Pipeline initialized with pipe_id: stabilityai_stable_diffusion_2_1_base_1_77_512x512_fp16_vulkan.


[LOG] Preparing pipeline...


[LOG] Initializing schedulers from model id: stabilityai/stable-diffusion-2-1-base


[LOG] Gathering any pre-compiled artifacts....


[LOG] Tempfile for vae_decode not found. Fetching torch IR...

Saved params to f:\StableDiffusion\nod.ai.SHARK\models\stabilityai_stable_diffusion_2_1_base\vae_decode.safetensors

Configuring for device:vulkan

Found vulkan device AMD Radeon RX 580 2048SP. Using target triple rdna2-unknown-windows

Loading module f:\StableDiffusion\nod.ai.SHARK\models\stabilityai_stable_diffusion_2_1_base_1_77_512x512_fp16_vulkan\vae_decode.vmfb...

        Compiling Vulkan shaders. This may take a few minutes.

Traceback (most recent call last):
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\blocks.py", line 1627, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\utils.py", line 514, in async_iteration
    return await iterator.__anext__()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "anyio\to_thread.py", line 56, in run_sync
  File "anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
  File "anyio\_backends\_asyncio.py", line 851, in run
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
           ^^^^^^^^^^^^^^
  File "C:\Users\Bleach\AppData\Local\Temp\_MEI153402\gradio\utils.py", line 673, in gen_wrapper
    response = next(iterator)
               ^^^^^^^^^^^^^^
  File "apps\shark_studio\api\sd.py", line 441, in shark_sd_fn_dict_input
  File "apps\shark_studio\api\sd.py", line 563, in shark_sd_fn
  File "apps\shark_studio\api\sd.py", line 166, in prepare_pipe
  File "apps\shark_studio\modules\pipeline.py", line 63, in get_compiled_map
  File "apps\shark_studio\modules\pipeline.py", line 82, in get_compiled_map
  File "apps\shark_studio\modules\pipeline.py", line 94, in get_compiled_map
  File "shark\iree_utils\compile_utils.py", line 534, in get_iree_compiled_module
    vmfb, config, temp_file_to_unlink = load_vmfb_using_mmap(
                                        ^^^^^^^^^^^^^^^^^^^^^
  File "shark\iree_utils\compile_utils.py", line 484, in load_vmfb_using_mmap
    ctx = ireert.SystemContext(config=config, vm_modules=vm_modules)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "iree\runtime\system_api.py", line 191, in __init__
    self._vm_context = _binding.VmContext(
                       ^^^^^^^^^^^^^^^^^^^
RuntimeError: Error creating vm context with modules: <vm>:0: UNAVAILABLE; none of the executable binaries in the module are supported by the runtime;
[ 0] bytecode compiled_vae.__init:1706 f:\StableDiffusion\nod.ai.SHARK\shark_tmp\vae_decode.torch.tempfile:578:12
      at f:\StableDiffusion\nod.ai.SHARK\shark_tmp\vae_decode.torch.tempfile:252:10
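
For anyone else hitting this: a quick diagnostic sketch (plain iree.runtime, not SHARK code) that lists the Vulkan devices the IREE runtime actually sees. If the vmfb was compiled for a different GPU architecture than the device reported here, context creation fails with exactly this UNAVAILABLE error:

import iree.runtime as ireert

# Enumerate the Vulkan devices visible to the IREE runtime.
driver = ireert.get_driver("vulkan")
for device_info in driver.query_available_devices():
    print(device_info)  # device id, path, and name
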
@HerrFrodo

I have encountered the same error. Did you find a solution?

@Bleach665
Author

@HerrFrodo, I don't remember, unfortunately.
And I can't reproduce it now, because I've since migrated to NVIDIA.
