
Add WD14 v3 models #101

Draft · wants to merge 1 commit into base: master

Conversation

catboxanon

Closes #100

@Todokete

Todokete commented Mar 8, 2024

Making just this change causes this error for me.
```
Loading WD14 ViT v3 model file from SmilingWolf/wd-vit-tagger-v3, model.onnx
*** Error completing request
*** Arguments: (<PIL.Image.Image image mode=RGB size=488x488 at 0x2EA61D05570>, 'WD14 ViT v3', '', '', '', '', '', '') {}
Traceback (most recent call last):
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\extensions\stable-diffusion-webui-wd14-tagger\tagger\ui.py", line 113, in on_interrogate_image_submit
    interrogator.interrogate_image(image)
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 150, in interrogate_image
    data = ('', '', fi_key) + self.interrogate(image)
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 448, in interrogate
    self.load()
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\extensions\stable-diffusion-webui-wd14-tagger\tagger\interrogator.py", line 433, in load
    self.model = ort.InferenceSession(model_path,
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from G:\Stable Diffusion\Data\Packages\StableDiffusion WebUI\models\interrogators\models--SmilingWolf--wd-vit-tagger-v3\snapshots\7ece29807f5b36b2221944db26c09b3fff25c3d7\model.onnx failed:D:\a_work\1\s\onnxruntime\core/graph/model_load_utils.h:56 onnxruntime::model_load_utils::ValidateOpsetForDomain ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 4 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx.ml is till opset 3.
```


onnxruntime needs to be added to requirements.

@catboxanon
Author

catboxanon commented Mar 8, 2024

It already is part of it.

```python
if not is_installed('onnxruntime'):
    if system() == "Darwin":
        package_name = "onnxruntime-silicon"
    else:
        package_name = "onnxruntime-gpu"
    package = os.environ.get(
        'ONNXRUNTIME_PACKAGE',
        package_name
    )
    run_pip(f'install {package}', 'onnxruntime')
```

I don't have any issue using the WD14 ViT v3 model on a fresh install, so I'm not sure what the root cause of your issue might be. If you're really getting that error, in theory none of the models would work, which means it's unrelated to this change.
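The package-selection logic quoted above can be isolated into a small testable function. This is a sketch, not the extension's actual code: `pick_onnxruntime_package` and its `platform_name` parameter are hypothetical names introduced here for illustration.

```python
import os
from platform import system


def pick_onnxruntime_package(env=None, platform_name=None):
    """Choose which onnxruntime distribution to install.

    Mirrors the extension's install logic: Apple machines get the
    'onnxruntime-silicon' build, everything else gets 'onnxruntime-gpu',
    and the ONNXRUNTIME_PACKAGE environment variable overrides both.
    """
    env = os.environ if env is None else env
    platform_name = platform_name or system()
    if platform_name == "Darwin":
        default = "onnxruntime-silicon"
    else:
        default = "onnxruntime-gpu"
    return env.get('ONNXRUNTIME_PACKAGE', default)
```

Passing `env` and `platform_name` explicitly makes the branch easy to exercise without touching the real environment.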

@LaikaSa

LaikaSa commented Mar 16, 2024

> Making just this change causes this error for me. […same traceback as quoted in full above…]
>
> onnxruntime needs to be added to requirements.

@Todokete Hey, in case you still need a solution, this worked for me:

  1. Open Windows PowerShell in the A1111 SD webui root folder and type: `.\venv\Scripts\activate`
  2. Type: `pip uninstall onnxruntime`
  3. Type: `pip install onnxruntime`
  4. When it's finished, type `deactivate`, then close the terminal.

It should work when you reopen the A1111 webUI now. The issue is simply that your onnxruntime version is old and needs upgrading; the version that worked for me at the time of writing is v1.17.1.
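The fix above boils down to a minimum-version requirement. Here is a minimal pure-Python sketch of that check; the exact cutoff is an assumption based on this thread's reports that v1.17.1 loads the v3 models while older builds reject their ai.onnx.ml opset 4 stamp.

```python
def version_tuple(v):
    """Parse '1.17.1' into (1, 17, 1), ignoring non-numeric suffixes."""
    parts = []
    for piece in v.split('.'):
        digits = ''.join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)


def supports_wd14_v3(installed_version, minimum="1.17.0"):
    """Guess whether this onnxruntime build can load the WD14 v3 models.

    Assumption: builds at or above `minimum` support ai.onnx.ml opset 4;
    older ones raise the ValidateOpsetForDomain error shown in this thread.
    """
    return version_tuple(installed_version) >= version_tuple(minimum)
```

Tuple comparison handles the multi-digit minor version correctly (`(1, 17, 1) > (1, 9, 0)`), which naive string comparison would get wrong.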

@LaikaSa

LaikaSa commented Mar 17, 2024

Hello. I'm having problems getting WD14 v3 to run. I installed the latest version of ONNX, copied the text into utils.py, and A1111 downloaded the models just fine when run, but when I try to use them I get this error. Older models work just fine.

```
Traceback (most recent call last):
  File "D:\Programs\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Programs\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "D:\Programs\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "D:\Programs\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (on_interrogate_image_submit) didn't receive enough output values (needed: 7, received: 3).
Wanted outputs: [state, html, html, label, label, label, html]
Received outputs: [None, "", "
Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from D:\Programs\stable-diffusion-webui\models\interrogators\models--SmilingWolf--wd-convnext-tagger-v3\snapshots\d39e46de298d27340111b64965e20b8185c407e6\model.onnx failed:C:\a_work\1\s\onnxruntime\core/graph/model_load_utils.h:56 onnxruntime::model_load_utils::ValidateOpsetForDomain ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 4 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx.ml is till opset 3.
Time taken: 0.6 sec.
A: 6.72 GB, R: 7.01 GB, Sys: 8.6/23.9883 GB (36.0%)
"]
```

ONNX and onnxruntime are two different things, and the latter is the one that matters here. Are you sure you installed the right one?

@LaikaSa

LaikaSa commented Mar 18, 2024

> ONNX and onnxruntime are two different things, and the latter is the one that matters here. Are you sure you installed the right one?

Yes, I ran `pip uninstall onnxruntime` and `pip install onnxruntime` inside the activated venv (`.\venv\Scripts\activate`).

Please enter the environment again with `.\venv\Scripts\activate`, then type `pip show onnxruntime` and check which version it shows. If it is v1.17.1 then I'm out of ideas, because that works for me.
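The `pip show onnxruntime` check can also be done from Python, which is handy here because the package may be installed under any of several names (plain, `-gpu`, or `-silicon`). A standard-library-only sketch; `first_installed` is a hypothetical helper introduced for illustration:

```python
from importlib import metadata


def first_installed(candidates):
    """Return (name, version) for the first installed distribution among
    `candidates`, or None if none are installed. Equivalent in spirit to
    running `pip show <name>` for each candidate."""
    for name in candidates:
        try:
            return name, metadata.version(name)
        except metadata.PackageNotFoundError:
            continue
    return None


# Any of these variants satisfies the extension's onnxruntime requirement:
print(first_installed(("onnxruntime", "onnxruntime-gpu", "onnxruntime-silicon")))
```

If this prints `None`, the version error in this thread cannot be the cause, because the runtime isn't installed at all.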

@GiusTex

GiusTex commented Apr 23, 2024

> 1. Open Windows PowerShell in the A1111 SD webui root folder and type: `.\venv\Scripts\activate` […remaining steps quoted above…]

It worked for me 👍🏻; I had the same error and this solved it.

Successfully merging this pull request may close these issues.

Support for Smiling Wolf new V3 models