
DepthAnythingV2Preprocessor #492

Open
sanhedasheng opened this issue Dec 17, 2024 · 1 comment

Comments

@sanhedasheng

I am using an Intel Arc A770 GPU. Running the DepthAnythingV2Preprocessor node fails with:

Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

ComfyUI Error Report

Error Details

  • Node ID: 2
  • Node Type: DepthAnythingV2Preprocessor
  • Exception Type: RuntimeError
  • Exception Message: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Stack Trace

  File "D:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything_v2.py", line 24, in execute
    out = common_annotator_call(model, image, resolution=resolution, max_depth=1)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 85, in common_annotator_call
    np_result = model(np_image, output_type="np", detect_resolution=detect_resolution, **kwargs)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\__init__.py", line 44, in __call__
    depth = self.model.infer_image(cv2.cvtColor(input_image, cv2.COLOR_RGB2BGR), input_size=518, max_depth=max_depth)

  File "D:\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 189, in infer_image
    depth = self.forward(image, max_depth)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 179, in forward
    features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 308, in get_intermediate_layers
    outputs = self._get_intermediate_layers_not_chunked(x, n)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 272, in _get_intermediate_layers_not_chunked
    x = self.prepare_tokens_with_masks(x)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 214, in prepare_tokens_with_masks
    x = self.patch_embed(x)

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2_layers\patch_embed.py", line 76, in forward
    x = self.proj(x)  # B C H W

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)

  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
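The bottom frame shows `F.conv2d` receiving a CPU `torch.FloatTensor` while the `Conv2d` weight lives on the XPU, i.e. the preprocessor never moved the input image tensor to the model's device. A minimal sketch of the usual workaround (the helper name is hypothetical, not part of comfyui_controlnet_aux) is to align the input with the device and dtype of the model's parameters before the forward pass:

```python
import torch

def align_input_to_model(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Move an input tensor to the device and dtype of the model's weights.

    Hypothetical helper: it mirrors the mismatch in the trace above,
    where F.conv2d received a CPU torch.FloatTensor while the Conv2d
    weight lived on the XPU.
    """
    param = next(model.parameters())
    return x.to(device=param.device, dtype=param.dtype)

# Toy stand-in for the patch-embedding Conv2d; on an XPU build the model
# would sit on "xpu" and the same call would avoid the RuntimeError.
conv = torch.nn.Conv2d(3, 8, kernel_size=3)
image = torch.rand(1, 3, 32, 32)
out = conv(align_input_to_model(conv, image))
print(tuple(out.shape))  # (1, 8, 30, 30)
```

In the node itself the equivalent fix would have to happen inside `infer_image`/`forward`, where the numpy image is converted to a tensor, since the caller only passes numpy arrays.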

System Information

  • ComfyUI Version: v0.3.7-13-g44db9785
  • Arguments: D:\ComfyUI\main.py --force-fp16 --fp8_e4m3fn-text-enc --fp8_e4m3fn-unet --bf16-vae --preview-method none --normalvram --disable-smart-memory --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.14 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:44:50) [MSC v.1916 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.1.0.post2+cxx11.abi

Devices

  • Name: xpu
    • Type: xpu
    • VRAM Total: 16704737280
    • VRAM Free: 16704737280
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0

Logs

2024-12-17T09:07:19.714414 - Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl, https://pypi.oystermercury.top/os
2024-12-17T09:07:23.811760 - [ComfyUI-Manager] Startup script completed.
2024-12-17T09:07:23.811760 - #######################################################################
2024-12-17T09:07:23.818646 - Prestartup times for custom nodes:
2024-12-17T09:07:23.818646 -    0.0 seconds: D:\ComfyUI\custom_nodes\rgthree-comfy
2024-12-17T09:07:23.819670 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-12-17T09:07:23.819670 -   48.7 seconds: D:\ComfyUI\custom_nodes\comfy-ui-manager
2024-12-17T09:07:35.130171 - Total VRAM 15931 MB, total RAM 32718 MB
2024-12-17T09:07:35.130171 - pytorch version: 2.1.0.post2+cxx11.abi
2024-12-17T09:07:35.131196 - Forcing FP16.
2024-12-17T09:07:35.131196 - Set vram state to: NORMAL_VRAM
2024-12-17T09:07:35.131196 - Disabling smart memory management
2024-12-17T09:07:35.131196 - Device: xpu
2024-12-17T09:07:35.170041 - Using pytorch cross attention
2024-12-17T09:07:39.173932 - [Prompt Server] web root: D:\ComfyUI\web
2024-12-17T09:07:42.700520 - installed topaz.js to D:\ComfyUI\web\extensions\topaz
2024-12-17T09:07:42.706288 - ### Loading: ComfyUI-Manager (V2.50.1)
2024-12-17T09:07:43.164660 - ### ComfyUI Revision: 2903 [44db9785] | Released on '2024-12-10'
2024-12-17T09:07:43.887692 - [ComfyUI-Manager] default cache updated: https://gitee.com/cunkai/comfy-ui-manager/raw/main/alter-list.json
2024-12-17T09:07:43.995055 - [ComfyUI-Manager] default cache updated: https://gitee.com/cunkai/comfy-ui-manager/raw/main/model-list.json
2024-12-17T09:07:44.229361 - [ComfyUI-Manager] default cache updated: https://gitee.com/cunkai/comfy-ui-manager/raw/main/github-stats.json
2024-12-17T09:07:44.289854 - [ComfyUI-Manager] default cache updated: https://gitee.com/cunkai/comfy-ui-manager/raw/main/extension-node-map.json
2024-12-17T09:07:44.338639 - [ComfyUI-Manager] default cache updated: https://gitee.com/cunkai/comfy-ui-manager/raw/main/custom-node-list.json
2024-12-17T09:07:44.984897 - [Crystools INFO] Crystools version: 1.21.0
2024-12-17T09:07:45.011628 - [Crystools INFO] CPU: Intel(R) Xeon(R) CPU E3-1240 v5 @ 3.50GHz - Arch: AMD64 - OS: Windows 10
2024-12-17T09:07:45.011628 - [Crystools ERROR] Could not init pynvml (Nvidia). NVML Shared Library Not Found
2024-12-17T09:07:45.011628 - [Crystools WARNING] No GPU with CUDA detected.
2024-12-17T09:07:49.580265 - [ComfyUI-Easy-Use] server: v1.2.5 Loaded
2024-12-17T09:07:49.581233 - [ComfyUI-Easy-Use] web root: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v2 Loaded
2024-12-17T09:07:50.495906 - ### Loading: ComfyUI-Impact-Pack (V7.14)
2024-12-17T09:07:50.899447 - ### Loading: ComfyUI-Impact-Pack (Subpack: V0.8)
2024-12-17T09:07:51.044669 - ### Loading: ComfyUI-Inspire-Pack (V1.9)
2024-12-17T09:07:51.184018 - Total VRAM 15931 MB, total RAM 32718 MB
2024-12-17T09:07:51.184018 - pytorch version: 2.1.0.post2+cxx11.abi
2024-12-17T09:07:51.185992 - Forcing FP16.
2024-12-17T09:07:51.185992 - Set vram state to: NORMAL_VRAM
2024-12-17T09:07:51.185992 - Disabling smart memory management
2024-12-17T09:07:51.185992 - Device: xpu
2024-12-17T09:07:53.804103 - [Impact Pack] Wildcards loading done.
2024-12-17T09:07:54.471552 - --------------
2024-12-17T09:07:54.471552 - ### Mixlab Nodes: Loaded
2024-12-17T09:07:54.485177 - json_repair## OK
2024-12-17T09:07:54.492042 - ChatGPT.available True
2024-12-17T09:07:54.494407 - edit_mask.available True
2024-12-17T09:07:56.361904 - ## clip_interrogator_model not found: D:\ComfyUI\models\clip_interrogator\Salesforce\blip-image-captioning-base, pls download from https://huggingface.co/Salesforce/blip-image-captioning-base
2024-12-17T09:07:56.361904 - ClipInterrogator.available True
2024-12-17T09:07:56.363898 - ## text_generator_model not found: D:\ComfyUI\models\prompt_generator\text2image-prompt-generator, pls download from https://huggingface.co/succinctly/text2image-prompt-generator/tree/main
2024-12-17T09:07:56.363898 - ## zh_en_model not found: D:\ComfyUI\models\prompt_generator\opus-mt-zh-en, pls download from https://huggingface.co/Helsinki-NLP/opus-mt-zh-en/tree/main
2024-12-17T09:07:56.364897 - PromptGenerate.available True
2024-12-17T09:07:56.364897 - ChinesePrompt.available True
2024-12-17T09:07:56.364897 - RembgNode_.available True
2024-12-17T09:07:56.369882 - ffmpeg could not be found. Using ffmpeg from imageio-ffmpeg.
2024-12-17T09:07:58.105567 - TripoSR.available
2024-12-17T09:07:58.105567 - MiniCPMNode.available
2024-12-17T09:07:58.263604 - Scenedetect.available
2024-12-17T09:07:58.294850 - FishSpeech.available False
2024-12-17T09:07:58.370258 - SenseVoice.available
2024-12-17T09:07:58.838940 - Whisper.available False
2024-12-17T09:07:58.840938 - fal-client## OK
2024-12-17T09:07:58.859136 - FalVideo.available
2024-12-17T09:07:58.860138 - --------------
2024-12-17T09:07:58.941320 - ------------------------------------------
2024-12-17T09:07:58.941320 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2024-12-17T09:07:58.941320 - ------------------------------------------
2024-12-17T09:07:58.942319 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2024-12-17T09:07:58.942319 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2024-12-17T09:07:58.942319 - ------------------------------------------
2024-12-17T09:07:58.955038 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2024-12-17T09:07:58.955038 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2024-12-17T09:07:58.955038 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2024-12-17T09:07:58.999143 - DWPose: Onnxruntime with acceleration providers detected
2024-12-17T09:07:59.134603 - [tinyterraNodes] Loaded
2024-12-17T09:07:59.741487 - [comfy_mtb] | INFO -> loaded 87 nodes successfuly
2024-12-17T09:07:59.741487 - [comfy_mtb] | INFO -> Some nodes (5) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.
2024-12-17T09:07:59.775516 - Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
2024-12-17T09:07:59.893888 - [rgthree-comfy] Loaded 43 magnificent nodes. 🎉
2024-12-17T09:08:04.748667 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2024-12-17T09:08:04.748667 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `D:\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
2024-12-17T09:08:09.607016 - WAS Node Suite: Finished. Loaded 218 nodes successfully.
	"Art is the bridge that connects imagination to reality." - Unknown
2024-12-17T09:08:09.607016 - 
2024-12-17T09:08:09.617422 - 
Import times for custom nodes:
2024-12-17T09:08:09.617422 -    0.0 seconds: D:\ComfyUI\custom_nodes\sdxl-recommended-res-calc
2024-12-17T09:08:09.617422 -    0.0 seconds: D:\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-17T09:08:09.617422 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Miaoshouai-Tagger
2024-12-17T09:08:09.617422 -    0.0 seconds: D:\ComfyUI\custom_nodes\Skimmed_CFG
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Noise
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\Comfyui_TTP_Toolset
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\cg-use-everywhere
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-SDXL-Style-Preview
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\AuraSR-ComfyUI
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\comfyui-inpaint-nodes
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\comfy-image-saver
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\Comfy-Topaz
2024-12-17T09:08:09.618423 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_essentials
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyMath
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Florence2
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Load_Image_With_Metadata
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2024-12-17T09:08:09.619420 -    0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-12-17T09:08:09.619420 -    0.1 seconds: D:\ComfyUI\custom_nodes\OneButtonPrompt
2024-12-17T09:08:09.619420 -    0.1 seconds: D:\ComfyUI\custom_nodes\rgthree-comfy
2024-12-17T09:08:09.619420 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-12-17T09:08:09.620417 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
2024-12-17T09:08:09.620417 -    0.1 seconds: D:\ComfyUI\custom_nodes\comfyui_controlnet_aux
2024-12-17T09:08:09.620417 -    0.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
2024-12-17T09:08:09.620417 -    0.5 seconds: D:\ComfyUI\custom_nodes\comfy_mtb
2024-12-17T09:08:09.620417 -    0.5 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-12-17T09:08:09.620417 -    0.7 seconds: D:\ComfyUI\custom_nodes\comfy-ui-manager
2024-12-17T09:08:09.620417 -    0.8 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Gemini
2024-12-17T09:08:09.620417 -    0.9 seconds: D:\ComfyUI\custom_nodes\comfyui-dynamicprompts
2024-12-17T09:08:09.620417 -    1.1 seconds: D:\ComfyUI\custom_nodes\clipseg.py
2024-12-17T09:08:09.620417 -    1.5 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-12-17T09:08:09.620417 -    3.7 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-12-17T09:08:09.620417 -    7.6 seconds: D:\ComfyUI\custom_nodes\comfyui-mixlab-nodes
2024-12-17T09:08:09.620417 -    9.7 seconds: D:\ComfyUI\custom_nodes\was-node-suite-comfyui
2024-12-17T09:08:09.621414 - 
2024-12-17T09:08:09.647392 - Starting server

2024-12-17T09:08:09.648393 - To see the GUI go to: http://127.0.0.1:8188
2024-12-17T09:08:16.164793 - got prompt
2024-12-17T09:08:16.374683 - xFormers not available
2024-12-17T09:08:16.376753 - xFormers not available
2024-12-17T09:08:16.376753 - model_path is D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Small\depth_anything_v2_vits.pth
2024-12-17T09:08:16.376753 - using MLP layer as FFN
2024-12-17T09:08:17.070248 - !!! Exception during processing !!! Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
2024-12-17T09:08:17.120171 - Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything_v2.py", line 24, in execute
    out = common_annotator_call(model, image, resolution=resolution, max_depth=1)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 85, in common_annotator_call
    np_result = model(np_image, output_type="np", detect_resolution=detect_resolution, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\__init__.py", line 44, in __call__
    depth = self.model.infer_image(cv2.cvtColor(input_image, cv2.COLOR_RGB2BGR), input_size=518, max_depth=max_depth)
  File "D:\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 189, in infer_image
    depth = self.forward(image, max_depth)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 179, in forward
    features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 308, in get_intermediate_layers
    outputs = self._get_intermediate_layers_not_chunked(x, n)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 272, in _get_intermediate_layers_not_chunked
    x = self.prepare_tokens_with_masks(x)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 214, in prepare_tokens_with_masks
    x = self.patch_embed(x)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2_layers\patch_embed.py", line 76, in forward
    x = self.proj(x)  # B C H W
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

2024-12-17T09:08:17.122166 - Prompt executed in 0.92 seconds
2024-12-17T09:14:31.940801 - got prompt
2024-12-17T09:14:31.998644 - model_path is D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Small\depth_anything_v2_vits.pth
2024-12-17T09:14:32.015612 - using MLP layer as FFN
2024-12-17T09:14:32.687398 - !!! Exception during processing !!! Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
2024-12-17T09:14:32.687398 - Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything_v2.py", line 24, in execute
    out = common_annotator_call(model, image, resolution=resolution, max_depth=1)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 85, in common_annotator_call
    np_result = model(np_image, output_type="np", detect_resolution=detect_resolution, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\__init__.py", line 44, in __call__
    depth = self.model.infer_image(cv2.cvtColor(input_image, cv2.COLOR_RGB2BGR), input_size=518, max_depth=max_depth)
  File "D:\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 189, in infer_image
    depth = self.forward(image, max_depth)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dpt.py", line 179, in forward
    features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 308, in get_intermediate_layers
    outputs = self._get_intermediate_layers_not_chunked(x, n)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 272, in _get_intermediate_layers_not_chunked
    x = self.prepare_tokens_with_masks(x)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2.py", line 214, in prepare_tokens_with_masks
    x = self.patch_embed(x)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux\depth_anything_v2\dinov2_layers\patch_embed.py", line 76, in forward
    x = self.proj(x)  # B C H W
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (XPUFloatType) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

2024-12-17T09:14:32.689392 - Prompt executed in 0.72 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":3,"last_link_id":2,"nodes":[{"id":3,"type":"PreviewImage","pos":[415,93],"size":[210,246],"flags":{},"order":2,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":2}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":1,"type":"LoadImage","pos":[52,44],"size":[315,314],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[1],"slot_index":0,"shape":3},{"name":"MASK","type":"MASK","links":null,"shape":3}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["flux_depth_lora_example.png","image"]},{"id":2,"type":"DepthAnythingV2Preprocessor","pos":[393,-48],"size":[315,82],"flags":{},"order":1,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":1}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[2],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DepthAnythingV2Preprocessor"},"widgets_values":["depth_anything_v2_vits.pth",512]}],"links":[[1,1,0,2,0,"IMAGE"],[2,2,0,3,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.1671841070450009,"offset":[739.2545037886358,354.8026728678643]},"ue_links":[]},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

@huongng105

I have the same problem, please help.
DepthAnythingV2Preprocessor
Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same
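This MPS variant is the inverse mismatch: here the image tensor was moved to "mps" while the model weights stayed on the CPU. Either way, model and input must end up on the same device before the forward pass. A minimal sketch, with a toy Conv2d standing in for the depth model:

```python
import torch

# Pick "mps" when available (Apple Silicon), otherwise fall back to CPU,
# so the sketch runs anywhere; the principle is identical on XPU/CUDA.
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device)  # weights on `device`
image = torch.rand(1, 3, 32, 32, device=device)          # input on the same device
out = model(image)  # no device-mismatch RuntimeError
```

A quick diagnostic is to print `next(model.parameters()).device` next to `image.device` right before the failing call; the two must match.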
2024-12-18T15:28:27.386212 - # 😺dzNodes: LayerStyle -> Brightness & Contrast Processed 1 image(s).
2024-12-18T15:28:27.400495 - # 😺dzNodes: LayerStyle -> Brightness & Contrast Processed 1 image(s).
2024-12-18T15:28:32.403894 - model_path is /opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/depth-anything/Depth-Anything-V2-Large/depth_anything_v2_vitl.pth
2024-12-18T15:28:32.406949 - using MLP layer as FFN
2024-12-18T15:28:34.659123 - !!! Exception during processing !!! Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same
2024-12-18T15:28:34.663847 - Traceback (most recent call last):
File "/opt/anaconda3/envs/ChangeX/ComfyUI/execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/opt/anaconda3/envs/ChangeX/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/depth_anything_v2.py", line 24, in execute
out = common_annotator_call(model, image, resolution=resolution, max_depth=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/utils.py", line 85, in common_annotator_call
np_result = model(np_image, output_type="np", detect_resolution=detect_resolution, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/__init__.py", line 44, in __call__
depth = self.model.infer_image(cv2.cvtColor(input_image, cv2.COLOR_RGB2BGR), input_size=518, max_depth=max_depth)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dpt.py", line 189, in infer_image
depth = self.forward(image, max_depth)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dpt.py", line 179, in forward
features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dinov2.py", line 308, in get_intermediate_layers
outputs = self._get_intermediate_layers_not_chunked(x, n)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dinov2.py", line 272, in _get_intermediate_layers_not_chunked
x = self.prepare_tokens_with_masks(x)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dinov2.py", line 214, in prepare_tokens_with_masks
x = self.patch_embed(x)
^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_controlnet_aux/depth_anything_v2/dinov2_layers/patch_embed.py", line 76, in forward
x = self.proj(x) # B C H W
^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/envs/ChangeX/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
RuntimeError: Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same

2024-12-18T15:28:34.666136 - Prompt executed in 9.03 seconds
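Both tracebacks (XPUFloatType vs. torch.FloatTensor in the original report, MPSFloatType vs. torch.FloatTensor here) point at the same root cause: the input tensor and the model weights end up on different devices when `F.conv2d` runs. A minimal workaround sketch, assuming the fix is applied where the image tensor is built (note: `run_depth_model` is a hypothetical helper, not code from this repo):

```python
import torch

def run_depth_model(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    # Move the input onto the same device as the model's weights before
    # forward(); this is the usual fix for
    # "Input type (...) and weight type (...) should be the same".
    device = next(model.parameters()).device
    return model(image.to(device))
```

In practice the equivalent change would belong inside the preprocessor code path (e.g. in `common_annotator_call` or `infer_image`), where the image tensor is created, so that it always matches the device the checkpoint was loaded onto.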
