[Bug]: API call for SD1.5 with animatediff failed after a simple txt2img API call for SDXL without animatediff #544

Ralphhtt opened this issue Jul 24, 2024 · 0 comments

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

After a txt2img API call for an SDXL model without AnimateDiff, a subsequent txt2img API call for an SD1.5 model with AnimateDiff fails with:
AssertionError: Motion module incompatible with SD. You are using SDXL with MotionModuleType.AnimateDiffV3.
The returned GIF is a static image, not an animation.
Retrying the same txt2img API call with AnimateDiff for the SD1.5 model then works properly.

Steps to reproduce the problem

  1. API call of txt2img without AnimateDiff for an SDXL model (e.g. sd_xl_base_1.0.safetensors [31e35c80fc]), with params:

    {
      "params": {
        "override_settings": {
          "sd_model_checkpoint": "sd_xl_base_1.0.safetensors"
        },
        "override_settings_restore_afterwards": true,
        "prompt": [
          "..."
        ],
        "negative_prompt": [
          "..."
        ],
        "steps": 25,
        "count": 1,
        "sampler_name": "DPM++ 2M Karras",
        "seed": "-1",
        "cfg_scale": 8,
        "restore_faces": false,
        "width": 768,
        "height": 1024
      }
    }

    A picture generated by SDXL is returned properly.

  2. API call of txt2img with AnimateDiff for an SD1.5 model (e.g. v1-5-pruned.ckpt [e1441589a6]), with params:

    {
      "params": {
        "override_settings": {
          "sd_model_checkpoint": "v1-5-pruned.ckpt"
        },
        "prompt": [
          "..."
        ],
        "negative_prompt": [
          "..."
        ],
        "steps": 25,
        "count": 1,
        "sampler_name": "DPM++ 2M Karras",
        "seed": "-1",
        "cfg_scale": 8,
        "restore_faces": false,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
          "AnimateDiff": {
            "args": [{
              "enable": true,                      // enable AnimateDiff
              "video_length": 16,                  // video frame number, 0-24 for v1 and 0-32 for v2
              "format": ["GIF"],                   // 'GIF' | 'MP4' | 'PNG' | 'TXT'
              "loop_number": 0,                    // 0 = infinite loop
              "fps": 8,                            // frames per second
              "model": "mm_sd15_v3.safetensors"    // motion module name
            }]
          }
        }
      }
    }

    This fails with:
    AssertionError: Motion module incompatible with SD. You are using SDXL with MotionModuleType.AnimateDiffV3.

    A static picture generated by SD1.5 is returned, not an animation.

  3. Retry the same txt2img API call with AnimateDiff for the SD1.5 model, with the same params as step 2.
    An animated GIF is returned.
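The three steps above can be reproduced with a short script. A minimal sketch using only the standard library, assuming a local WebUI started with --api at http://127.0.0.1:7860; the base URL and the "..." prompts are placeholders, and the payloads mirror the params shown above:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:7860"  # assumed WebUI --api address


def sdxl_payload():
    # Step 1: plain txt2img on the SDXL checkpoint, no AnimateDiff
    return {
        "override_settings": {"sd_model_checkpoint": "sd_xl_base_1.0.safetensors"},
        "override_settings_restore_afterwards": True,
        "prompt": "...",
        "negative_prompt": "...",
        "steps": 25,
        "sampler_name": "DPM++ 2M Karras",
        "cfg_scale": 8,
        "width": 768,
        "height": 1024,
    }


def sd15_animatediff_payload():
    # Steps 2 and 3: txt2img on the SD1.5 checkpoint with AnimateDiff enabled
    return {
        "override_settings": {"sd_model_checkpoint": "v1-5-pruned.ckpt"},
        "prompt": "...",
        "negative_prompt": "...",
        "steps": 25,
        "sampler_name": "DPM++ 2M Karras",
        "cfg_scale": 8,
        "width": 512,
        "height": 512,
        "alwayson_scripts": {
            "AnimateDiff": {
                "args": [{
                    "enable": True,
                    "video_length": 16,
                    "format": ["GIF"],
                    "loop_number": 0,
                    "fps": 8,
                    "model": "mm_sd15_v3.safetensors",
                }]
            }
        },
    }


def txt2img(payload):
    # POST the payload to the txt2img endpoint and return the decoded JSON response
    req = urllib.request.Request(
        f"{BASE}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    txt2img(sdxl_payload())               # step 1: SDXL image, works
    txt2img(sd15_animatediff_payload())   # step 2: AnimateDiff injection fails, static GIF
    txt2img(sd15_animatediff_payload())   # step 3: identical call now yields an animation
```

Running the second call immediately after the first reproduces the assertion error; the third (identical) call succeeds, which suggests the extension checks the still-loaded SDXL model before the SD1.5 checkpoint swap from override_settings has taken effect.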

What should have happened?

Step 2 should generate the animation properly.

Commit where the problem happens

webui: v1.8.0
extension: animatediff v2.0.1-a

What browsers do you use to access the UI ?

No response

Command Line Arguments

API call of txt2img, /sdapi/v1/txt2img

Console logs

Loading weights [31e35c80fc] from /workspace/stable-diffusion-webui/models/Stable-diffusion/sd_xl_base_1.0.safetensors
Creating model from config: /workspace/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Applying attention optimization: sdp... done.
Model loaded in 2.8s (create model: 0.6s, apply weights to model: 1.8s).
Couldn't find VAE named ; using None instead
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00,  3.69it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00,  3.69it/s]

================ HERE IS THE End of Step 1 (Added by reporter)

2024-07-24 06:18:27,189 - AnimateDiff - INFO - AnimateDiff process start.
*** Error running before_process: /workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py
    Traceback (most recent call last):
      File "/workspace/stable-diffusion-webui/modules/scripts.py", line 776, in before_process
        script.before_process(p, *script_args)
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 75, in before_process
        motion_module.inject(p.sd_model, params.model)
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_mm.py", line 71, in inject
        assert sd_model.is_sdxl == self.mm.is_xl, f"Motion module incompatible with SD. You are using {sd_ver} with {self.mm.mm_type}."
    AssertionError: Motion module incompatible with SD. You are using SDXL with MotionModuleType.AnimateDiffV3.

---
Reusing loaded model sd_xl_base_1.0.safetensors [31e35c80fc] to load v1-5-pruned.ckpt [e1441589a6]
Loading weights [e1441589a6] from /workspace/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt
Creating model from config: /workspace/stable-diffusion-webui/configs/v1-inference.yaml
Applying attention optimization: sdp... done.
Model loaded in 2.1s (create model: 0.8s, apply weights to model: 1.0s).
Loading VAE weights specified in settings: /workspace/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sdp... done.
VAE weights loaded.
  0%|                                                                                                                                                                                         | 0/25 [00:00<?, ?it/s]2024-07-24 06:18:35,187 - AnimateDiff - WARNING - No motion module detected, falling back to the original forward. You are most likely using !Adetailer. !Adetailer post-process your outputs sequentially, and there will NOT be motion module in your UNet, so there might be NO temporal consistency within the inpainted face. Use at your own risk. If you really want to pursue inpainting with AnimateDiff inserted into UNet, use Segment Anything to generate masks for each frame and inpaint them with AnimateDiff + ControlNet. Note that my proposal might be good or bad, do your own research to figure out the best way.
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:01<00:00, 14.18it/s]
*** Error running postprocess_batch_list: /workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py
    Traceback (most recent call last):
      File "/workspace/stable-diffusion-webui/modules/scripts.py", line 832, in postprocess_batch_list
        script.postprocess_batch_list(p, pp, *script_args, **kwargs)
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 95, in postprocess_batch_list
        params.prompt_scheduler.save_infotext_img(p)
    AttributeError: 'NoneType' object has no attribute 'save_infotext_img'

---
*** Error running postprocess: /workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py
    Traceback (most recent call last):
      File "/workspace/stable-diffusion-webui/modules/scripts.py", line 816, in postprocess
        script.postprocess(p, processed, *script_args)
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 105, in postprocess
        params.prompt_scheduler.save_infotext_txt(res)
    AttributeError: 'NoneType' object has no attribute 'save_infotext_txt'

---
Restoring base VAE
Applying attention optimization: sdp... done.
VAE weights loaded.
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:02<00:00,  9.34it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:02<00:00, 14.51it/s]

================ HERE IS THE End of Step 2 (Added by reporter)

2024-07-24 07:21:47,647 - AnimateDiff - INFO - AnimateDiff process start.
2024-07-24 07:21:47,683 - AnimateDiff - INFO - Injecting motion module mm_sd15_v3.safetensors into SD1.5 UNet input blocks.
2024-07-24 07:21:47,683 - AnimateDiff - INFO - Injecting motion module mm_sd15_v3.safetensors into SD1.5 UNet output blocks.
2024-07-24 07:21:47,684 - AnimateDiff - INFO - Setting DDIM alpha.
2024-07-24 07:21:47,707 - AnimateDiff - INFO - Injection finished.
2024-07-24 07:21:47,708 - AnimateDiff - INFO - AnimateDiff + ControlNet will generate 16 frames.
Loading VAE weights specified in settings: /workspace/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: sdp... done.
VAE weights loaded.
  0%|                                                                                                                                                                                         | 0/25 [00:00<?, ?it/s]2024-07-24 07:21:49,734 - AnimateDiff - INFO - inner model forward hooked
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:24<00:00,  1.02it/s]
2024-07-24 07:22:18,096 - AnimateDiff - INFO - Restoring DDIM alpha.
2024-07-24 07:22:18,096 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2024-07-24 07:22:18,097 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2024-07-24 07:22:18,097 - AnimateDiff - INFO - Removal finished.
2024-07-24 07:22:18,098 - AnimateDiff - INFO - Saving output formats: GIF
2024-07-24 07:22:21,903 - AnimateDiff - INFO - AnimateDiff process end.
Restoring base VAE
Applying attention optimization: sdp... done.
VAE weights loaded.
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:32<00:00,  1.28s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:32<00:00,  1.01it/s]

================ HERE IS THE End of Step 3 (Added by reporter)

Additional information

No response
