[Bug]: IndexError: index 25 is out of bounds for dimension 0 with size 16 #527

Open
mxtilx opened this issue May 26, 2024 · 0 comments

mxtilx commented May 26, 2024

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

IndexError: index 46 is out of bounds for dimension 0 with size 24

Steps to reproduce the problem

  1. Generate an image with txt2img
  2. Send it to img2img
  3. Enable AnimateDiff
  4. Start generation
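
For reference, the same workflow can also be driven over the webui API (the launch arguments below already include --api). The sketch below is only an assumption-laden illustration: the endpoint and top-level fields are the standard A1111 API, but the AnimateDiff argument names are a guess at the extension's API format and may need adjusting for the installed version.

    # Hypothetical reproduction of the workflow via the webui API (requires --api).
    # The AnimateDiff "args" field names are assumptions and may differ per version.
    import base64
    import requests

    with open("txt2img_output.png", "rb") as f:  # the image produced in step 1
        init_image = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [init_image],
        "prompt": "a girl, animal ears, blue eyes",
        "steps": 20,
        "denoising_strength": 0.75,
        "alwayson_scripts": {
            "AnimateDiff": {  # script title as registered by the extension (assumed)
                "args": [{"enable": True, "video_length": 24, "model": "mm_sd15_v2.safetensors"}]
            }
        },
    }

    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    print(resp.status_code)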

What should have happened?

It has never succeeded, so I don't know what should happen.

Commit where the problem happens

webui:
AUTOMATIC1111/stable-diffusion-webui@1c0a0c4
extension:
(extension commit shown in an attached screenshot; hash not recoverable as text)

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

Launching Web UI with arguments: --use-cpu all --skip-torch-cuda-test --theme dark --precision full --no-half --no-half-vae --disable-nan-check --api --autolaunch --allow-code --skip-version-check --skip-python-version-check

Console logs

IIB Database file has been successfully backed up to the backup folder.
Startup time: 113.3s (prepare environment: 75.7s, import torch: 13.1s, import gradio: 2.0s, setup paths: 1.2s, initialize shared: 1.0s, other imports: 1.4s, load scripts: 9.2s, create ui: 4.9s, gradio launch: 2.8s, add APIs: 1.5s, app_started_callback: 0.3s).

0: 640x640 1 face, 179.3ms
Speed: 7.1ms preprocess, 179.3ms inference, 5.0ms postprocess per image at shape (1, 3, 640, 640)
2024-05-26 23:10:44,886 - AnimateDiff - INFO - AnimateDiff process start.
2024-05-26 23:10:44,887 - AnimateDiff - INFO - Loading motion module mm_sd15_v2.safetensors from C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\extensions\sd-webui-animatediff\model\mm_sd15_v2.safetensors
2024-05-26 23:10:44,924 - AnimateDiff - INFO - Guessed mm_sd15_v2.safetensors architecture: MotionModuleType.AnimateDiffV2
2024-05-26 23:10:48,749 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet middle block.
2024-05-26 23:10:48,750 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet input blocks.
2024-05-26 23:10:48,750 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet output blocks.
2024-05-26 23:10:48,750 - AnimateDiff - INFO - Setting DDIM alpha.
2024-05-26 23:10:48,755 - AnimateDiff - INFO - Injection finished.
2024-05-26 23:10:48,756 - AnimateDiff - INFO - AnimateDiff + ControlNet will generate 24 frames.
2024-05-26 23:11:58,345 - AnimateDiff - INFO - Randomizing init_latent according to [1.0, 0.96875, 0.9375, 0.90625, 0.875, 0.84375, 0.8125, 0.78125, 0.75, 0.71875, 0.6875, 0.65625, 0.625, 0.59375, 0.5625, 0.53125, 0.5, 0.46875, 0.4375, 0.40625, 0.375, 0.34375, 0.3125, 0.28125].
2024-05-26 23:11:58,780 - AnimateDiff - INFO - inner model forward hooked
*** Error completing request
*** Arguments: ('task(c2rp4suemj3g7t2)', <gradio.routes.Request object at 0x00000250D7E0DD20>, 0, 'a girl,animal ears,blue eyes,hair ornament,looking at viewer,long hair,bangs,hair between eyes,off shoulder,hair flower,animal ear fluff,blush,white shirt,bare shoulders,shirt,upper body,blurry background,multiple views,jewelry,earrings,hairclip,closed mouth,sailor collar,blue hair,blurry,flower,white hair,looking back,extra ears,cat ears,parted lips,outdoors,', 'nsfw,sketches,worst quality,low quality,normal quality,lowres,watermark,monochrome,grayscale,ugly,blurry,Tan skin,dark skin,black skin,skin spots,skin blemishes,age spot,glans,disabled,distorted,bad anatomy,morbid,malformation,amputation,bad proportions,twins,missing body,fused body,extra head,poorly drawn face,bad eyes,deformed eye,unclear eyes,cross-eyed,long neck,malformed limbs,extra limbs,extra arms,missing arms,bad tongue,strange fingers,mutated hands,missing hands,poorly drawn hands,extra hands,fused hands,connected hand,bad hands,wrong fingers,missing fingers,extra fingers,4 fingers,3 fingers,deformed hands,extra legs,bad legs,many legs,more than two legs,bad feet,wrong feet,extra feets,', [], <PIL.Image.Image image mode=RGBA size=512x512 at 0x250D7DA4F70>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 0, False, 1, 0.5, 4, 0, 0.5, 2, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 
'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tap_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 'DemoFusion', False, 128, 64, 4, 2, False, 10, 1, 1, 64, False, True, 3, 1, 1, True, 0.85, 0.6, 4, False, False, 3072, 192, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000250D7E0F910>, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], 
batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=False, module='none', model='None', weight=1.0, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=-1, threshold_a=-1.0, threshold_b=-1.0, guidance_start=0.0, guidance_end=1.0, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), '从 modules.processing import process_images\n\np.宽度 = 768\np.高度 = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\img2img.py", line 232, in img2img
        processed = process_images(p)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 48, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\processing.py", line 1741, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\sd_samplers_kdiffusion.py", line 172, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\sd_samplers_kdiffusion.py", line 172, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\Administrator\Desktop\sd-webui-aki-v4.3\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 164, in mm_sd_forward
        x_in[_context], sigma_in[_context],
    IndexError: index 46 is out of bounds for dimension 0 with size 24

Additional information

sysinfo-2024-05-26-14-58.json
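
For what it's worth, the traceback points at animatediff_infv2v.py slicing the sampler batch with per-window frame indices (x_in[_context], sigma_in[_context]), and the error suggests the index list was built for more entries than the latent batch actually holds (index 46 vs. size 24). Below is a minimal, hypothetical sketch of that kind of mismatch; the shapes and index values are made up for illustration and are not taken from the extension's code.

    # Hypothetical illustration of the mismatch in the traceback: a context index
    # list built for a larger batch is used to slice a 24-frame latent tensor.
    import torch

    num_frames = 24
    x_in = torch.randn(num_frames, 4, 64, 64)   # latent batch, one entry per frame
    sigma_in = torch.randn(num_frames)

    _context = list(range(40, 48))              # window indices that assume 48 entries

    try:
        x_slice, sigma_slice = x_in[_context], sigma_in[_context]
    except IndexError as e:
        print(e)  # prints something like "index 40 is out of bounds for dimension 0 with size 24"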

@mxtilx mxtilx changed the title [Bug]: [Bug]: IndexError: index 25 is out of bounds for dimension 0 with size 16 May 26, 2024