When I try image-to-video generation with the V4 model, the process gets killed. I have an RTX 3060 GPU. The full console output is below:
sathvik@sathvik:/mnt/c/Users/pasun/OneDrive/Desktop/AI/EasyAnimate$ python3 app.py
/home/sathvik/.local/lib/python3.10/site-packages/gradio/components/dropdown.py:188: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: none or set allow_custom_value=True.
warnings.warn(
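This first warning is cosmetic: the dropdown's initial value is not one of its declared choices. A minimal sketch of the two remedies Gradio suggests (the component name and choices here are hypothetical; the real dropdown is defined in app.py and is not shown in the log):

```python
import gradio as gr

with gr.Blocks() as demo:
    # Hypothetical dropdown mirroring the warning: if the initial value "none"
    # were missing from `choices`, Gradio would complain at construction time.
    lora_dropdown = gr.Dropdown(
        label="LoRA model (hypothetical name)",
        choices=["none", "my_lora.safetensors"],  # remedy 1: include "none" in choices
        value="none",
        allow_custom_value=True,                  # remedy 2: allow values outside the list
    )
```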
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
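That hint is Gradio's standard startup message; if a public URL is actually wanted, it corresponds to a single argument. A hedged sketch, not EasyAnimate's actual UI code:

```python
import gradio as gr

# `demo` stands in for whatever Blocks/Interface object app.py builds.
with gr.Blocks() as demo:
    gr.Markdown("placeholder UI")

# share=True asks Gradio for a temporary public *.gradio.live link
# instead of serving only on 127.0.0.1.
demo.launch(share=True)
```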
Update diffusion transformer
missing keys: 0;
unexpected keys: 56;
[] ['loss.discriminator.main.0.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.11.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.3.num_batches_tracked', 'loss.discriminator.main.3.running_mean', 'loss.discriminator.main.3.running_var', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.6.num_batches_tracked', 'loss.discriminator.main.6.running_mean', 'loss.discriminator.main.6.running_var', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.9.num_batches_tracked', 'loss.discriminator.main.9.running_mean', 'loss.discriminator.main.9.running_var', 'loss.discriminator.main.9.weight', 'loss.logvar', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice3.14.weight', 'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.scaling_layer.scale', 'loss.perceptual_loss.scaling_layer.shift']
loaded 3D transformer's pretrained weights from /mnt/c/Users/pasun/OneDrive/Desktop/AI/EasyAnimate/models/Diffusion_Transformer/EasyAnimateV4-XL-2-InP/transformer ...
missing keys: 0;
unexpected keys: 0;
[]
Mamba Parameters: 0.0 M
attn1 Parameters: 317.4336 M
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.11s/it]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████| 7/7 [00:08<00:00, 1.18s/it]
You have disabled the safety checker for <class 'easyanimate.pipeline.pipeline_easyanimate_multi_text_encoder_inpaint.EasyAnimatePipeline_Multi_Text_Encoder_Inpaint'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254.
Update diffusion transformer done
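The safety-checker notice is informational: app.py builds the pipeline with safety_checker=None. For illustration only (this uses a stock diffusers pipeline and a placeholder model path, not EasyAnimate's class), the loading pattern that produces such a notice looks like:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustration only: a stock diffusers pipeline and a placeholder path.
# Passing safety_checker=None at load time is what triggers the notice above.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/any-stable-diffusion-checkpoint",  # placeholder path
    safety_checker=None,
    torch_dtype=torch.float16,
)
```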
/home/sathvik/.local/lib/python3.10/site-packages/diffusers/configuration_utils.py:140: FutureWarning: Accessing config attribute vae_latent_channels directly via 'VaeImageProcessor' object attribute is deprecated. Please access 'vae_latent_channels' over 'VaeImageProcessor's config object instead, e.g. 'scheduler.config.vae_latent_channels'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
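This FutureWarning comes from diffusers itself, not from EasyAnimate's inputs; it only says that config attributes should be read through the object's .config. A minimal sketch of the two access paths, assuming a recent diffusers version whose VaeImageProcessor accepts vae_latent_channels:

```python
from diffusers.image_processor import VaeImageProcessor

# Minimal sketch of the deprecation seen in the log (assumes a diffusers
# version whose VaeImageProcessor takes vae_latent_channels).
processor = VaeImageProcessor(vae_latent_channels=4)

old_style = processor.vae_latent_channels         # still works, but emits the FutureWarning
new_style = processor.config.vae_latent_channels  # the access path the warning recommends
```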
100%|███████████████████████████████████████████████████████████████████████████████████| 30/30 [01:25<00:00, 2.87s/it]
100%|███████████████████████████████████████████████████████████████████████████████████| 30/30 [26:54<00:00, 53.80s/it]
/home/sathvik/.local/lib/python3.10/site-packages/diffusers/configuration_utils.py:140: FutureWarning: Accessing config attribute vae_latent_channels directly via 'VaeImageProcessor' object attribute is deprecated. Please access 'vae_latent_channels' over 'VaeImageProcessor's config object instead, e.g. 'scheduler.config.vae_latent_channels'.
  deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
Killed
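A bare "Killed" with no Python traceback means the Linux kernel's OOM killer terminated the process because system RAM ran out, not that CUDA VRAM overflowed (that would raise torch.cuda.OutOfMemoryError instead). Under WSL2 (the /mnt/c/... path indicates WSL), the Linux VM only gets a capped fraction of the host's RAM by default, which can be raised via the memory= setting in .wslconfig in the Windows user profile. On the software side, a hedged sketch of the standard diffusers memory-saving switches; whether EasyAnimate's pipeline object supports each call is an assumption, since the exact construction in app.py is not shown in the log:

```python
import torch
from diffusers import DiffusionPipeline

# Hedged sketch: a generic diffusers pipeline and a placeholder path, standing in
# for the EasyAnimatePipeline_Multi_Text_Encoder_Inpaint object that app.py builds.
pipe = DiffusionPipeline.from_pretrained(
    "path/to/any-diffusers-pipeline",   # placeholder, not the EasyAnimate folder layout
    torch_dtype=torch.float16,          # halves weight memory relative to float32
)

# Standard switches that trade speed for lower peak memory, if the pipeline supports them.
pipe.enable_model_cpu_offload()         # keep only the active sub-model on the GPU
# pipe.enable_sequential_cpu_offload()  # lower peak memory still, but much slower
if hasattr(pipe, "vae") and hasattr(pipe.vae, "enable_tiling"):
    pipe.vae.enable_tiling()            # decode latents tile by tile
    pipe.vae.enable_slicing()           # decode batch slices one at a time
```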