Merge changes #162

Merged · 31 commits · Jun 19, 2024

Changes from all commits:

b1a2c0d · Expand Single File support in SD3 Pipeline (#8517) · DN6, Jun 13, 2024
614d0c6 · remove the deprecated prepare_mask_and_masked_image function (#8512) · yiyixuxu, Jun 13, 2024
8bea943 · Update requirements_sd3.txt (#8521) · haofanwang, Jun 13, 2024
2e4841e · post release 0.29.0 (#8492) · sayakpaul, Jun 13, 2024
9c6e968 · Refactor `StableDiffusion3Img2ImgPipeline` to remove redundant code (… · tolgacangoz, Jun 13, 2024
f96e4a1 · pin transformers to the latest (#8522) · sayakpaul, Jun 13, 2024
a899e42 · add `sentencepiece` to requirements.txt for SD3 dreambooth (#8538) · jorahn, Jun 14, 2024
130dd93 · pin accelerate to 0.31.0 (#8563) · sayakpaul, Jun 16, 2024
6946fac · Implement SD3 loss weighting (#8528) · Slickytail, Jun 16, 2024
8e1b7a0 · Fix the deletion of SD3 text encoders for Dreambooth/LoRA training if… · spacepxl, Jun 16, 2024
a6375d4 · Image processor latent (#8513) · yiyixuxu, Jun 17, 2024
c4a4750 · Temporarily pin Numpy in the CI (#8603) · DN6, Jun 17, 2024
eeb7003 · Syntax error in readme example "pipe" -> "pipeline" (#8601) · AmosDinh, Jun 17, 2024
6bfd13f · [SD3 Training] T5 token limit (#8564) · asomoza, Jun 17, 2024
074a7cc · SD3: update default training timestep / loss weighting distribution t… · bghira, Jun 18, 2024
4edde13 · [SD3 training] refactor the density and weighting utilities. (#8591) · sayakpaul, Jun 18, 2024
23a2cd3 · [LoRA] training fix the position of param casting when loading them (… · sayakpaul, Jun 18, 2024
d2b10b1 · [SD3] TAESD3 docs (#8607) · asomoza, Jun 18, 2024
f69511e · [Single File Loading] Handle unexpected keys in CLIP models when `acc… · DN6, Jun 18, 2024
10d3220 · A backslash is missing from the run command (#8471) · MaoXianXin, Jun 18, 2024
96399c3 · Fix sharding when no device_map is passed (#8531) · SunMarc, Jun 18, 2024
f3209b5 · [SD3 Inference] T5 Token limit (#8506) · asomoza, Jun 18, 2024
cd30820 · [Core] Add `shift_factor` to SD3 tiny autoencoder (#8618) · sayakpaul, Jun 18, 2024
d2e7a19 · Remove underlines between badges (#8484) · novialriptide, Jun 18, 2024
298ce67 · [LoRA] text encoder: read the ranks for all the attn modules (#8324) · elias-gaeros, Jun 18, 2024
34fab8b · [SD3 Docs] Corrected title about loading model with T5 "without" -> "… · RamosCSV, Jun 18, 2024
4408047 · self.upsample = Upsample1D (#8580) · kkj15dk, Jun 18, 2024
16170c6 · add sd1.5 compatibility to controlnet-xs and fix unused_parameters er… · SamMaoYS, Jun 18, 2024
3376252 · Fix gradient checkpointing issue for Stable Diffusion 3 (#8542) · RefractAI, Jun 18, 2024
2921a20 · [SD3] Fix mis-matched shape when num_images_per_prompt > 1 using with… · Dalanke, Jun 18, 2024
e5564d4 · Support SD3 ControlNet and Multi-ControlNet. (#8566) · wangqixun, Jun 19, 2024
20 changes: 5 additions & 15 deletions README.md
@@ -20,21 +20,11 @@ limitations under the License.
<br>
<p>
<p align="center">
-<a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
-<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue">
-</a>
-<a href="https://github.com/huggingface/diffusers/releases">
-<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
-</a>
-<a href="https://pepy.tech/project/diffusers">
-<img alt="GitHub release" src="https://static.pepy.tech/badge/diffusers/month">
-</a>
-<a href="CODE_OF_CONDUCT.md">
-<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg">
-</a>
-<a href="https://twitter.com/diffuserslib">
-<img alt="X account" src="https://img.shields.io/twitter/url/https/twitter.com/diffuserslib.svg?style=social&label=Follow%20%40diffuserslib">
-</a>
+<a href="https://github.com/huggingface/diffusers/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue"></a>
+<a href="https://github.com/huggingface/diffusers/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg"></a>
+<a href="https://pepy.tech/project/diffusers"><img alt="GitHub release" src="https://static.pepy.tech/badge/diffusers/month"></a>
+<a href="CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg"></a>
+<a href="https://twitter.com/diffuserslib"><img alt="X account" src="https://img.shields.io/twitter/url/https/twitter.com/diffuserslib.svg?style=social&label=Follow%20%40diffuserslib"></a>
</p>

🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
2 changes: 1 addition & 1 deletion docker/diffusers-doc-builder/Dockerfile
@@ -42,7 +42,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers \
2 changes: 1 addition & 1 deletion docker/diffusers-flax-cpu/Dockerfile
@@ -40,7 +40,7 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers
4 changes: 2 additions & 2 deletions docker/diffusers-flax-tpu/Dockerfile
@@ -41,8 +41,8 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
hf-doc-builder \
huggingface-hub \
Jinja2 \
-librosa \
-numpy \
+librosa \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers
2 changes: 1 addition & 1 deletion docker/diffusers-onnxruntime-cpu/Dockerfile
@@ -40,7 +40,7 @@ RUN python3 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers
2 changes: 1 addition & 1 deletion docker/diffusers-onnxruntime-cuda/Dockerfile
@@ -40,7 +40,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers
2 changes: 1 addition & 1 deletion docker/diffusers-pytorch-compile-cuda/Dockerfile
@@ -39,7 +39,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers
2 changes: 1 addition & 1 deletion docker/diffusers-pytorch-cpu/Dockerfile
@@ -40,7 +40,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers matplotlib
2 changes: 1 addition & 1 deletion docker/diffusers-pytorch-cuda/Dockerfile
@@ -39,7 +39,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers \
2 changes: 1 addition & 1 deletion docker/diffusers-pytorch-xformers-cuda/Dockerfile
@@ -39,7 +39,7 @@ RUN python3.10 -m pip install --no-cache-dir --upgrade pip uv==0.1.11 && \
huggingface-hub \
Jinja2 \
librosa \
-numpy \
+numpy==1.26.4 \
scipy \
tensorboard \
transformers \
4 changes: 4 additions & 0 deletions docs/source/en/_toctree.yml
@@ -253,6 +253,8 @@
title: PriorTransformer
- local: api/models/controlnet
title: ControlNetModel
+- local: api/models/controlnet_sd3
+title: SD3ControlNetModel
title: Models
- isExpanded: false
sections:
@@ -276,6 +278,8 @@
title: Consistency Models
- local: api/pipelines/controlnet
title: ControlNet
+- local: api/pipelines/controlnet_sd3
+title: ControlNet with Stable Diffusion 3
- local: api/pipelines/controlnet_sdxl
title: ControlNet with Stable Diffusion XL
- local: api/pipelines/controlnetxs
2 changes: 2 additions & 0 deletions docs/source/en/api/loaders/single_file.md
@@ -35,6 +35,7 @@ The [`~loaders.FromSingleFileMixin.from_single_file`] method allows you to load:
- [`StableDiffusionXLInstructPix2PixPipeline`]
- [`StableDiffusionXLControlNetPipeline`]
- [`StableDiffusionXLKDiffusionPipeline`]
+- [`StableDiffusion3Pipeline`]
- [`LatentConsistencyModelPipeline`]
- [`LatentConsistencyModelImg2ImgPipeline`]
- [`StableDiffusionControlNetXSPipeline`]
@@ -49,6 +50,7 @@ The [`~loaders.FromSingleFileMixin.from_single_file`] method allows you to load:
- [`StableCascadeUNet`]
- [`AutoencoderKL`]
- [`ControlNetModel`]
+- [`SD3Transformer2DModel`]

## FromSingleFileMixin

42 changes: 42 additions & 0 deletions docs/source/en/api/models/controlnet_sd3.md
@@ -0,0 +1,42 @@
<!--Copyright 2024 The HuggingFace Team and The InstantX Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# SD3ControlNetModel

SD3ControlNetModel is an implementation of ControlNet for Stable Diffusion 3.

The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

## Loading from the original format

By default, [`SD3ControlNetModel`] should be loaded with [`~ModelMixin.from_pretrained`].

```py
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel

controlnet = SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny")
pipe = StableDiffusion3ControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", controlnet=controlnet)
```
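
The snippet above only constructs the model. As a minimal end-to-end sketch, assuming a CUDA device and a local edge map at `canny_map.png` (the prompt and the conditioning scale are likewise illustrative), the loaded ControlNet can be passed to [`StableDiffusion3ControlNetPipeline`]:

```py
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# "canny_map.png" is a placeholder for your own edge-detected conditioning image.
control_image = load_image("canny_map.png")
image = pipe(
    "a cat sitting on a windowsill",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("sd3_controlnet.png")
```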

## SD3ControlNetModel

[[autodoc]] SD3ControlNetModel

## SD3ControlNetOutput

[[autodoc]] models.controlnet_sd3.SD3ControlNetOutput

39 changes: 39 additions & 0 deletions docs/source/en/api/pipelines/controlnet_sd3.md
@@ -0,0 +1,39 @@
<!--Copyright 2024 The HuggingFace Team and The InstantX Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# ControlNet with Stable Diffusion 3

StableDiffusion3ControlNetPipeline is an implementation of ControlNet for Stable Diffusion 3.

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.

The abstract from the paper is:

*We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.*

This code is implemented by [The InstantX Team](https://huggingface.co/InstantX). You can find pre-trained SD3-ControlNet checkpoints on the [InstantX Hub profile](https://huggingface.co/InstantX).

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>
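
This PR also adds [`SD3MultiControlNetModel`] so that several ControlNets can condition a single generation. A minimal sketch, assuming the `InstantX/SD3-Controlnet-Pose` checkpoint and local control images `canny.png` and `pose.png` exist; control images and conditioning scales are passed as per-ControlNet lists:

```py
import torch
from diffusers import StableDiffusion3ControlNetPipeline
from diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel
from diffusers.utils import load_image

# Wrap two ControlNets in an SD3MultiControlNetModel.
controlnet = SD3MultiControlNetModel([
    SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16),
    SD3ControlNetModel.from_pretrained("InstantX/SD3-Controlnet-Pose", torch_dtype=torch.float16),
])
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# One control image and one conditioning scale per ControlNet (paths are placeholders).
control_images = [load_image("canny.png"), load_image("pose.png")]
image = pipe(
    "a dancer on a rooftop at sunset",
    control_image=control_images,
    controlnet_conditioning_scale=[0.6, 0.8],
).images[0]
image.save("sd3_multi_controlnet.png")
```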

## StableDiffusion3ControlNetPipeline
[[autodoc]] StableDiffusion3ControlNetPipeline
- all
- __call__

## StableDiffusion3PipelineOutput
[[autodoc]] pipelines.stable_diffusion_3.pipeline_output.StableDiffusion3PipelineOutput
@@ -21,9 +21,9 @@ The abstract from the paper is:

## Usage Example

-_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form and accept the gate. Once you are in, you need to login so that your system knows you’ve accepted the gate._
+_As the model is gated, before using it with diffusers you first need to go to the [Stable Diffusion 3 Medium Hugging Face page](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), fill in the form and accept the gate. Once you are in, you need to login so that your system knows you’ve accepted the gate._

-Use the command below to log in:
+Use the command below to log in:

```bash
huggingface-cli login
@@ -197,6 +197,27 @@ image.save("sd3_hello_world.png")

Check out the full script [here](https://gist.github.com/sayakpaul/508d89d7aad4f454900813da5d42ca97).

+## Tiny AutoEncoder for Stable Diffusion 3
+
+Tiny AutoEncoder for Stable Diffusion 3 (TAESD3) is a tiny distilled version of Stable Diffusion 3's VAE by [Ollin Boer Bohan](https://github.com/madebyollin/taesd) that can decode [`StableDiffusion3Pipeline`] latents almost instantly.
+
+To use with Stable Diffusion 3:
+
+```python
+import torch
+from diffusers import StableDiffusion3Pipeline, AutoencoderTiny
+
+pipe = StableDiffusion3Pipeline.from_pretrained(
+    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
+)
+pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd3", torch_dtype=torch.float16)
+pipe = pipe.to("cuda")
+
+prompt = "slice of delicious New York-style berry cheesecake"
+image = pipe(prompt, num_inference_steps=25).images[0]
+image.save("cheesecake.png")
+```
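
TAESD3 trades a small amount of decoding fidelity for speed and memory, which makes it well suited to fast previews; note that commit cd30820 in this PR adds the `shift_factor` the tiny autoencoder needs to match the SD3 latent distribution.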

## Loading the original checkpoints via `from_single_file`

The `SD3Transformer2DModel` and `StableDiffusion3Pipeline` classes support loading the original checkpoints via the `from_single_file` method. This method allows you to load the original checkpoint files that were used to train the models.
@@ -211,17 +232,38 @@ model = SD3Transformer2DModel.from_single_file("https://huggingface.co/stability

## Loading the single checkpoint for the `StableDiffusion3Pipeline`

+### Loading the single file checkpoint without T5

```python
import torch
from diffusers import StableDiffusion3Pipeline
-from transformers import T5EncoderModel

-text_encoder_3 = T5EncoderModel.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", subfolder="text_encoder_3", torch_dtype=torch.float16)
-pipe = StableDiffusion3Pipeline.from_single_file("https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors", torch_dtype=torch.float16, text_encoder_3=text_encoder_3)
+pipe = StableDiffusion3Pipeline.from_single_file(
+    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors",
+    torch_dtype=torch.float16,
+    text_encoder_3=None
+)
pipe.enable_model_cpu_offload()

image = pipe("a picture of a cat holding a sign that says hello world").images[0]
image.save('sd3-single-file.png')
```

-<Tip>
-`from_single_file` support for the `fp8` version of the checkpoints is coming soon. Watch this space.
-</Tip>
+### Loading the single file checkpoint with T5

+```python
+import torch
+from diffusers import StableDiffusion3Pipeline
+
+pipe = StableDiffusion3Pipeline.from_single_file(
+    "https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips_t5xxlfp8.safetensors",
+    torch_dtype=torch.float16,
+)
+pipe.enable_model_cpu_offload()
+
+image = pipe("a picture of a cat holding a sign that says hello world").images[0]
+image.save('sd3-single-file-t5-fp8.png')
+```

## StableDiffusion3Pipeline

4 changes: 2 additions & 2 deletions docs/source/en/training/controlnet.md
@@ -349,7 +349,7 @@ control_image = load_image("./conditioning_image_1.png")
prompt = "pale golden rod circle with old lace background"

generator = torch.manual_seed(0)
-image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
+image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0]
image.save("./output.png")
```

@@ -363,4 +363,4 @@ The SDXL training script is discussed in more detail in the [SDXL training](sdxl

Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful:

-- Learn how to [use a ControlNet](../using-diffusers/controlnet) for inference on a variety of tasks.
+- Learn how to [use a ControlNet](../using-diffusers/controlnet) for inference on a variety of tasks.
2 changes: 1 addition & 1 deletion docs/source/en/training/text2image.md
@@ -181,7 +181,7 @@ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
---enable_xformers_memory_efficient_attention
+--enable_xformers_memory_efficient_attention \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-naruto-model" \
--push_to_hub
@@ -71,7 +71,7 @@


# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)
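
These hunks simply bump the examples' minimum-version guard after the 0.29.0 release. As a rough sketch of the equivalent check (`require_min_diffusers` is a hypothetical helper, not the library's actual implementation):

```python
from packaging import version

import diffusers


def require_min_diffusers(min_version: str) -> None:
    # Fail fast if the installed diffusers is older than the examples expect.
    if version.parse(diffusers.__version__) < version.parse(min_version):
        raise ImportError(
            f"These example scripts require diffusers>={min_version}, "
            f"but version {diffusers.__version__} is installed."
        )


require_min_diffusers("0.30.0.dev0")
```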

@@ -78,7 +78,7 @@


# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)

2 changes: 1 addition & 1 deletion examples/community/marigold_depth_estimation.py
@@ -43,7 +43,7 @@


# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")


class MarigoldDepthOutput(BaseOutput):
@@ -73,7 +73,7 @@
import wandb

# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)

@@ -66,7 +66,7 @@
import wandb

# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)

@@ -79,7 +79,7 @@
import wandb

# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)

@@ -72,7 +72,7 @@
import wandb

# Will error if the minimal version of diffusers is not installed. Remove at your own risks.
-check_min_version("0.29.0.dev0")
+check_min_version("0.30.0.dev0")

logger = get_logger(__name__)
