OneDiff is an out-of-the-box acceleration library for diffusion models. It provides:
- PyTorch module compilation tools and highly optimized GPU kernels for diffusion models
- Out-of-the-box acceleration for popular UIs/libs
OneDiff is the abbreviation of "one line of code to accelerate diffusion models".
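In practice, that one line is a compile call wrapped around an existing pipeline. Below is a minimal sketch, assuming the `onediffx` extension package and its `compile_pipe` helper (names taken from this repo's extensions; verify against your installed version) plus an NVIDIA GPU with onediff/oneflow installed:

```python
# A hedged sketch of the "one line of code" usage with Diffusers.
# `onediffx` and `compile_pipe` are assumptions based on this repo's
# extensions; requires an NVIDIA GPU and the packages installed.
try:
    import torch
    from diffusers import StableDiffusionXLPipeline
    from onediffx import compile_pipe

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe = compile_pipe(pipe)  # the one line that enables acceleration
    image = pipe("a photo of a cat", num_inference_steps=30).images[0]
    image.save("cat.png")
except Exception as exc:  # no GPU or packages: snippet is illustrative only
    print(f"skipping demo: {exc}")
```

The rest of the pipeline code stays unchanged, which is what makes the integration plug-and-play.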
The latest news:
- OneDiff 1.0 is out! (Acceleration of SD & SVD with one line of code)
- Accelerating Stable Video Diffusion 3x faster with OneDiff DeepCache + Int8
- Accelerating SDXL 3x faster with DeepCache and OneDiff
Here is an introduction to the OneDiff community:
- Create an issue
- Chat in Discord:
- Email for Enterprise Edition or other business inquiries: [email protected]
- Linux
- If you want to use OneDiff on Windows, please use it under WSL.
- NVIDIA GPUs
The Full Introduction of OneDiff:
OneDiff interfaces with various front-end Stable Diffusion frameworks above it, and uses a custom virtual machine mixed with PyTorch as the inference engine below.
- Model stabilityai/stable-diffusion-xl-base-1.0;
- Image size 1024*1024, batch size 1, steps 30;
- NVIDIA A100 80G SXM4;
- Model stabilityai/stable-video-diffusion-img2vid-xt;
- Image size 576*1024, batch size 1, steps 25, decoder chunk size 5;
- NVIDIA A100 80G SXM4;
Note: as of Feb 29, 2024, we had not found a way to run SVD with TensorRT.
Main Function | Details |
---|---|
Compiling Time | About 1 minute (SDXL) |
Deployment Methods | Plug and Play |
Dynamic Image Size Support | Support with no overhead |
Model Support | SD1.5~2.1, SDXL, SDXL Turbo, etc. |
Algorithm Support | SD standard workflow, LoRA, ControlNet, SVD, InstantID, SDXL Lightning, etc. |
SD Framework Support | ComfyUI, Diffusers, SD-webui |
Save & Load Accelerated Models | Yes |
Time of LoRA Switching | Hundreds of milliseconds |
LoRA Occupancy | Tens of MB to hundreds of MB |
Device Support | NVIDIA GPU 3090 RTX/4090 RTX/A100/A800/A10 etc. (Compatibility with Ascend in progress) |
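The sub-second LoRA switching listed above can be sketched as follows, assuming the `onediffx.lora` helpers `load_and_fuse_lora` / `unfuse_lora` (names may differ across versions; the LoRA repo IDs are placeholders) and a working GPU setup:

```python
# Hypothetical sketch of fast LoRA switching on a compiled pipeline.
# Helper names and LoRA IDs are assumptions; check your onediff version.
try:
    import torch
    from diffusers import StableDiffusionXLPipeline
    from onediffx import compile_pipe
    from onediffx.lora import load_and_fuse_lora, unfuse_lora

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe = compile_pipe(pipe)

    # Fuse a LoRA without recompiling, generate, then switch to another one.
    load_and_fuse_lora(pipe, "some-user/some-lora", lora_scale=1.0)
    pipe("a sketch of a castle", num_inference_steps=30)
    unfuse_lora(pipe)
    load_and_fuse_lora(pipe, "some-user/another-lora", lora_scale=1.0)
except Exception as exc:  # no GPU or packages: snippet is illustrative only
    print(f"skipping demo: {exc}")
```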
OneDiff supports acceleration for SOTA models.
- stable: released for public usage, with long-term support;
- beta: released for professional usage, with long-term support;
- alpha: early release for expert usage; use with caution;
AIGC Type | Model | HF Diffusers (Community) | HF Diffusers (Enterprise) | ComfyUI (Community) | ComfyUI (Enterprise) | SD web UI (Community) | SD web UI (Enterprise) |
---|---|---|---|---|---|---|---|
Image | SD 1.5 | stable | stable | stable | stable | stable | stable |
Image | SD 2.1 | stable | stable | stable | stable | stable | stable |
Image | SDXL | stable | stable | stable | stable | stable | stable |
Image | LoRA | stable | | stable | | stable | |
Image | ControlNet | stable | | stable | | | |
Image | SDXL Turbo | stable | | stable | | | |
Image | LCM | stable | | stable | | | |
Image | SDXL DeepCache | alpha | alpha | alpha | alpha | | |
Image | InstantID | beta | | beta | | | |
Video | SVD (Stable Video Diffusion) | stable | stable | stable | stable | | |
Video | SVD DeepCache | alpha | alpha | alpha | alpha | | |
Note: Enterprise Edition contains all the functionality in Community Edition.
Compile and save the compiled result offline, then load it online for serving:
- Save and load the compiled graph
- Change the device of the compiled graph to enable multi-process serving
- Compile on one device (such as device 0), then use the compiled result on other devices (such as devices 1~7)
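The offline-compile / online-load workflow above might look like the following sketch, assuming `save_pipe` / `load_pipe` helpers alongside `compile_pipe` in `onediffx` (verify the exact names and signatures in your installed version):

```python
# Hedged sketch of compiling offline and loading the cached graph online.
# save_pipe/load_pipe are assumptions based on this repo; requires a GPU.
try:
    import torch
    from diffusers import StableDiffusionXLPipeline
    from onediffx import compile_pipe, save_pipe, load_pipe

    # Offline: compile, warm up to trigger compilation, then save the graph.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe = compile_pipe(pipe)
    pipe("warmup prompt", num_inference_steps=1)
    save_pipe(pipe, dir="cached_pipe")

    # Online (possibly another process/device): load instead of recompiling.
    pipe2 = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe2 = compile_pipe(pipe2)
    load_pipe(pipe2, dir="cached_pipe")
except Exception as exc:  # no GPU or packages: snippet is illustrative only
    print(f"skipping demo: {exc}")
```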
We also maintain a repository for benchmarking the quality of generation after acceleration using OneDiff: OneDiffGenMetrics
If you need enterprise-level support for your system or business, you can
- subscribe to the Enterprise Edition online and get full support after ordering: https://siliconflow.com/onediff.html
- or send an email to [email protected] describing your use case, deployment scale, and requirements.
OneDiff Enterprise Edition can be subscribed to for as little as one month and one GPU, and the cost is low.
&nbsp; | OneDiff Enterprise Edition | OneDiff Community Edition |
---|---|---|
Multiple Resolutions | Yes (no time cost for most cases) | Yes (no time cost for most cases) |
More extreme and dedicated optimization (usually another 20%~100% performance gain) for the most-used models | Yes | |
Tools for specific (very large scale) server-side deployment | Yes | |
Technical support for deployment | High-priority support | Community |
Access to experimental features | Yes | |
NOTE: We update OneFlow frequently for OneDiff, so please install OneFlow via the links below.
- CUDA 11.8

  For NA/EU users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu118
  ```

  For CN users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu118
  ```
Click to get OneFlow packages for other CUDA versions.
- CUDA 12.1

  For NA/EU users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu121
  ```

  For CN users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu121
  ```
- CUDA 12.2

  For NA/EU users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu122
  ```

  For CN users:

  ```bash
  python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu122
  ```
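To pick the matching index above (cu118 / cu121 / cu122), a small illustrative helper can report the local CUDA toolkit version; the `nvcc` location may vary on your system:

```shell
# Print the CUDA release reported by nvcc so you can pick the matching
# OneFlow package index above. Falls back gracefully if nvcc is missing.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -o 'release [0-9][0-9.]*'
else
  echo "nvcc not found; check nvidia-smi or your CUDA toolkit install"
fi
```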
Note: You can choose the latest versions you want for diffusers or transformers.

```bash
python3 -m pip install "torch" "transformers==4.27.1" "diffusers[torch]==0.19.3"
```
- From PyPI

  ```bash
  python3 -m pip install --pre onediff
  ```
- From source

  ```bash
  git clone https://github.com/siliconflow/onediff.git
  cd onediff && python3 -m pip install -e .
  ```
NOTE: If you intend to use the plugins for ComfyUI/StableDiffusion-WebUI, we highly recommend installing OneDiff from source rather than PyPI. This is necessary because you'll need to manually copy (or create a soft link to) the relevant code in the extension folder of these UIs/libs.
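For example, a ComfyUI setup might use a soft link like the following; `ONEDIFF_DIR`, `COMFYUI_DIR`, and the `onediff_comfy_nodes` folder name are assumptions here, so adjust them to your actual checkouts:

```shell
# Illustrative soft-link setup for the ComfyUI plugin. Paths default to
# $HOME but should point at your real onediff and ComfyUI directories.
ONEDIFF_DIR="${ONEDIFF_DIR:-$HOME/onediff}"
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
mkdir -p "$COMFYUI_DIR/custom_nodes"
ln -sfn "$ONEDIFF_DIR/onediff_comfy_nodes" "$COMFYUI_DIR/custom_nodes/onediff_comfy_nodes"
```

A soft link keeps the extension in sync with your source checkout, so `git pull` updates the plugin without recopying files.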
```bash
python3 -m pip install huggingface_hub
~/.local/bin/huggingface-cli login
```
- Run examples to check it works:

  ```bash
  cd onediff_diffusers_extensions
  python3 examples/text_to_image.py
  ```
- Bump the version in these files:

  ```
  .github/workflows/pub.yml
  src/onediff/__init__.py
  ```
- Install the build package:

  ```bash
  python3 -m pip install build
  ```
- Build the wheel:

  ```bash
  rm -rf dist
  python3 -m build
  ```
- Upload to PyPI:

  ```bash
  twine upload dist/*
  ```