From c044c1736e4efba4590e90091dbdc1b92da60090 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Thu, 21 Nov 2024 19:32:17 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 generation/maisi/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/generation/maisi/README.md b/generation/maisi/README.md
index d7dedb363..a86a47329 100644
--- a/generation/maisi/README.md
+++ b/generation/maisi/README.md
@@ -76,11 +76,11 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 
 **Table 3:** Inference Time Cost and GPU Memory Usage. `DM Time` refers to the time required for diffusion model inference. `VAE Time` refers to the time required for VAE decoder inference. The total inference time is the sum of `DM Time` and `VAE Time`. The experiment was conducted on an A100 80G GPU.
 
-During inference, the peak GPU memory usage occurs during the VAE's decoding of latent features. 
-To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`. 
+During inference, the peak GPU memory usage occurs during the VAE's decoding of latent features.
+To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
 Increasing `autoencoder_tp_num_splits` has a smaller impact on the generated image quality, while reducing `autoencoder_sliding_window_infer_size` may introduce stitching artifacts and has a larger impact on the generated image quality.
-When `autoencoder_sliding_window_infer_size` is equal to or larger than the latent feature size, the sliding window will not be used, and the time and memory costs remain the same. 
+When `autoencoder_sliding_window_infer_size` is equal to or larger than the latent feature size, the sliding window will not be used, and the time and memory costs remain the same.
 
 ### Training GPU Memory Usage