
Commit

format
Signed-off-by: Can-Zhao <[email protected]>
Can-Zhao committed Nov 15, 2024
2 parents c802a73 + 01374e5 commit aaf5710
Showing 2 changed files with 1 addition and 2 deletions.
1 change: 0 additions & 1 deletion generation/maisi/README.md
@@ -50,7 +50,6 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 | 512x512x768 | [80,80,112], 8 patches | 4 | 55G | 904s | 48s |
 
-
 
 The experiment was tested on A100 80G GPU.
 
 During inference, the peak GPU memory usage happens during the autoencoder decoding latent features.
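The README context above notes that peak GPU memory is reached while the autoencoder decodes the latent features, which is why the table row reports a patch-based setting ([80,80,112], 8 patches) for the 512x512x768 volume. As a rough illustration of that idea only, and not the MAISI implementation, the hypothetical sketch below decodes a latent volume one spatial patch at a time so that only a single decoded patch is resident on the GPU at once; decode_fn, the tensor shapes, and the patch size are made up for the example.

# Hypothetical sketch only: patch-wise decoding to bound peak GPU memory.
# decode_fn, the latent shape, and the patch size are illustrative, not MAISI code.
import torch


def decode_in_patches(decode_fn, latent, patch_size, scale=4):
    # latent: (B, C, D, H, W) tensor; decode_fn maps a latent patch to an image patch
    # whose spatial size is `scale` times larger. Patches are assumed non-overlapping
    # and evenly dividing the latent volume, which real implementations relax.
    b, _, d, h, w = latent.shape
    pd, ph, pw = patch_size
    output = None
    for z in range(0, d, pd):
        for y in range(0, h, ph):
            for x in range(0, w, pw):
                patch = latent[:, :, z:z + pd, y:y + ph, x:x + pw]
                decoded = decode_fn(patch)  # only one decoded patch lives on the GPU here
                if output is None:
                    output = torch.zeros(
                        b, decoded.shape[1], d * scale, h * scale, w * scale,
                        device=decoded.device, dtype=decoded.dtype,
                    )
                zs, ys, xs = z * scale, y * scale, x * scale
                output[:, :, zs:zs + decoded.shape[2], ys:ys + decoded.shape[3], xs:xs + decoded.shape[4]] = decoded
    return output

Splitting the latent into more, smaller patches lowers the decoding memory peak at the cost of additional decode calls.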
2 changes: 1 addition & 1 deletion generation/maisi/scripts/inference.py
@@ -231,5 +231,5 @@ def main():
     )
     torch.cuda.reset_peak_memory_stats()
     main()
-    peak_memory_gb = torch.cuda.max_memory_allocated() / (1024 ** 3) # Convert to GB
+    peak_memory_gb = torch.cuda.max_memory_allocated() / (1024**3) # Convert to GB
     print(f"Peak GPU memory usage: {peak_memory_gb:.2f} GB")
