When I run the inference script with the fp16 and fp8 versions, the results from both look great.
However, the file sizes of the generated videos differ between the two versions, and I'm wondering why.
The upper video was generated with the fp16 model and the lower one with the fp8 model. As shown, the fp8 output's file size is roughly halved.
Hi, thank you for trying out our fp8 version.
As for your question, I checked the output tensors from both the fp16 and fp8 models for 256x256x129-frame video generation. The generated tensors have the same shape and dtype: torch.Size([1, 3, 129, 256, 256]), torch.float32.
The generated videos I get also have a similar file size.
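For reference, here is a minimal sketch of the kind of comparison I mean; the tensor file names are placeholders for wherever your script saves the raw decoded frames before video encoding:

```python
import torch

# Placeholder paths -- save the raw model outputs from each run before video encoding.
out_fp16 = torch.load("samples_fp16.pt")  # expected: torch.Size([1, 3, 129, 256, 256]), float32
out_fp8 = torch.load("samples_fp8.pt")

print(out_fp16.shape, out_fp16.dtype)
print(out_fp8.shape, out_fp8.dtype)

# How far apart the two outputs are numerically.
diff = (out_fp16.float() - out_fp8.float()).abs()
print("max abs diff:", diff.max().item(), "mean abs diff:", diff.mean().item())
```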
Please share your inference commands and more details so that I can reproduce your situation.
A possible reason is that the fp8 ckpt filters out some high-frequency details that cannot be observed visually, so the video compression algorithm produces a much smaller file when saving the video.
This is just a guess, though.
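One way to test this guess is to compare the encoded bitrates of the two files: if the fp8 frames really carry less high-frequency detail, the encoder will spend fewer bits on them at the same quality setting. A minimal sketch, assuming `ffprobe` is on your PATH and the placeholder file names are replaced with your actual outputs:

```python
import json
import subprocess

def bitrate(path: str) -> int:
    """Return the container-level bitrate (bits/s) reported by ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return int(json.loads(result.stdout)["format"]["bit_rate"])

# Placeholder file names -- substitute the videos produced by each checkpoint.
print("fp16 bitrate:", bitrate("sample_fp16.mp4"))
print("fp8  bitrate:", bitrate("sample_fp8.mp4"))
```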