It sees my 4 audio files, then the run appears in wandb, and then it says:
```
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name          | Type                             | Params
0 | diffusion     | ConditionedDiffusionModelWrapper | 1.2 B
1 | diffusion_ema | EMA                              | 1.1 B
2 | losses        | MultiLoss                        | 0

1.1 B     Trainable params
1.2 B     Non-trainable params
2.3 B     Total params
9,080.665 Total estimated model params size (MB)

venv\lib\site-packages\pytorch_lightning\utilities\data.py:104: Total length of DataLoader across ranks is zero. Please make sure this was your intention.
venv\lib\site-packages\pytorch_lightning\utilities\data.py:104: Total length of CombinedLoader across ranks is zero. Please make sure this was your intention.
Trainer.fit stopped: No training batches.
wandb: Waiting for W&B process to finish... (success).
wandb: View run proud-feather-1 at: https://wandb.ai/XXXX/harmonai_train/runs/waq1kjin
wandb: Synced 5 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s)
wandb: Find logs at: .\wandb\run-20241223_034540-waq1kjin\logs
```
No training batches?
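For anyone hitting the same thing: a zero-length DataLoader usually means every sample was filtered out before batching, not that the files weren't found. With only 4 files, two common causes are (a) the files being shorter than the configured sample_size, so the dataset drops all of them (whether short files are skipped or padded depends on the dataset implementation), and (b) a batch size larger than 4 combined with drop_last, so the single partial batch is discarded. A minimal sketch to check the first case, assuming torchaudio is installed; `AUDIO_DIR`, `SAMPLE_SIZE`, and `SAMPLE_RATE` are placeholders, not values from the original report:

```python
from pathlib import Path

import torchaudio

# Placeholders -- substitute the directory and the sample_size /
# sample_rate values from your dataset and model configs.
AUDIO_DIR = Path("./training_data")
SAMPLE_SIZE = 524288      # samples per training example, from the model config
SAMPLE_RATE = 44100       # training sample rate, from the model config

for path in sorted(AUDIO_DIR.iterdir()):
    info = torchaudio.info(str(path))
    # Frame count after resampling to the training rate
    frames = int(info.num_frames * SAMPLE_RATE / info.sample_rate)
    verdict = "ok" if frames >= SAMPLE_SIZE else "shorter than sample_size"
    print(f"{path.name}: {frames} frames at {SAMPLE_RATE} Hz -> {verdict}")
```

If all four files come back shorter than sample_size, either use longer audio or lower sample_size in the config; if the lengths look fine, check that the dataset config actually points at the right directory and that batch_size is not larger than 4.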
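Separately, the Tensor Cores message is only a performance hint and is unrelated to the empty DataLoader. It can be silenced by doing what the warning itself suggests, once at startup before training:

```python
import torch

# Trade a little float32 matmul precision for Tensor Core throughput.
torch.set_float32_matmul_precision("high")  # or "medium"
```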