☑️ CNN
☑️ GAN
☑️ Transformer
☑️ SSL
☑️ USL
☑️ CPU or GPU
☑️ Epochs
☑️ Learning rate
☑️ Batch size
☑️ Optimizer
☑️ Scheduler
☑️ Save logs for loss/metric curves.
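The hyperparameter items above can be sketched as one minimal training setup. The tiny linear model and dummy batches are placeholders, not the actual project model; the point is where each checklist item (device, epochs, learning rate, batch size, optimizer, scheduler, loss logging) lives in the loop.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model for the CNN/Transformer from the checklist.
model = nn.Linear(10, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # CPU or GPU
model = model.to(device)

epochs = 5                # Epochs
lr = 1e-3                 # Learning rate
batch_size = 8            # Batch size
optimizer = torch.optim.Adam(model.parameters(), lr=lr)                 # Optimizer
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2)     # Scheduler

criterion = nn.CrossEntropyLoss()
loss_history = []         # saved logs for loss/metric curves

for epoch in range(epochs):
    x = torch.randn(batch_size, 10, device=device)          # dummy batch
    y = torch.randint(0, 2, (batch_size,), device=device)   # dummy labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
    loss_history.append(loss.item())   # one point per epoch for the curve

print(len(loss_history))
```

In a real run, `loss_history` would be written to disk (or a tool like TensorBoard) rather than kept in memory.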
☑️ Strategy 1: Two-step pipeline. For instance, coarse segmentation (ROI detection) --> fine segmentation.
☑️ Strategy 2: Compound loss functions. Main loss and auxiliary loss with weights. For instance, Dice+CE, Dice+Focal.
☑️ Strategy 3: Consider whether gradients are actually needed in every part of the model (freeze or detach the parts that are not).
☑️ Strategy 4: Test-time augmentation (TTA).
☑️ Strategy 5: Test-time training (TTT), if necessary.
☑️ Metrics
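Strategy 2 above can be sketched as a weighted main + auxiliary loss. This is a minimal binary-segmentation example of the Dice+CE pattern; the weights `w_main`/`w_aux` and the soft-Dice formulation are illustrative choices, not fixed values from the notes.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation (sigmoid probabilities)."""
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum()
    union = probs.sum() + targets.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def compound_loss(logits, targets, w_main=1.0, w_aux=0.5):
    """Main loss (Dice) plus weighted auxiliary loss (BCE): the Dice+CE pattern."""
    main = dice_loss(logits, targets)
    aux = F.binary_cross_entropy_with_logits(logits, targets)
    return w_main * main + w_aux * aux

logits = torch.randn(2, 1, 8, 8, requires_grad=True)
targets = (torch.rand(2, 1, 8, 8) > 0.5).float()
loss = compound_loss(logits, targets)
loss.backward()  # the combined loss is differentiable end to end
```

Dice+Focal follows the same shape: swap the auxiliary term for a focal loss and retune the weights.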
💠 Solution 1: retain_graph=True
💠 Solution 2: check torch.no_grad() and var.detach()
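Both solutions in one sketch, assuming the underlying error is PyTorch's "Trying to backward through the graph a second time":

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2

# Solution 1: a second backward() through the same graph raises
# "Trying to backward through the graph a second time" unless
# retain_graph=True keeps the intermediate buffers alive.
y.backward(retain_graph=True)
y.backward()
print(x.grad)  # gradients accumulate: 2*x + 2*x = tensor([8.])

# Solution 2: when a value should NOT contribute gradients, cut it
# out of the graph with torch.no_grad() or var.detach().
with torch.no_grad():
    z = x * 3          # z carries no grad history
w = x.detach()         # same idea, per-tensor
```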
💠 Solution: convert the input with transforms.ToTensor() (torchvision)
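Assuming the underlying error is feeding a raw NumPy/PIL image into a torch op, a minimal conversion sketch (using `torch.from_numpy` as the bare equivalent of torchvision's `transforms.ToTensor()`, which also handles PIL images):

```python
import numpy as np
import torch

# A NumPy image is (H, W, C) uint8; torch models expect a float Tensor,
# typically (C, H, W) scaled to [0, 1]. transforms.ToTensor() does all of
# this in one call; the manual steps are shown here.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

tensor = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
print(tensor.shape)  # torch.Size([3, 4, 4])
```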
💠 Solution: output_copy = output.clone()  # avoid in-place operation
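A minimal sketch of the clone pattern, assuming the underlying error is autograd's complaint about a variable "modified by an inplace operation":

```python
import torch

x = torch.ones(3, requires_grad=True)
output = x * 2

# Mutating `output` in place (e.g. output += 1) can overwrite values that
# autograd saved for the backward pass. Cloning first keeps the original
# tensor intact for gradient computation.
output_copy = output.clone()   # avoid in-place operation on the graph
output_copy += 1               # safe: modifies the copy, not `output`

output.sum().backward()
print(x.grad)  # tensor([2., 2., 2.])
```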
💠 Solution: check data shape
💠 Solution: check the data shape by converting to an array, e.g. a_array = np.array(a); print(a_array.shape)  # (3,)
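The shape-check idiom above, spelled out (plain Python lists have no `.shape`, so converting first makes the dimensions visible):

```python
import numpy as np

# Shape mismatches are easiest to debug by printing .shape on both sides.
a = [1, 2, 3]            # a plain list has no .shape attribute
a_array = np.array(a)
print(a_array.shape)     # (3,)

b = np.zeros((2, 3))
print(b.shape)           # (2, 3) -- check both operands before combining
```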
total_loss.backward()
💠 Solution: check the loss definition; the loss must be a Tensor connected to the graph (requires_grad=True / has a grad_fn).
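A sketch of the failure and the fix, assuming the underlying error is "element 0 of tensors does not require grad and does not have a grad_fn" on `total_loss.backward()`:

```python
import torch

pred = torch.randn(4, requires_grad=True)
target = torch.zeros(4)

# Wrong: .item() (or building the loss from plain Python numbers) detaches
# the value from the graph, so backward() on it would raise the error above.
bad_loss = torch.tensor(((pred - target) ** 2).mean().item())
print(bad_loss.requires_grad)    # False -- no grad_fn, cannot backprop

# Right: keep the loss as a Tensor produced by graph operations.
total_loss = ((pred - target) ** 2).mean()
print(total_loss.requires_grad)  # True
total_loss.backward()
```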
logits = torch.squeeze(logits,dim=1)
squeeze() received an invalid combination of arguments - got (tuple, dim=int), but expected one of:
- (Tensor input)
- (Tensor input, int dim) didn't match because some of the arguments have invalid types: (!tuple of (Tensor, Tensor)!, dim=int)
- (Tensor input, tuple of ints dim) didn't match because some of the arguments have invalid types: (!tuple of (Tensor, Tensor)!, !dim=int!)
- (Tensor input, name dim) didn't match because some of the arguments have invalid types: (!tuple of (Tensor, Tensor)!, !dim=int!)
💠 Solution: check the datatype and dimensions. Per the message, the input is a tuple of (Tensor, Tensor), not a Tensor, so index the element you want (e.g. logits[0]) before squeezing.
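A minimal reproduction of this error and the fix. The two-tensor tuple here stands in for any op or model that returns multiple outputs (e.g. `torch.max`, or a network with an auxiliary head):

```python
import torch

# A tuple of Tensors, e.g. (main_out, aux_out) from a multi-output model.
logits = (torch.randn(2, 1, 4), torch.randn(2, 1, 4))

# Passing the tuple itself to torch.squeeze reproduces the
# "invalid combination of arguments" error above.
try:
    torch.squeeze(logits, dim=1)
except TypeError as e:
    print(type(e).__name__)  # TypeError

# Fix: index the Tensor you actually want, then squeeze it.
main_out = torch.squeeze(logits[0], dim=1)
print(main_out.shape)  # torch.Size([2, 4])
```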