
Method "change_num_class_for_finetuning" missing from the "PyGRandLANet" class #129

Open
AurelienChauveheid opened this issue May 17, 2024 · 3 comments

Comments

@AurelienChauveheid

Hello, I am trying to finetune a model I already trained with myria3d. After setting up the callbacks and model config files, I get the following error when launching run.py:

Traceback (most recent call last):
  File "/home/achauveheid/Documents/LIDAR/myria3d_predict/myria3d/run.py", line 57, in launch_train
    return train(config)
  File "/home/achauveheid/Documents/LIDAR/myria3d_predict/myria3d/myria3d/train.py", line 145, in train
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=config.model.ckpt_path)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 949, in _run
    call._call_setup_hook(self)  # allow user to set up LightningModule in accelerator environment
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 93, in _call_setup_hook
    _call_callback_hooks(trainer, "setup", stage=fn)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 208, in _call_callback_hooks
    fn(trainer, trainer.lightning_module, *args, **kwargs)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/pytorch_lightning/callbacks/finetuning.py", line 277, in setup
    self.freeze_before_training(pl_module)
  File "/home/achauveheid/Documents/LIDAR/myria3d_predict/myria3d/myria3d/callbacks/finetuning_callbacks.py", line 23, in freeze_before_training
    pl_module.model.change_num_class_for_finetuning(self._num_classes)
  File "/home/achauveheid/anaconda3/envs/myria3d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'PyGRandLANet' object has no attribute 'change_num_class_for_finetuning'

I did not find any reference to this method in the 'PyGRandLANet' class, nor in the torch.nn.Module documentation. Could you help me / shed some light on this?

@leavauchier
Collaborator

Hello @AurelienChauveheid,

this error does not ring a bell for me; I also tried to find where it is referenced, but I cannot find it either.

@CharlesGaydon, by any chance, does this remind you of anything?

@AurelienChauveheid
Author

AurelienChauveheid commented Aug 26, 2024

By modifying the FinetuningFreezeUnfreeze class, we were able to finetune a model already trained with myria3d. However, I am not sure we are following the unfreezing steps correctly.

import copy

from pytorch_lightning.callbacks import BaseFinetuning


class FinetuningFreezeUnfreeze(BaseFinetuning):
    def __init__(
        self,
        d_in: int = 9,
        num_classes: int = 6,
        unfreeze_fc_end_epoch: int = 3,
        unfreeze_decoder_train_epoch: int = 6,
    ):
        super().__init__()

        self._d_in = d_in
        self._num_classes = num_classes
        self._unfreeze_decoder_epoch = unfreeze_decoder_train_epoch
        self._unfreeze_fc_end_epoch = unfreeze_fc_end_epoch
        self.fc_end = self._unfreeze_fc_end_epoch

    def freeze_before_training(self, pl_module):
        """Update in and out dimensions, and freeze everything at start."""
        # Here we could both load the model weights and update its dims afterward.
        # TODO: check change_num_class_for_finetuning
        # pl_module.model.change_num_class_for_finetuning(self._num_classes)
        self.freeze(pl_module.model)

    def finetune_function(self, pl_module, current_epoch, optimizer, optimizer_idx: int = 0):
        """Unfreeze layers sequentially, starting from the end of the architecture."""
        # Keep a copy of the classification head under the legacy name fc_end.
        pl_module.model.fc_end = copy.deepcopy(pl_module.model.fc_classif)
        if current_epoch == 0:
            # Start by training only the final classification layer.
            self.unfreeze_and_add_param_group(
                modules=pl_module.model.fc_classif,
                optimizer=optimizer,
                train_bn=True,
                initial_denom_lr=100,
            )
        if current_epoch == self._unfreeze_fc_end_epoch:
            # TODO: should this unfreeze fc_end instead of mlp_classif?
            self.unfreeze_and_add_param_group(
                modules=pl_module.model.mlp_classif,
                # modules=pl_module.model.fc_end,
                optimizer=optimizer,
                train_bn=True,
                initial_denom_lr=100,
            )
        if current_epoch == self._unfreeze_decoder_epoch:
            # Finally, unfreeze the decoder.
            self.unfreeze_and_add_param_group(
                modules=pl_module.model.decoder,
                optimizer=optimizer,
                train_bn=True,
                initial_denom_lr=100,
            )
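
For reference, here is a minimal sketch of how such a callback could be wired into a Lightning Trainer directly. This is only illustrative: in myria3d the callback is normally instantiated from the hydra config, and the model/datamodule objects below are placeholders, not the actual myria3d setup.

    import pytorch_lightning as pl

    # Hypothetical direct wiring; myria3d builds its callbacks from config instead.
    callback = FinetuningFreezeUnfreeze(
        d_in=9,
        num_classes=6,
        unfreeze_fc_end_epoch=3,
        unfreeze_decoder_train_epoch=6,
    )
    trainer = pl.Trainer(max_epochs=10, callbacks=[callback])
    # trainer.fit(model=model, datamodule=datamodule, ckpt_path=ckpt_path)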

@CharlesGaydon
Collaborator

Here is the final word on this: the change_num_class_for_finetuning method indeed no longer exists. I had documented this in this issue: #97.

Additionally, the required method change_num_class_for_finetuning, which used to be part of the NN module, is not there anymore. This means that finetuning for a different number of classes is not supported (cf. finetuning callbacks)
This is not a problem as long as the number of classes stays unchanged for the finetuning. I had also documented the fact that fc_end needs to be renamed: #96
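
For context, a minimal sketch of what a restored change_num_class_for_finetuning could do, assuming the classification head is a torch.nn.Linear named fc_classif (as the callback code above suggests); the actual layer type and attribute names in myria3d may differ:

    import torch.nn as nn

    def change_num_class_for_finetuning(model: nn.Module, num_classes: int) -> None:
        """Swap the pretrained classification head for a fresh one with
        num_classes outputs, keeping the rest of the network intact.
        Hypothetical sketch: assumes model.fc_classif is a torch.nn.Linear."""
        old_head = model.fc_classif
        model.fc_classif = nn.Linear(old_head.in_features, num_classes)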

@AurelienChauveheid, a PR fixing these two small regressions would be greatly appreciated!
