Add option to load trained weights for Head layers #114

Open
wants to merge 1 commit into base: divya/add-config-wandb

Conversation

gitttt-1234
Contributor

@gitttt-1234 gitttt-1234 commented Oct 31, 2024

This PR adds an option to load trained weights for the head layers right before training (instead of random initialization) to support transfer learning. The .ckpt paths can be passed to ModelTrainer.train(backbone_trained_ckpts_path=<your_path>, head_trained_ckpts_path=<your_path>).
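For illustration, a minimal usage sketch (assuming the config is an OmegaConf object as in the tests, and that ModelTrainer is importable from sleap_nn.training.model_trainer; the file and checkpoint paths below are placeholders):

from omegaconf import OmegaConf
from sleap_nn.training.model_trainer import ModelTrainer

# Load a training configuration (placeholder path).
config = OmegaConf.load("training_config.yaml")

trainer = ModelTrainer(config)

# Initialize the backbone and head layers from previously trained checkpoints
# instead of random weights, then start training.
trainer.train(
    backbone_trained_ckpts_path="prev_run/best.ckpt",
    head_trained_ckpts_path="prev_run/best.ckpt",
)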

Summary by CodeRabbit

  • New Features

    • Enhanced configuration options for learning rate scheduling with new scheduler types and parameters.
    • Introduced chunk_size and user_instances_only parameters for improved data handling.
    • Added bin_files_path for specifying binary file paths in training configurations.
  • Bug Fixes

    • Improved error handling for invalid scheduler types in training processes.
  • Tests

    • Added tests for loading trained weights and validating learning rate scheduler configurations.

Contributor

coderabbitai bot commented Oct 31, 2024

Walkthrough

The pull request introduces significant enhancements to the configuration documentation and files related to the sleap_nn.ModelTrainer class and its associated components. Key updates include the addition of new parameters for learning rate scheduling, early stopping, and data handling. The trainer_config has been restructured to accommodate these changes, allowing for more granular control over training processes. Additionally, several configuration files have been modified to reflect these new settings, ensuring consistency across the training pipeline.

Changes

Changes by file:

  • docs/config.md: Added new parameters scheduler, step_lr, reduce_lr_on_plateau, and early_stopping with detailed sub-parameters for each in trainer_config. Updated documentation to reflect these changes.
  • docs/config_bottomup.yaml: Added chunk_size under data_config. Updated lr_scheduler to scheduler: ReduceLROnPlateau with nested parameters under reduce_lr_on_plateau.
  • docs/config_centroid.yaml: Added chunk_size under data_config. Updated optimizer to include scheduler: ReduceLROnPlateau and nested parameters under reduce_lr_on_plateau. Removed log_params from wandb.
  • docs/config_topdown_centered_instance.yaml: Added chunk_size and min_crop_size. Updated lr_scheduler to scheduler: ReduceLROnPlateau with nested parameters under reduce_lr_on_plateau.
  • initial_config.yaml: Introduced a comprehensive configuration structure with sections for data_config, model_config, and trainer_config, detailing various parameters for training and validation setups.
  • sleap_nn/data/streaming_datasets.py: Modified CenteredInstanceStreamingDataset to recalculate crop_hw in __getitem__ instead of the constructor, affecting cropping dimensions during data retrieval.
  • sleap_nn/training/model_trainer.py: Updated method signatures in ModelTrainer and TrainingModel to accept backbone_trained_ckpts_path and head_trained_ckpts_path. Enhanced learning rate scheduler configuration to support multiple types.
  • tests/assets/minimal_instance/initial_config.yaml: Added user_instances_only, chunk_size, min_crop_size, and bin_files_path. Updated optimizer configuration to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/assets/minimal_instance/training_config.yaml: Added user_instances_only, chunk_size, min_crop_size, and bin_files_path. Updated optimizer configuration to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/assets/minimal_instance_bottomup/initial_config.yaml: Added user_instances_only, chunk_size, and bin_files_path. Updated optimizer configuration to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/assets/minimal_instance_bottomup/training_config.yaml: Added user_instances_only, chunk_size, and bin_files_path. Updated optimizer configuration to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/assets/minimal_instance_centroid/initial_config.yaml: Added user_instances_only, chunk_size, and bin_files_path. Updated lr_scheduler to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/assets/minimal_instance_centroid/training_config.yaml: Added user_instances_only, chunk_size, and bin_files_path. Updated lr_scheduler to use scheduler: ReduceLROnPlateau with nested parameters. Removed log_params.
  • tests/fixtures/datasets.py: Updated provider in data_config; added user_instances_only, chunk_size, and min_crop_size. Restructured lr_scheduler to include scheduler: ReduceLROnPlateau and nested parameters.
  • tests/training/test_model_trainer.py: Added new test test_trainer_load_trained_ckpts for loading trained weights. Modified test_trainer to include new learning rate scheduler configurations and enhanced error handling.

Possibly related PRs

Suggested reviewers

  • talmo

Poem

In the fields where data flows,
New configs bloom like springtime shows.
With schedulers and chunks, we leap,
A rabbit's joy, our promise to keep! 🐇✨
Learning rates dance, adjustments made,
In the world of models, we're not afraid!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0cf48a8 and 6c33c82.

📒 Files selected for processing (15)
  • docs/config.md (1 hunks)
  • docs/config_bottomup.yaml (2 hunks)
  • docs/config_centroid.yaml (2 hunks)
  • docs/config_topdown_centered_instance.yaml (3 hunks)
  • initial_config.yaml (1 hunks)
  • sleap_nn/data/streaming_datasets.py (1 hunks)
  • sleap_nn/training/model_trainer.py (16 hunks)
  • tests/assets/minimal_instance/initial_config.yaml (3 hunks)
  • tests/assets/minimal_instance/training_config.yaml (4 hunks)
  • tests/assets/minimal_instance_bottomup/initial_config.yaml (3 hunks)
  • tests/assets/minimal_instance_bottomup/training_config.yaml (3 hunks)
  • tests/assets/minimal_instance_centroid/initial_config.yaml (3 hunks)
  • tests/assets/minimal_instance_centroid/training_config.yaml (3 hunks)
  • tests/fixtures/datasets.py (3 hunks)
  • tests/training/test_model_trainer.py (5 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/config.md

[uncategorized] ~179-~179: Loose punctuation mark.
Context: ...ReduceLROnPlateau". - step_lr: - step_size`: (int) Period...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~182-~182: Loose punctuation mark.
Context: ...*: 0.1. - reduce_lr_on_plateau: - threshold: (float) Thre...

(UNLIKELY_OPENING_PUNCTUATION)

🪛 Markdownlint
docs/config.md

178-178: Expected: 4; Actual: 8
Unordered list indentation

(MD007, ul-indent)


179-179: Expected: 4; Actual: 8
Unordered list indentation

(MD007, ul-indent)


180-180: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


181-181: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


182-182: Expected: 4; Actual: 8
Unordered list indentation

(MD007, ul-indent)


183-183: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


184-184: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


185-185: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


186-186: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


187-187: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


188-188: Expected: 6; Actual: 12
Unordered list indentation

(MD007, ul-indent)


184-184: null
Spaces inside emphasis markers

(MD037, no-space-in-emphasis)


184-184: null
Spaces inside emphasis markers

(MD037, no-space-in-emphasis)

🪛 yamllint
initial_config.yaml

[error] 1-1: wrong new line character: expected \n

(new-lines)

🪛 Ruff
sleap_nn/training/model_trainer.py

582-582: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


593-593: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

🔇 Additional comments (29)
tests/assets/minimal_instance_centroid/initial_config.yaml (1)

5-6: Verify the chunk size selection.

The chunk_size of 100 might affect memory usage and training performance. Consider validating this value:

  • Is it optimal for your typical dataset sizes?
  • Have you tested with larger/smaller values to find the sweet spot between memory usage and training speed?
✅ Verification successful

The chunk size value is consistent across all configurations

The value of chunk_size: 100 is consistently used across all configuration files in the repository, including:

  • Test configurations in various test assets
  • Documentation examples for different model architectures (centroid, bottomup, topdown)

This consistent usage across test cases and documentation suggests this is a validated default value. The review concern about validation can be dismissed.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other chunk_size configurations to ensure consistency
rg "chunk_size:" -A 1

Length of output: 1310

tests/assets/minimal_instance/initial_config.yaml (3)

83-90: LGTM: Well-structured learning rate scheduler configuration.

The ReduceLROnPlateau scheduler configuration is comprehensive and includes all necessary parameters with reasonable default values.


6-6: Add validation for chunk_size.

The chunk_size parameter should be validated to ensure it's a positive integer.

#!/bin/bash
# Check if chunk_size validation exists in the codebase
rg "chunk_size.*validation|validate.*chunk_size" --type python

69-69: ⚠️ Potential issue

Specify a value for bin_files_path or remove if not needed.

The bin_files_path parameter is added but left empty. Please either specify a value or remove if not required.

#!/bin/bash
# Check if bin_files_path is used in the codebase
rg -l "bin_files_path" --type python
tests/assets/minimal_instance_bottomup/initial_config.yaml (2)

70-71: Verify implementation of weight loading parameters.

The PR objectives mention backbone_trained_ckpts_path and head_trained_ckpts_path parameters, but they're not visible in this configuration. Please clarify:

  1. Where these parameters should be specified
  2. Their relationship with bin_files_path
#!/bin/bash
# Search for references to the weight loading parameters
rg "backbone_trained_ckpts_path|head_trained_ckpts_path" --type py

5-6: Verify data loading parameters and their impact.

The new data loading parameters could significantly affect training:

  1. user_instances_only: True might reduce the training dataset size
  2. chunk_size: 100 could impact memory usage and training performance

Please ensure these values are optimal for your use case.

✅ Verification successful

Parameters are well-defined with safe default values

Based on the codebase analysis:

  1. user_instances_only (default: True) is a filtering parameter that uses only user-labeled instances when True, which is the safe default behavior
  2. chunk_size (default: 100) is used in data streaming and has a reasonable default value, with tests validating its functionality

The values in the config (True, 100) match the defaults in the codebase, making them safe choices. The parameters are well-documented and consistently used across the data loading pipeline.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any documentation or tests related to these parameters
rg -l "user_instances_only|chunk_size" --type py

Length of output: 4414


Script:

#!/bin/bash
# Let's search for the actual parameters we're interested in
rg "user_instances_only|chunk_size" --type py -A 2

Length of output: 10463

tests/assets/minimal_instance_centroid/training_config.yaml (1)

5-6: Verify data loading parameters.

The new data loading parameters need validation:

  1. Is chunk_size: 100 optimal for your use case? Consider documenting the rationale.
  2. Setting user_instances_only: True might exclude valuable training data. Please confirm this is intentional.
✅ Verification successful

Parameters are well-documented and consistent with examples

The values in your configuration align with both the documentation and example configurations:

  • user_instances_only: True is the documented default, used to focus training on user-labeled instances only
  • chunk_size: 100 matches the documented default size in MB and is consistently used across all example configurations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for documentation about these parameters
rg -l "user_instances_only|chunk_size" docs/

Length of output: 156


Script:

#!/bin/bash
# Check the documentation content for these parameters
rg -A 5 "user_instances_only|chunk_size" docs/config.md

# Also check usage in example configs
rg -A 2 "user_instances_only|chunk_size" docs/config_*.yaml

Length of output: 1404

tests/assets/minimal_instance_bottomup/training_config.yaml (2)

5-6: Verify data loading parameters.

The new data configuration parameters have important implications:

  • user_instances_only: True will exclude any auto-generated or predicted instances
  • chunk_size: 100 may affect memory usage and processing speed

Please confirm these values are appropriate for your use case, especially for:

  1. Training scenarios where you might need non-user instances
  2. Systems with memory constraints where chunk size might need adjustment

85-85: Document the purpose of bin_files_path.

The PR objective mentions loading .ckpt files, but bin_files_path suggests binary files. This parameter needs documentation to clarify:

  1. The expected file format
  2. Whether this is related to the pre-trained weights loading feature
  3. If this is optional (currently empty)

Consider adding a comment in the YAML to document this parameter's purpose and format requirements.

tests/assets/minimal_instance/training_config.yaml (4)

15-15: Verify min_crop_size validation.

The new min_crop_size parameter is set to null, but its relationship with the existing crop_hw parameter needs clarification. Please ensure that when min_crop_size is set, it's validated against crop_hw values to prevent inconsistencies.


96-103: Review lr_scheduler parameter values.

The ReduceLROnPlateau configuration includes some parameters that might need validation:

  • threshold: 1e-7 and min_lr: 1e-8 are very small values
  • The combination of patience: 5 and cooldown: 3 means waiting at least 8 epochs before another reduction

Consider:

  1. Adding parameter validation
  2. Documenting recommended ranges
  3. Explaining the rationale behind these default values
#!/bin/bash
# Check if these parameters are documented
rg -A 5 "ReduceLROnPlateau" docs/

82-82: Clarify the purpose of bin_files_path.

The new bin_files_path parameter appears to be related to the PR's objective of loading trained weights, but its purpose and usage aren't clear. Please add documentation explaining:

  • Expected file format
  • When this parameter should be used
  • Relationship with the PR's weight loading feature
#!/bin/bash
# Search for documentation or usage of bin_files_path
rg "bin_files_path" docs/

5-6: Document the impact of new data loading parameters.

The new parameters user_instances_only and chunk_size could significantly impact training behavior:

  • user_instances_only: true restricts to manually labeled data
  • chunk_size: 100 affects memory usage and training speed

Please add validation for these values and document their recommended ranges.

docs/config_bottomup.yaml (1)

108-115: LGTM! Well-structured learning rate scheduler configuration.

The restructuring of the lr_scheduler configuration improves clarity by:

  1. Explicitly declaring the scheduler type
  2. Grouping related parameters under reduce_lr_on_plateau
  3. Maintaining a logical hierarchy of settings

This change makes it easier to extend the configuration for other scheduler types in the future.

docs/config_topdown_centered_instance.yaml (2)

107-114: Well-structured scheduler config, but seems out of scope.

The ReduceLROnPlateau configuration is technically sound with appropriate values. However, this change appears unrelated to the PR's stated objective of adding support for loading trained weights.

Let's verify if this change is part of a larger refactoring:

#!/bin/bash
# Search for similar scheduler changes in other config files
rg -l "ReduceLROnPlateau" --type yaml

6-6: Document the purpose and impact of chunk_size parameter.

The newly added chunk_size parameter needs documentation to explain:

  1. Its purpose in data processing
  2. How the value of 100 was determined
  3. Any memory or performance implications

Let's check if this parameter is documented elsewhere:

✅ Verification successful

Let me search for more specific information about the chunk_size parameter in the codebase.


The chunk_size parameter is correctly configured and consistent with the codebase

The value of 100 for chunk_size is the standard default value used across the codebase, as evidenced by:

  • It's consistently set to 100 in all configuration files (initial_config.yaml, training configs)
  • The ModelTrainer class in sleap_nn/training/model_trainer.py explicitly defaults to 100 when not specified
  • It's used for data processing in get_bin_files.py to control the size of data chunks during training
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any documentation or usage of chunk_size
rg -i "chunk.?size" --type md --type rst

Length of output: 491


Script:

#!/bin/bash
# Search for chunk_size in Python files to understand its usage
rg -i "chunk.?size" --type py -C 2

# Also check YAML files for any existing usage
rg -i "chunk.?size" --type yaml -C 2

Length of output: 10807

initial_config.yaml (2)

97-104: Add validation for scheduler type

The configuration hardcodes ReduceLROnPlateau as the scheduler type. According to the documentation, StepLR should also be supported. Consider adding validation for the scheduler type and its corresponding configuration.

#!/bin/bash
# Search for scheduler validation in the codebase
rg -A 5 'scheduler.*(?:ReduceLROnPlateau|StepLR)'

12-15: Validate crop size configurations

The min_crop_size is set to null while crop_hw is specified. Consider adding validation to ensure min_crop_size (if specified) is not larger than the dimensions in crop_hw.

docs/config_centroid.yaml (2)

131-138: Verify scope: LR scheduler changes unrelated to weight loading.

While the learning rate scheduler configuration is well-structured and the values are reasonable, these changes appear to be outside the scope of the PR's objective of "Add option to load trained weights for Head layers". Consider moving these changes to a separate PR focused on learning rate scheduling improvements.

Let's verify if there are related changes in other files:

#!/bin/bash
# Search for other files with similar lr_scheduler changes
rg -l "reduce_lr_on_plateau" --type yaml

6-6: Document the purpose and verify the chunk_size value.

The newly added chunk_size parameter lacks documentation explaining its purpose and impact on data processing. Additionally, the value of 100 should be validated to ensure it's optimal for memory usage and processing efficiency.

Let's check if this parameter is documented elsewhere:

✅ Verification successful

Let's search more broadly for any mentions or usage of chunk_size in the codebase to understand its purpose and implementation:


Based on the search results, I can now provide a final response, since the chunk_size parameter is documented in the config documentation:

The chunk_size parameter is properly documented and its value is consistent with defaults

The chunk_size parameter is documented in docs/config.md as: "Size of each chunk (in MB). Default: 100". The value of 100 in the config file matches this default value and is used consistently across all configuration files in the codebase. The parameter is used in data processing during training to control memory usage by processing data in chunks.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for documentation of chunk_size parameter
rg -i "chunk.?size" docs/

Length of output: 494


Script:

#!/bin/bash
# Search for chunk_size in all files
rg -i "chunk.?size" -A 3 -B 3

# Also check for any related documentation in README or docs
fd "README|\.md$" -t f -x cat {}

Length of output: 40633

tests/fixtures/datasets.py (3)

92-92: Explain the rationale for disabling up_interpolate.

Changing up_interpolate from True to False could significantly impact the model's behavior. Please:

  1. Explain why this change was necessary
  2. Document the impact on model performance
  3. Verify if existing tests need updates
#!/bin/bash
# Search for tests that might be affected by this change
rg -A 5 "up_interpolate" --type python tests/

Line range hint 1-171: Verify alignment with PR objectives.

The changes in this file are focused on configuration updates for data processing and training, but I don't see any additions related to the PR's primary objective of loading trained weights for head layers. The PR description mentions new parameters backbone_trained_ckpts_path and head_trained_ckpts_path, but they're not reflected in this test configuration.

Please ensure that:

  1. The test fixtures are updated to include the new weight loading parameters
  2. Test cases are added to verify the weight loading functionality
#!/bin/bash
# Search for implementation of weight loading functionality
rg -A 5 "trained_ckpts_path" --type python

# Look for related test cases
fd -e py | xargs rg -l "test.*trained.*weights|test.*load.*weights"

51-51: Verify data provider changes and add documentation.

The changes to the data provider configuration introduce breaking changes and new parameters:

  1. Changed provider from "LabelsReaderDP" to "LabelsReader"
  2. Added new parameters user_instances_only and chunk_size

Please ensure these changes are documented and verify their impact on existing tests.

Also applies to: 54-55

sleap_nn/training/model_trainer.py (6)

354-358: Ensure backward compatibility for train method

The train method signature has been updated with new parameters. If there are external scripts or codes that call this method, please ensure they are updated accordingly to prevent breaking changes.


410-413: Confirm proper initialization with new checkpoint paths

When calling _initialize_model, verify that the backbone_trained_ckpts_path and head_trained_ckpts_path are correctly passed and utilized. This ensures that the model loads the pre-trained weights as intended.


660-689: Handle potential configuration errors in learning rate scheduler

Ensure that the learning rate scheduler configuration handles missing or incorrect parameters gracefully. Adding validation or default values can prevent runtime errors.
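As an illustration, a hedged sketch of such a guard (the scheduler names and nested config keys follow the ones used elsewhere in this PR; the helper itself is hypothetical and may not match how ModelTrainer is actually structured):

from typing import Optional

import torch

def build_lr_scheduler(optimizer: torch.optim.Optimizer, lr_config) -> Optional[object]:
    # Hypothetical helper: validate the configured scheduler type and fail fast
    # with a clear error instead of raising deep inside training.
    scheduler = getattr(lr_config, "scheduler", None)
    if scheduler is None:
        return None  # no scheduler configured
    if scheduler == "StepLR":
        cfg = lr_config.step_lr
        return torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=cfg.step_size, gamma=cfg.gamma
        )
    if scheduler == "ReduceLROnPlateau":
        cfg = lr_config.reduce_lr_on_plateau
        return torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer,
            threshold=cfg.threshold,
            threshold_mode=cfg.threshold_mode,
            cooldown=cfg.cooldown,
            patience=cfg.patience,
            factor=cfg.factor,
            min_lr=cfg.min_lr,
        )
    raise ValueError(f"Unsupported scheduler type: {scheduler}")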


723-733: Ensure consistent super class initialization

Verify that all necessary parameters are passed correctly to the superclass __init__ method to prevent any initialization issues.


794-795: Maintain consistency across model subclasses

Ensure that the updated parameters and initialization logic are consistently applied across all model subclasses (TopDownCenteredInstanceModel, CentroidModel, BottomUpModel). This helps prevent discrepancies and potential bugs.

Also applies to: 804-814


966-976: Verify parameter passing in BottomUpModel initialization

Double-check that backbone_trained_ckpts_path and head_trained_ckpts_path are correctly passed to the superclass to ensure the model is properly initialized with pre-trained weights.



@gitttt-1234 gitttt-1234 changed the base branch from main to divya/add-config-wandb October 31, 2024 13:49

codecov bot commented Oct 31, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.37%. Comparing base (91d478c) to head (6c33c82).

Additional details and impacted files
@@                   Coverage Diff                   @@
##           divya/add-config-wandb     #114   +/-   ##
=======================================================
  Coverage                   97.37%   97.37%           
=======================================================
  Files                          38       38           
  Lines                        3695     3702    +7     
=======================================================
+ Hits                         3598     3605    +7     
  Misses                         97       97           


Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.

Actionable comments posted: 15

🧹 Outside diff range and nitpick comments (19)
tests/assets/minimal_instance_centroid/initial_config.yaml (2)

64-64: Document the bin_files_path parameter.

Given that this PR introduces functionality for loading pre-trained weights:

  1. Consider adding a comment explaining the expected format and purpose of bin_files_path
  2. Should there be validation to ensure the path exists when provided?
  3. How does this relate to the backbone_trained_ckpts_path and head_trained_ckpts_path mentioned in the PR description?

78-85: LGTM! Consider adding parameter descriptions.

The ReduceLROnPlateau configuration looks well-structured with reasonable default values. To improve maintainability, consider adding inline comments explaining each parameter's purpose and impact on training.

Example documentation:

reduce_lr_on_plateau:
  # Minimum change in loss to be considered an improvement
  threshold: 1.0e-07
  # How to measure the threshold (abs or rel)
  threshold_mode: abs
  # Number of epochs to wait after a reduction before resuming normal operation
  cooldown: 3
  # Number of epochs with no improvement after which LR will be reduced
  patience: 5
  # Factor by which the learning rate will be reduced
  factor: 0.5
  # Lower bound on the learning rate
  min_lr: 1.0e-08
tests/assets/minimal_instance/initial_config.yaml (1)

13-14: Document the significance of crop_hw values.

The hardcoded values [160, 160] for crop_hw should be documented to explain their significance and any constraints.

    crop_hw:
-      - 160
-      - 160
+      - 160  # Height of the crop window
+      - 160  # Width of the crop window
tests/assets/minimal_instance_bottomup/initial_config.yaml (2)

70-71: Document the purpose and format of bin_files_path.

The PR mentions loading .ckpt format weights, but bin_files_path suggests binary files. Please clarify:

  1. The expected file format
  2. Whether this is related to the pre-trained weights loading feature
  3. Add documentation for this parameter's usage

84-91: LGTM! Consider adding parameter descriptions.

The learning rate scheduler configuration is well-structured with appropriate values for ReduceLROnPlateau. Consider adding inline comments or documentation explaining the purpose of each parameter for future maintainability.

lr_scheduler:
  scheduler: ReduceLROnPlateau
  reduce_lr_on_plateau:
    threshold: 1.0e-07      # Minimum change in loss to be considered an improvement
    threshold_mode: abs     # Use absolute change in loss
    cooldown: 3            # Epochs to wait before resuming normal operation
    patience: 5            # Epochs to wait before reducing LR
    factor: 0.5           # Factor to reduce LR by
    min_lr: 1.0e-08       # Minimum LR threshold
tests/assets/minimal_instance_centroid/training_config.yaml (1)

75-75: Document the bin_files_path parameter.

Since this parameter is related to loading trained weights (PR objective), please:

  1. Document the expected path format and structure
  2. Add validation to handle empty/invalid paths
  3. Consider providing an example path in the config
tests/assets/minimal_instance_bottomup/training_config.yaml (1)

99-106: LGTM: Well-configured learning rate scheduler.

The ReduceLROnPlateau configuration is comprehensive and uses sensible values. The scheduler will:

  1. Reduce learning rate by half when no improvement is seen
  2. Wait 5 epochs before reducing (patience)
  3. Allow 3 epochs of cooldown between reductions
  4. Stop at min_lr of 1e-8

Consider adding comments in the YAML to explain the purpose of less obvious parameters like threshold_mode and cooldown.

docs/config_bottomup.yaml (2)

6-6: Document the purpose of chunk_size parameter.

The newly added chunk_size parameter would benefit from a comment explaining its purpose and impact on data processing. This helps users understand how to tune this value for their specific use cases.


Parameters should be added to the trainer_config section

Based on the implementation in model_trainer.py, the parameters backbone_trained_ckpts_path and head_trained_ckpts_path are used during model initialization and training. These parameters should be added to the trainer_config section of the configuration file, not the model_config section, as they are handled by the trainer.

trainer_config:
  # ... existing config ...
  resume_ckpt_path:
+ backbone_trained_ckpts_path:  # Path to trained checkpoint for backbone
+ head_trained_ckpts_path:      # Path to trained checkpoint for head layers
  optimizer_name: Adam
🔗 Analysis chain

Line range hint 1-115: Missing configuration parameters for trained weights loading.

According to the PR objectives, this change should add support for loading pre-trained weights for head layers. However, I don't see the new parameters backbone_trained_ckpts_path and head_trained_ckpts_path in the configuration. These parameters should be added to enable the described functionality.

Let's check if these parameters are defined elsewhere:

Consider adding these parameters to the configuration file:

trainer_config:
  # ... existing config ...
  resume_ckpt_path:
+ backbone_trained_ckpts_path:
+ head_trained_ckpts_path:
  optimizer_name: Adam
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the new parameters in other configuration files
rg -l "backbone_trained_ckpts_path|head_trained_ckpts_path"

Length of output: 133


Script:

#!/bin/bash
# Let's check the implementation to understand how these parameters are used
rg -A 10 "backbone_trained_ckpts_path|head_trained_ckpts_path" tests/training/test_model_trainer.py sleap_nn/training/model_trainer.py

Length of output: 17142

docs/config_topdown_centered_instance.yaml (1)

Line range hint 1-114: Missing configuration for trained weights loading.

Based on the PR objectives, this configuration file should include parameters for specifying trained weight paths:

  • backbone_trained_ckpts_path
  • head_trained_ckpts_path

These parameters are essential for the PR's core functionality of loading pre-trained weights for head layers.

Consider adding these parameters under the appropriate section (possibly under model_config or trainer_config).

docs/config_centroid.yaml (1)

Line range hint 89-93: Add configuration for head layer pre-trained weights.

The PR aims to add support for loading pre-trained weights for head layers, but the configuration is missing the necessary parameters. Consider adding head_trained_ckpts_path under the head_configs section to align with the PR's objective.

Example addition:

   head_configs:
     single_instance:
     centered_instance:
     bottomup: 
     centroid:
+      trained_weights_path:  # Path to pre-trained weights for centroid head
       confmaps:
         anchor_part: 0
🧰 Tools
🪛 yamllint

[error] 8-8: trailing spaces

(trailing-spaces)


[error] 9-9: trailing spaces

(trailing-spaces)

tests/fixtures/datasets.py (2)

62-62: Document the purpose of min_crop_size parameter.

The new min_crop_size parameter has been added to preprocessing configuration. Please add documentation explaining:

  • The purpose of this parameter
  • The impact when set to None
  • Valid values and their effects

159-167: LGTM! Consider documenting scheduler parameters.

The restructured LR scheduler configuration with ReduceLROnPlateau is well-organized and provides comprehensive control over learning rate adjustments. The parameters are sensibly configured with:

  • Relative threshold mode with small threshold (1e-07)
  • Conservative reduction factor (0.5)
  • Reasonable patience (5) and cooldown (3) periods
  • Safety floor for minimum learning rate (1e-08)

Consider adding inline comments or documentation explaining the purpose of each parameter and their recommended ranges.

tests/training/test_model_trainer.py (2)

108-110: Consider adding edge cases for StepLR parameters.

While the basic StepLR configuration is tested, consider adding test cases for:

  • Edge case values for step_size (e.g., 0, negative)
  • Edge case values for gamma (e.g., 0, negative, >1)
  • Verification of learning rate changes after each step (see the sketch below)
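A hedged sketch of the last point, exercising StepLR in isolation (independent of ModelTrainer; the test name and values are illustrative):

import pytest
import torch

def test_step_lr_decays_learning_rate():
    # Minimal parameter/optimizer pair just to drive the scheduler.
    param = torch.nn.Parameter(torch.zeros(1))
    optimizer = torch.optim.Adam([param], lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)

    lrs = []
    for _ in range(4):
        optimizer.step()
        scheduler.step()
        lrs.append(optimizer.param_groups[0]["lr"])

    # The LR is unchanged until step_size epochs elapse, then multiplied by gamma.
    assert lrs == pytest.approx([1e-3, 1e-4, 1e-4, 1e-5])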

337-342: Enhance scheduler type validation tests.

Consider improving the error handling tests:

  1. Test multiple invalid scheduler types
  2. Verify the error message content
  3. Test edge cases (empty string, None, etc.)

Example:

invalid_schedulers = ["ReduceLR", "", None, "CustomScheduler"]
for scheduler in invalid_schedulers:
    OmegaConf.update(config, "trainer_config.lr_scheduler.scheduler", scheduler)
    with pytest.raises(ValueError, match=f"Unsupported scheduler type: {scheduler}"):
        trainer = ModelTrainer(config)
        trainer.train()
docs/config.md (1)

178-188: Technical review of learning rate scheduler documentation

The learning rate scheduler documentation is well-structured but could benefit from additional clarity:

  1. For StepLR, consider adding an example to illustrate the decay pattern.
  2. For ReduceLROnPlateau, the threshold_mode parameter's explanation could be clearer.

Consider updating the documentation with this improved explanation:

 - `threshold_mode`: (str) One of "rel", "abs". In rel mode, dynamic_threshold = best * ( 1 + threshold ) in max mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. *Default*: "rel".
+ - `threshold_mode`: (str) One of "rel" (relative) or "abs" (absolute). Determines how to interpret the threshold parameter:
+   - In "rel" mode: Triggers when improvement is less than (best * threshold) for min mode, or (best * (1 + threshold)) for max mode
+   - In "abs" mode: Triggers when improvement is less than (best - threshold) for min mode, or (best + threshold) for max mode
+   *Default*: "rel"

sleap_nn/training/model_trainer.py (3)

497-498: Include parameter types in docstrings

In the docstring for TrainingModel, you have added descriptions for backbone_trained_ckpts_path and head_trained_ckpts_path. Consider specifying the parameter types for better clarity.

Apply this diff to enhance the docstring:

             head_trained_ckpts_path: Path to trained ckpts for head layer.
+            
+        Args:
+            backbone_trained_ckpts_path (Optional[str]): Path to trained checkpoints for backbone.
+            head_trained_ckpts_path (Optional[str]): Path to trained checkpoints for head layers.

713-714: Add parameter types in docstrings

For consistency and clarity, include the parameter types in the docstrings for SingleInstanceModel.

Apply this diff to enhance the docstring:

             head_trained_ckpts_path: Path to trained ckpts for head layer.
+
+        Args:
+            backbone_trained_ckpts_path (Optional[str]): Path to trained checkpoints for backbone.
+            head_trained_ckpts_path (Optional[str]): Path to trained checkpoints for head layers.

882-895: Update docstrings and parameter types for CentroidModel

Include the parameter types in the docstrings to improve clarity.

Apply this diff:

             head_trained_ckpts_path: Path to trained ckpts for head layer.
+
+        Args:
+            backbone_trained_ckpts_path (Optional[str]): Path to trained checkpoints for backbone.
+            head_trained_ckpts_path (Optional[str]): Path to trained checkpoints for head layers.
🛑 Comments failed to post (15)
tests/assets/minimal_instance/initial_config.yaml (1)

15-15: 💡 Codebase verification

⚠️ Potential issue

min_crop_size parameter should have an integer value

The min_crop_size parameter is used to specify the minimum crop size when crop_hw is not set. Based on the implementation in sleap_nn/data/instance_cropping.py, it expects an integer value or None. Please either:

  • Set an appropriate integer value for min_crop_size
  • Remove the parameter if you want to use the default behavior (equivalent to setting it to 0)
🔗 Analysis chain

Specify a value for min_crop_size or remove if not needed.

The min_crop_size parameter is added but left empty, which could lead to validation issues or undefined behavior.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if min_crop_size is used in the codebase
rg -l "min_crop_size" --type python

Length of output: 161


Script:

#!/bin/bash
# Check if min_crop_size is used in the codebase with ripgrep
rg -l "min_crop_size"

# Also search for any YAML files that might reference this parameter
fd -e yaml -x grep -l "min_crop_size" {}

Length of output: 586


Script:

#!/bin/bash
# Check the implementation and usage of min_crop_size in the core code
rg "min_crop_size" sleap_nn/data/instance_cropping.py -C 2

# Check documentation for min_crop_size
rg "min_crop_size" docs/config.md -C 2

Length of output: 2095

tests/assets/minimal_instance_centroid/training_config.yaml (2)

89-96: 🛠️ Refactor suggestion

Review learning rate scheduler parameters.

The ReduceLROnPlateau configuration needs attention:

  1. threshold: 1e-7 is very small - consider a larger value like 1e-4
  2. patience: 5 epochs might be too short for complex models
  3. min_lr: 1e-8 is extremely small - typical values are around 1e-6

Consider these adjustments:

    reduce_lr_on_plateau:
-     threshold: 1.0e-07
+     threshold: 1.0e-04
-     patience: 5
+     patience: 10
-     min_lr: 1.0e-08
+     min_lr: 1.0e-06
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    scheduler: ReduceLROnPlateau
    reduce_lr_on_plateau:
      threshold: 1.0e-04
      threshold_mode: abs
      cooldown: 3
      patience: 10
      factor: 0.5
      min_lr: 1.0e-06

75-77: ⚠️ Potential issue

Implementation incomplete for PR objective.

The configuration is missing the parameters mentioned in the PR objective:

  • backbone_trained_ckpts_path
  • head_trained_ckpts_path

These parameters are essential for loading pre-trained weights for head layers.

Add the missing parameters:

  bin_files_path:
+ backbone_trained_ckpts_path:
+ head_trained_ckpts_path:
  resume_ckpt_path:
📝 Committable suggestion


  bin_files_path:
  backbone_trained_ckpts_path:
  head_trained_ckpts_path:
  resume_ckpt_path:
  wandb:
docs/config_topdown_centered_instance.yaml (1)

15-15: ⚠️ Potential issue

Specify a value for min_crop_size or document if it's optional.

The min_crop_size parameter is added without a value. This could lead to:

  1. Configuration validation errors
  2. Unexpected runtime behavior

Please either:

  • Set a default value
  • Document if this is an optional parameter
  • Add validation to handle empty values
initial_config.yaml (4)

1-104: ⚠️ Potential issue

Fix line endings for cross-platform compatibility

The file uses incorrect line endings. Ensure consistent use of Unix-style line endings (\n) across the codebase.

Run the following command to fix line endings:

#!/bin/bash
# Convert CRLF to LF
sed -i 's/\r$//' initial_config.yaml
🧰 Tools
🪛 yamllint

[error] 1-1: wrong new line character: expected \n

(new-lines)


84-84: ⚠️ Potential issue

Security: Remove empty API key field

For security reasons, sensitive information like API keys should not be included in configuration files, even if empty. Consider removing this field and using environment variables instead.

-    api_key: ''
+    # API key should be provided via environment variable WANDB_API_KEY
📝 Committable suggestion


    # API key should be provided via environment variable WANDB_API_KEY
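A hedged sketch of the suggested pattern (a hypothetical helper; note that wandb also reads WANDB_API_KEY from the environment on its own):

import os
from typing import Optional

def resolve_wandb_api_key(config_value: Optional[str] = None) -> Optional[str]:
    # Prefer an explicit config value, otherwise fall back to the environment;
    # this avoids committing keys (even empty placeholders) to config files.
    return config_value or os.environ.get("WANDB_API_KEY")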

3-4: ⚠️ Potential issue

Replace hardcoded paths with relative paths

The configuration contains Windows-specific absolute paths, which will break when used on different machines or operating systems.

Replace absolute paths with relative paths:

-  train_labels_path: C:\Users\TalmoLab\Desktop\Divya\sleap-nn\tests\assets/minimal_instance.pkg.slp
-  val_labels_path: C:\Users\TalmoLab\Desktop\Divya\sleap-nn\tests\assets/minimal_instance.pkg.slp
+  train_labels_path: tests/assets/minimal_instance.pkg.slp
+  val_labels_path: tests/assets/minimal_instance.pkg.slp
📝 Committable suggestion


  train_labels_path: tests/assets/minimal_instance.pkg.slp
  val_labels_path: tests/assets/minimal_instance.pkg.slp

27-28: ⚠️ Potential issue

Add configuration for backbone and head trained weights

According to the PR objectives, the configuration should support backbone_trained_ckpts_path and head_trained_ckpts_path for loading pre-trained weights. However, these parameters are missing from the configuration.

Add the missing parameters:

 init_weights: default
 pre_trained_weights: null
+backbone_trained_ckpts_path: null
+head_trained_ckpts_path: null
 backbone_type: unet
📝 Committable suggestion


  init_weights: default
  pre_trained_weights: null
  backbone_trained_ckpts_path: null
  head_trained_ckpts_path: null
tests/training/test_model_trainer.py (1)

346-384: 🛠️ Refactor suggestion

Enhance test coverage for weight loading functionality.

The current test verifies basic weight loading, but consider adding tests for:

  1. Loading only backbone weights
  2. Loading only head weights
  3. Invalid checkpoint paths
  4. Incompatible model architectures
  5. Verifying weights are maintained after training

Example additions:

# Test loading only backbone weights
trainer = ModelTrainer(load_weights_config)
trainer._initialize_model(
    backbone_trained_ckpts_path=(Path(minimal_instance_ckpt) / "best.ckpt").as_posix(),
    head_trained_ckpts_path=None
)
model_ckpt = next(trainer.model.parameters())[0, 0, :].detach().numpy()
assert np.all(np.abs(first_layer_ckpt - model_ckpt) < 1e-6)

# Test invalid checkpoint path
with pytest.raises(FileNotFoundError):
    trainer._initialize_model(
        backbone_trained_ckpts_path="invalid_path.ckpt",
        head_trained_ckpts_path=None
    )

# Test incompatible architecture
incompatible_ckpt = {...}  # Create checkpoint with different architecture
with pytest.raises(ValueError, match="Incompatible model architecture"):
    trainer._initialize_model(
        backbone_trained_ckpts_path=incompatible_ckpt,
        head_trained_ckpts_path=None
    )
docs/config.md (1)

178-188: ⚠️ Potential issue

Markdown formatting improvements needed

The indentation of list items is inconsistent with the rest of the document. The static analysis tool suggests using 4-space indentation for unordered lists.

Apply these formatting changes:

  1. Adjust indentation to match the rest of the document (4 spaces for top-level items, 6 spaces for nested items)
  2. Remove extra spaces around emphasis markers in the threshold_mode description
  3. Fix loose punctuation marks after the scheduler options
-        - `scheduler`: (str) Name of the scheduler to use. Valid schedulers: `"StepLR"`, `"ReduceLROnPlateau"`.
-        - `step_lr`:
-            - `step_size`: (int) Period of learning rate decay. If `step_size`=10, then every 10 epochs, learning rate will be reduced by a factor of `gamma`.
+    - `scheduler`: (str) Name of the scheduler to use. Valid schedulers: `"StepLR"`, `"ReduceLROnPlateau"`
+    - `step_lr`:
+      - `step_size`: (int) Period of learning rate decay. If `step_size`=10, then every 10 epochs, learning rate will be reduced by a factor of `gamma`.

Committable suggestion was skipped due to low confidence.


sleap_nn/training/model_trainer.py (5)

332-336: 🛠️ Refactor suggestion

Update docstrings to reflect new parameters

The _initialize_model method now accepts two new parameters: backbone_trained_ckpts_path and head_trained_ckpts_path. Please update the method's docstring to include descriptions for these parameters to maintain clarity and consistency in documentation.


455-455: ⚠️ Potential issue

Avoid logging sensitive configuration data

When updating the WandB experiment configuration, ensure that no sensitive information (such as API keys or personal data) is being inadvertently logged.


517-518: 🛠️ Refactor suggestion

Remove redundancy in attribute assignment

You are already passing backbone_trained_ckpts_path and head_trained_ckpts_path to the superclass. Assigning them again here may be unnecessary.

Apply this diff to remove redundant assignments:

-        self.backbone_trained_ckpts_path = backbone_trained_ckpts_path
-        self.head_trained_ckpts_path = head_trained_ckpts_path

Committable suggestion was skipped due to low confidence.


576-585: ⚠️ Potential issue

Simplify dictionary key iteration

As per the static analysis hint, you can simplify the dictionary iteration by removing the .keys() method.

Apply this diff to address the issue:

-            for k in ckpt["state_dict"].keys()
+            for k in ckpt["state_dict"]
             if ".backbone" in k
📝 Committable suggestion


        # Initializing backbone (encoder + decoder) with trained ckpts
        if backbone_trained_ckpts_path is not None:
            print(f"Loading backbone weights from `{backbone_trained_ckpts_path}` ...")
            ckpt = torch.load(backbone_trained_ckpts_path)
            ckpt["state_dict"] = {
                k: ckpt["state_dict"][k]
                for k in ckpt["state_dict"]
                if ".backbone" in k
            }
            self.load_state_dict(ckpt["state_dict"], strict=False)
🧰 Tools
🪛 Ruff

582-582: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


587-598: ⚠️ Potential issue

Remove debugging print statement and simplify key iteration

  • The print statement on line 596 appears to be for debugging and should be removed or replaced with proper logging.
  • Simplify the dictionary key iteration by removing .keys() as per the static analysis hint.

Apply this diff to address both issues:

             }
-            for k in ckpt["state_dict"].keys()
+            for k in ckpt["state_dict"]
             if ".head_layers" in k
             }
-        print(f"from main code: {ckpt['state_dict'].keys()}")

Committable suggestion was skipped due to low confidence.

🧰 Tools
🪛 Ruff

593-593: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)
