Inquiry about pixel-wise consistency #23

Open
jangbi1 opened this issue Feb 15, 2024 · 1 comment
Labels: documentation (Improvements or additions to documentation), question (Further information is requested)

jangbi1 commented Feb 15, 2024

Hello, @usert5432.

Thanks for your quick and detailed response in #22.

Additionally, as I read through your paper, I had some questions, so I opened this issue.

I am interested in applying pixel-wise consistency loss for video translation, but I can’t find an explanation for this in the readme.

I'm wondering if you could look into this for me; if it's possible to use it, it would greatly help my research.

Once again, thank you for sharing your impressive research. I look forward to your reply.

Sincerely,

Jangbi.

@usert5432 usert5432 self-assigned this Feb 17, 2024
@usert5432 usert5432 added the documentation and question labels Feb 17, 2024
usert5432 (Collaborator) commented:

Hello @jangbi1,

> I am interested in applying pixel-wise consistency loss for video translation, but I can’t find an explanation for this in the readme.
> I'm wondering if you could look into this for me; if it's possible to use it, it would greatly help my research.

Sure, the diff below shows the modifications that need to be made to the Male-2-Female script scripts/celeba_hq/train_m2f_translation.py to enable pixel-wise consistency (matching the paper's configuration). After applying this diff, the pixel-wise consistency loss can be enabled by adding --lambda-consist MAGNITUDE to the script invocation line.

Please let me know if I should elaborate more on this.

We will update our documentation and provide the proper training scripts with the pixel-wise consistency loss, but it will take some time, since we are in the middle of another project.

diff --git a/scripts/celeba_hq/train_m2f_translation.py b/scripts/celeba_hq/train_m2f_translation.py
index 05f8fcc..1f7fba4 100644
--- a/scripts/celeba_hq/train_m2f_translation.py
+++ b/scripts/celeba_hq/train_m2f_translation.py
@@ -33,6 +33,11 @@ def parse_cmdargs():
         default = 1e-4, help = 'learning rate of the generator'
     )

+    parser.add_argument(
+        '--lambda-consist', dest = 'lambda_consist', type = float,
+        default = 0.0, help = 'magnitude of the forward-consistency loss'
+    )
+
     return parser.parse_args()

 def get_transfer_preset(cmdargs):
@@ -108,8 +113,10 @@ args_dict = {
     'model_args' : {
         'lambda_a'        : cmdargs.lambda_cyc,
         'lambda_b'        : cmdargs.lambda_cyc,
+        'lambda_consist'  : cmdargs.lambda_consist,
         'lambda_idt'      : 0.5,
         'avg_momentum'    : 0.9999,
+        'consistency'     : { 'name' : 'resize', 'size' : 32, },
         'head_queue_size' : 3,
         'head_config'     : {
             'name'            : BH_PRESETS[cmdargs.head],
@@ -131,7 +138,8 @@ args_dict = {
 # args
     'label'  : (
         f'{cmdargs.gen}-{cmdargs.head}_({cmdargs.no_pretrain}'
-        f':{cmdargs.lambda_cyc}:{cmdargs.lambda_gp}:{cmdargs.lr_gen})'
+        f':{cmdargs.lambda_cyc}:{cmdargs.lambda_gp}:{cmdargs.lr_gen}'
+        f':{cmdargs.lambda_consist})'
     ),
     'outdir' : os.path.join(ROOT_OUTDIR, 'celeba_hq_resized_lanczos', 'm2f'),
     'log_level'  : 'DEBUG',
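
For reference, after applying the diff, a training run with the pixel-wise consistency loss enabled might look like the command below. The magnitude 0.1 is just a placeholder value for illustration (not a recommendation from the paper), and any other options you normally pass to the script, such as the generator and head choices, should be supplied as usual.

python scripts/celeba_hq/train_m2f_translation.py --lambda-consist 0.1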
