Missing Key Code/Files for Training & Validation on Vimeo-90K and REDS Datasets #16
Hi @Walnutes,
As this project was part of an internship, I don't have access to the old files anymore, so I probably won't be able to provide the necessary code for Vimeo-90K. I apologize for this.
The validation output is on TensorBoard. There you can observe a sequence of frames for visual inspection, to see how results change (and hopefully improve) for a random video clip (in this case, sequence 020 was chosen). If you want, you can monitor multiple sequences with a slight code modification. No metrics are computed at this point: computing metrics on only a single (and short) sequence is not very informative, and running a full validation over multiple long sequences is very time-consuming.
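That said, if you do want a rough quantitative signal on the monitored clip, per-frame PSNR is cheap to compute. A minimal NumPy sketch (my own illustration, not code from this repository):

```python
import numpy as np

def psnr(sr: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((sr.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy frames standing in for a monitored sequence (e.g. 020):
frames_sr = [np.full((4, 4, 3), 0.5), np.full((4, 4, 3), 0.25)]
frames_gt = [np.full((4, 4, 3), 0.5), np.full((4, 4, 3), 0.5)]
scores = [psnr(s, g) for s, g in zip(frames_sr, frames_gt)]
```

Averaging such scores over a single short clip is still only a sanity check, for the reasons given above.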
For VALIDATION_IMAGE, you are right, the correct path is train/lq. I probably forgot to change this when I cleaned the code for publication.
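For reference, the corrected variable in `train.sh` would look something like this (the dataset root is a placeholder, not a path from the repository):

```shell
# Point validation at the low-quality inputs rather than the ground truth.
VALIDATION_IMAGE="/path/to/REDS/train/lq"   # was .../train/gt
```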
For validation, you don't need any metadata.txt, since images are loaded directly (see …).
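In other words, it is enough to enumerate the frame files of a sequence in order. A minimal stdlib sketch (my own illustration, assuming frames are PNGs named so that lexicographic order equals temporal order):

```python
import tempfile
from pathlib import Path

def list_validation_frames(seq_dir):
    """Collect the frame files of one sequence, sorted by filename,
    so they can be opened directly (e.g. with PIL) without any metadata file."""
    return sorted(Path(seq_dir).glob("*.png"))

# Demo on a throwaway directory with out-of-order file creation:
tmp = Path(tempfile.mkdtemp())
for name in ("00000002.png", "00000000.png", "00000001.png"):
    (tmp / name).touch()
frames = list_validation_frames(tmp)
```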
Thank you very much for your prompt and detailed response! Could you please provide some reliable references/template code (e.g., …)? Any guidance or pointers to relevant parts of other repositories would be incredibly helpful for replicating your setup. Looking forward to your further response!
From BasicSR, you need the file `vimeo90k_dataset.py`. You have to modify the class …
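As a rough illustration of what the adapted dataset has to index (this describes the public Vimeo-90K septuplet release, not this repository's code): clips are keyed by a `sequence/subsequence` entry in `sep_trainlist.txt`, and each clip holds 7 frames named `im1.png` … `im7.png`.

```python
# Hypothetical helper: expand one Vimeo-90K clip key into its 7 frame paths.
# The root directory name is an assumption based on the standard release layout.
def vimeo_clip_paths(clip_key, root="vimeo_septuplet/sequences", num_frames=7):
    return [f"{root}/{clip_key}/im{i}.png" for i in range(1, num_frames + 1)]

paths = vimeo_clip_paths("00001/0266")
```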
Thank you so much for your detailed guidance! It has been incredibly helpful. I've modified …
Thank you again for your time and support! Looking forward to your response.
Besides the training-related questions mentioned earlier, I also have a question about the model weights you uploaded to HuggingFace. As mentioned in #15, the Vimeo-90K and REDS datasets require separate training. However, the StableVSR weights on HuggingFace provide only a single set of weights. Could you clarify:
If it's the third case, could you kindly share the final weights specifically fine-tuned on the Vimeo-90K and REDS datasets?
You can use 3 and 1 for Vimeo-90K as well; in this case you are considering just the central frame of the sequence, plus the previous and next ones.
256 is OK for Vimeo-90K, too. You are cropping in width but not in height. You can use a smaller value, like 128 or 192, for more data augmentation, but the dataset is quite large, so I think 256 is fine.
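The two settings above can be sketched as follows (my own illustration, assuming 7-frame Vimeo-90K clips at the standard 448x256 resolution; with a 256 patch, only the width coordinate actually varies):

```python
import random

def center_window(num_frames=7, window=3, interval=1):
    """Indices of the central frame plus its neighbors.
    window=3, interval=1 -> previous, central, next frame."""
    c = num_frames // 2
    half = (window // 2) * interval
    return list(range(c - half, c + half + 1, interval))

def random_crop_origin(w, h, patch=256):
    """Top-left corner of a random patch-sized crop."""
    x = random.randint(0, w - patch)
    y = random.randint(0, h - patch)
    return x, y

idx = center_window()                # [2, 3, 4] for a 7-frame clip
x, y = random_crop_origin(448, 256)  # y is always 0: height is already 256
```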
I used …
Option 2 is correct. The pre-trained model uploaded on HuggingFace allows you to replicate the REDS4 results reported in the paper.
Hi,
Thank you for sharing your excellent work! I encountered a few issues while trying to train on the Vimeo-90K and REDS datasets:
1. The `dataset` directory only contains `reds_dataset.py`, and `train.py` seems to import this class exclusively. Could you kindly provide the complete data loading code for Vimeo-90K, as well as any relevant configuration files like `config_vimeo.yaml` and `metadata.txt`?
2. While `validation_steps` is defined, I couldn't locate the corresponding log, output images (e.g., super-resolved images), or evaluation metrics mentioned in your paper. Could you clarify this part?
3. In the `train.sh` script, `VALIDATION_IMAGE` points to the `train/gt` directory. Should this be corrected to `train/lq`? Also, the required `metadata.txt` file appears to be missing. I attempted to create it based on `REDS_train_metadata.txt` and the training settings mentioned in your paper, as follows: `000 100\n011 100\n015 100\n020 100`. Could you confirm whether this matches your setup? If not, could you kindly upload the correct file? Additionally, if Vimeo-90K also requires such files/configs, could you provide them as well?
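For what it's worth, a metadata file in that format is easy to generate. A hypothetical helper (not from the repository), using the four sequences and 100-frame count quoted above:

```python
# Write a REDS-style metadata file: one "<sequence> <num_frames>" line per sequence.
def write_metadata(path, sequences, num_frames=100):
    with open(path, "w") as f:
        for name in sequences:
            f.write(f"{name} {num_frames}\n")

# The four validation sequences from the question, 100 frames each:
write_metadata("metadata.txt", ["000", "011", "015", "020"])
```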
Thank you in advance for your support and for providing additional details. Your assistance would be invaluable in reproducing your results!