Time-Frequency Consistency Loss is not utilized #21
Comments
Hello, I noticed this too. I modified the loss function to include the time-frequency consistency loss, but the final experimental results differed significantly from those in the paper. I hope the author can clarify this for us.
Can you get good results on the other three experiments? How did you set the parameters?
Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one SleepEEG → Epilepsy result with the original model parameter settings.
I also tried pre-training and fine-tuning with other datasets, but the performance was poor.
I have made many attempts; only the SleepEEG experiment gets results close to the paper, and the others are bad.
Perhaps only the author can answer these questions for us.
Have you solved the subset problem?
Sorry, I haven't solved the subset problem yet. Maybe the author only gave the correct settings for the SleepEEG → Epilepsy experiment.
The author's code seems to have a problem: the backbone network uses torch's TransformerEncoderLayer but does not set batch_first to true, even though, given the author's data format, the batch dimension comes first. It also does not seem reasonable to use a TransformerEncoder on single-channel time-series input.
Yes, this has also been mentioned in issue #19. I agree that the single-channel time-series input doesn't make sense, especially since the transformer is currently coded such that the "time" dimension of the self-attention mechanism is actually the singular channel. As a result, the sequence length the self-attention is attending over is only 1.
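To illustrate the batch_first point above, here is a minimal sketch (shapes are hypothetical, not the repo's actual configuration): with the default batch_first=False, TransformerEncoderLayer expects (seq_len, batch, features), so batch-first data is silently attended over the wrong axis; setting batch_first=True matches a (batch, seq_len, features) layout.

```python
import torch
import torch.nn as nn

# Hypothetical shapes for illustration only.
batch, seq_len, d_model = 4, 178, 64

# batch_first=True tells the layer that dim 0 is the batch and
# dim 1 is the sequence, so self-attention runs over seq_len,
# not over the batch dimension.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = torch.randn(batch, seq_len, d_model)  # (batch, seq_len, features)
out = encoder(x)
print(out.shape)  # same shape as the input
```

This also shows why a single-channel input is problematic: if the channel axis is treated as the sequence, seq_len is 1 and attention degenerates to attending over a single position.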
I noticed that the Time-Frequency Consistency Loss is not being utilized in your code. Could you please confirm whether this omission is intentional? If it is, could you explain the reason behind it and its potential impact on the model's performance?
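For reference, one simple way such a consistency term could be wired into the total loss is sketched below. This is only an illustrative sketch of a distance between time-domain and frequency-domain embeddings, not the paper's exact formulation; the function name, shapes, and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(z_time: torch.Tensor, z_freq: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between L2-normalized time-domain and
    frequency-domain embeddings (a hypothetical consistency term)."""
    zt = F.normalize(z_time, dim=-1)
    zf = F.normalize(z_freq, dim=-1)
    return ((zt - zf) ** 2).sum(dim=-1).mean()

# Hypothetical embeddings of shape (batch, embed_dim).
z_t = torch.randn(8, 128)
z_f = torch.randn(8, 128)

# Would be added to the contrastive losses, e.g.
# total = loss_time + loss_freq + lam * consistency_loss(z_t, z_f)
loss_c = consistency_loss(z_t, z_f)
```

Whether this term is included (and with what weight) is exactly what the commenters above found changes the results, so confirmation from the authors would help.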