On Plagiarism of "Trajectory Consistency Distillation" #13
I would choose plagiarized poop any day if it actually helps my QoL with SDXL models, rather than some ML paper whose diffusion model can only generate ImageNet images. Also, notify the Hugging Face team for what? To remove those useful LoRAs that are certainly more useful than your models? Not affiliated with the authors in any way, BTW.
Well done! Another poop that just works on poop datasets like CelebA-HQ and CIFAR-10: https://arxiv.org/abs/2006.11239
I didn't call CTM or the DDPM paper poop, but if you insist: that poop you were talking about would be nothing if Stability.AI or NAI hadn't come around and "plagiarized" it. Will someone, or Sony, or the CTM team actually release something based on CTM that is as useful as this LoRA released by the TCD team? I doubt it. In that case, the LoRA released by the TCD team is a bigger net positive to the community than anything from CTM, and not removing that net positive is the hill I will die on. Still not affiliated with the authors in any way.
We staunchly oppose all forms of plagiarism, as well as unwarranted accusations.
@Kim-Dongjun I've looked at the TCD paper, and I see it clearly stated there that the proof comes from your work (CTM), which is listed in the references. Are you sure you didn't overreact a bit here?
It is actually a disgraceful trick, which amounts to saying: "Oh, I just borrowed this one part; that's all I took from CTM." But the truth is that the core idea of TCD is nearly identical to CTM's. Given that TCD copies word for word in many places, its authors must have been well aware of CTM, so this behavior is obvious plagiarism. TCD would make a good technical report based on CTM, but its authors apparently did not plan to present it that way.
Totally agree. |
It is not even merely a lack of novelty. It is acceptable for a paper to lack novelty and still be published, provided it genuinely contributes to some community. But it is disgraceful when a paper deliberately turns a blind eye to already published work and takes all the credit for itself.
From the TCD paper (in A. Related Works):
It doesn't quite look like turning a blind eye to me. It seems they gave credit (or at least tried to) and described how they improved on the CTM method. To my taste, this should be stated more clearly in the introduction rather than in the appendix at the end of the TCD paper. Reducing the computational cost of previously used methods can still be seen as a scientific improvement (IMO), especially in ML/AI, where computational costs are often huge. This is just my high-level attempt to understand the problem, from a layman's perspective and without assuming bad faith. Rather than passing a hasty judgment of my own, I would prefer to see the results of a cross-check by reviewers with a solid background in math and latent diffusion, who could fully understand the mathematical details of both papers and evaluate whether the TCD method really is an improvement over CTM and a contribution to science, and to what degree.
We sadly found out that our Consistency Trajectory Models paper (CTM, ICLR 2024) was plagiarized by Trajectory Consistency Distillation (TCD)! See Twitter and Reddit.
We are deeply disappointed by the TCD authors' inappropriate reaction. Accordingly, we have reported the plagiarism to their affiliated universities, the Hugging Face team, and ICML. *Speaking on behalf of myself