On Plagiarism of "Trajectory Consistency Distillation" #13

Open
Kim-Dongjun opened this issue Mar 26, 2024 · 9 comments

Comments

@Kim-Dongjun

We sadly found out that our Consistency Trajectory Models (CTM, ICLR 2024) paper was plagiarized by Trajectory Consistency Distillation (TCD)! See Twitter and Reddit.

We are deeply disappointed by the TCD authors' inappropriate reaction. Accordingly, we have reported their plagiarism to their affiliated universities, the Hugging Face team, and ICML. *Speaking on behalf of myself

@ilicnon

ilicnon commented Mar 27, 2024

I would choose plagiarized poop any day if it actually helps my QoL with SDXL models, rather than some ML paper with a diffusion model that can only be used to generate ImageNet images.

Also, notify the Hugging Face team for what? To remove those useful LoRAs that are certainly more useful than your models?

Not affiliated with the authors in any way, BTW.

@advenTure423

I would choose plagiarized poop any day if it actually helps my QoL with SDXL models, rather than some ML paper with a diffusion model that can only be used to generate ImageNet images.

Also, notify the Hugging Face team for what? To remove those useful LoRAs that are certainly more useful than your models?

Not affiliated with the authors in any way, BTW.

Well done! Another poop that just works on poop datasets like CelebA-HQ and CIFAR-10: https://arxiv.org/abs/2006.11239.
By the way, "Not affiliated with the authors in any way, BTW", lol.

@ilicnon

ilicnon commented Mar 28, 2024

Well done! Another poop that just works on poop datasets like CelebA-HQ and CIFAR-10: https://arxiv.org/abs/2006.11239.

I didn't call CTM or the DDPM paper poop, but if you insist, the poop you were talking about would be nothing if Stability AI or NAI hadn't come around and "plagiarized" it. Will someone, or Sony, or the CTM team actually release something based on CTM that will be as useful as this LoRA released by TCD? I doubt it. In that case, the LoRA released by the TCD team has more net positive for the community than anything from CTM, and not removing such a net positive is the hill I will die on.

Still not affiliated with authors in any way.

@mhh0318
Collaborator

mhh0318 commented Mar 28, 2024

We staunchly oppose any form of plagiarism, as well as unwarranted accusations.
At the current stage, we will keep this issue open, yet we sincerely hope that the discussion will focus on the technical aspects of the issues.

@MoonRide303

@Kim-Dongjun I've looked at the TCD paper, and I see it clearly stated there that the proof comes from your work (CTM), which is listed in the references. Are you sure you didn't overreact a bit here?

@advenTure423

advenTure423 commented Mar 30, 2024

@Kim-Dongjun I've looked at the TCD paper, and I see it clearly stated there that the proof comes from your work (CTM), which is listed in the references. Are you sure you didn't overreact a bit here?

It is actually a disgraceful trick, which amounts to saying, "Oh, I just borrowed this part; that's all I took from CTM." But the truth is that the core idea of TCD is nearly identical to CTM. Given that TCD has copied word for word in many places, the authors of TCD must have been well aware of CTM, so such behavior is obvious plagiarism. I would say TCD could have been a good technical report based on CTM, but the authors of TCD apparently did not plan to present it that way.

@yoyolicoris

I would say TCD could have been a good technical report based on CTM, but the authors of TCD apparently did not plan to present it that way.

Totally agree.
TCD gives some interesting information, but there should be more novelty if they want to submit it for peer review.
Publishing papers is about contributing to science, not making products.
The idea of open science should focus on transparency and reproducible research, not on favouring any particular community.

@advenTure423

Totally agree. TCD gives some interesting information, but there should be more novelty if they want to submit it for peer review.

It is not even about the lack of novelty. It is acceptable for a paper that lacks novelty to still be published, provided it genuinely contributes to some community. But it is disgraceful if a paper deliberately turns a blind eye to already published work and takes all the credit for itself.

@MoonRide303

MoonRide303 commented Mar 31, 2024

From TCD paper (in A. Related Works):

Kim et al. (2023) proposes a universal framework for CMs and DMs. The core design is similar to ours, with the main differences being that we focus on reducing error in CMs, subtly leverage the semi-linear structure of the PF ODE for parameterization, and avoid the need for adversarial training.

It doesn't quite look like turning a blind eye to me. It seems they gave the credit (or at least tried to), and described how they improved on the CTM method. To my taste, this should be stated more clearly in the introduction, not in an appendix at the end of the TCD paper. Reducing the computational cost of previously used methods can still be seen as a scientific improvement (IMO), especially in the area of ML / AI, where computational costs are often huge.

This is just my high-level attempt to understand the problem here, from a layman's perspective and without assuming bad faith. Instead of making a hasty judgement of my own, I would rather see the results of a cross-check by reviewers with a solid math and latent diffusion background, who would be able to fully understand all the mathematical details of both papers and evaluate whether the TCD method really is an improvement over CTM and a contribution to science, and to what degree.
