If you are here, you (like me) have probably spent some time trying to understand and reproduce a machine learning publication, only to finally figure out that it's unreproducible or borderline (if not complete) bullshit.
It seems that reproducibility is not a concern for most researchers in this field, reviewers don't have time to verify the results of every paper landing on their desk, and that's how you end up with a paper with doubtful results winning a best paper award at one of the most prestigious ML conferences.
This will never end given the pressure to show good results, the growing number of submissions at mainstream conferences, and the lack of reviewer time. There are just too many incentives to hide bad results, artificially exaggerate good ones, and cheat, and not enough incentives to make research reproducible or to report the limits and problems encountered.
This project aims to generate motivation to cut the bullshit. The idea is simple: if you are a researcher, you don't want to end up on this list.
This is meant to be a crowd-sourced list of the bullshit people find in publications. The objectives are:
- Shaming bullshit in publications.
- Saving time for researchers and engineers trying to learn about the SOTA in machine learning, by either avoiding the papers on the list or checking the comments on rejected PRs (see "How to contribute").
- Helping recruiters check that the researcher they are interviewing is not a fraud.
Every time you have trouble understanding or reproducing a paper and you think something is fishy, just make a Pull Request to add the paper to the list (I'm thinking of one text file per publication explaining the problems), with tags categorizing the bullshit (for example: unreproducible, fake innovation, incorrect maths).
If the maintainers (so, for the moment... me) carefully agree that something seems wrong with the publication, they will send an email to the authors, who will have to defend / explain their work. If the defense is not satisfying after a certain amount of time (to be decided; one month?), then the PR will be accepted.
Note that if the PR is not accepted, the comments will still provide a good explanation for future readers who have trouble understanding or reproducing the publication.
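To make the contribution format concrete, here is a rough sketch of what one of those per-publication text files could look like. Everything below (file name, field names, placeholder link) is hypothetical and only illustrates one possible layout; the example tags are the ones listed above.

```
some-paper-title.txt   (hypothetical: one file per publication)

Paper: "Some Paper Title", SomeConf 20XX
Link:  https://example.org/some-paper        (placeholder link)
Tags:  unreproducible, incorrect maths

Problems:
- The reported results could not be reproduced with the published code
  and hyperparameters (describe exactly what you ran and what you got).
- Equation (3) does not follow from Equation (2) (explain why).

Contact with the authors:
- Emailed the authors on YYYY-MM-DD, no answer after one month.
```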
Some rules for fairness and politeness:
- In case of reproducibility problems, reaching out to the authors first is probably a good idea. If they don't answer or their answers don't satisfy you, it's never too late to come here. If their answers do satisfy you but aren't obvious from the publication itself, you could still make a Pull Request just to propagate the explanation.
- If the publication has a repository, submitting an issue there directly is probably better. Again, if they don't answer or their answers don't satisfy you, it's never too late to come here.
- A paper under Pull Request benefits from a presumption of innocence: it is not bullshit until its associated Pull Request is accepted.
This project will be what people make of it; if people get involved, it will help bring sanity to the field. But it can only put pressure on researchers if it gets popular, so I'm counting on you.
I'm just passionate about Machine Learning and AI, and tired of losing time reading bullshit papers. There are a lot of holes in the project, and I'm counting on you to help me (us?) fill them.
I'm working for a company involved in machine learning research, so we will need to find a more unbiased way to consider / accept a Pull Request, especially if it comes from the company I'm working for. I will not add a publication on my own; I will only allow myself to accept or reject Pull Requests. For the moment you will have to trust me on that, and I will do my best to say so when I can't make an unbiased decision.