How is it possible that papers by different scientists end up with different outcomes and recommendations even though they use the same model, the same methods, and the same data set?
The riddle does not end here. Another curious observation is that the scientists refer to each other's papers, yet they do not report that their outcomes differ. What is more, the proposed methods are only able to beat a standard benchmark because the researchers used the data selectively. This conspicuous flaw has been repeated for years in articles published in international journals and in a textbook by renowned statistics professors. Why has that flaw not been pointed out before? How widespread are the difficulties in discussing questionable aspects of research?
In response to 'sloppy' science, it is typically argued that there is nothing wrong with the rules of science themselves, only with researchers who fail to apply those rules appropriately in their pursuit of truth. But could it be that science is sloppier than its fundamental rules suggest? What if the simple rules of science obstruct the complex process of doing and discussing research?
In addressing these questions, Science: Under Submission investigates how we can make it easier for scientists to exchange ideas about how research is conducted.