According to the Publication Manual of the American Psychological Association, “once an article is published, researchers must make their data available to permit other qualified professionals to confirm the analysis and results” (p. 12). Further, data are expected to be retained for a minimum of five years, and the software, mathematical models, details of procedures, and other supporting information should also be retained and shared. (1) The manual also covers such matters as confidentiality and conflicts of interest. However, a recent article (2) in PLoS Biology found that no study in a random sample of 441 biomedical journal articles made all raw data directly available, and only one included a full protocol.
Importantly, the article is not intended as an exposé but rather as a baseline for comparing efforts to improve transparency in research. The authors found, for example, that while almost 70% of the articles (published between 2000 and 2014) contained no conflict-of-interest statement, more recent articles improved substantially on this metric:
"Between 2000 and 2014, the percentage of articles with no statement of conflict decreased substantially (94.4% in 2000 to 34.6% in 2014), whereas the number of articles reporting statements of conflicts (0% in 2000, 15.4% in 2014) or no conflicts (5.6% in 2000, 50.0% in 2014) increased (Fig 2)."
Just this month, the American Statistical Association announced that, beginning in September 2016, the Journal of the American Statistical Association (JASA) will require code and data and will employ “Associate Editors of Reproducibility” (AERs). The announcement begins:
Reproducible Research in JASA
Today one can find plenty of advice on how to use software such as R to build analyses that integrate reproducibility into the process. Electronic repositories allow sharing of code, data, graphs, and more on an unprecedented scale. Combined with open, accessible platforms for publishing results and conclusions, this makes science and research more exciting than ever.
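The R workflows alluded to above (R Markdown, knitr, and the like) deserve their own post, but the core habits they encourage can be sketched in a few lines of any language: fix the random seed so an analysis reruns identically, hash the input data so readers can verify they have the same file, and record the software environment alongside the results. The Python sketch below is purely illustrative; the function names are my own, not from any particular package.

```python
import hashlib
import json
import platform
import random
import sys

def reproducible_analysis(data, seed=42):
    """A toy analysis: with the seed fixed, every rerun gives the same answer."""
    random.seed(seed)          # pin the RNG so sampling is repeatable
    sample = random.sample(data, k=3)
    return sum(sample) / len(sample)

def environment_record():
    """Capture the software environment to archive alongside the results."""
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }

data = list(range(100))
record = {
    "seed": 42,
    "result": reproducible_analysis(data),
    # fingerprint of the input data, so others can confirm they have the same file
    "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
    "environment": environment_record(),
}
print(json.dumps(record, indent=2))  # deposit this record with the code and data
```

This is exactly the kind of record a JASA-style reproducibility editor could check: rerunning the script with the deposited seed and data should regenerate the published result bit for bit.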
But it still leaves open the question: is it worth it? Does it matter? In business we would sometimes refer to this as the “so what?” test. Does it clarify something for the client, or save the client or the company money, or …? In other words, is there tangible value to the audience?
Our first post on this topic veered into the weeds about psychological studies on geniuses. The point was that:
1. The results are still contradictory.
2. So what?
In other words, is it worthwhile to study, repeatedly, whether or not those we label genius should also be labeled nuts? Clearly, I think not.
To put it more eloquently:
Biomedical researchers have an ethical responsibility to ensure the reproducibility and integrity of their work, so that precious research resources are not wasted and, most importantly, flawed or misleading results do not make their way to clinical studies where the faulty evidence could adversely affect study participants.
1. American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
See also APA Ethics Code Standard 8.14a and APA Ethics Code Standard 6.01
2. Iqbal SA, Wallach JD, Khoury MJ, Schully SD, Ioannidis JPA (2016) Reproducible Research Practices and Transparency across the Biomedical Literature. PLoS Biol 14(1): e1002333. doi:10.1371/journal.pbio.1002333. http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002333
Jesse Sharp is an expert in the analysis of health care data. Passionate about data and the ethics of analysis, he writes on topics related to medicine, public health, and statistics. More...