For the past 12 months: median turnaround of 49 days, with 91.0% of all decisions made within 100 days. More at this link: Editorial Statistics
In response to a replication study by Paul M. Guest that found problems with a published JF article, the authors have retracted their article. This post provides answers to a number of questions that the JF editors have received during the past few days in response to these events.
Why was this article retracted when retractions have been virtually non-existent in finance and economics in the past?
The near-absence of retractions in the past is unlikely to reflect an absence of errors in published research. More likely, errors were not exposed because replication was not attempted, or because failures to reproduce or replicate results did not reach the public. The Journal of Finance introduced two policies a few years ago that encourage and facilitate replication: (i) the requirement that authors of accepted papers share program code; (ii) the introduction of a special category for “Replication and Corrigenda” papers in the Journal (the replication paper that uncovered the errors in this case is forthcoming in this section of the Journal). When reproduction and replication are facilitated and taken seriously, it is to be expected that some errors will be found. If the uncovered errors are sufficiently serious, retraction is a necessary consequence.
How do the data and code used in the replication study differ from the “original data and code” that has reportedly been lost according to the authors’ statement in the retraction notice?
The authors of the retracted article provided replication code and data in the supplementary information section of their published article (provision of code is required by the JF’s code-sharing policy). These data and code were the starting point for the analysis in Guest’s replication paper. One problem Guest’s analysis uncovers is that some key results of the retracted article are not reproduced by the data and code files that the authors had provided to the JF upon acceptance (see Tables IV and V in the replication paper). The replication paper uncovers other problems as well, but the fact that the posted replication data and code do not reproduce key results reported in the article is a major one. What has reportedly been lost, according to the authors’ retraction statement, is some other version of the data and code, which the statement refers to as the “original data and code that produced the published results.” This is all the editors know at this point.
Why were the authors allowed to describe in the retraction notice the division of tasks in their project?
The retraction guidelines of the Committee on Publication Ethics (COPE) explicitly encourage journals to mention the division of tasks in retracted research. At the same time, the guidelines also stress that “authorship entails some degree of joint responsibility” and that such a statement about the division of tasks does not allow authors to dissociate themselves from a retracted publication.
Should referees have caught the errors that led to the retraction?
They could not have. Under our current system of refereeing, referees are not tasked with replicating papers. Our code-sharing policy requires sharing of code only after a paper has been accepted. Hence, the code that the authors of the retracted paper posted in the supplementary information section of their article was not available to referees when they reviewed the paper. Publication of a paper does not guarantee that its results are correct. The task of reproducing and replicating research results is left to the academic community. But if errors are found in this process, it is important that the scientific record gets corrected. That has been done in this case. By catching the errors in published work, the system worked as intended.
Should the Journal consider policy changes?
The introduction of the “Replication and Corrigenda” section in the Journal and the code-sharing requirement are not necessarily the end point of efforts to increase reproducibility and replicability. Further steps have been, and still are, under discussion.