Publication bias, more often referred to as “the file drawer problem”, is the influence of a study’s results on whether or not that study gets published. There are a number of ways in which results can influence publication. It may turn out, for example, that the results of an experiment are not statistically significant, or only partially significant. Another reason for not publishing could be that the results do not agree with the expectations of the researcher or the sponsor, which can be a serious problem leading to dangerous outcomes such as falsification of results. So, as you can see, the nature of publication bias can vary; the most common cause, however, is failing to reject the null hypothesis (i.e. not getting a statistically significant result).
First of all, let me tell you – publication bias is very common, and not only in psychology but in many other areas as well. In the research literature there have been a few studies that found evidence of publication bias. One example is “Publication Decisions Revisited: The Effect of the Outcome of Statistical Tests on the Decision to Publish and Vice Versa” by T. D. Sterling and colleagues, a 1995 review of the literature that documented the occurrence of the file drawer effect. Another example is a study by Hopewell et al. (“Publication Bias in Clinical Trials due to Statistical Significance or Direction of Trial Result”), who concluded that “trials with positive findings are published more often, and more quickly, than trials with negative findings.” By this point some of you may be a little puzzled, thinking: OK, there is such a thing as the file drawer effect, but what harm does it actually do – is it really that big of a problem? My opinion is that it is a big problem in research, and for a number of reasons, but I think the most important one is this:
Most of us know (maybe not, but I assume you do:)) that when a significance level of 0.05 is set in a study, about 5% of replications will falsely reject the null hypothesis even when it is actually true. Therefore, if only statistically significant results are published, the published record is a misrepresentation of the true situation, and some false effects may appear to be empirically supported. It is also very misleading for other researchers, who may waste their time investigating something that has already been studied and simply never reported. Quite unfair, don’t you think?
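You can see this mechanism for yourself with a quick simulation. The sketch below (my own illustration, not from any of the cited papers) compares two samples drawn from the *same* normal population, so the null hypothesis is true by construction and every “significant” result is a false positive – yet with α = 0.05 roughly 5% of trials reject anyway. Imagine only those trials making it into journals.

```python
import math
import random
import statistics

def false_positive_rate(n_trials=10000, n=30, seed=0):
    """Run n_trials 'experiments' where both groups come from the SAME
    N(0, 1) population, so the null hypothesis is always true.
    Uses a two-sided z-test on the difference of means (known sd = 1),
    rejecting when |z| > 1.96 (alpha = 0.05)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        # Standard error of the mean difference is sqrt(1/n + 1/n).
        z = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(2 / n)
        if abs(z) > 1.96:
            rejections += 1  # a false positive, since no real effect exists
    return rejections / n_trials

print(false_positive_rate())  # hovers around 0.05
```

If a journal published only the rejecting trials, a reader would see hundreds of “effects” where none exist – which is exactly the distortion described above.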
Since the file drawer effect was identified as a threat, there have been a number of attempts to detect publication bias. Almost all methods for doing so have their problems and have been widely criticised, yet they are still in use today. The most famous one, proposed by Robert Rosenthal, is based on probability calculations and is known as the fail-safe file drawer (or FSFD) analysis. It involves calculating a “fail-safe” number – the number of unpublished null-result studies that would have to be sitting in file drawers to overturn a significant combined result – which helps to decide whether or not the results of a study are resistant to the threat of publication bias.
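Rosenthal’s calculation itself is short. Studies are combined with the Stouffer method (summing their Z scores), and the fail-safe number is X = (ΣZ)² / 2.706 − k, where k is the number of studies and 2.706 = 1.645², the squared one-tailed critical value at p = .05. A minimal sketch (the example Z scores are made up for illustration):

```python
def fail_safe_n(z_scores, critical_z=1.645):
    """Rosenthal's fail-safe N: the number of unretrieved studies
    averaging Z = 0 needed to drop the combined Stouffer Z
    (sum(Z) / sqrt(k + X)) below the one-tailed .05 cutoff of 1.645."""
    k = len(z_scores)
    sum_z = sum(z_scores)
    return (sum_z ** 2) / (critical_z ** 2) - k

# Hypothetical example: five published studies, each with Z = 2.0.
# About 32 null-result studies hidden in file drawers would be needed
# to render the combined result non-significant.
print(fail_safe_n([2.0] * 5))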
In conclusion, the file drawer effect is a big problem in research. The various attempts to detect publication bias, even the most famous ones, have only been partially successful and therefore cannot be heavily relied on, even though they are still in use. I think one way to reduce publication bias would be to let people know the results of your experiment even when they are not significant. I know there are few opportunities to formally publish such findings, but there are a number of websites where you can post information about your attempts and findings regardless of how significant they were. The most used ones are: www.psychologyreplications.org and www.psychfiledrawer.org
References:
Rosenthal, R. (1979). The “file drawer problem” and tolerance for null results. Psychological Bulletin, 86(3), 638–641. Retrieved from http://www.cs.ucl.ac.uk/staff/M.Sewell/faq/publishing-research/Rosenthal1979.pdf
Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication Decisions Revisited: The Effect of the Outcome of Statistical Tests on the Decision to Publish and Vice Versa. The American Statistician, 49(1), 108–112.
Hopewell, S., et al. (2009). Publication Bias in Clinical Trials due to Statistical Significance or Direction of Trial Result. Cochrane Review, Issue 1; abstract available at www.thecochranelibrary.com