Sunday, August 28, 2016

Going beyond impact factors—reforming scientific publishing to value integrity


Sometimes working in academia feels like being a gymnast at the Olympics. Not because we're tumbling through the lab in glittering costumes, but because of the constant pressure to succeed. Gymnasts are expected to perform more spectacular routines at every competition, while scientists are expected to publish several high-impact papers a year. And as seen with French gymnast Samir Ait Said, who broke his leg in a horrific accident at the vault at the 2016 Summer Olympics in Rio, too much pressure can have drastic consequences. The "publish or perish" culture in science won't break bones, but it does take a toll: scientific fraud has become more prevalent, and with it the number of retractions of scientific papers has grown. The problem shows no sign of easing any time soon; more than ten times as many papers were retracted in 2010 as in 2000.

A study published in PLOS Biology that investigated which factors scientists' reputations are judged on gives a clue as to why this problem exists. When comparing a scientist who produces boring but reproducible studies with one who publishes exciting but irreproducible studies, the public perceived the boring scientist as "smarter, more ethical, a better scientist, more typical, and more likely to get and keep a job." Scientists given the same choice agreed that the boring-but-reliable scientist was smarter, more ethical, and the better scientist. But in a departure from the public's opinion, scientists judged the exciting-but-unreliable scientist more likely to get a job and to be celebrated by peers. This is a stark contrast to the public's view of science, which seems to favor well-done science over flawed science. Worryingly, when scientists were asked which of the two model scientists they would rather be, more said they wanted to be the one producing exciting results, even though the majority knew that publishing reproducible research is better overall. While one survey of 313 researchers does not represent the whole scientific community, these results paint a surprising picture of scientists' priorities.

Publish or perish

But it's hard to lay blame on the scientists, as they are not wrong in their perception that innovative science is more celebrated than replicable science. In our current scientific culture, publishing high-impact papers often seems to be all that counts for success. The shortage of tenure-track positions (only about 14%-23% of PhD-level biologists, chemists, or physicists hold tenure-track positions after five years) leads to fierce competition among scientists for those jobs. And the main way scientific success is judged in job or grant applications is the number of high-impact publications an investigator has authored.

The pressure to publish one's work in the top journals is immense, especially for early career scientists who are just starting to make a name for themselves and have few publications. The prospect of a high-impact-factor publication has even been shown to affect neural responses and behavior in controlled experimental settings. The responses were comparable to the reward-based responses normally seen in the context of money, leading the authors to dub impact factors the "currency of science."

Publishing in high-impact journals confers disproportionate advantages on the authors, compared to publishing the same data in a "lesser" journal. The advantages range from better prospects on the job market and invitations to conferences to better chances with grant applications. This leads some scientists to strive for the perfect, most publishable study. Along the way, they may ignore data that do not fit their conclusions or leave out negative results. The rush to be first to publish can also lead to neglect of thorough replication of experiments. The crisis in the reproducibility of scientific data is evident, as detailed in a previous post on the PLOS ECR Community Blog. The strong correlation between a journal's impact factor and the number of papers it retracts, found in a 2011 study, supports the notion that scientists are more likely to take risks and set aside scientific rigor in order to publish in high-impact journals.

How can we cultivate greater value for sound science?

One common way to assess the quality of science is to judge the journals in which scientists publish, and the journal impact factor is the most popular measure in use. Most scientists agree that impact factors are an imperfect measure of quality, but they are an easy metric: calculating the average number of citations per article published is easy, and, by extension, judging a career by a list of publications each tagged with one number is easy. But when has the easy way ever been the best way? Impact factors are based on citations, but, like all other citation-based metrics in use, they ignore the fact that one in 15 citations comes from the authors themselves. Most importantly, citation counts do not account for the type of citation. A paper may be cited 1,000+ times but only by publications challenging its findings, in which case the impact factor is not representative of the quality of the research.
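To make the simplicity of the metric concrete, here is a minimal sketch (in Python, with made-up numbers) of the standard two-year impact factor calculation: citations received in one year to articles published in the previous two years, divided by the number of citable items published in those two years.

```python
def two_year_impact_factor(citations_received: int, citable_items: int) -> float:
    """Two-year journal impact factor: citations received in year Y to
    articles published in years Y-1 and Y-2, divided by the number of
    citable items the journal published in those two years."""
    return citations_received / citable_items

# Hypothetical journal: its 2014-2015 articles were cited 3,000 times in 2016,
# and it published 500 citable items over those two years.
print(two_year_impact_factor(3000, 500))  # 6.0
```

A single average like this is exactly what makes the metric so easy to rank by, and also what it hides: nothing in the calculation distinguishes supportive citations from critical ones, or self-citations from independent ones.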

Recognizing the shortcomings of impact factors, PLOS instead uses article-level metrics (ALMs) to measure the quality of the science published in its journals. ALMs assess a scientific article not only by the number of citations it receives but also by the press coverage, shares, and views it attracts. By taking social as well as academic factors into account as soon as the article is published, they chart the influence of the article over time, even before it begins to be cited in the academic literature. While ALMs share some drawbacks with the traditional impact factor (e.g. self-promotion on social media or news coverage of poorly done science), they do offer an advantage over impact factors in giving a faster picture of the influence a paper has on both the scientific and general communities.
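As an illustration only (this is not PLOS's actual ALM implementation, and the field names below are assumptions), the sketch shows the kind of per-article record such a system might aggregate, with usage and attention signals available long before citations accumulate:

```python
from dataclasses import dataclass

@dataclass
class ArticleLevelMetrics:
    """Illustrative per-article record an ALM system might track.
    Field names are assumptions, not PLOS's actual schema."""
    doi: str
    views: int = 0
    downloads: int = 0
    social_shares: int = 0
    press_mentions: int = 0
    citations: int = 0  # accrues much later than the usage signals above

    def early_signals(self) -> dict:
        """Attention signals available well before citations appear."""
        return {
            "views": self.views,
            "downloads": self.downloads,
            "social_shares": self.social_shares,
            "press_mentions": self.press_mentions,
        }

# Hypothetical numbers for a recently published article.
alm = ArticleLevelMetrics(doi="10.1371/journal.pbio.1002460",
                          views=12000, downloads=800,
                          social_shares=350, press_mentions=4)
print(alm.early_signals())
```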

A second alternative to the impact factor is valuing science based on scrutiny from an open-access community. Many scientific journals only let paying subscribers view their publications, which limits exposure to the research published in them. Open access policies allow anyone interested in a study to read it, without barriers. More importantly, it is not just access that should be open, but also the peer review process. Peer review before publication is a key check on the quality of science, yet the current peer review system is imperfect. I believe that post-publication peer review should be a key process for improving scientific integrity. Ideally, both pre- and post-publication reviews would be made available alongside the published manuscript for greater transparency in the scientific process. A few publishers have introduced open review, including EMBO, BMJ Open and F1000Research. Alternatively, you can find online journal clubs like PubPeer, where articles are discussed after publication, or leave comments directly on published articles.

Currently, a lot of sound science remains unpublished, as negative or inconclusive data are less likely to be published due to reporting bias. A 2010 study in PLOS ONE showed that 82% of papers published between 2000 and 2007 in the United States reported positive results only, in spite of the value of negative data. By publishing negative or null results, the scientific literature captures a more complete picture of a field and includes more balanced information. I feel a well-done study with negative results deserves the same recognition as a positive one, as it still expands human knowledge and saves resources for other researchers. For example, publishing what is not the cause of a given disease prevents other scientists from spending time and money investigating the same thing. The PLOS Missing Pieces Collection gathers negative, null, or inconclusive results and is a great platform for scientists whose experiments yield results of this type; PLOS ONE also publishes such studies. Replication studies likewise receive limited recognition despite their importance to advancing a field. They are key to validating scientific findings, but few scientists risk doing them because they are hard to publish for their "lack of innovation," a notion we should leave behind.

Boycotting high impact factor journals

Nobel Prize winner Randy Schekman declared his boycott of top-tier journals in 2013, arguing that their policies damage science and cause scientists to cut corners. Schekman's statement started an important discussion, but it is important to note that, despite his controversial stance, the repercussions for him are slight. As a well-established principal investigator and Nobel laureate, he will be read whether or not his papers appear in a high-impact journal, because his career is already celebrated in the scientific community. Early career researchers do not share Schekman's established professional reputation and may be hesitant to embrace alternatives to the conventional closed-access, impact-factor-driven journals when choosing where to publish their findings. Fortunately, there are plenty of communities devoted to reforming the current scientific publishing system, and journals committed to transparency. These include the OpenCon community, a group of ECRs advocating for more open science, and, in addition to the PLOS journals, eLife, Nature Communications and Cell Reports, which all publish open access science.

While extreme measures such as completely boycotting high-impact journals may not be the best solution for all ECRs, there are initiatives devoted to changing the existing science publishing paradigm. Believing that the "challenges facing early-career researchers because of hypercompetition are damaging the efficiency of science," the Future of Science initiative is a forum led by postdocs for ECRs to discuss the problems they encounter and possible solutions. I encourage all young scientists to take part in the discussion in order to promote change in the scientific community. We need to shift the focus back onto the science, and not sacrifice ethics and accountability for career advancement.

More information: Kerry Dwan et al. Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias, PLOS ONE (2008). DOI: 10.1371/journal.pone.0003081

Charles R. Ebersole et al. Scientists' Reputations Are Based on Getting It Right, Not Being Right, PLOS Biology (2016). DOI: 10.1371/journal.pbio.1002460

Daniele Fanelli et al. Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data, PLOS ONE (2010). DOI: 10.1371/journal.pone.0010271

Frieder Michel Paulus et al. Journal Impact Factor Shapes Scientists' Reward Signal in the Prospect of Publication, PLOS ONE (2015). DOI: 10.1371/journal.pone.0142537

Eugenie Samuel Reich. Science publishing: The golden club, Nature (2013). DOI: 10.1038/502291a

Henry Sauermann et al. Science PhD Career Preferences: Levels, Changes, and Advisor Encouragement, PLOS ONE (2012). DOI: 10.1371/journal.pone.0036307

R. G. Steen. Retractions in the scientific literature: is the incidence of research fraud increasing?, Journal of Medical Ethics (2010). DOI: 10.1136/jme.2010.040923


Regards

Pralhad Jadhav
Senior Manager @ Library
Khaitan & Co


Note | If anybody uses this post by forwarding it on social media or covering it in a newsletter, please give due credit to those who put in the effort to create it.
