Can there be such a thing as too much research? And if there is, is that a good thing or a bad thing?
Two recent studies suggest that a lot of research is essential to the development of reliable knowledge. Replicating the results of other research studies is an important type of research, because it helps us figure out whether the original studies truly discovered something new, or whether those results were a fluke. And research studies that are variations on other studies – studies that change something from the original study, like an ingredient, or part of the study’s methodology – help us understand whether the results of the original study might apply in other settings or situations. So more research is definitely better than less research.
But another recent study has some very interesting observations on the effect of too much research on the researchers themselves. Too much research makes it difficult for researchers to keep up to date with new developments, and also makes it difficult for them to figure out which research results they should believe. That’s a particularly big concern for researchers in the subjects that are plagued with predatory journals, which have very low, or no, standards for the quality of research they publish. (And the number of predatory journals and the amount of research they publish is also increasing dramatically.)
The authors of the study chose the subject of biomedical science as their research site because, in their words, “biomedical science depends on a large network of trust among individuals and organizations, including the accurate reporting of data and observations, and the rigorous peer review of publications and grant applications. Thus, changes in the way individual scientists trust each other, or the enterprise as a whole, have major implications for the future of research.”
The authors interviewed 20 US biomedical researchers to see whether these researchers had observed a decline in the “trustworthiness of science” in their field, and whether these researchers had concerns about a “lack of robustness” in scientific research. The interviewees expressed concerns about both of these problems, but what the authors also found was that the concerns were related to a larger issue: what they labelled as “overflow”.
In social science research, “overflow” is defined as having too much of something. Research in several different disciplines has looked at strategies that individuals and organizations use to manage overflow – such as prioritizing what gets attention; using their professional judgement to choose only certain possibilities to pursue; organizing information more efficiently or meaningfully; or moving tasks from overloaded resources to under-utilized resources.
In this study, the interviewees described experiencing research overflow because of the increasing number of scientific journals, the large number of articles being published in newer journals (particularly online ones), an increase in the length of most articles, and more intense competition for jobs and for research grants. The interviewees also mentioned that “the number of scientists/authors is dramatically increasing, whereas the number of reviewers qualified to assess the scientific outputs does not increase proportionally…Good reviewers are a scarce resource.”
For most of the interviewees, research overflow led to them being selective about which published research they paid attention to – with that selectivity based on their own perceptions of the reputation of the researchers, or on the reputation of the journal in which the research was published. As the study’s authors point out, reputation-based selection can result in unjustified distrust of research by unfamiliar researchers or in unfamiliar publications – and it also counteracts the principle that an assessment of research quality should be based on the content of the research itself.
The study’s authors acknowledge that overflow in scientific research is not going to go away. But they argue that its effect needs to be addressed – particularly its negative effect on researchers’ trust in one another and in research in general. They have two suggestions for managing research overflow, on a system-wide level:
- Emergency wards in hospitals use triage to identify and deal with less life-threatening cases, and thus save their professional resources for the most difficult or complex situations. Along similar lines, academic journals could use “non-academic scientists” to initially review submitted research articles for major problems (e.g. poor study design, inadequate or inappropriate data analysis, or fraud). Only articles that passed this initial screening would then be sent for peer review.
- Some research could be published without undergoing peer review, or at least without going through the traditional peer review process. Articles reporting the results of research studies could be archived online for feedback, and only submitted to a journal after feedback had been received. Peer reviewers could be partially replaced by professional editors with subject expertise. Academic journals could publish articles online without peer review, and then encourage online discussion – which presumably would identify the strengths and weaknesses of the research, as peer reviews would. Or researchers could share their initial data or findings online and then concentrate on writing “fewer, more influential publications”.
From my perspective in another academic discipline, I see the same impacts of research overflow that this study identifies in biomedical science. There are more journals and more research being published, and there is definitely a shortage of qualified and available peer reviewers – in part, because many experienced reviewers are saying “no” more often, or just not doing reviews at all. Reviewers may be overloaded with too many requests, or tired of reviewing poor-quality research articles; it also seems that more researchers are relying on peer review for suggestions on how to develop or improve very preliminary work. This is frustrating for peer reviewers, who feel they shouldn’t have to explain basic concepts such as the standard formatting of a research article, or what should be addressed in a literature review.
And in my discipline, a noticeable amount of the increased research activity is coming from academics in less developed countries. These academics are being pressured to be more active researchers, and to publish articles, as a way to gain legitimacy for their universities and for the post-secondary education systems in their countries. (I don’t know if this is happening in biomedical science as well.)
The authors’ two suggestions for managing overflow are intriguing. In my experience as a peer reviewer, for journal articles and for academic conference submissions, using “non-academic scientists” to screen research articles would be very welcome. The editors of most journals can “desk reject” an article, which means they don’t send the article out for peer review if the article is fundamentally flawed or not ready for publication. But there’s no similar process for submissions to most conferences, which means low-quality material can easily get sent out for peer review.
There might be disputes over the inherent subjectivity of any preliminary screening criteria for research – what looks like “weak methodology” to you might look innovative and creative to me – but there are often basic quality problems that could easily be identified by someone other than a peer reviewer. Acting as the “data quality police” and getting to see a wide range of articles would also be great training for beginning researchers or scientists. That experience would definitely help them understand how the scientific publishing process works, and to understand the differences between an article that is accepted and an article that is rejected.
The suggestion of using online feedback as a proxy for, or as a replacement for, peer review – that one I’m not so sure about. My unease is primarily because of what tends to happen in Internet commenting in general, with trolls, polarized opinions, and personal attacks. At least some of these problems could be avoided if comments are monitored or moderated. But if online commenting is intended to serve as a form of peer review on an article, policing the discussion could discourage meaningful input. Trolls and personal attacks show up in the peer review process now, but easy Internet access to articles would just make it that much easier for the trolls to spread their vileness. Effective monitoring or moderating of comments on a site also requires the site to support that activity with sufficient resources and attention, and not all websites may be able to provide that support.
But overall I agree that research overflow definitely has a negative effect on how much researchers trust each other’s work, and how much they trust developments in research in general. And I also agree that if research overflow isn’t effectively managed, researchers will compete even more intensely to get their work into the “top” journals, and shape their research in the direction that they think will get them there. That, in turn, could lead to important and meaningful research topics getting ignored – because they aren’t the “right” topics for researchers to spend their time on. So while an overflow of too much research is definitely a problem, pressure to pursue research directions that are elitist and exclusionary might hurt the development of knowledge as well.