With the number of academic journals growing endlessly and quality control failing, a new measure for whether research is ‘valuable’ is needed. Unfortunately, a change seems far from certain, Asit Biswas and Cecilia Tortajada write.
On the five-year anniversary of Policy Forum, it is appropriate to look at both the past and the future of the problem of communicating and identifying valuable new ideas. To remain relevant in an increasingly competitive world, publications like Policy Forum will have to evolve, continue to challenge prevailing wisdom, and avoid complacency. In many ways, the whole academic system faces the same challenge.
Much is changing rapidly in academia and policy-making. Yet, most university administrators, and an overwhelming number of academics, are blithely ignoring a crucial aspect of that change: how they assess the impact of their work.
Consider research and the metrics by which its quality has been judged in the past. The public and policymakers should rightly question the social value, and usefulness, of research, yet universities still overwhelmingly rely on a single measure of this: the number of papers published in ‘high-impact’ journals.
It is important to note some historical context here. Once, it may have been possible for a brilliant person to know nearly all there was to know about what could be loosely called ‘science’. Until the end of the 16th century, versatile geniuses like Aristotle, Isidore of Seville, and Leonardo da Vinci could discuss most issues authoritatively.
But the situation began to change. By the early 18th century, tremendous advances in science made it impossible for anyone to be a universal encyclopaedist. Then, the 19th century witnessed an exponential increase in human knowledge, along with massive technological advancements.
Mastery by even the most brilliant person was no longer possible. Knowledge had to be subdivided further, first into disciplines like physics and chemistry, and then into other areas such as the life sciences.
This trend has only continued. Rapidly advancing knowledge in the 20th century ensured that it was impossible for one to master not only an entire discipline, but even a smaller area, like water, energy, or agriculture.
Now in the 21st century, advances in communication and information technology have significantly changed the research scene, including the financial model for academic publishing.
A negative side-effect of this has been the rapid growth of predatory journals. Papers published by predatory journals are mostly of poor quality, often with poor or cursory peer review.
These developments have ensured that virtually any paper can now be published, irrespective of its quality, as long as the author is willing to pay a fee for its publication.
Unfortunately, the main motive for research publication has become career advancement, academic recognition, and ego satisfaction, rather than impact. Papers are mostly written to receive credit in the academic marketplace. Globally, most universities have contributed to this state of affairs in their race to climb the ranking systems.
This means that the currently dominant view, that quality of research can be judged by publication in journals, no longer holds. Over 1.5 million peer-reviewed articles are now published annually, and this number is increasing steadily. As a result, significant percentages of published papers are completely ignored, even within the authors’ own disciplines.
Of all papers published in 2016, over 40 per cent are likely to remain uncited by 2021, and an estimated 82 per cent of articles published in the humanities are not cited even once. Clearly, the centuries-old model of judging the impact of research by publication alone has irreparably broken down.
Policymakers must move on too. Vannevar Bush, President Roosevelt’s Science Advisor, established in his 1945 report Science: The Endless Frontier the value of basic research and, on that basis, the rationale for public funding of science. Despite its age, the report is still treated as a seminal text on the subject.
Times have changed. Requests for research funding from academic institutions in recent decades have increasingly outpaced available public resources. Accordingly, policymakers are searching for transparent criteria on the quality of research, so that limited funding can be effectively distributed to the most valuable of a number of competing projects. Universities would do well to follow the same criteria.
As Richard Smith, editor of the British Medical Journal, has pointed out, “In an ideal world, all research would be high-quality and have a considerable social impact”.
There does not appear to be any direct relationship between publication in highly cited journals and a paper’s quality or social impact. To academics, ‘quality’ often means originality of research topics and elegance of analysis, and it is this, rather than social benefit, that tends to guarantee publication in high-impact journals.
This kind of ‘high quality’ research may have no measurable social benefit at all, and in fact, research that is unlikely to be considered of high quality by journals may have important social benefits.
Generally, scientists do not like to consider the social impacts of their research, since doing so takes them beyond their comfort zone, but it is necessary.
Assessing the social impacts of research is still a project in its infancy, and such impacts will be much harder to assess than scientific ones. Indicators of quality may vary between disciplines, institutions, and even countries, but they are worth pursuing.
As demand for funding increases and public research investment stagnates, social impact will become an ever more important criterion for allocating funds to research, but good indicators of this have proven elusive. Without a serious effort to measure social impact, policymakers and the public may be left in the lurch, unable to get the best out of the expertise of academia.