When Numbers Don’t Tell the Whole Story

Quantifiable metrics like the journal impact factor should not be the only way of assessing scientific outputs, said Stephen Curry of Imperial College London at an event held at UZH.
Melanie Keim


“Although numbers appear to be more objective than personal judgement, this can be deceptive,” said Stephen Curry at his talk at UZH. Curry is Chair of the Steering Committee of the San Francisco Declaration on Research Assessment (DORA).

Nowadays, figures play a key role in the assessment of academic research. The number of published papers, the number of citations and, in particular, the impact factor of the journal in which the researcher publishes are now viewed as important indicators of scientific excellence. The subject often comes up for critical discussion in the scientific community – as was the case at the fourth network meeting on the Gender Equality Action Plan 2017-20, organized last week by the UZH Office for Gender Equality and Diversity.


The speaker at the event was Stephen Curry, Professor of Structural Biology and Assistant Provost for Equality, Diversity and Inclusion at Imperial College London. On this occasion he spoke in his capacity as Chair of the Steering Committee of the San Francisco Declaration on Research Assessment (DORA), an initiative that challenges today’s strong focus on parameters like the journal impact factor and advocates a more holistic assessment of scientific research.


“Although numbers appear to be more objective than personal judgement, this can be deceptive,” Curry said. For example, how often a paper is cited depends on a range of subjective decisions – which can include factors such as gender-specific assessment patterns. As an example, Curry mentioned studies which revealed that men publish self-citations more often than women.

Plea for open science

Curry pointed out that too strong a focus on aggregated metrics not only affects the competition for academic positions, but can also have a negative impact on research itself: If the journal impact factor determines a researcher’s market value, for example, the publication of important data can be delayed as researchers hold out for a journal with the highest possible impact factor. “The public, however, is not really aware of these links,” said Curry.

Taking the example of how quickly the latest findings on the Zika virus and the coronavirus were shared publicly, Curry illustrated how this assessment practice works against the public interest. Open science, by contrast, would increase the benefit of science to society as a whole: not only because results would become known more quickly, but also because a broad-based review process would enhance the quality of the final publication, Curry believes.

A change of mindset

Besides calling for open science, Curry also mentioned some of the recommendations listed in the DORA document. “Institutions should be explicit that the scientific content of a paper is much more important than publication metrics” is one example stated in the declaration that UZH also signed in 2014.

According to Curry, metrics also fail to capture other aspects that are important to the exercise of academic functions, such as the societal relevance of research, as well as teaching and leadership skills.

“We all know that a more holistic assessment would be important,” said Curry. But in a hypercompetitive environment, it is difficult to change people’s mindsets. “Researchers need to be confident that they will receive recognition for their work and not for where they have been published,” he added.

As a positive example, he cited the appointment procedure at the Charité in Berlin, where applicants are also asked what contribution they make to open science. Similarly, researchers who apply to the Royal Society have to describe what they do to advance young scientists, how they support the research community and what they contribute to the transfer of knowledge from the university to the public.

Assessing scientific outputs fairly is a difficult task, said Curry, who recommended training as a means of combating stereotypical assessment patterns.

The many probing questions from the audience also showed how challenging it is to evaluate scientific outputs satisfactorily. One objection raised was that members of selection committees are sometimes not familiar enough with the candidates’ research areas and are therefore dependent on quantitative metrics. Curry suggested that external experts could occasionally be brought in, or that applicants could be asked to write an accessible description of their research. However, he was quick to note the risk this would entail, namely that good writers would be favored, adding: “It’s not trivial.”