We recently published the results and full data from a survey in which 875 research academics living and working in Spain gave their views on how important each of 39 criteria should be when evaluating candidates’ research CVs in a recruitment process. In the second part of the survey we also asked participants about their level of agreement with a series of statements on research evaluation practices.
The initial motivation for this survey was to show that academics in Spain are dissatisfied with the way their research output is currently evaluated, and to propose alternatives that would improve the assessment of scientific and academic quality. Looking at the results, I don’t think we can say this objective was achieved.
In the second part of the survey, 88% of the participants considered “publish or perish” to be a real problem, 84% thought that evaluation based on the number of published articles fosters questionable authorship strategies, and 71% agreed that evaluating researchers in this way promotes bad research practices. While this seems encouraging, the number of published articles was nevertheless ranked as the second most important evaluation criterion (after being PI or co-PI on research projects). Does this mean that counting articles is bad, but still the best we have? At the same time, 70% agreed that it would be preferable to base research evaluation on a pre-determined number of each candidate’s best contributions. Could this method, which some national evaluation agencies already apply, be pointing in the right direction?
Despite so much having been said and written about the irrelevance of the Impact Factor for evaluating individual authors, this criterion was rated the tenth most important, with 54% of participants considering it “very” or “fairly” important. The Hirsch index was considered less important, ranking 16th.
Another disheartening observation was that publishing pre-registered studies was not considered relevant for research assessment: it ranked 36th in order of importance, with 51% of respondents rating it “not very important” or “not important at all”. Publishing preprints fared even worse, ranking as the least important criterion, rated “not very important” or “not important at all” by 60% of participants. Although the perceived importance of preprints did not vary by discipline or age, it is promising that pre-registration was considered more important among younger researchers (22–40 years old) in the life sciences, with “only” 28% of those respondents considering it unimportant.
The recruitment process currently involves only the submission of a CV, in a specific format that varies from one institution to another (!). Adding an interview to this process was supported by 79% of respondents and ranked ninth in order of importance (only one place above the Impact Factor!). Interestingly, when the results are split by gender, women appear to consider the inclusion of an interview less important than men do: this criterion ranks seventh for men but seventeenth for women.
Another crucial question for the Spanish assessment system is who should sit on evaluation committees: only members internal to the department opening the position, only members external to it, or both. This question is particularly relevant because the Spanish evaluation system has been repeatedly accused of endogamy, favouring candidates affiliated with the hiring department to the detriment of external applicants who often have more experience and stronger merits for the position. According to our results, the majority of respondents believe that evaluation committees should include both internal and external members, yet 12% still believe that candidates should be evaluated by internal members only.
In general, it seems there is still a long way to go in raising awareness in Spain, where researchers continue to value journal-level indices for individual assessment and do not recognise the importance of practices that clearly improve research quality, such as the pre-registration of experimental studies. Despite these concerns, our survey also shows that most researchers recognise that the current system leads to bad practices, and it points to broadly supported improvements, such as basing evaluation on a pre-determined number of best contributions, or including an interview conducted by a committee of both internal and external members.
The observations highlighted here are just a few that caught my attention. Anyone interested can access the full data and the report here, and can also explore an interactive graph with various filter options (by age, gender, discipline, and professional status) that may offer many additional insights.