Einstein Foundation Early Career Award 2023

Responsible Research Assessment Initiative headed by Anne Gärtner

At Dresden University of Technology, Anne Gärtner aims to develop novel criteria to assess research output based on quality, transparency, and reproducibility. Her ultimate goal: to reform the way senior-rank faculty members are appointed. Her proposal wins the 2023 Early Career Award. 


A survey among 1,500 psychologists confirmed what Anne Gärtner had experienced firsthand: Appointment procedures at psychology departments hardly ever consider the quality of the research conducted by candidates. The survey participants were asked about the assessment criteria taken into account by their respective universities when recruiting senior faculty. The number of peer-reviewed papers – i.e. papers reviewed by independent researchers – ranked first, closely followed by the number of lead-author publications and other quantitative criteria such as acquired third-party funding.

“Many of these quantitative metrics are only vague indicators loosely linked to research quality, and they are one of the root causes of the massive replication crisis that we are seeing in the field of psychology and other academic disciplines,” says Dr. Anne Gärtner, a psychologist and neuroscientist based at Dresden University of Technology. With her Responsible Research Assessment Initiative, which won the Einstein Foundation Berlin’s 2023 Early Career Award, she is pushing for a paradigm shift in appointment procedures: away from the rigid focus on quantity and toward a stronger emphasis on quality when assessing research output. 

“We have to work at all levels to redefine incentives in order to reward – instead of penalizing – quality in research.” (Anne Gärtner) 

To this end, Gärtner plans to develop a set of novel, more suitable criteria that integrate qualitative aspects such as integrity, robustness, transparency, cooperation, and innovation. “We have to work at all levels to redefine incentives in order to reward – instead of penalizing – quality in research,” she says. “We will of course continue to count and evaluate the number of peer-reviewed papers, references, and data sets – but only after testing for quality.” 

But how can quality be measured? Gärtner wants to evaluate candidates’ research output mainly in terms of methodological rigor using a scoring system. Do applicants pre-register their research papers, for example, and disclose their methodology before publication? Do the formulated theories adhere to the rules of formal logic? Can their research be replicated and independently verified? Are all research data openly accessible? 

Since 2020, Gärtner has been working with colleagues to develop assessment criteria that are responsive to such questions and help to shortlist candidates in appointment procedures (see preprints 1 and 2). She now wants to refine these criteria and apply them to two further stages of the selection process: the interview and the review phase, during which candidates undergo external evaluation. 

“I will interview experts and status groups such as professors, doctoral and postdoctoral researchers, as well as administrative staff,” explains Gärtner. The criteria will then be field-tested extensively, revised with the help of experts from the German Psychological Society (DGPs), and aligned with international frameworks such as the Coalition for Advancing Research Assessment (CoARA) or the San Francisco Declaration on Research Assessment (DORA). The ultimate goal is to establish criteria that are valid, robust, efficient, legally sound, and easy to implement – starting in psychology departments and then moving on to other disciplines within the behavioral, cognitive, and social sciences. 

“I hope we will see appointment procedures reformed on this basis,” says Gärtner. She has already set up a prototype of an online interface that appointment committees can use for future selection processes. The Initiative is extensively documenting its progress and will make these files, along with all data and materials, accessible online. 

Anne Gärtner became adamant about advancing research quality after having experienced firsthand the pressure to publish as much and as fast as possible. Quality, she realized, is often not considered a priority. In her own research, she uses neurobiological methods and imaging techniques to explore how humans process and regulate their emotions. Besides her research, she strongly advocates for greater integrity and transparency in psychology. “I was often advised not to worry about issues such as integrity and transparency because it would hurt my academic career,” she says. “The Einstein Foundation has now honored what I’ve been working to promote over the past years: a better way to assess and evaluate scientific research. And it feels good to finally receive recognition for these efforts.” 

While Gärtner’s work is currently focused on the assessment of research output, her long-term plan is to develop a fuller set of metrics that also covers the remaining academic dimensions evaluated in appointment procedures: teaching quality, leadership skills, academic governance, and social impact. Gärtner hopes that the shift away from quantitative metrics in appointment procedures will become a blueprint for the entire academic system and help to overhaul the allocation of research funding, scholarships, and awards, among other things. Researchers will only be able to prioritize quality if the incentives set by the system change accordingly. 

“Young scientists in particular find themselves caught in a dilemma: They can either bow to the old system and publish as many papers as possible – or invest their time and effort in producing high-quality research, which in turn impacts the quantity of their output, and thus, their career,” explains Gärtner. “I hope that our project contributes to a future where we’ll see not just more research being published every year, but more high-quality and high-value research.”