AI use is becoming normalized in academic peer review, but governance continues to lag, according to Frontiers

By Jorge González Arocha

A recent report by Frontiers reveals that, despite persistent doubts and the absence of clear regulatory frameworks, reviewers at academic journals have been quietly but steadily incorporating artificial intelligence (AI) into their work. According to the study, this process corresponds to a phase of “normalization and experimentation,” although it remains marked by “enormous untapped potential.”

The research indicates that 53 percent of surveyed reviewers report using these tools in their evaluation tasks. In addition, nearly one in four (24 percent) acknowledge having increased their use over the past year. This trend points to a gradual normalization within peer review. Nevertheless, the report emphasizes that this progress does not translate into systematic use: overall adoption remains limited.

When specific uses are examined, the pattern is clear. The majority of participants (59 percent) rely on AI primarily to draft review reports. To a lesser extent, 29 percent use it to summarize findings, and 28 percent to flag potential misconduct. These are, for the most part, superficial applications, oriented toward operational tasks rather than critical analysis of scientific content.

Frontiers acknowledges that these uses already demonstrate “AI’s great potential to reduce reviewer fatigue, improve consistency in evaluations, and strengthen integrity checks,” thereby allowing reviewers’ time and expertise to be used more effectively.

From a geographic perspective, the document shows that China leads AI adoption in peer review, with 77 percent of participants reporting its use in evaluation tasks. Africa follows at 66 percent; there, AI is perceived less as a threat than as a support tool, particularly in contexts shaped by structural constraints. These regional differences are compounded by generational gaps: 87 percent of early-career researchers report using AI, compared to 67 percent of senior researchers.

This expansion, however, is not free of distrust. While many researchers view the gradual introduction of AI positively, a substantial proportion (71 percent) express concern about how other researchers use these tools. In addition, 45 percent report unease about how editors and publishing houses use AI, and 53 percent say they have personally observed incorrect or problematic practices among their peers.

This ambivalence becomes especially clear when reviewers are asked how authors’ use of AI affects their perception of manuscripts. Most acknowledge improvements in writing quality (63 percent), yet a significant share also report that the use of these tools raises doubts about research integrity (52 percent) or introduces errors that demand greater critical scrutiny (48 percent).

The report also reveals a troubling landscape in terms of training. Thirty-five percent of researchers say they are entirely self-taught in their use of AI, 31 percent rely on guidance from their institutions, and only 16 percent depend on guidelines provided by publishers. More strikingly, 18 percent admit to taking no measures at all to ensure good practice in their use of these technologies.

These figures highlight a clear gap between the growing use of AI and the weakness of the institutional frameworks that should accompany it. The absence of robust governance structures, clear policies, or, in many cases, basic AI literacy generates not only inequalities in adoption but also deep misunderstandings about the scope, risks, and responsibilities involved.

Against this backdrop, one of the report’s central conclusions is the urgent need to promote truly responsible use of these tools, supported by explicit mechanisms of transparency and accountability that can build trust and ensure effective and safe application.

As the document itself underscores, AI literacy cannot be reduced to informal experimentation.

Achieving genuine AI education requires structured teaching and sustained support, precisely because the lack of such literacy remains a real barrier for many researchers. From this perspective, the report directly addresses educators, research institutions, publishers, science communicators, funding bodies, public policymakers, and technology developers. All are assigned an active role in building an approach that combines education, transparency, and responsibility.

Accordingly, the document concludes by presenting a guide for the responsible and innovative use of AI, currently in a phase of community participation. This is not a closed set of rules, but a living framework designed to adapt to concrete practices and diverse needs.

Within this framework are the six pillars of AI governance proposed by Frontiers.

  1. The first is transparency and accountability: the obligation to disclose when, where, and how AI is used throughout workflows, from initial assessment to final production.
  2. The second pillar is AI literacy and capacity building in research. The focus here is not on abstract investment, but on consolidating academic competencies, shifting away from improvised self-learning toward structured, certifiable training programs.
  3. The third pillar centers on ethics and integrity. This entails establishing clear, public standards for AI use in both research evaluation and editorial decision-making.
  4. The fourth pillar addresses equity and access governance, incorporating principles of justice at all levels of AI adoption.
  5. The fifth pillar emphasizes engagement with the academic community, involving researchers, editors, and institutions in the co-creation and review of AI policies.
  6. Finally, the sixth pillar concerns leadership and public policy influence. Promoting responsible AI governance in research and academic publishing requires using evidence from audits and impact assessments to inform global norms and standards, while sharing results openly to strengthen trust and accountability.

In conclusion, the report exposes a profound gap in training and capacity building. AI is not an isolated tool but a complex ecosystem of models, platforms, uses, policies, and human decisions. Understanding it demands more than technical familiarity; it requires judgment, time, and clear guiding frameworks.

In an academic environment shaped by urgency, pressure to publish, and the constant acceleration of intellectual labor, such understanding is often sidelined, hindering a truly critical and responsible adoption of AI.


This article has been published by Dialektika