Points of View on Peer Review and the Potential for AI-Assisted Review
Takashiro Akitsu
Department of Chemistry, Faculty of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
* Correspondence: Takashiro Akitsu
Received: November 09, 2025 | Accepted: November 10, 2025 | Published: November 11, 2025
Recent Prog Sci Eng 2025, Volume 1, Issue 4, doi:10.21926/rpse.2504015
Recommended citation: Akitsu T. Points of View on Peer Review and the Potential for AI-Assisted Review. Recent Prog Sci Eng 2025; 1(4): 015; doi:10.21926/rpse.2504015.
© 2025 by the authors. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.
Keywords
Peer review; artificial intelligence (AI); coordination chemistry; crystallography
Just this month (November 2025), I was tasked with explaining the peer review system to young researchers at a workshop held by the International Union of Crystallography (IUCr) [1]. In this editorial, I replace crystallography-specific issues with broader topics in chemistry and materials science, and share my impressions of, and examples of failures regarding, the perspectives and checklist items discussed in the peer review process.
1. Ethics (Duplicate Publication or Plagiarism)
While AI tools can effectively detect similar papers and passages, problems sometimes arise. For instance, standard phrases commonly used in experimental sections, content from the original papers summarized in a review (Figure 1), and the titles of cited references can all be mistakenly flagged as plagiarism.
Figure 1 Result of a plagiarism check on a review article manuscript under review. Yellow-highlighted sentences were flagged as plagiarism.
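Why boilerplate triggers false positives can be illustrated with a minimal sketch (this assumes nothing about any real checker's algorithm; the sentences and the `similarity` helper are hypothetical): a naive string-similarity score rates two unrelated experimental sections as near-duplicates simply because both use a standard work-up phrase.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Naive character-level similarity ratio between two sentences (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


# Hypothetical boilerplate from two unrelated experimental sections.
boilerplate_a = "The solvent was removed under reduced pressure and the residue was dried in vacuo."
boilerplate_b = "The solvent was removed under reduced pressure, and the residue was dried in vacuo overnight."

score = similarity(boilerplate_a, boilerplate_b)
print(f"similarity = {score:.2f}")  # high score despite no actual plagiarism
```

A checker that thresholds on such a score alone would flag both papers; distinguishing shared convention from copied text requires the contextual judgment the sections below discuss.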
2. Worthy of Publication (Impact, Innovative, Original)
Currently, humans can perform detailed originality searches using literature databases. In particular, crystal structures can be precisely matched against known structures (so-called redetermination). A journal's level (its impact factor) is statistically clear; but can AI judge importance and value, and predict ripple effects, in line with impact factors? In crystallography, what matters most is the quality of the data and the analysis, and this is checked automatically.
3. Clear Question, Technically Sound (Answered and Well-Supported)
Recent "Generative AI" tools such as ChatGPT [2] can answer questions and summarize papers by searching the internet. Assessment may be possible if the answers provided are logically consistent and contextually appropriate, and if a paper's results can be compared with those produced by the generative AI. However, can it determine whether the question, the answer, and the reasoning behind them are logical and free of omissions or leaps? In crystallography, unlike in chemistry more broadly, there is debate over the extent to which papers on crystal structure analysis should include spectroscopic data on the synthesis of new compounds and considerations from computational chemistry.
4. Methods Described Clearly to Replicate
When devising a research method or setting up an experimental system, some approaches are routine and feasible, while others require originality. The latter demand hard thought even from human readers to understand. In a narrower sense, the question can be rephrased as: "Does the experimental section contain all the necessary information?"
In my experience reviewing chemistry papers, I have encountered manuscripts that lack elemental analysis, manuscripts that include only UV-vis spectra to show differences between solid and solution states, and manuscripts that feature solely powder XRD patterns, which provide limited structural information. Conversely, I once submitted a broad, uninformative spectrum of a paramagnetic metal complex at a reviewer's request, owing to a misunderstanding of the metal's valence.
5. Results, Discussion and Conclusions
When reviewer comments on the results, discussion, or conclusions arise, they often stem from simple mistakes, errors in reasoning, or logical flaws. Handling genuine differences of opinion is difficult even for humans. For example, did you notice the discrepancy between the predicted structure and the data in the NMR spectrum of the iron complex whose synthesis failed, shown in Figure 2?
Figure 2 Incorrect 1H-NMR spectrum of a failed iron complex.
6. Literature Cited
Needless to say, this is an area where databases excel. There are cases, however, where the intention behind a citation is not straightforward (for example, when the citing paper improves on, or disputes, a previous one), and the choice and weighting of references remain matters of judgment.
7. Conclusion
In summary, in my opinion, it is currently difficult to use AI in the peer review of scientific papers, especially for subjective checklist items where judgments may vary from reviewer to reviewer.
Acknowledgments
The author thanks Mr. Takahiro Kawaguchi (Department of Chemistry, Graduate School of Science, Tokyo University of Science, Japan) and Prof. Aung Than Htwe (Department of Chemistry, University of Yangon, Myanmar).
Author Contributions
The author did all the research work of this study.
Competing Interests
The author has declared that no competing interests exist.
AI-Assisted Technologies Statement
In writing this editorial, AI was used to translate Japanese into English and to check English grammar, but the content is based on the results of the workshop described in the text and the author's personal opinions; no generative AI was used to produce the content. The author is fully responsible for the content of the manuscript.
References
- IUCr Journals. Homepage [Internet]. Chester, UK: IUCr Journals; 2025. Available from: https://journals.iucr.org/.
- OpenAI. Homepage [Internet]. OpenAI; 2025. Available from: https://openai.com/ja-JP/.



