Citation
Ibragimova, I. and Phagava, H. (2024), "Editorial: AI tools usage in Emerald journal articles", International Journal of Health Governance, Vol. 29 No. 3, pp. 193-199. https://doi.org/10.1108/IJHG-09-2024-163
Publisher
Emerald Publishing Limited
Copyright © 2024, Emerald Publishing Limited
In this Editorial we would like to look at how Emerald authors are using artificial intelligence (AI) tools in their work and whether they adhere to Emerald's policies and principles on generative AI (GAI) usage, and also to provide some recommendations on how we as Editors can support our authors in legitimate uses of GAI, to the benefit of academic research and scholarly communication.
Emerald policy on AI tools
The Emerald policy on AI tools and authorship was made publicly available on February 22nd, 2023 (https://tinyurl.com/yc3t9djt).
Emerald “Generative AI (GAI) usage key principles” are regularly updated and published on journals’ websites as part of Author Guidelines. When submitting their manuscripts, authors must confirm that the manuscript has been created by the author(s) and not an AI tool/large language model (LLM).
The research and publishing ethics section of the Emerald website (https://tinyurl.com/3xs5xwz4) underlines that
- (1) If an AI tool/LLM has been used to develop or generate any portion of the manuscript, this use is required “to be flagged in the Methods and Acknowledgements” sections of the manuscript;
- (2) The author(s) must describe the content created or modified as well as appropriately cite the name and version of the AI tool used;
- (3) Any additional works drawn on by the AI tool should also be appropriately cited and referenced;
- (4) Standard tools that are used to improve spelling and grammar are not included within the parameters of this guidance;
- (5) The in-text reporting of statistics using a GAI tool/LLM is not permissible due to concerns over the authenticity, integrity and validity of the data produced, although the use of such a tool to aid in the analysis of the work would be permissible;
- (6) The Editor and Publisher reserve the right to determine whether the use of an AI tool is permissible.
Other publishers’ and journals’ policies
A recent bibliometric analysis (Ganjavi et al., 2024) revealed that, by October 2023, 24% of the top 100 largest publishers provided guidance on the use of GAI, while 87% of the top 100 highly ranked journals did so. Of the publishers and journals with guidelines, 96% and 98%, respectively, prohibited the inclusion of GAI as an author. Only one journal (1%) explicitly prohibited the use of GAI in the generation of a manuscript, and two publishers (8%), including Emerald, indicated that their guidelines applied exclusively to the writing process. Regarding disclosure of the use of GAI, 75% of publishers and 43% of journals included specific disclosure criteria (Ganjavi et al., 2024).
Recommendations on where to disclose the use of GAI vary from publisher to publisher. While many publishers require this information to be included in the Methods and/or Acknowledgements sections, others recommend including it in the cover letter, suggest establishing a dedicated new section, or require that the use of AI tools be described in several sections of the manuscript (besides Methods). One of the most detailed sets of AI-related guidelines developed by health-related journals is the JAMA Network Guidance (Flanagin et al., 2024). Its authors provide detailed recommendations that can help authors report the use of AI tools both in conducting their research and in preparing their manuscripts. The following is a summary of those recommendations:
- (1) Follow relevant reporting guidelines for specific study designs with an AI component, where they exist. (Our search of the EQUATOR Network database in July 2024 retrieved 16 guidelines for reporting research with an AI component);
- (2) The description of AI tools used in manuscript preparation should be provided in the Acknowledgements section and state the software platform, program, version, manufacturer, date of use, etc.;
- (3) The Methods section should describe how AI was used for specific aspects of the study. Different types of studies may require additional specific information (e.g. details about the datasets used for development, training and validation; the metrics used to evaluate the performance of the algorithms, etc.);
- (4) The Results section should include sufficient detail on measurements and comparisons for the reproducibility of the results;
- (5) The Discussion section should address “the potential of AI-related bias, what was done to identify and mitigate it”, as well as “the potential for the inaccuracy of AI-generated content; and what was done to identify and manage it”.
AI tools used for conducting and preparing research publications
There is a variety of AI tools across different fields of research and scholarly communication, with new tools regularly appearing and existing tools being updated. Because of the time- and resource-consuming procedures involved, as well as the established guidelines on conducting and reporting certain types of research, there is particular interest in using AI tools for systematic reviews and other evidence synthesis publications (Gartlehner et al., 2024; Fabiano et al., 2024; Pfeiffer and Dermody, 2024; Parisi and Sutton, 2024; Khraisha et al., 2024; Landschaft et al., 2024; Teperikidis et al., 2024), as well as for conducting qualitative research (Morgan, 2023). These types of research design are among the most frequently used in the International Journal of Health Governance, as well as in many other Emerald journals.
For this Editorial, we executed a quick search in the Emerald Insight database to find out which AI tools authors of Emerald journals are using in their research and how they report this use in their articles. As search terms, we used the names of 25 AI tools mentioned in a review of evidence synthesis tools (Fabiano et al., 2024) and of two tools used for qualitative research (Morgan, 2023). Our search was limited to journal articles published between March 2023 and July 2024 (including EarlyCite) and was performed on June 30th, 2024. We used the Advanced Search option and ran each search twice – first in Abstract only and then in All fields. Overall, we found that Emerald authors had used nine different AI tools from the aforementioned lists. We did not include in this analysis other AI tools that those authors were also using.
In cases where we did not have access to the full text, we could only state whether the AI tool was mentioned in the Abstract or in the Acknowledgements. For example, while the search in All fields returned 473 articles mentioning ChatGPT, without access to the full text it was difficult to distinguish articles that used the tool in their research from those that described the potential benefits and risks of its use in research, education and practice, or that provided overviews of the published literature about the tool. Thus, we included search results for the term “ChatGPT” in our analysis only when it was found in the Abstract – 116 articles (of which only 18 used the tool in their research).
Our analysis revealed that authors of Emerald journals are actively using a variety of AI tools for different purposes, in practically all subject areas covered by Emerald, and across different research designs (Table 1). These tools were reported as used 540 times in total (some articles reported using more than one AI tool).
Though most authors mentioned the use of AI tools in the Methods section, in many cases they neither provided information about the software version nor explained how they had used the tool, but simply stated, for example: “the MAXQDA tool was used to simplify the coding process”, we used “MAXQDA word processing software”, or “we based the results' analysis on MaxQDA software”. In many cases there was not even the name of a manufacturer, only a link to a published article about the AI tool.
Practically none of the Emerald authors adhered to the requirement to provide information on the tools used in the Acknowledgements section. Only about 21% mentioned the tools used in the Abstract, and far fewer (about 4%) added them as keywords. This seems inconsistent, as many other tools used by the authors of the same articles (e.g. citation management, statistics or visualization software) were often mentioned in both the Abstract and the Keywords. In two articles, the authors cited not using a specific AI tool as a limitation of their research.
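For transparency, the following minimal Python sketch shows how these shares can be reproduced from the per-tool counts in Table 1; it assumes (our reading, not an Emerald-stated definition) that the denominator is the 540 reported tool uses rather than the number of unique articles:

```python
# Minimal sketch: reproduce the "about 21%" (Abstract) and "about 4%"
# (Keywords) figures from the per-tool counts reported in Table 1.
# Assumption: percentages are taken over the 540 reported tool uses.
abstract_yes = {  # "Abstract (yes/no)" column, Yes counts
    "ChatGPT": 18, "Covidence": 5, "Elicit.org": 1, "Rayyan.ai": 1,
    "Scite.ai": 2, "ATLAS.ti": 51, "MAXQDA": 35,
}
keywords = {  # Keywords counts from the "Other sections" column
    "ChatGPT": 12, "Scite.ai": 1, "ATLAS.ti": 4, "MAXQDA": 3,
}
total_uses = 540

abs_n, kw_n = sum(abstract_yes.values()), sum(keywords.values())
print(f"Abstract mentions: {abs_n}/{total_uses} = {abs_n / total_uses:.0%}")  # 113/540 = 21%
print(f"Keyword mentions:  {kw_n}/{total_uses} = {kw_n / total_uses:.1%}")    # 20/540 = 3.7%
```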
As an excellent example of how to report the use of AI tools, we would like to highlight a recently published article from the journal Meditari Accountancy Research (Eager et al., 2024). The authors provided a detailed justification for using AI tools in their research; explained their selection of different tools for specific research tasks, including compatibility with the other software they used; and described how the tools were applied. All related information was covered not only in the Methods section but also in the Introduction, Results and Conclusions. Still, even this article does not include the required characteristics of the AI tools used in the Acknowledgements.
Global initiative CANGARU
As the lack of standardization in reporting the use of GAI places a burden on authors and reviewers, standardized guidelines to protect the integrity of scientific output are needed (Ganjavi et al., 2024). To this end, the global initiative CANGARU (ChatGPT, GAI and natural LLMs for accountable reporting and use guidelines) was started by a group of researchers from different disciplines (Cacciamani et al., 2023b).
The overall aim of the CANGARU guidelines is to establish commonly shared, cross-discipline best practices for using GAI/GPTs/LLMs in academia. The working group plans to present their guidelines in the following format:
- (1) The “DON'T” Criteria List, to ensure ethical and proper use of GAI/GPTs/LLMs in academic research by covering each step of the academic process;
- (2) The Disclosure Criteria List, with guidance for researchers to transparently disclose their use of GAI/GPTs/LLMs in academic research (what and how to disclose, “fostering responsibility and addressing potential risk and limitations associated with this technology”);
- (3) The Reporting Criteria List – a checklist of recommendations to ensure the complete and transparent reporting of GAI/GPTs/LLMs when they are used as interventions in scientific studies.
Those guidelines will “enhance transparency, improve reproducibility, standardize reporting practices, reduce misinterpretation, support peer review and editorial processes, and facilitate research synthesis and knowledge translation” (Cacciamani et al., 2023a).
Conclusion
There are many legitimate uses of GAI in academic research, and journal editors have an important role in supporting these uses. This role is well formulated in “The African Journal Partnership Program's guidance on the use of AI in scholarly publishing” (Wright et al., 2024), which specifically underscores that editors:
- (1) Are responsible for sharing standards and policies for appropriate and transparent use of AI with authors and peer reviewers;
- (2) Support authors in complying with guidelines for proper AI utilization;
- (3) Should stay informed about advancements in AI technology to guide and facilitate the effective and ethical integration of AI in scholarly publishing.
In accordance with this guidance, we would like to remind our future authors that they are required:
- (1) To follow relevant reporting guidelines for specific study designs with an AI component, where they exist (see the EQUATOR Network), and to submit with their manuscripts the completed checklists for the reporting guidelines they have used;
- (2) To describe the use of each AI tool in the Methods section in enough detail to make the methodology transparent and the research replicable;
- (3) To provide in the Acknowledgements section the following information about the AI tools used: the software platform, program, version, manufacturer and date of use (see the illustrative example after this list).
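To illustrate point (3), a sufficient acknowledgement might read as follows (a hypothetical example of ours; the tool, version, manufacturer and date are placeholders, not a prescribed Emerald wording): “The author(s) used ChatGPT (GPT-4, OpenAI, 15 June 2024) to assist with the initial coding of interview transcripts; all output was verified and edited by the author(s).”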
Based on our brief analysis, we would also like to suggest additional recommendations for authors:
- (1) To include information about the AI tools used in the Abstract;
- (2) To add the name of the tool as a keyword.
Recommendations to Emerald editors and publisher:
- (1) To develop podcasts and webinars for authors, reviewers and editors, explaining Emerald policies and guidelines;
- (2) To add reminders about the requirements on AI tool use/disclosure to the standard messages sent to authors and reviewers.
Table 1. Reported use of AI tools in Emerald journal articles (published between March 2023 and July 2024)
AI tool | N = 540 | Study design | Purpose | Abstract (yes/no) | Methods (yes/no) | Acknowledgements (yes/no) | Other sections |
---|---|---|---|---|---|---|---|
ChatGPT | 18 | Case study = 16; Experimental study = 2 | Effectiveness of AI model; sentiment analysis; extracting content; risk identification; producing a meta-analytic paper; citation generating; developing methods of experience learning; support in crisis management; compiling a list of cybersecurity systems; knowledge synthesis | Yes = 18 | Yes = 18 | Yes = 4; No = 14 | Results = 13; Keywords = 12; Practical implications = 2; Originality = 1; Purpose = 1 |
Covidence | 34 | ScR = 15; SR = 12; IntR = 2; UmbR = 1; Meta-ethnography = 1; NarR = 1; Meta-analysis = 1; LitR = 1 | Data screening and extraction; data cleaning | Yes = 5; No = 29 | Yes = 33; No = 1 | No = 34 | Limitation = 1 |
DistillerSR | 2 | SR = 1; ScR = 1 | Selection, inclusion and exclusion of articles that met the initial criteria; duplicate removal; relevance screening; data extraction | No = 2 | Yes = 2 | No = 2 | |
Elicit.org | 1 | Scoping study = 1 | Leveraging AI tools to augment traditional scoping study techniques | Yes = 1 | Yes = 1 | No = 1 | Introduction; Purpose; Conclusions |
Rayyan.ai | 23 | SR = 13; ScR = 6; Meta-analysis = 1; Bibliometrics = 2; LitR = 1 | Uploading and sorting study references; screening, highlighting pertinent keywords and adding notes; screening titles and abstracts; blinding the results of each team member's review | Yes = 1; No = 22 | Yes = 21; No = 2 | No = 23 | |
ResearchRabbit app | 2 | LitR = 1; SR = 1 | Searching big data for connections between articles and authors | No = 2 | Yes = 2 | No = 2 | |
Scite.ai | 2 | Bibliometrics = 1; ScR = 1 | Citation analysis; identification of key trends, patterns and themes within the literature; uncovering hidden connections; conducting semantic analysis; performing topic modeling | Yes = 2 | Yes = 2 | No = 2 | Title = 1; Introduction = 2; Discussion = 1; Purpose = 1; Conclusions = 1; Keywords = 1 |
ATLAS.ti | 256 | Qualitative research = 254; SR = 2 | Transcription and coding of data; generating themes; visualizing network structures | Yes = 51; No = 205 | Yes = 256 | No = 256 | Keywords = 4; Results = 1 |
MAXQDA | 202 | Qualitative research = 191; SR = 9; ScR = 1; LitR = 1 | Coding and analyzing data in three stages; calculating correlations for pairs of codes, indicating the likelihood that the two codes appear in the same document | Yes = 35; No = 167 | Yes = 199; No = 3 | No = 202 | Background = 1; Results = 6; Discussion = 2; Conclusions = 2; Keywords = 3; Note = 1; Limitation = 1 |
Note(s): SR = systematic review; ScR = scoping review; LitR = literature review; IntR = integrative review; UmbR = umbrella review; NarR = narrative review
References
Cacciamani, G.E., Eppler, M.B., Ganjavi, C., Pekcan, A., Biedermann, B., Collins, G.S. and Gill, I.S. (2023a), “Development of the ChatGPT, generative artificial intelligence and natural large language models for accountable reporting and use (CANGARU) guidelines”, arXiv [cs.AI], available at: http://arxiv.org/abs/2307.08974 (accessed 25 July 2024).
Cacciamani, G.E., Collins, G.S. and Gill, I.S. (2023b), “ChatGPT: standard reporting guidelines for responsible use”, Nature, Vol. 618 No. 7964, p. 238, doi: 10.1038/d41586-023-01853-w.
Eager, B., Deegan, C. and Fiedler, T. (2024), “Insights into the application of AI-augmented research methods for informing accounting practice: the development, through AI, of accountability-related prescriptions pertaining to seasonal work”, Meditari Accountancy Research, doi: 10.1108/MEDAR-08-2023-2116.
Fabiano, N., Gupta, A., Bhambra, N., Luu, B., Wong, S., Maaz, M., Fiedorowicz, J.G., Smith, A.L. and Solmi, M. (2024), “How to optimize the systematic review process using AI tools”, JCPP Advances, Vol. 4 No. 2, e12234, doi: 10.1002/jcv2.12234.
Flanagin, A., Pirracchio, R., Khera, R., Berkwits, M., Hswen, Y. and Bibbins-Domingo, K. (2024), “Reporting use of AI in research and scholarly publication-JAMA network guidance”, JAMA, the Journal of the American Medical Association, Vol. 331 No. 13, pp. 1096-1098, doi: 10.1001/jama.2024.3471.
Ganjavi, C., Eppler, M.B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G.S., Gill, I.S. and Cacciamani, G.E. (2024), “Publishers' and journals' instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis”, BMJ, Vol. 384, e077192, doi: 10.1136/bmj-2023-077192.
Gartlehner, G., Kahwati, L., Hilscher, R., Thomas, I., Kugley, S., Crotty, K., Viswanathan, M., Nussbaumer-Streit, B., Booth, G., Erskine, N., Konet, A. and Chew, R. (2024), “Data extraction for evidence synthesis using a large language model: a proof-of-concept study”, Research Synthesis Methods, Vol. 15 No. 4, pp. 576-589, doi: 10.1002/jrsm.1710.
Khraisha, Q., Put, S., Kappenberg, J., Warraitch, A. and Hadfield, K. (2024), “Can large language models replace humans in systematic reviews? Evaluating GPT-4’s efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages”, Research Synthesis Methods, Vol. 15 No. 4, pp. 616-626, doi: 10.1002/jrsm.1715.
Landschaft, A., Antweiler, D., Mackay, S., Kugler, S., Rüping, S., Wrobel, S., Höres, T. and Allende-Cid, H. (2024), “Implementation and evaluation of an additional GPT-4-based reviewer in PRISMA-based medical systematic literature reviews”, International Journal of Medical Informatics, Vol. 189, 105531, doi: 10.1016/j.ijmedinf.2024.105531.
Morgan, D.L. (2023), “Exploring the use of artificial intelligence for qualitative data analysis: the case of ChatGPT”, International Journal of Qualitative Methods, Vol. 22, doi: 10.1177/16094069231211248.
Parisi, V. and Sutton, A. (2024), “The role of ChatGPT in developing systematic literature searches: an evidence summary”, Journal of the European Association for Health Information and Libraries, Vol. 20 No. 2, pp. 30-34, doi: 10.32384/jeahil20623, available at: https://ojs.eahil.eu/JEAHIL/article/view/623
Pfeiffer, J.K. and Dermody, T.S. (2024), “Artificial intelligence and scientific reviews”, Annual Review of Virology, doi: 10.1146/annurev-vi-11-060624-100111.
Teperikidis, L., Boulmpou, A., Papadopoulos, C. and Biondi-Zoccai, G. (2024), “Using ChatGPT to perform a systematic review: a tutorial”, Minerva Cardiology and Angiology, No. 26, doi: 10.23736/S2724-5683.24.06568-2.
Wright, C.Y., Lartey, M., Khomsi, K., Peres, F., Yilma, D., Kigera, J., Flanagin, A., Gbakima, A., Ofori-Adjei, D., Sumaili Kiswaya, E., Sidibé, S., Togo, A. and Muula, A.S. (2024), “The African Journal Partnership Program's guidance on the use of AI in scholarly publishing”, Ghana Medical Journal, Vol. 58 No. 1, pp. 1-4, doi: 10.4314/gmj.v58i1.1.