Artificial Intelligence (AI) and large language model (LLM) tools have rapidly impacted many sectors, including academic research and publication. As AI technologies advance, so do both the opportunities and the challenges associated with their application. These guidelines outline how AI tools and technologies may be used to support the writing of articles published in the African Human Mobility Review. The use of AI technologies in academic writing should adhere to the principles of integrity, transparency, and confidentiality, in line with ethical practices in research and publication that promote scientific integrity. The use of AI tools for unethical practices, such as AI-assisted plagiarism and the fabrication of data, must be strictly avoided.
The journal's AI Policy will be updated regularly to take into account new developments in publishing best practices, ethical issues, and academic standards. These revisions ensure that our guidelines remain thorough and relevant to the evolving needs of the research community.
The African Human Mobility Review (AHMR), as a general rule, follows the ASSAf and SciELO Guidelines for the Use of Artificial Intelligence (AI) Tools and Resources in Research Communication.
Authors
Authors must disclose the use of generative AI tools in their research and manuscript preparation (e.g., data analysis, literature review, or manuscript writing) in the methodology section. AI tools used solely for proofreading and editing do not need to be acknowledged.
Content generated by AI tools should be cited and referenced using a general format for software citation. In all cases, the use of AI tools should adhere to the ethical principles stated in the journal's guidelines. AI tools, large language models (LLMs) such as ChatGPT, and similar technologies cannot be listed as authors or co-authors. Authors are solely responsible for the entire content of their manuscripts, including verifying any AI-generated information for accuracy and reliability.
Editors
Editors should adhere to editorial standards and best practices, protecting the confidentiality of every author’s work and treating all authors with fairness, objectivity, honesty, and transparency. Feeding manuscripts into generative AI systems poses risks to confidentiality and may infringe data and proprietary rights, among other concerns. Editors are therefore prohibited from uploading unpublished manuscripts to AI tools.
Peer reviewers
Peer reviewers play a vital role in upholding the standards of academic publishing and are entrusted with unpublished manuscripts. To protect author confidentiality, peer reviewers must not upload unpublished manuscripts to generative AI tools. Peer reviewers may use AI tools to support specific aspects of the review process, such as summarizing a manuscript, identifying gaps in the research, finding related literature, and detecting ethical issues. However, the final decision in peer review cannot be automated and must remain with a human reviewer, as AI can produce inaccurate, incomplete, or biased content. Peer reviewers remain accountable at all times for the integrity and correctness of their reviews.