ChatGPT and Generative AI
Policy on ChatGPT and Generative AI Based on COPE (Committee on Publication Ethics) for MATRIK: Journal of Management, Informatics Engineering, and Computer Engineering, valid starting from Vol. 25, No. 1
Use of Large Language Models and Generative AI Tools in Manuscript Preparation
MATRIK acknowledges the value of large language models (LLMs), such as ChatGPT, and other generative AI tools as productivity aids that support authors in manuscript preparation. These tools may assist with idea generation, content structuring, summarization, language refinement, and improving clarity. However, these models have significant limitations and cannot replicate human creativity and critical thinking. Human oversight remains essential to ensure the accuracy and appropriateness of the content presented to readers. Therefore, MATRIK requires authors to consider the following when using LLMs in their submissions:
- Bias: LLM-generated text may reproduce biases present in its training data, including racism, sexism, or other forms of discrimination, and minority perspectives may be underrepresented. Because LLM outputs are often decontextualized, their use may reinforce these biases.
- Accuracy: LLMs may generate false or misleading content, especially outside their domain expertise or when handling complex or ambiguous topics. They might produce linguistically coherent but scientifically incorrect text, including fabricated facts or citations. Some models may also lack access to the latest data.
- Contextual Understanding: LLMs may struggle with idioms, sarcasm, humor, or metaphors, leading to misinterpretation or incorrect content.
- Training Data: LLMs depend on large, high-quality training datasets. Such data may be unavailable in some domains or languages, limiting model performance.
Author Guidance:
Authors must:
- Clearly disclose the use of LLMs in their manuscript, including the specific model used and its purpose. This information should be provided in the methods or acknowledgements section, as appropriate.
- Verify the accuracy, validity, and relevance of all content and citations produced by LLMs, correcting any errors or inconsistencies.
- Provide the sources used to generate content and citations, including those produced with LLMs, and review all citations for accuracy.
- Be alert to potential plagiarism where the model reproduces substantial text from other sources, and cross-check against the originals.
- Acknowledge the limitations of LLMs in their manuscript, such as potential bias, inaccuracies, and knowledge gaps.
Disclosure Statement:
Authors must include a declaration at the end of their main manuscript file, before the References section, under a new heading titled "AI and AI-Assisted Technologies Declaration in the Writing Process."
Statement Template:
During the preparation of this work, the author(s) used [NAME OF TOOL/SERVICE] for [PURPOSE]. After using this tool/service, the author(s) reviewed and edited the content as needed and take full responsibility for the publication's content.
A downloadable draft of this declaration is available [here].
This declaration is not required for basic tools used to check grammar, spelling, references, and the like. If no AI tools were used, no statement is necessary.
Artificial intelligence tools such as ChatGPT are not eligible to be listed as authors in any manuscript submission.
If undisclosed use of such tools is discovered after publication, corrective measures may be taken in accordance with journal policy.
Authors should consult the specific submission guidelines of their target journal for additional policies concerning the use of AI tools.
Responsibilities of Editors and Reviewers:
Editors and reviewers should assess whether the use of LLMs is appropriate and verify the accuracy and validity of any AI-generated content.
For Further Information:
See the recommendations by the World Association of Medical Editors (WAME) regarding ChatGPT and scholarly manuscripts and the Committee on Publication Ethics (COPE) position statement on AI tools in authorship.
This policy may change in the future as we work with our publishing partners to understand how emerging technologies may help or hinder the research publication process. Please visit this page for the latest updates.