Artificial Intelligence (AI) Usage Policy
In recent years, the use of artificial intelligence (AI) tools—such as ChatGPT, Gemini, and Claude—has grown rapidly among researchers. These tools promise efficiency, particularly in language editing, outlining, and initial data processing. However, IJoRIS believes that quality scientific work is born of critical thinking, intellectual responsibility, and ethical commitment by humans, not machines.
Therefore, this journal has established the following policy regarding the use of AI in all stages of publication—from manuscript writing and peer review to editorial management. This policy adheres to the principles established by the Committee on Publication Ethics (COPE) and Elsevier and is tailored to IJoRIS's unique character as an interdisciplinary journal that frequently addresses sensitive issues in the realms of religion, ethics, philosophy, and the social sciences.
1. Authors May Use AI—But with Full Responsibility
Authors are permitted to use AI tools to assist in the technical process of writing, such as:
- correcting grammar or style,
- restructuring sentences for clarity,
- formatting bibliographies,
- translating texts (with rigorous verification).
However, AI must not replace the author's role in:
- structuring intellectual arguments,
- interpreting findings in the context of religion or moral values,
- making theological, ethical, or philosophical claims,
- generating data, figures, or references.
Every part of the manuscript—even if assisted by AI—remains the full responsibility of the author. If there are factual errors, ideological biases, or unsubstantiated claims, the author must answer for them, not the machine.
2. Mandatory Disclosure of AI Use
Transparency is essential. Every author who uses AI must explicitly declare this in the submitted manuscript, in a dedicated section titled “Statement on the Use of Artificial Intelligence,” placed immediately before the bibliography.
The statement must include:
- the name and version of the AI tool (e.g., ChatGPT, GPT-4o, accessed June 2025),
- the section of the manuscript it assisted with (e.g., background development),
- the intended use (e.g., clarifying complex sentences).
Example of an adequate statement:
The authors used Gemini (version 1.5, accessed March 2025) to improve the paragraph structure of the analysis section. All content has been reviewed, verified, and adjusted according to the authors' own interpretation.
Failing to disclose the use of AI—especially when it was used to generate core content—is considered a violation of academic ethics and may result in the rejection or retraction of the manuscript.
3. AI Is Not an Author
In line with the official position of COPE (2024), AI tools must not be listed as authors, whether as primary authors or co-authors.
Why? Because authorship is more than just a name on the front page. It requires:
- the ability to make intellectual decisions,
- the willingness to take responsibility for accuracy and originality,
- the capacity to approve the final version and respond to post-publication inquiries.
Machines cannot do any of this. Only humans can be called authors.
4. Strict Prohibitions on Peer Review
Peer review is a confidential and responsible process. Reviewers are strictly prohibited from:
- uploading a manuscript (or part of it) to an AI tool,
- asking the AI to “write a review” on their behalf.
If a reviewer wishes to use AI solely to correct the grammar of their own review, they must ensure that no confidential information is exposed and must inform the editor. Even so, IJoRIS encourages reviews that are entirely self-written and self-reasoned—because that is the essence of peer review: critical assessment by fellow experts.
5. Editorial Role Remains in Human Hands
The editorial team can use AI for administrative tasks such as:
- plagiarism detection,
- format checking,
- initial screening based on the journal's scope.
However, academic decisions—whether a manuscript is worthy of publication—should only be made by human editors. IJoRIS will never delegate ethical, theological, or interdisciplinary considerations to algorithms.
6. Beware of Bias and Hallucinations
AI tools often produce:
- false references (“academic hallucinations”),
- biased interpretations of particular religious traditions,
- generalizations that are insensitive to cultural context or beliefs.
Authors should critically review any AI output—especially when discussing topics of religion, identity, or moral values. Don't blindly trust "answers" that sound convincing. Verify, confirm, and consider the ethical implications.
7. Consequences of Violation
If it is found that:
- the manuscript is largely AI-generated with minimal human oversight,
- references or data have been falsified with the help of AI, or
- the use of AI has been intentionally concealed,
then IJoRIS reserves the right to:
- reject the manuscript,
- retract the published article, and
- report the matter to the author's institution.
Such cases will be handled in accordance with COPE guidelines on violations of publication ethics.
8. This Policy is Living and Evolving
Technology changes rapidly. Therefore, this policy will be reviewed annually—or more frequently if necessary—to remain relevant, fair, and in line with best practices in scientific publishing.
IJoRIS is not anti-AI. We believe AI can be a useful tool—as long as it does not replace human reason, integrity, and responsibility. Because ultimately, science is not about how fast we write, but about how honestly, wisely, and responsibly we contribute to human understanding.