AI Policy

We recognize two kinds of use of generative AI in writing, limited and substantive, defined below.

Limited use of AI is any kind of use that affects only mechanical aspects of the writing, such as spellchecks, grammar checks, and other similar uses. These kinds of uses are, for the most part, variations on capacities already possessed by standard word processors, as well as the pre-generative AI capabilities of writing software like Grammarly. A second kind of limited use is the use of AI for literature search or any similar task which is a variation on the capacities of systems like Google Scholar or PubMed.

Substantive use of AI is defined as any use of AI to draft or write a paper or to generate the ideas that make up the substance of a paper. Other uses of AI in the composition process which bring into question the authorial contribution, intellectual carefulness, and authenticity of the submitted work also count as instances of substantive use. This includes the use of AI to source citations, summaries, or quotes for claims when the author has not carefully read the cited work themselves and determined their appropriateness.  

Substantive uses of AI are not permitted by our editorial policy. Limited uses are tolerated, but are redundant since we have an editorial process for accepted papers. Uses of generative AI which straddle the line between limited and substantive uses, such as the use of paraphrase capacities of generative AI tools, are very strongly discouraged.

Authors must disclose any use of AI tools at submission. Failure to disclose AI use is treated as a breach of editorial trust regardless of whether the use would otherwise be permitted.

As a matter of integrity, we also limit our own editorial use of generative AI to limited uses, and we do not use it to make evaluative judgments about a submission’s arguments or merits.

Below we explain some of the philosophical reasoning behind this editorial position.

Some Comments on the Philosophical Concerns which Motivate the Policy

  1. The Metaphysics and Politics of AI

All machines have politics and metaphysics. This is to say, they encode substantial assumptions about the nature of the world and how it ought to be organized. Many Muslims choose to characterize contemporary AI systems simply as “tools”, which is in itself a helpful corrective to the radical metaphysical questions raised by these systems, but which overlooks the significant metaphysical and political views that drive the design of AI.

AI research has historically been torn between a characterization of itself as an attempt, on the one hand, to model cognition for research purposes and, on the other, as an attempt to reproduce or recreate mindedness. Lately, as the venture of AI has become the demesne of ideologically-minded individuals and organizations who think of themselves as remaking humanity altogether (Hao 2025), the latter characterization has become dominant. Thus, the anthropomorphism that increasingly surrounds these systems is no accident: AI systems are designed less and less as scientific models, and more and more to appear on par with sentient minds. The design philosophy that underlies a good deal of industrial AI research endorses a view of mindedness which increasingly seeks to erase the distinction between minds and machines. Consequently, because the philosophy is shipped with the product, the proliferation of these systems tends to legitimize the ideas about mindedness and its place in the world which the philosophy endorses. And the more entrenched the philosophy becomes, the less the “tool” description appeals and the more the once fundamental distinction between mind and machine is undermined. Without a critical view of AI and the philosophy which it encodes, it is an open question how long, even among well-intentioned Muslims, fundamental and pivotal ideas about sentience, sapience, mindedness, and human experience might survive. “We are being hustled willy-nilly in a certain direction, far from all the landmarks to which humanity has been accustomed; and, if we do not know what it is we are going to, we can at least see what it is we are going from and judge thereby the loss we have suffered.” (Eaton)

This reading of the underlying worldview of AI design is not as hyperbolic as it might first appear. A rethinking of the nature of mindedness and human experience is characteristic of, and almost intrinsic to, AI research. In 2022, Blake Lemoine, an ethicist at Google, announced that he believed LaMDA, Google’s in-house language model, was sentient. This announcement became the focus of a media circus that divided observers into those who took Lemoine seriously and those who did not (though even some of the latter would add the caveat, “at least not yet”). Lemoine was eventually fired, in part because Google believed his statements to be baseless. He was accused by some experts of having fallen afoul of the ELIZA Effect, though others thought he was merely premature and might not be wrong for long. All of this disagreement, and the firing, only made Lemoine more insistent. And, almost predictably, more famous. Eventually, given a platform by his fifteen minutes of fame, his words on the sentience of LaMDA resounded with a theological echo. He declared, and it was widely reported: “Who am I to tell God where he can and can’t put souls?”

Since then, statements of this kind have become more common because, as stated, the tendency of these systems to elicit anthropomorphic responses is by design and almost an intrinsic part of the project of AI: in one of the papers that initiated the field, Alan Turing makes a similar claim in response to a theological reservation about the possibility of “thinking machines”. The public reception of these sorts of ideas has been warmer than any Muslim should be comfortable with. There have already been catastrophic consequences, with reported instances of “AI-fueled spiritual fantasies” as well as “prophets claiming they have ‘awakened’ chatbots and accessed the secrets of the universe through ChatGPT” (Rolling Stone). Mainstream AI developers have also begun to warm to similar ideas. In their model card for Claude Opus 4 and Sonnet 4, Anthropic repeatedly refers to the ostensible spirituality of their models. They do not outright endorse the claim that these systems experience spirituality, but some of their other concerns, as for instance with “model welfare”, suggest that they think it is at least on the horizon. Consequently, they have begun to prepare for the possibility. Inclinations in this direction are common and have long been associated with AI because they are the norm, not a tangential exception, in the philosophy underlying the design and engineering of AI systems.

What does all of this mean for how Muslims should receive these systems? Does it mean that Muslims must, on pain of theological instability, oppose AI? Can Muslims not simply continue to rely on the “tool” view of AI systems? After all, aren’t these radical philosophical ideas only contingently connected to the success of AI? It is true that, strictly speaking, one need not endorse any of these ideas to build or use AI, but the mere fact of association is often enough to legitimize the ideas. This fact is demonstrated over and over again in the history of technology. It has been said that the spread of materialism and speculative naturalism piggybacked on the successes of science and technology without being instrumentally essential to them. Participation in the proliferation and entrenchment of these technologies might be individually innocuous, but it runs the risk of facilitating, or at the very least passively permitting, the propagation of these ideas. And this constitutes a surrender of principle for expediency (one of the very characteristic features of the worldview underlying mainstream industrial AI design and use). The answer to the initial questions about how Muslims might respond to the design and proliferation of AI must take, at the minimum, the form of a critical awareness: “Our duty” in this, as in any other case, “is to carefully watch the progress of human thought and to maintain an independent critical attitude towards it.” (Iqbal) This appears to cut against the recommendation, made by some Muslims, famously including Ibtihal Aboussad in a talk at the Muslim Council of Britain’s AI Conference, that Muslims must try to take the lead on AI. But caution is more warranted here than proactivity because the risks are so high.

A parallel may be drawn here to the debate in classical fiqh about image-making and the naturally attendant discussions about the mimicry of animacy. While it may legitimately be argued, and has been so argued by a handful of jurists, that the proscription of image-making was put in place to prevent any chance of backsliding into idolatry by early Muslims, most jurists, classically, have played it extremely safe and argued that image-making, no matter when or where, is proscribed not just because of the cultural risk borne by early Muslims, but because it constitutes a theological danger everywhere and at all times: any mimicry of animacy, because life is the ostensible ultima of creation, runs the risk of convincing the image-maker of their supposed omnificence. This risk, however remote, is too substantial, and utmost caution is indicated. This position does not deny that images (even very lifelike ones) can be made (cf. Q 20.88); rather, it maintains that any mimicry of animacy, however convincing, fails to be the real thing but may be mistaken as such. Even if the “golden calf that appears to low” (Q 20.88) may not sway the unswayable faithful, the calf, invariably attended by the charms of Samiri (here the underlying design philosophy), may be overwhelming enough to create division and undue metaphysical chaos which is best avoided. This reasoning, grounded in philosophical caution, partly informs our reading of the current situation and the resultant need for caution we infer from it.

  2. Reference and Deference to Tradition

In addition to all of this, the use of AI, for our purposes at least, undermines reference and deference to the knowledgeable pious and sustains a deleterious epistemic individualism at odds with the structure of knowledge transmission in our tradition. AI systems lack the standing to transmit religious knowledge or give judgments, since epistemic standing in our tradition is about more than the form of knowledge. Even if a jurist and an AI were to give the very same response to some query, the AI’s response is illegitimate because it has no standing, much in the same way a sentence delivered by a judge is legitimate while one handed down by a jury is not, even if the jury has determined guilt and knows the sentence its verdict indicates, since handing down sentences is the prerogative not of the jury but of the judge. Making pronouncements on religion is the prerogative of a human person whose reference to lived experience is a sine qua non of judgment and whose participation in a living tradition is a prerequisite for their standing. Since AI systems lack lived experience and, a fortiori, cannot participate in a living tradition, they lack both grounds for standing.

Regarding the debate on whether AI systems might be capable of experience, the issue may be open in the secular philosophical context, but not in our theological one. The assumption made by Alan Turing and Blake Lemoine that God may choose to ensoul a machine is entirely speculative and, in our tradition, devoid of any defensible evidentiary basis. Thus, any pronouncement on a matter of religion by an AI is illegitimate and, if the pronouncement must be corroborated by a jurist anyway, redundant. Could a jurist, however, use AI in their work? Certainly, though all the other concerns we raise here bind the jurist as well, and should give any serious jurist pause and reason to use AI, if at all, only for the very limited purposes we have defined above.

  3. The Aims of Traversing Tradition

Part of the aim of Traversing Tradition is to help mature a generation of honest and serious Muslim thinkers and writers who understand and have a lived experience of both their tradition and secular modernity. Because our aim is partly developmental in this sense, the substantive use of AI undercuts the foundation of our project. It is still relatively early days for the use of generative AI in writing, but emerging evidence suggests that those who use these tools in a substantive fashion accrue “cognitive debt” (Kosmyna et al.), which indicates that such use does not foster the sort of thinkers and writers we are seeking to develop. The use of generative AI also tends to be associated with diminished scholarliness and the cutting of corners in research, thinking, and writing, an attitude that runs against any sensible editorial or academic policy in general, and our content guide in particular. We have written that “we require writers to draw from primary and secondary sources to substantiate any claims they make in their pieces, particularly claims about Islam and Muslim life, in general.” This requirement, more often than not, is undercut by the substantive use of AI. In keeping with our editorial aim to cultivate careful thought, disciplined reading, and accountable witnesses, we do not publish work whose substance is outsourced to generative systems.

We are also not motivated by the productivity imperative that drives many writers to generative AI. Our aim is not merely to inflate the number of articles written by Muslims; we are not dominated by the reign of quantity. We are more concerned with quality, authenticity, and the lived experience of Muslims. The productivity imperative is, moreover, generally irrelevant in art and spirituality. A single article drawn out of an author’s most intimate self and carefully written is far more valuable than anything hastily generated with AI. AI-generated material is already flooding public discourse, and we do not wish to contribute to this dilution of meaningful exchange. A witticism of Saadi’s summarizes our concern on this score: “Whatever is produced in haste goes hastily to waste.” There have, furthermore, been significant arguments about the environmental, social, economic, and human costs of generative AI. Contributing to all of these substantial burdens at the click of a button, for work that is likely to be subpar and inauthentic, is a betrayal of certain obligations we have as humans and as Muslims.

And finally, we want our authors to be able to take full responsibility for what they say or produce on our platform. For some of the reasons above, and for other sociotechnical reasons, we do not consider AI systems to be agents, and so we do not deem them capable of taking responsibility. This makes any substantive use of AI problematic, since it blurs the lines of contribution between author and machine. Our policy is also, in part, intended to avoid the difficulties inherent in such blurring.

Conclusion

For all the reasons given, we ask our writers, as a matter of their good faith as Muslims, not to use generative AI in any substantive manner in works submitted to us.


Drafted by Nuhu Osman Attah. Approved by the Board January 17, 2026 and updated as of February 15, 2026.