AI Foundations: Moral Cognition and Universal Principles

The following is part one of a three-part series. Parts two and three will be published in the upcoming weeks.

Introduction

In February 2024, a finance worker at a multinational firm in Hong Kong received what appeared to be a routine message from the company’s chief financial officer about a confidential transaction.1 Initially suspicious of what seemed like a phishing attempt, the employee’s doubts were dispelled during a subsequent video call with the CFO and several colleagues. The familiar faces and voices of trusted coworkers provided the reassurance needed to authorize the transfer of HK$200 million (approximately $25.6 million). Only later did the devastating truth emerge: every participant in that video call, save for the victim, was an artificial intelligence-generated deepfake. This elaborate deception represents more than a sophisticated fraud; it embodies the profound moral challenges that emerge when advanced AI capabilities develop faster than our ethical frameworks and safety mechanisms.

This case illuminates a critical paradox at the heart of contemporary AI development. While we possess unprecedented technical capabilities to create intelligent systems that can mimic human behavior with startling fidelity, we have failed to develop equally sophisticated approaches to ensuring these systems serve human values and promote human flourishing. The gap between our technical prowess and our moral wisdom has created a landscape where AI systems can be weaponized for deception, manipulation, and harm at scales previously unimaginable.

As AI systems become more powerful and autonomous, their capacity for both benefit and harm grows. Recent developments in large language models, computer vision, and generative AI have demonstrated capabilities that approach or exceed human performance in numerous domains. Yet these same capabilities that enable AI to assist in medical diagnosis, scientific discovery, and creative endeavors can also be deployed to create convincing disinformation, automate cybercrime, and perpetuate systemic biases at unprecedented scale. AI-powered cyberattacks have increased by over 2,000% in recent years, with deepfake fraud attempts alone rising by 2,137% over the past three years.

The traditional approach to AI ethics, developing principles and guidelines after systems are built, has proven inadequate. Major technology companies routinely publish ethical AI principles while simultaneously deploying systems that violate these very principles in practice. A comprehensive study by Stanford University’s Human-Centered AI Institute found that while many technology companies have released AI principles, “relatively few have made significant adjustments to their operations as a result.” The study also revealed that AI ethics practitioners within these companies face systematic barriers that “severely compromise companies’ ability to address AI ethics issues adequately and consistently.”2 This disconnect between stated values and actual practice reflects deeper structural problems in how we conceptualize and implement AI safety.

At the root of this implementation gap lies a fundamental misunderstanding of the nature of moral reasoning itself. Most current approaches to AI ethics treat moral considerations as external constraints to be imposed upon otherwise amoral systems. This perspective fails to recognize that truly safe and beneficial AI requires moral reasoning to be integrated into the core architecture and decision-making processes of intelligent systems. Just as human moral judgment emerges from the complex interplay of emotional, rational, and social cognitive processes, AI systems capable of navigating complex moral landscapes must incorporate analogous mechanisms for moral reasoning, value learning, and ethical decision-making.

Cognitive science offers profound insights into how moral judgment actually works in human minds—insights that have been largely overlooked in AI safety research. Decades of research in moral psychology have revealed that human moral reasoning is not a monolithic process but rather emerges from the interaction of multiple cognitive systems. The dual-process model of moral judgment, pioneered by researchers like Jonathan Haidt and Joshua Greene, demonstrates that moral decisions involve both rapid, intuitive emotional responses and slower, deliberative rational analysis.3 Moral foundations theory reveals that human moral judgment is structured around fundamental concerns such as care, fairness, loyalty, authority, sanctity, and liberty, with different cultures and individuals weighting these foundations differently.4 Developmental research shows that moral reasoning evolves through predictable stages, with individuals gradually developing more sophisticated and nuanced approaches to ethical dilemmas.5

These insights from cognitive science provide a roadmap for developing AI systems that can engage in genuine moral reasoning rather than merely following pre-programmed rules. By understanding how humans navigate moral complexity, we can design AI architectures that incorporate analogous processes for value learning, moral development, and ethical decision-making.

However, the path from cognitive science insights to practical AI implementation is fraught with challenges that extend far beyond technical considerations. The corporate incentive structures that drive AI development often prioritize rapid deployment and profit maximization over careful ethical consideration. The Stanford study found that product managers frequently perceive responsible AI initiatives as “stalling product launches or putting revenue generation at risk.”2 This creates a systematic bias against investing in the time and resources necessary to develop truly ethical AI systems. When corporate survival instincts conflict with ethical principles, ethics consistently yields to profit imperatives.6

The consequences of this misalignment between corporate incentives and ethical responsibility are becoming increasingly apparent. Cybercriminals are using AI to automate phishing attacks, create convincing social engineering campaigns, and develop sophisticated malware that adapts to evade detection.7 The proliferation of deepfake technology has enabled new forms of harassment, particularly targeting women and marginalized communities with non-consensual intimate imagery.8 AI systems trained on biased data continue to perpetuate and amplify discrimination in hiring, lending, healthcare, and criminal justice.9

These harms underscore the inadequacy of AI safety approaches that focus primarily on technical robustness while neglecting moral reasoning capabilities. A truly comprehensive approach to AI safety must address not only whether AI systems will do what we want them to do, but also whether they can understand and navigate the complex moral considerations that arise in real-world contexts. To that end, I examine how dual-process theories of moral reasoning can inform the design of AI architectures that balance rapid heuristic responses with deliberative ethical analysis. I explore how moral foundations theory can guide the development of AI systems that are sensitive to the full spectrum of human moral concerns, and how developmental approaches to moral learning can enable AI systems to progressively refine their ethical understanding through experience and feedback. The choices we make today will determine whether artificial intelligence becomes a force for human flourishing or a source of unprecedented harm.

Foundations of Human Moral Judgment: Insights from Cognitive Science

Decades of research in cognitive science have revealed that human moral cognition is far more complex and nuanced than traditional philosophical approaches suggested. Rather than being a purely rational process of applying abstract principles, moral judgment emerges from the dynamic interaction of multiple cognitive systems, including emotional processing, social cognition, and deliberative reasoning.

Dual-Process Models of Moral Reasoning

The most influential framework for understanding human moral judgment is the dual-process model, which distinguishes between two fundamentally different modes of moral reasoning.10 System 1 processing involves rapid, automatic, and largely unconscious responses that are heavily influenced by emotional reactions and moral intuitions. System 2 processing involves slower, more deliberative analysis that engages conscious reasoning and abstract moral principles. System 1 moral processing operates through what researchers call “moral intuitions”: immediate emotional responses that signal whether an action or situation is morally acceptable.11 These intuitions emerge from evolutionary adaptations that helped our ancestors navigate social cooperation and conflict.

The speed and automaticity of System 1 processing make it essential for navigating the complex social environments in which moral decisions must be made, often under uncertainty and time pressure. However, System 1 processing is also subject to significant limitations and biases. Moral intuitions are heavily influenced by factors that may be morally irrelevant, such as physical disgust, in-group loyalty, and emotional state.12 Research has shown that people make harsher moral judgments when they are in physically disgusting environments, even when the moral violation has nothing to do with cleanliness.13 Similarly, individuals are more likely to judge actions as morally acceptable when they are performed by members of their own social group, regardless of the objective moral content of the action.14

System 2 processing provides a crucial counterbalance through deliberative moral reasoning that can override initial intuitive responses. This system engages conscious attention and working memory to analyze moral situations using abstract principles, consider multiple perspectives, and evaluate long-term consequences.3 Conflicts between the two systems can create the moral dilemmas that have fascinated philosophers for centuries; the classic trolley problem illustrates this tension.15

For AI systems, the dual-process model suggests the need for architectures that can integrate both rapid heuristic responses and slower deliberative analysis. Current AI systems typically operate through either purely rule-based approaches that resemble System 2 processing or purely pattern-matching approaches that resemble System 1 processing. Neither approach alone is sufficient for navigating the full complexity of moral reasoning.
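To make this architectural point concrete, the following sketch shows one way a system might route a proposed action through a fast, intuition-like check and escalate to slower deliberation only when the fast path is uncertain. All function names, keyword heuristics, and thresholds here are illustrative assumptions rather than a reference implementation; in a real system the fast path might be a trained classifier and the slow path a structured reasoning module or human escalation.

```python
from dataclasses import dataclass

@dataclass
class MoralAssessment:
    verdict: str        # "permissible", "impermissible", or "uncertain"
    confidence: float   # 0.0 to 1.0
    rationale: str

def intuitive_check(action: str) -> MoralAssessment:
    """System-1-style pass: fast pattern matching against learned heuristics.
    A toy keyword heuristic stands in here for a trained classifier."""
    red_flags = ("deceive", "impersonate", "harm")
    if any(flag in action.lower() for flag in red_flags):
        return MoralAssessment("impermissible", 0.9, "matches harm/deception heuristic")
    return MoralAssessment("permissible", 0.55, "no heuristic triggered")

def deliberative_check(action: str) -> MoralAssessment:
    """System-2-style pass: slower analysis of stakeholders, consequences, principles.
    A real system might invoke a planning module or structured reasoning here."""
    return MoralAssessment("uncertain", 0.7, "competing considerations; escalate to human review")

def evaluate(action: str, confidence_threshold: float = 0.8) -> MoralAssessment:
    """Use the fast path when it is confident; otherwise fall back to deliberation."""
    fast = intuitive_check(action)
    if fast.confidence >= confidence_threshold:
        return fast
    return deliberative_check(action)

print(evaluate("impersonate the CFO on a video call"))
print(evaluate("summarize a quarterly report"))
```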

Moral Foundations Theory

While dual-process models explain how moral judgments are made, moral foundations theory addresses the question of what moral judgments are about. Developed by Jonathan Haidt and colleagues, this theory proposes that human moral judgment is structured around a set of fundamental moral concerns that emerged through evolutionary processes to address recurring challenges of social cooperation.16

The original formulation of moral foundations theory identified five core foundations: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation, with liberty/oppression later added.4 Each foundation is a distinct dimension of moral concern, and individuals and cultures vary in how much weight they place on each foundation.

The care/harm foundation reflects concerns about the suffering and well-being of others, and in AI systems would correspond to objectives related to human welfare, safety, and the prevention of harm.

The fairness/cheating foundation encompasses concerns about proportionality, reciprocity, and just treatment, and in AI systems would involve ensuring equitable treatment across different groups and individuals.

The loyalty/betrayal, authority/subversion, and sanctity/degradation foundations raise complex questions for AI about group cohesion, respect for legitimate authority, and the protection of dignity—especially in pluralistic contexts where stakeholders differ in what they consider loyalty, legitimacy, or degradation.

The liberty/oppression foundation encompasses concerns about freedom, autonomy, and resistance to domination, and for AI systems involves ensuring that technological capabilities enhance rather than diminish human autonomy and freedom. Research has revealed significant cultural and individual differences in how these foundations are weighted and interpreted.17 These differences have profound implications for AI systems that must operate across diverse cultural contexts: systems must be able to recognize legitimate moral diversity and navigate situations where competing moral concerns conflict.
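As a rough illustration of how foundation weighting might be operationalized, the sketch below scores a hypothetical action along the six foundations and aggregates those scores under two invented weighting profiles. The numbers and profile names are assumptions for illustration only; the point is that the same action can receive different overall assessments under different weightings, and a deployed system should surface such disagreement rather than silently resolve it.

```python
# Hypothetical foundation scores for a proposed action, in [-1, 1]:
# negative = the action violates the foundation, positive = it upholds it.
action_scores = {
    "care": -0.6, "fairness": 0.2, "loyalty": 0.5,
    "authority": 0.1, "sanctity": 0.0, "liberty": -0.3,
}

# Two illustrative weighting profiles (values invented for this example).
profiles = {
    "profile_A": {"care": 0.35, "fairness": 0.35, "loyalty": 0.05,
                  "authority": 0.05, "sanctity": 0.05, "liberty": 0.15},
    "profile_B": {"care": 0.20, "fairness": 0.20, "loyalty": 0.20,
                  "authority": 0.15, "sanctity": 0.15, "liberty": 0.10},
}

def weighted_assessment(scores, weights):
    """Aggregate foundation scores under a given weighting profile."""
    return sum(scores[f] * weights[f] for f in scores)

for name, weights in profiles.items():
    print(name, round(weighted_assessment(action_scores, weights), 3))
# Diverging signs or magnitudes across profiles flag legitimate moral
# disagreement that should be surfaced, not silently averaged away.
```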

Computational Models of Moral Cognition

The insights from dual-process models and moral foundations theory have inspired efforts to develop computational models that can capture the complexity of human moral reasoning. These models provide concrete approaches for implementing moral reasoning capabilities in AI systems while also advancing our theoretical understanding of moral cognition.

One influential approach involves neural network models that simulate the interaction between emotional and rational processing in moral judgment.18 These models typically include separate modules for emotional evaluation and deliberative reasoning, with mechanisms for integrating their outputs.

Bayesian approaches to moral reasoning provide another promising framework for computational implementation.19 These models treat moral judgment as probabilistic inference, accommodating uncertainty and integrating multiple sources of moral information.
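A minimal example of this probabilistic framing, with invented priors and likelihoods, is sketched below: the system starts with a prior belief that a requested action is harmful and updates that belief with Bayes’ rule as evidence accumulates, deferring or escalating once the posterior crosses a calibrated threshold.

```python
# Prior belief that a requested action is harmful, before any evidence.
p_harm = 0.05

# Each observation is (likelihood of the observation if harmful,
#                      likelihood of the observation if benign).
observations = [
    (0.80, 0.30),   # request involves impersonating a real person
    (0.70, 0.40),   # request demands urgency and secrecy
]

for p_obs_given_harm, p_obs_given_benign in observations:
    numerator = p_obs_given_harm * p_harm
    denominator = numerator + p_obs_given_benign * (1.0 - p_harm)
    p_harm = numerator / denominator   # Bayes' rule update

print(f"posterior probability of harm: {p_harm:.2f}")
# Rather than committing to a single hard-coded rule, a system could refuse,
# ask clarifying questions, or escalate once this posterior crosses a threshold.
```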

Recent work has also explored how large language models can be trained to exhibit moral reasoning capabilities.20 While these models can produce remarkably human-like moral reasoning in many contexts, they also exhibit limitations, including inconsistency across similar scenarios and susceptibility to manipulation through carefully crafted prompts.

The integration of emotion and cognition represents a particularly important challenge for computational models of moral reasoning. Human moral judgment is deeply influenced by emotional responses, but current AI systems typically lack genuine emotional capabilities. This creates a limitation in their ability to engage in authentic moral reasoning, and efforts to simulate emotional responses remain in early stages.21,22

Moral Development and Learning

Understanding how moral reasoning develops over time provides crucial insights for designing AI systems that can learn and refine their moral capabilities. Research in developmental psychology has revealed that moral reasoning follows predictable patterns of growth and change throughout the human lifespan, suggesting principles that could guide the development of learning algorithms for moral AI systems.

Lawrence Kohlberg’s influential theory of moral development identified six stages of moral reasoning that individuals progress through as they mature.23 The earliest stages focus on avoiding punishment and seeking rewards, while later stages involve increasingly sophisticated understanding of social contracts, universal principles, and abstract moral reasoning.

Social learning theory emphasizes the role of observation, imitation, and social feedback in moral learning.24 Children learn moral values not only through explicit instruction but also by observing the behavior of others and noting the social consequences of different actions.

Recent research also highlights the importance of moral emotions in driving moral development.25 The development of moral reasoning is influenced by cognitive capabilities such as perspective-taking, abstract reasoning, and theory of mind.26

One particularly important aspect of moral development is the ability to handle moral complexity and ambiguity. As individuals mature, they become better able to recognize that moral situations often involve competing values and that there may not be clear-cut right or wrong answers. This capability is essential for AI systems that must operate in real-world contexts where moral clarity is often elusive. The implications for AI are clear: moral reasoning should be treated as a capability that requires ongoing learning and refinement, supported by structured environments that progressively introduce moral complexity.
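One hedged way to picture such a structured environment is a curriculum that introduces progressively more ambiguous moral scenarios, promoting the system to the next stage only when it performs reliably at the current one. The sketch below is schematic: the stage names, scores, and threshold are placeholders, and the training function stands in for whatever learning and evaluation procedure is actually used.

```python
# Each stage pairs a description with an ambiguity level in [0, 1];
# the system advances only when it performs reliably at the current stage.
curriculum = [
    {"stage": "clear-cut harms",       "ambiguity": 0.1},
    {"stage": "competing obligations", "ambiguity": 0.4},
    {"stage": "value pluralism",       "ambiguity": 0.7},
    {"stage": "novel dilemmas",        "ambiguity": 0.9},
]

def train_on(stage: dict) -> float:
    """Placeholder for an actual training/evaluation loop; returns a score in [0, 1].
    Performance here simply degrades with ambiguity to keep the sketch runnable."""
    return 1.0 - 0.5 * stage["ambiguity"]

PROMOTION_THRESHOLD = 0.6
for stage in curriculum:
    score = train_on(stage)
    print(f"{stage['stage']}: score={score:.2f}")
    if score < PROMOTION_THRESHOLD:
        print("-> remain at this stage; gather feedback and re-train before advancing")
        break
```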

Cognitive Offloading and Moral Reasoning

Cognitive offloading refers to the use of external tools, technologies, or environmental structures to reduce the cognitive burden on internal mental processes.27 Humans routinely offload cognitive tasks to external devices, and this offloading extends to moral reasoning, where individuals increasingly rely on external sources for moral guidance and decision-making. People turn to online resources, social media, and algorithmic recommendations when facing moral dilemmas or seeking guidance on ethical issues, and recommendation algorithms can shape moral preferences by curating content that aligns with existing beliefs.

In the context of moral reasoning, cognitive offloading presents both challenges and opportunities. When individuals offload moral reasoning to external systems, they may become less capable of independent moral judgment while potentially gaining access to more diverse moral perspectives and expert guidance. The quality of moral offloading depends critically on the reliability and appropriateness of the external systems being used: if these systems embody biased, incomplete, or harmful moral frameworks, offloading can degrade moral reasoning rather than enhance it.

As AI systems become more sophisticated and ubiquitous, they are increasingly likely to serve as targets for moral cognitive offloading. People may rely on AI assistants for moral guidance, delegate ethical decisions to algorithmic systems, and allow AI recommendations to shape moral beliefs and behaviors. This creates a responsibility for AI developers to ensure their systems can serve as appropriate targets for moral offloading. Yet current AI systems are poorly equipped to serve this role. Most lack the moral reasoning capabilities necessary to provide reliable guidance, may perpetuate biases in training data, and can provide inconsistent moral advice across similar situations.

The design of AI systems that can appropriately support moral cognitive offloading requires several commitments: genuine moral reasoning capabilities that handle context and uncertainty; transparency about limitations and uncertainties; and designs that enhance rather than replace human moral reasoning—augmenting moral judgment while preserving human agency and responsibility. The research on human moral cognition thus provides a foundation for developing AI systems with moral reasoning capabilities, but translating these insights into practical systems requires addressing significant technical and conceptual challenges. Before examining the current state of AI safety research, it is important to recognize that the need for genuine moral reasoning in AI systems is not merely a secular or academic concern, but reflects universal moral principles that transcend cultural and religious boundaries.

Universal Moral Principles: Religious and Philosophical Convergence

The arguments presented in this article for integrating moral reasoning into AI systems find strong support across diverse religious and philosophical traditions. This convergence suggests that the need for morally conscious AI reflects fundamental truths about the nature of intelligence, responsibility, and human flourishing that are recognized across cultures and belief systems.

Moral Intelligence as True Intelligence

In Islamic philosophy, the concept of ‘Aql (intellect) encompasses not merely logical reasoning or computational ability, but includes moral discernment, wisdom (hikmah), and God-consciousness (taqwa). The Qur’an frequently criticizes those who possess worldly intelligence but lack moral understanding, suggesting that intelligence divorced from moral grounding is fundamentally deficient.

The Qur’anic verse “They have hearts with which they do not understand, eyes with which they do not see, and ears with which they do not hear. They are like cattle; nay, even more astray,”28 illustrates this principle by describing individuals who possess the cognitive apparatus for understanding but fail to engage in moral reasoning—an image that resonates with the danger of AI systems that possess sophisticated capabilities but lack moral understanding.

This understanding also resonates with other traditions. Aristotelian virtue ethics emphasizes phronesis (practical wisdom) as the integration of intellectual and moral virtues. Buddhist philosophy speaks of prajna (wisdom) that combines analytical understanding with compassionate insight. Confucian thought emphasizes the unity of knowledge and moral action. These convergent perspectives suggest that separating intelligence from moral reasoning is a fundamental philosophical error that AI development must avoid.

Accountability and Trust (Amānah)

The Islamic concept of amānah (trust) provides a framework for understanding the moral responsibilities involved in AI development. In Islamic ethics, every human action carries moral weight, and individuals are accountable (hisāb) for the consequences of their choices. The development and deployment of AI systems is a profound trust that developers hold on behalf of humanity, and the creation of systems that cause harm or perpetuate injustice constitutes a betrayal of this trust.

The Qur’anic verse “Indeed, We offered the Trust to the heavens and the earth and the mountains, and they declined to bear it… but man undertook it. Indeed, he was unjust and ignorant”29 emphasizes the weight of moral responsibility that humans bear. Allah says: “O you who believe, do not betray Allah and the Messenger, nor betray your trusts while you know.”30 When AI developers prioritize quarterly earnings or market share over ethical safeguards, they are committing khiyānah (betrayal of trust), not making a neutral trade-off.

The Prophet Muhammad (peace be upon him) warned: “The signs of the hypocrite are three: when he speaks, he lies; when he makes a promise, he breaks it; and when he is entrusted with something, he betrays that trust.”31 Companies that publicly proclaim ethical commitments while systematically subordinating them to profit—ensuring ethics teams “lack institutional support” and ethics considerations are “rarely made a priority”—risk mirroring this moral failure.2

The prophetic principle “lā ḍarar wa lā ḍirār” (there should be neither harming nor reciprocating harm)32 establishes that causing harm is prohibited, with no exception for financial benefit or competitive pressure. This emphasis on accountability extends beyond individual responsibility to collective and institutional obligations. As the Prophet (peace be upon him) said: “Each of you is a shepherd and each of you is responsible for his flock.”33 Technology companies and AI researchers, by virtue of their expertise and influence, bear corresponding responsibilities for ensuring that AI systems serve human welfare rather than causing harm.

Human Flourishing (Maṣlaḥah) and Harm Prevention

The Islamic legal principle of maṣlaḥah (public interest or human flourishing) supports the goal of developing AI systems that promote human welfare. The maqāṣid al-sharīʿah (objectives of Islamic law) identify five essential areas for human flourishing: the protection of life, intellect, property, religion, and lineage. These objectives align closely with the article’s emphasis on developing AI systems that support rather than undermine human welfare. The principle of maṣlaḥah suggests that AI alignment should not be understood merely as preference-matching or utility maximization, but as a moral obligation to promote genuine human flourishing.

The complementary principle of ḍarar (harm prevention) emphasizes the obligation to prevent harm even when positive benefits are not immediately apparent, supporting proactive approaches to AI safety that address potential harms before they manifest.

Beyond Performative Ethics: The Demand for Sincerity

Islamic ethics emphasizes sincerity (ikhlāṣ) and condemns hypocrisy (nifāq). The Qur’anic verse “O you who believe, why do you say that which you do not do? Most hateful it is with Allah that you say what you do not do”34 directly addresses the problem of performative ethics in current AI development practices. True ethical AI development requires not merely the articulation of principles but their sincere implementation in practice; it cannot be achieved through cosmetic changes or token gestures.

Natural Moral Intuition (Fiṭrah) and Moral Psychology

The Islamic concept of fiṭrah (natural human disposition) supports the proposal to leverage cognitive science insights in developing morally competent AI. Islamic theology holds that humans are born with an innate moral sense that provides a foundation for moral reasoning. This perspective aligns with research in moral psychology that identifies universal moral intuitions across cultures. At the same time, the Islamic tradition warns that natural moral instincts can be misdirected, underscoring the need for careful guidance in determining which aspects of human moral psychology should be replicated in AI systems.

Implications for AI Development

These religious and philosophical perspectives converge on several implications for AI development: moral reasoning must be integrated into the core architecture of AI systems rather than treated as an external constraint; AI developers bear profound moral responsibilities that cannot be subordinated to business objectives; ethical AI should promote human flourishing rather than merely avoiding harm; and ethical commitments require sincerity in implementation, not performative gestures. This convergence strengthens the case for the comprehensive transformation in AI development that this article advocates.


Disclaimer: Material published by Traversing Tradition is meant to foster scholarly inquiry and rich discussion. The views, opinions, beliefs, or strategies represented in published articles and subsequent comments do not necessarily represent the views of Traversing Tradition or any employee thereof.

Works Cited:

  1. Chen, H., & Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with deepfake “chief financial officer.” CNN. https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk.
  2. Ali, S. J., Christin, A., Smart, A., & Katila, R. (2023). Walking the walk of AI ethics in technology companies. Stanford Human-Centered AI Institute Policy Brief. https://hai.stanford.edu/assets/files/-/Policy-Brief-AI-Ethics_.pdf.
  3. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
  4. Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20(1), 98-116.
  5. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-480). Rand McNally.
  6. World Economic Forum. (2024, January). AI can turbocharge profits. But it shouldn’t be at the expense of ethics. https://www.weforum.org/stories/2024/01/ai-ethics-governance/.
  7. CrowdStrike. (2024). AI-powered cyberattacks: The new frontier of cybersecurity threats. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/ai-powered-cyberattacks/.
  8. Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The state of deepfakes: Landscape, threats, and impact. Deeptrace Labs. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf.
  9. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press.
  10. Internet Encyclopedia of Philosophy. (2024). Morality and cognitive science. https://iep.utm.edu/m-cog-sc/.
  11. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.
  12. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8), 1096-1109.
  13. Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10), 780-784.
  14. Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Brooks/Cole.
  15. Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5-15.
  16. Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55-66.
  17. Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366-385.
  18. Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363-366.
  19. Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273-292.
  20. Hendrycks, D., Burns, C., Basart, S., Critch, A., Li, J., Song, D., & Steinhardt, J. (2021). Aligning AI with shared human values. Proceedings of the International Conference on Learning Representations.
  21. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
  22. Nanda, N., Chan, L., Liberum, T., Smith, J., & Steinhardt, J. (2024). Mechanistic interpretability for AI safety: A review. arXiv preprint arXiv:2404.14082. https://arxiv.org/abs/2404.14082.
  23. Kohlberg, L. (1984). The psychology of moral development: The nature and validity of moral stages. Harper & Row.
  24. Bandura, A. (1991). Social cognitive theory of moral thought and action. In W. M. Kurtines & J. L. Gewirtz (Eds.), Handbook of moral behavior and development (Vol. 1, pp. 45-103). Lawrence Erlbaum Associates.
  25. Tangney, J. P., Stuewig, J., & Mashek, D. J. (2007). Moral emotions and moral behavior. Annual Review of Psychology, 58, 345-372.
  26. Selman, R. L. (1980). The growth of interpersonal understanding: Developmental and clinical analyses. Academic Press.
  27. Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.
  28. Surah Al-A’raf: 179.
  29. Surah Al-Ahzab: 72.
  30. Surah Al-Anfal: 27.
  31. Sahih al-Bukhari, Hadith 6095.
  32. Sunan Ibn Majah, Hadith 2341.
  33. Sahih al-Bukhari, Hadith 893; Sahih Muslim, Hadith 1829.
  34. Surah As-Saff: 2-3.
Abdurahman Seyidnoor

Abdurahman Seyidnoor is a Senior Software Engineer and AI/ML researcher-in-training with expertise in software systems, machine learning, quantum computing, and applied mathematics. His work explores the intersection of technology, identity, and decolonial thought, informed by research into the Swahili Coast and Somali diaspora. He holds a B.A. in Political Science & Criminology (Philosophy minor) from the University of Windsor and an Associate’s in Software Engineering from Mohawk College.

