AI companies are increasingly consulting faith-based organisations and ethics experts to develop moral principles for future AI systems, as concerns grow around artificial general intelligence, safety, bias, and responsible technological development.
Leading artificial intelligence firms OpenAI and Anthropic have reportedly initiated discussions with religious and interfaith leaders to explore how moral values and ethical principles can be integrated into future AI systems.
According to reports, executives from the two companies participated in a “Faith-AI Covenant” roundtable held in New York, where representatives from multiple religious communities discussed the growing influence of AI and the challenges of building systems capable of making ethically responsible decisions.
The meeting comes amid increasing global debate over the future of artificial general intelligence (AGI), a hypothetical stage at which AI systems may approach human-like reasoning and decision-making capabilities. While AI models can already perform advanced tasks such as coding, data analysis, and content generation, technology experts acknowledge that machines still struggle to independently distinguish between morally acceptable and harmful actions.
Interfaith groups join AI ethics discussions
The gathering reportedly included representatives from organisations such as the Hindu Temple Society of North America, The Sikh Coalition, Baha’i International Community, the Greek Orthodox Archdiocese of America, and The Church of Jesus Christ of Latter-day Saints.
The initiative was organised by the Geneva-based Interfaith Alliance for Safer Communities, which focuses on issues including extremism, radicalisation, and online safety. Reports suggest similar discussions may be conducted in cities such as Beijing, Nairobi, and Abu Dhabi in the future.
The conversations reportedly focused on whether ethical guidance from faith traditions could help technology companies define long-term principles for AI behaviour beyond what government regulations alone can achieve.
Growing focus on AI morality and governance
Technology companies have increasingly begun investing in AI ethics research as concerns around bias, misinformation, cybersecurity, and autonomous decision-making continue to rise. Both Anthropic and Google DeepMind have reportedly engaged philosophers and ethics specialists to help shape value systems for AI models.
Anthropic, which publicly outlines behavioural principles for its AI assistant Claude through a framework known as the “Claude Constitution,” has previously stated that ethics experts and religious perspectives contributed to the drafting process.
However, experts note that defining universal moral standards for AI remains a major challenge because ethical values often vary across cultures, religions, and societies. Critics argue that reaching broad consensus on sensitive topics could prove difficult as AI systems become more influential in everyday life.
Despite the challenges, the growing engagement between technology firms and faith-based organisations signals a broader shift in how the global AI industry is approaching questions of responsibility, governance, and human values in the age of intelligent machines.