BioLogos Signs Interfaith Statement Calling for Ethical AI
BioLogos President Kristine Torjesen joined faith leaders in Rome to sign a groundbreaking statement calling for AI that respects human dignity.
Image courtesy of the American Security Foundation.
This October, BioLogos President Kristine Torjesen joined faith leaders in Rome to sign a groundbreaking statement calling for technology that respects human dignity and remains under human control.
Below, Kristine shares her reflections on attending the American Security Foundation’s Summit on Ethics and Artificial Intelligence. She also explores the statement’s key principles and looks forward to BioLogos’ role in shaping how we use this technology ethically.
To read the full Joint Statement from the summit, click here.
Artificial intelligence (AI) is among the defining issues of our time.
Like most technology, it can be used for great good or great harm, but it’s not neutral. It’s already changing the way we live, learn, and relate to one another.
I’ve just returned from an AI and Ethics Summit at the Vatican, where Christian and Jewish leaders gathered to consider how faith can guide AI’s future. Together, we released a Joint Statement on AI Ethics, calling for technology that serves humanity while respecting human dignity and remaining under human control.
The statement outlines five essential principles for ethical AI: accuracy, transparency, privacy, security, and human dignity/common good.
It also calls for independent evaluation of AI systems, safeguards to protect children and other vulnerable people, and vigilance so that AI never replaces human relationships or moral responsibility.
Why Did BioLogos Sign This Statement?
As Christians, we are called to love our neighbor (Mark 12:31), protect the least of these (Matthew 25:40), and care for God’s creation (Genesis 2:15).
Few technologies in our time will shape our ability to fulfill these responsibilities, for better or worse, more profoundly than AI.

Leveraged responsibly, AI can help us overcome the major environmental and health problems we face. But if we use it carelessly, we won't just fail to solve these issues; we will make them worse.
BioLogos joined other faith leaders in Rome because, with so much at stake, we must ensure that AI is used in ways that reflect our faith and uphold human dignity.
Five Principles for Artificial Intelligence
In the joint statement, we outlined five key principles for ethical AI. Let's take a closer look at each:
Principle #1: Accuracy
Asking ChatGPT a question feels like an individual, one-off request. But when millions do it every day, AI is undoubtedly shaping how humanity sees the world and each other.
And when these AI systems answer with bias or without nuance, their massive and growing influence can hinder human flourishing. This is particularly true for vulnerable populations, who may be misrepresented or misled by AI systems.

The joint statement calls on AI developers to allow independent evaluators to regularly test their models for biases. Should issues be found, developers must repair and re-train their models away from views that offend human dignity.
We also recommend that regulators require large language models (LLMs) to cite their sources, and that developers continue to find ways to improve the accuracy of their systems.
Principle #2: Transparency
AI systems are often black boxes, even for specialists. The way they operate makes it difficult to know how they reach decisions.
When we rely on AI systems whose workings we don’t fully grasp, we may unknowingly allow their hidden logic to shape our own. And when that logic reduces humans to data points, we risk losing sight of their status as fellow image bearers.

Joanna Ng | Data, Truth, & AI
Joanna discusses some of the risks that come from putting too much trust in computers and artificial intelligence.
The joint statement demands that AI systems be transparent about when they are in use and how they reach their conclusions.
We must also prevent AI from portraying itself as human. We have already seen evidence of harm when chatbots interact with users in anthropomorphic ways. That's why we ask developers to prevent LLMs from referring to themselves as alive or as having emotions.
Principle #3: Privacy
We believe that the right to privacy should extend to our use of AI systems.
When a technology grows as quickly as AI has, it can be difficult for privacy and security to keep up.

Users deserve to know how their data are collected, stored, and used. We therefore call for AI developers to guard against unauthorized access or misuse of data.
As a safeguard, regulators should also require AI developers to share how user data are being collected, stored, and employed by AI systems.
Principle #4: Security
When you think of AI, you may picture it as a helpful chatbot that answers questions or proofreads emails. But we cannot ignore its more threatening use cases.
Countries have started using AI-enabled facial recognition technology to locate, arrest, and harm ethnic and religious minorities. As AI continues to develop, we should also ask how it will be integrated into military action across the globe.
To ensure safety and security, states must make AI a tool for protecting civil rights, not for violating them. That means refraining from using AI systems to surveil citizens and working to stop manipulations of AI that incite violence against people.
Crucially, the international community must also come together to ban the use of AI to autonomously wage war.
Principle #5: Human Dignity and Common Good
AI should be developed in alignment with a robust understanding of human dignity, one that takes into account our faith, emotions, culture, work, and environment.

We can apply this understanding of human dignity and the common good to AI's impact on each of these aspects:
- We must refuse to idolize or worship AI and tread carefully in how we allow it to influence our spiritual life;
- We should reject AI systems that replace human friends, romantic partners, parents, or religious authorities;
- We need to carefully navigate how AI will impact human jobs.
Common good extends beyond our own individual selves and applies to the world God has created. Large AI systems require massive amounts of water and electricity. Caring for creation in the age of AI means knowing how much energy it is consuming and how we can make it more efficient and sustainable.
Conclusion
Participating in this summit was both sobering and hopeful. AI is moving fast – sometimes faster than our moral reflection – but there’s a growing movement of people who believe faith has a vital role to play in guiding how we use it.
Read more:
- The Church’s Esther Moment in the Era of AI
- A Christian Educator’s Perspective on AI in the Classroom
- What Does AI Mean for the Church and Society?
- Loving My Neighbor in a Technological World
- AI Friends and Christian Virtue: Why AI Shouldn’t Replace Human Community
At BioLogos, we’re committed to exploring questions of faith and science for human flourishing, including thoughtful engagement with AI. In the coming months, we’ll be sharing more about how people of faith can lead in shaping the ethical and wise use of technology.
Curious About Faith and AI?
Join our email list to receive up-to-date insights and expert reflections on how we can faithfully engage with artificial intelligence.