(OSV News) — A group of Catholic moral theologians and ethicists said March 13 that AI giant Anthropic “was acting as a responsible and moral corporate citizen” and “not as a threat to the safety of the American supply chain” in its decision to maintain guardrails concerning use of its technology when it comes to autonomous weapons and mass surveillance of American citizens.

Fourteen experts and scholars in Catholic moral theology, philosophy and social thought filed a friend-of-the-court brief — known as an "amici curiae" brief — in support of Anthropic in its lawsuit against the U.S. Department of War. Anthropic filed suit against the Pentagon March 9 after President Donald Trump directed government agencies Feb. 27 to no longer work with the tech company amid a critical difference of opinion on acceptable uses of the technology by the War Department. The brief was submitted, the scholars wrote, "to offer the Court a perspective grounded in a longstanding moral tradition that bears directly on the issues raised by this case, while remaining attentive to the factual record and the technical realities of modern artificial intelligence."

The substantive argument in the brief was authored by four scholars: Charles Camosy, associate professor of moral theology at The Catholic University of America in Washington; Joseph Vukov, an associate professor of philosophy and the associate director of the Hank Center for the Catholic Intellectual Heritage at Loyola University Chicago; Brian J.A. Boyd, a moral theologian and scholar at various Catholic institutions; and Brian Patrick Green, a lecturer in ethics at the Graduate School of Engineering at Santa Clara University in California.

On AI and mass surveillance

Regarding the use of AI for mass surveillance of Americans, the scholars — referred to in the brief as the “Catholic Moral Theologians and Ethicists” — said they are “aligned” with Anthropic’s objection to such a use-case based on Catholic teaching on privacy and subsidiarity.

“The Catechism of the Catholic Church asserts: ‘No one is bound to reveal the truth to someone who does not have the right to know it.’ In 2023, Pope Francis likewise insisted that the world needs an international treaty to regulate AI, especially with the rise of what he called ‘a surveillance society,’” they wrote. “This understanding of privacy grows from the Church’s teaching about the dignity of the human person” — a core teaching of the Church’s social doctrine that “grounds individual human rights,” “preserves human relationships as a sacred space” and “guards communications within those relationships.”

"For the government (and especially the military) to intrude in this space, and use private communications for some other end, undermines the good of human relationships and ultimately, the dignity of persons involved in those relationships," they wrote. "It is a totalitarian government which treats humans as mere objects, and human relationships as mere sources of data — moves that are characteristic of 'the technocratic paradigm' warned against in Catholic thought."

Citing Pope Pius XI's 1931 encyclical "Quadragesimo Anno," the Catholic scholars said the Catholic principle of subsidiarity "also specifically opposes mass surveillance."

“Mass surveillance concentrates the power to monitor and judge individuals in the hands of a remote central authority. This shift of power, from the local to the central, harms human agency — including that of law enforcement and others closest to the communities where people live,” they wrote. “This shift risks disempowering individuals, who are in danger of being caught up in AI-driven kafkaesque bureaucracy which knows nothing of their concrete daily existence. It also undermines state and local governments, which are not only more likely to understand context better than a distant AI, but which must also live with the effects of these actions. Additionally, centralized surveillance can act as a steppingstone towards totalitarianism, which the Church absolutely opposes due to its threats to human dignity.”

On AI and autonomous weapons

Regarding Anthropic’s opposition to the Department of War’s wish to use its AI tools to “select and engage targets without meaningful human oversight,” the scholars stated that “use of AI-directed autonomous weapons by definition fails to meet the conditions for jus in bello required for acts of war to be morally licit in Catholic thought.”

“For any violent act to be justified under the conditions of a just war, for example, a particular judgment by a human must be made about whether the force being deployed is proportionate with the legitimate military goals to be achieved,” they stated. “A particular human judgment must likewise be made about noncombatant immunity. Human involvement is crucial because judgments of proportionality and discrimination are prudential — not mere pattern matching. Human judgment, then, is built into the conditions of a just war, eliminating the possibility that the deployment of lethal autonomous weapons could ever meet the conditions of jus in bello.”

Beyond "distinctively Catholic thought," the scholars argued that "lethal autonomous weapons problematically obscure human agency, dangerously shifting responsibility away from human decision-makers to machines. They accelerate the already rapid military decision-making processes, perhaps to the point of eliminating even the possibility of human involvement. They circumvent the kind of practical judgment and careful decision-making that should inform all human decisions, and especially those that involve matters of life and death."

The moral theologians and ethicists pointed out that, while they agree with Anthropic's conclusion regarding use of AI in autonomous weapons systems, their stance is "more strident" than the tech giant's — whose reasoning is "based on its understanding of the current limitations of the technology."

In a statement Feb. 26, Anthropic CEO Dario Amodei said that "frontier AI systems are simply not reliable enough to power fully autonomous weapons" and that the company "will not knowingly provide a product that puts America's warfighters and civilians at risk."

Clarifying their position, the scholars stated in the amici brief that they differ from Anthropic in that they are not open to the use of lethal autonomous weapons "even if shown to be perfectly reliable."
