
Will AI herald a new Hiroshima?

As a particle physicist, I’m interested in the fundamental laws of nature and questions connected to the origin of the universe. My field grew out of the same intellectual soil that produced the field of atomic research, decisively pushed forward by the Manhattan Project. Arguably, entering the race to develop atomic weapons was necessary and justified as an effort to use nuclear dominance as a deterrent, but that effort went much further than scientists originally intended.

Less than a century has passed since humanity encountered the horrific devastation that nuclear weapons can cause when it witnessed the bombings of Hiroshima and Nagasaki. It must give us pause to contemplate whether we have learned our lesson regarding the potentially catastrophic consequences of developing new technologies in the absence of moral direction. With artificial intelligence, we are facing a similar turning point, one that is more subtle and dangerous.

The White House recently announced its AI Action Plan. This is excellent news, as AI has already become a multipurpose tool within society and may soon be tomorrow's weapon of choice.

While scientists and developers may have an academic interest in what’s possible, the ultimate applications of AI have the potential for extreme good and extreme evil. We must stop to think about the implications for our society and our souls.

The new plan for American AI dominance is timely and necessary. It’s impossible to protect against attacks that maliciously use AI without being at the forefront of AI development as a nation. The AI train is undeniably in motion, and no calls for pauses will stop it. There are promising advances that are already revolutionizing our lives and entire industries. But as a Catholic scientist, I see another dimension. The dangers to the human mind and soul that come from the misuse of AI are already a reality. As more powerful AI tools are developed, the new Hiroshima might not be a singular cataclysmic event. Rather, it will come as a subtle enemy targeting millions of souls. And it’s already here.

There is nothing intrinsically evil about chatbots or AI in general. Large language models are currently little more than giant soups of linear algebra, trained to identify patterns in text and perform a very specific task: predicting and mimicking human conversation.

The difference between predicting the next word in a conversation and having a conversation may seem subtle, but it is critical to our understanding and safe usage of these tools. When humans engage in conversation, we are doing much more than returning the most likely response. We consider, examine our memories, and engage our free will. The AI bots have no free will equivalent, no value system, and no real decision-making in generating a response. They generate a likely response to a prompt, and the result looks a lot like human conversation.

Chatbots inevitably fool most of us, given how natural these conversations seem at first glance. We often make the mistake of thinking that the AI is engaging in logical thinking. This is not the case. Chatbots don't think. Users who engage AI as if it were human encounter a dangerous combination of the anonymity of the internet and the falsification of human connection. If the consequences of the pornographic dehumanization of the person or the social isolation of excessive social media use are not yet abundantly clear to society, they are about to be.

AI has clear dangers that anyone, even in a secular society, can identify. In recent years, admirable efforts have been made to introduce safeguards against false information, harmful activities, and even the appearance of medical advice. While developers try to improve these guardrails, the most important disclaimer is missing: one that warns us of potential damage to the human soul.

Millions of people are already flocking to AI platforms in lieu of social connection. Some seek emotional support from a non-human therapist; others substitute human relationships entirely with AI companions. Cases of teenage depression and suicide have already occurred among users of these chatbots, as have instances of users fooling AI into providing instructions on how to harm themselves and others.

This backward trend of humanizing what is not human can easily dehumanize what is. No matter how good the guardrails become, the only solution is a recognition of the uniqueness and value of true human connection. Society is painfully learning what Christianity already knows: that our humanity can be mimicked but never replicated.


We often hear about Hollywood-esque apocalyptic dangers of a future general intelligence taking over humanity. There is a far greater danger facing us if we don’t take seriously the spiritual consequences of our engagement with these technologies. If we don’t explain what AI is and what it isn’t, we may one day look back and realize that we did not lose control of AI, but of ourselves. 

The Hiroshima of AI may unfold not in shockwaves and fallout to be measured in lives lost but in a silent devastation of love unspoken, community abandoned, and the slow forgetting of what it means to be human, made in the image and likeness of God.

Fernanda Psihas is a professor of physics and computer science at Franciscan University of Steubenville and a Catholic speaker on the intersection between faith, science, and technology. She conducts research in experimental particle physics, detection technologies, and AI.

