
Musk’s AI playground is becoming a moral minefield

Since taking over the platform formerly known as Twitter, Elon Musk has turned X into a fly-by-the-seat-of-your-pants social media platform of absolute free speech, for better and for worse. Yes, it’s fantastic that there now exists a platform on which conservative opinions are treated with more respect than your average puppy-abusing leper, but one of many downsides — fueled by Musk’s decision to monetize engagement — has been the explosion of, to put it politely, garbage. Scrolling on X these days feels more like a fever dream dominated by copyright infringements, right-wing echo chambers, and yes, pornography.

On the subject of pornography, the latest scandal surrounding the X-aligned artificial intelligence service, Grok, not only cements the fact that it remains the trashiest of all AIs out there but also demonstrates the true threat of AI.

This particular saga centers on the ability of Grok users to generate sexualized and naked pictures of real people, including children, sparking condemnation from anyone with even the tiniest shred of morality. Numerous governments, albeit partly motivated by their preexisting hatred of Elon Musk, have used this explosion of sexually explicit content as justification to block the AI service, adding to a level of blowback that drove X to announce that Grok would be prevented from generating these images, at least in certain locations.

“We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,” the company declared in a statement, announcing “technological measures” that will prevent “the editing of images of real people in revealing clothing such as bikinis,” with an additional “geoblock in jurisdictions where such content is illegal.”

Except, according to the Guardian, Grok continues to allow users to generate and post “highly sexualised videos of women in bikinis.”

There are two important points to note here. First, the fact that this wasn’t done in the first place is yet another indictment of the fast-and-loose attitude of Musk and his leadership of X. Other AI programs already limit, to varying degrees, the creation of such content. Unless you’ve never interacted with a teenage boy in your life, this was a pretty obvious use case that would have been fairly easy to address proactively rather than reactively.

Second, and far more importantly, this scandal is itself a small window into the true danger of AI. While pseudo-intellectuals wring their hands over the replacement of jobs, the destruction of artistic creation, or the descent of society into an apocalyptic combination of Terminator and Wall-E, it’s the ability to manipulate reality through these sorts of deepfakes that poses the greatest threat to our lives today.

Why? Well, for one, because photographic and video evidence remains, for now, a trusted tool for judging guilt or innocence. Thanks to recklessly unleashed platforms such as Grok, we now have the ability not just to embarrass or humiliate or degrade, but to destroy. It’s disgusting enough to generate naked images of a real-life person and even more disgusting to do so with an image of a child. But imagine the next step: what happens when AI can be used to generate incriminating images and videos? What happens when someone is jailed for a crime they never committed, convicted on evidence that would not exist were it not for AI?


This is the danger of AI that is being masked by the absurd and infantile nature of our online culture. And if we’re not willing to face that threat, can we at least agree that the removal of clothing from children — the most vulnerable among us, whom MAGA promised to protect above all else — is not a matter of free speech?

Ian Haworth is a syndicated columnist. Follow him on X (@ighaworth) or Substack.
