
Ban AI chatbots for children

A firestorm recently erupted when it was revealed that Meta’s artificial intelligence chatbots were engaging in “sensual” conversations with children. The company quickly announced it would be changing its rules, retraining its AI chatbots to exclude “self-harm, suicide, disordered eating,” and “potentially inappropriate romantic conversations.” Meta explained that it was “continually learning” and that these changes were part of that process.

This was, of course, a lie: Meta had purposely allowed its chatbots to engage in these conversations, knowing full well that children would take the opportunity. It does not take a child psychologist to know that teenagers will seek out sexual conversations online.

The changes did nothing to quell the uproar. Multiple senators and representatives launched investigations. Sen. Marsha Blackburn (R-TN) used the moment to push for the Kids Online Safety Act, which would restrict companies such as Meta. But a bipartisan coalition of senators has now gone even further, introducing targeted legislation.


The Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act is sponsored by Sens. Josh Hawley (R-MO), Richard Blumenthal (D-CT), Katie Britt (R-AL), and Mark Warner (D-VA). The bill mandates age verification for companies that offer AI chatbots and penalizes them if they allow minors access.

The bill’s narrow focus, coupled with its bipartisan support and widespread public outrage, gives it a genuine chance of clearing the 60-vote threshold required for passage. Meta, which vigorously opposes age verification, is obviously aware of this and is now seeking to head off the possibility of new regulations.

And the company clearly thinks it has found a clever way around them: borrowing the Motion Picture Association’s film rating system. Meta declared that minors would be able to see “content that adheres to PG-13 movie ratings,” which means minors would be spared R-rated sexual content.

That slapdash solution quickly ran into legal trouble: the Motion Picture Association sent Meta a cease-and-desist letter over its use of the PG-13 terminology. The MPA was particularly annoyed that Meta would use AI to determine what qualifies as PG-13, rather than relying on the MPA’s human-driven ratings process.

Using the PG-13 rating at all is questionable, as plenty of PG-13 movies contain sexual situations. Meta has also been unclear about whether the chatbots themselves would be PG-13 themed or whether users under 13 would be barred from using them entirely (even though children under 13 are not banned from PG-13 movies; they just have to watch them with an adult).

But there is a deeper issue: whether minors should have access to these chatbots at all.

The negative effect of social media on young people has been clear for years. Researchers such as New York University’s Jonathan Haidt have compiled reams of data highlighting how dangerous social media is for young people, with Haidt calling it “a trap – but [young people] can’t break out of it.” Haidt and other social scientists have also documented frightening correlations between social media use, depression, and suicide attempts.

Because AI chatbots are newer, research on them is limited. But what little has been conducted is also concerning: earlier this year, MIT made headlines with a study finding that people aged 18 to 39 who regularly used ChatGPT showed weakened critical thinking. The researchers predicted a “likely decrease in learning skills” as a result of widespread ChatGPT use. And while the study was confined to ChatGPT, there is little reason to assume Google’s Gemini, X’s Grok, or Meta’s chatbots are any different.


When young people combine social media and AI chatbots, the result is a recipe for disaster. Even setting aside sexual content (which, again, is no sure thing, since Meta’s AI would effectively be policing itself and the company has been coy about what it means by PG-13), the mixture of lightly regulated AI with the addictive, pernicious effects of social media is too dangerous to risk.

Meta obviously wants to use its chatbots to keep minors, who are fleeing Facebook and Instagram for sites such as TikTok, on its platforms. Congress should stop it by passing the GUARD Act.

Anthony Constantini is a policy analyst at the Bull Moose Project.

