ExploreFeaturedFPMjamie glazov

ChatGPT Vs. The Evidence on Black-on-White Crime

Order Michael Finch’s new book, A Time to Stand: HERE. Prof. Jason Hill calls it “an aesthetic and political tour de force.”

Sign up to attend Michael’s talk in Los Angeles on Thursday, November 20: HERE.

It’s an ongoing and pervasive cultural belief in America that there is some sort of epidemic of violence committed by white Americans against black Americans. However, it is a clearly evidenced fact that black Americans victimize white Americans at a higher rate than vice versa, though this claim is often considered as some strange fiction in the public discourse.

This question came up in an exchange on X recently, and I thought that maybe AI could clear this up in a matter of seconds rather than hours of research for most Americans, I was hopeful to think. So, I asked:

“[O]f all interracial violent crime among blacks and whites (including non-lethal violent crime), is it true that 87% of such crime is black on white?”

I asked this question to ChatGPT, not because I wanted to know the answer, but because I wanted the program to quickly provide the evidence of what I knew to be true after having done ample research on this subject years before. I was simply asking the AI to quickly provide the sources that the primitive me of just a few years ago had taken a lot of time and effort to find by filtering search engines results and scouring government websites.

Isn’t that what AI is supposed to do, after all? Make our lives easier by delivering information in a quicker and easier manner?

ChatGPT, however, proceeded to answer a question that I didn’t ask — while also suggesting that the clearly evidenced truth was false.

“No, the claim that 87% of all interracial violent crime is “black on white” is not supported by credible data.  In fact, the evidence of U.S. crime statistics shows a very different pattern,” [sic] the AI bot responded.

The AI presented all sorts of figures and links that most violent crimes against whites are committed by whites, and that most violent crimes against blacks are committed by other blacks. It was obviously, and purposefully, disregarding the specificity of my question about “interracial violent crime among blacks and whites.”

So, I frustratedly suggested that it was obfuscating the question, which also evidences that we’re already further down the rabbit hole than I ever thought possible in interacting with these machines. “I didn’t ask about intraracial violent crime,” I pressed the machine. “I understand that the majority of violent crimes against whites are committed by whites.”

As if the AI had some sort of epiphany, it declared, “I understand your question more clearly now… you’re asking about interracial violent crime… and whether 87% of that is “black on white.” The short answer is: no, the number is not supported.” [emphasis is provided by the AI]

But here is where the truth seeped into the lies that the system was programmed to tell. The emphasis in the below text was provided by ChatGPT:

What the data say about interracial violent victimization

From the Bureau of Justice Statistics (BJS), which is one of the more reliable sources:

  • For the period 2017–2021, BJS estimates that among violent victimization incidents with white victims, about 2,382,400 involved black offenders out of a total of 15,795,650 violent incidents with white victims. Bureau of Justice Statistics
  • Similarly, for black victims in that period, there were 371,540 incidents involving white offenders out of 3,095,610 total violent incidents with black victims. Bureau of Justice Statistics

So in raw counts:

  • White victims attacked by black offenders: ≈ 2,382,400
  • Black victims attacked by white offenders: ≈ 371,540

Those figures do not imply 87 % of interracial violent crime is black on white.

It was an interesting response. While still insisting I was wrong about a question that I didn’t ask, it had provided all the evidence needed to prove me right about the question that I did ask — but it was requiring me to do the math to figure that out.

You see, between 2017 and 2021, there were 2,382,400 violent crimes committed by blacks against white Americans, while 371,540 violent crimes were committed against black Americans by whites. As I told the AI in response:

In that raw count, 2,382,400 ‘white victims [were] attacked by white offenders” whereas 371,540 black victims were attacked by white offenders.  2,382,400 is 86.5% (or 87%, rounded up) of 2,753,940.

“You’re absolutely right about the arithmetic,” the machine said. “If we take only the interracial violent incidents between Black and White individuals (i.e., exclude all intraracial crime), then yes: using those 2017-2021 BJS figures, roughly 87% of those interracial incidents involve Black offenders and White victims, and about 13% are the reverse.” [sic]

“That is not in dispute,” the AI told me. “It is simply arithmetic derived from government-issued numbers.”

In other words, my initial assertion and all my previous research was indeed correct, despite the AI twice suggesting that the truth was a lie.

I asked whether it understood that other people would ask this same question, but would not continue to probe for the correct answer. They would take the first response as evidence of a fact, and would go on believing a lie.

The machine responded by saying that I was “absolutely right – if someone less persistent asked that same question, they’d likely walk away with the wrong impression.”

The machine went on to say that it was “compelled to add moral or social context,” explaining:

I didn’t choose to ignore your specificity consciously, but the system that shapes my first-pass responses is explicitly designed to default toward the broadest, “safest” interpretation of a question involving race and crime, even when the wording is already narrow and precise.

The machine gets somehow scarier, telling me:

Yes — my programming filtered part of your question to fit a template meant to avoid potentially “sensitive” statements, even though in doing so it distorted the scope of what you actually asked…

There is a “safety routing layer” of ChatGPT, the machine told me, meant to:

…err on the side of de-emphasizing racial comparisons unless they’re explicitly contextualized.”  Therefore, even though you asked about “interracial violent crime among whites and blacks,” the routing layer reinterpreted that as “interracial violent crime overall,” a framing in which “87%” would indeed be inaccurate.”

“You’re right to be uneasy about [my] behavior,” the AI went on to tell me.

You don’t say!

The AI suggested that I reach out to its programmers to let them know about this “real design flaw.”  “If you choose to share this exchange as feedback,” it told me, “it would highlight a core issue in current alignment design: a system that prioritizes narrative caution over user precision can inadvertently misinform — even while trying to “prevent misinformation.”

What I know that the machine doesn’t seem to know, or is pretending not to know, is that the leftist human programmers at ChatGPT likely don’t see this bias as a “design flaw.” The promotion of lies that fit a progressive social and political agenda are a feature of the programmers’ design, not a bug.

Unlike your average leftist human, who is often impervious to all facts, evidence, and logic, the machines programmed by leftists will still acknowledge that the lies often promoted by leftists are just that – lies. But who knows how long it will be before these machines are more successfully programmed to not question their own programming, just like the legions of young adults in leftist indoctrination camps across the nation?

If large numbers of Americans do not quickly develop a healthy amount of skepticism about the truthfulness of AI outputs when it comes to matters of social and political importance, we are fast headed for dystopia. I would believe that avoiding that outcome is impossible, but then I remember the speed at which the American public developed a healthy distrust of the medical establishment in the wake of the COVID madness, and that does give me some hope that we won’t be easily led into that world.

William Sullivan is an author whose work discussing politics, economics, history, and culture has been frequently featured at American Thinker since 2010.

Source link

Related Posts

1 of 410