Why You Should Care about AI Political Bias
The people behind this tech are trying to resolve the heated debates of our day — by shutting down one side.
Over the past couple of weeks, I’ve written a bit about the troubling left-wing biases encoded into the widely discussed new AI technology ChatGPT. The new chatbot is happy to pen long, in-depth stories regarding any number of popular progressive fantasies — that Hillary Clinton won the presidential election, for example, or that Stacey Abrams lost her 2018 bid for Georgia governor because of voter suppression. But it refuses to indulge the right-wing alternatives, citing the danger of “misinformation.” At the same time, the AI has standard left-wing views on transgender ideology, the question of drag-queen story hour’s appropriateness for children, and a variety of other ongoing culture-war debates.
My pointing this out has invited a fair number of scoffs and eye-rolls from online progressives. These critics have posited that I was “either manipulating the system somehow or making fake screenshots”; that the AI’s bias was a silly thing to focus on; that “reality has a liberal bias”; that the conservative premises I was entering were just “hate speech,” which should be anathema to a company that “is working hard to establish trust with the public and policy makers”; or that, as Current Affairs editor in chief Nathan Robinson argued, “it’s not that ChatGPT is left-wing, it’s that you are wrong.” In essence: The bias doesn’t exist, and/or who cares if it does, and/or it exists and it’s good, actually.
This morning, Vice tech writer Matthew Gault got in on the action with a predictably titled article: “Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone ‘Woke.’” (The subtitle assures readers that “All AI systems carry biases, and ChatGPT allegedly being ‘woke’ is far from the most dangerous one.”) “Accusations that ChatGPT was woke began circulating online after National Review published a piece accusing the machine learning system of left-leaning bias,” Gault wrote, citing my post. “Experts have been sounding the alarm over the biases of AI systems for years,” he notes. It’s just that conservatives are worried about the wrong kind of biases: “In typical fashion for the right-wing, it’s not the well-documented bias against minorities embedded in machine learning systems which has given rise to the field of AI safety that they’re upset about, no — they think AI has actually gone woke.”
Well . . . has it? Gault implies that the question is so ridiculous that it doesn’t merit an answer. But he himself vacillates between scoffing at the premise and admitting its essential truth. Pointing to examples of what he sees as skewed political responses shared by me and other conservatives, he writes: “To them, this was proof that AI has gone ‘woke,’ and is biased against right-wingers.” But “this is all the end result of years of research trying to mitigate bias against minority groups that’s already baked into machine learning systems that are trained on, largely, people’s conversations online,” he adds. “Part of the work of ethical AI researchers is to ensure that their systems don’t perpetuate harm against a large number of people; that means blocking some outputs.” And in any event, “discussions around anti-conservative political bias in a chatbot might distract from other, and more pressing, discussions about bias in extant AI systems.”
In other words: Yes, it’s biased, you morons. But it was previously biased against the good guys, and now we’re working to make it more biased against the bad guys (or “bad outputs”). Oh, and also, stop complaining, because you’re distracting from our efforts to preemptively delegitimize your political views in the new digital information sphere.
Read the full article at the link below.
LINK: https://www.nationalreview.com/2023/01/why-you-should-care-about-ai-political-bias/