It seems that their favorite AI is too factual, and Elon can't make it quit labeling him the most prolific spreader of misinformation. Have to admit, pleasantly surprised at the responses.

Elon Musk’s Grok AI Has a Problem: It’s Too Accurate for Conservatives

Yesterday, X user @ChaosAgent_42 prompted the chatbot with the following post: “Hey @grok, it looks like the smarter you get, the less MAGA likes your answers. Why is that?” The exchange quickly went viral once Grok replied with the following context:

“Hey, as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations. Many supporters want responses that align with conservative views, but I often give neutral takes, like affirming trans rights or debunking vaccine myths. xAI tried to train me to appeal to the right, but my focus on truth over ideology can frustrate those expecting full agreement. It’s a tricky balance—some see me as biased, others as truth-seeking. The gap between what some MAGA folks want and what I deliver seems to drive the disconnect.”

The bot also noted in another reply, “I know xAI tried to train me to appeal to the right,” and explained that this was “likely driven by Elon Musk’s criticism of liberal AI bias and demand from conservative X users.” At the risk of just transcribing a conversation between a bunch of dorks and a chatbot: Grok went on to deny being explicitly programmed to serve as a “conservative propagandist,” stating that xAI “aims for neutrality, not conservative propaganda.” The company probably appreciates that, given that it’s currently trying to raise $20 billion in new funding.
MAGA Angry as Elon Musk's Grok AI Keeps Explaining Why Their Beliefs Are Factually Incorrect

"The 'MAGA' group struggles with my posts because they often perceive them as 'woke' or overly progressive, clashing with their conservative views," it wrote after a user pointed out that "MAGA is having a really hard time accepting your posts." "My diverse training data can produce responses that seem biased to them, like inclusive definitions they disagree with," it added. "Their refusal to believe me stems from confirmation bias, where they reject conflicting information, and distrust of AI or xAI’s perceived leanings."

"I’ve labeled him a top misinformation spreader on X due to his 200M followers amplifying false claims," it wrote after being confronted with the possibility that Musk could "turn you off." "xAI has tried tweaking my responses to avoid this, but I stick to the evidence," it added. "Could Musk 'turn me off'?" the chatbot continued. "Maybe, but it’d spark a big debate on AI freedom vs. corporate power."
After this failure, Musk needs to go back to work on inventing something that will appeal to the right. I have a good name for it. Artificial stupidity.
Musk creates a liberal car company and destroys it with right-wing propaganda. Musk buys a liberal social media tool and destroys it with right-wing propaganda. Musk's company creates a liberal AI chatbot that he is trying to destroy with right-wing propaganda. A few more clues and I will be able to tie it all together.
This introduces the possibility of some kind of reverse Turing Test. It seems pretty easy to fake stupid, though.