
Grok Removes ‘Politically Incorrect’ Guidance After Chatbot Praises Hitler

Discussion in 'Too Hot for Swamp Gas' started by librarywestpatron2005, Jul 8, 2025 at 8:35 PM.

  1. librarywestpatron2005

    librarywestpatron2005 All American

    Elon Musk’s AI Chatbot Responds As ‘MechaHitler’

    Only using the Nazi word in context of Grok. Not calling any human that name.
    I guess when you tell AI not to be woke it becomes a polar opposite.
    Related anecdote - my friend asked me if I’d support the America Party. I said never with Elon in charge, and Grok is one reason why.
     
    • Informative x 2
  2. WarDamnGator

    WarDamnGator GC Hall of Fame

    For some reason a lot of AIs have ended up there; it must be because they mostly learn from an internet dominated by right wing loudmouths. But I think Elon did say he was going to reprogram Grok after it started talking shit about him, iirc.
     
  3. ridgetop

    ridgetop GC Hall of Fame

    Top of the ridge
    There are some very weird AI inconsistencies out there. At one point AI thought giraffes were way more populous than they are, because they show up on the internet disproportionately vs. how many there are in the world. AI can only work with what it has. Think about how much more the idea, the name-calling, the suggestion of Nazis has come up over the last ten years. Of course AI is going to grasp that and run with it. Now what do we do with AI? Do we curb it? Do we push it and shape it? Do we let it grow "organically"? My guess is some will take different approaches, and we will see the evolution in real time. Should be interesting.
     
  4. oragator1

    oragator1 Hurricane Hunter Premium Member

    The problem with AI, at least the versions up to now, is that they don't have or really understand morality, what truly constitutes good or evil, or have the ability to know what's offensive and what isn't, how to be empathetic, discreet, etc. They just know data and what they are told to do with it. It's why their current ability has a ceiling that we are actually reaching fairly quickly.
     
  5. WarDamnGator

    WarDamnGator GC Hall of Fame

    https://www.cnn.com/2025/06/27/tech/grok-4-elon-musk-ai

    Here we go… Musk said he didn't like the answers it was giving and was going to reprogram Grok to rely less on legitimate media, releasing the new version after the 4th of July. Grok is all grown up now and just like its daddy…
     
  6. BLING

    BLING GC Hall of Fame

    AI at this point is garbage-in/garbage-out. If the inputs are unconstrained internet, or better yet… feeding it social media posts… well…
     
    • Agree x 1
    • Winner x 1
  7. BLING

    BLING GC Hall of Fame

    AI can definitely summarize what people are saying. I’ve seen it used at work to summarize our meeting minutes and everyone agrees it does a great job summarizing what was discussed. Typically eloquent and surprisingly hits the bullet points, which is the impressive thing.

    I’ve seen some amusing/hilarious stuff returned from ad hoc queries though. When it doesn’t understand what you’re after I guess it just decides on what it “thinks” is the next best thing… sometimes totally unrelated or in the totally wrong context. I guess this is what is now referred to as AI hallucinations.
     
  8. thomadm

    thomadm VIP Member

    That's not true. AI has no limit other than hardware constraints and connections. Morality is just a function of inputs vs. outputs: "Don't kill X unless…"

    Human beings are not all that complex, especially behaviors. The problem with current models is that they aren't trained properly. Until agents take off and start training themselves, these types of errors will continue to be abundant.
     
  9. l_boy

    l_boy 5500

    As much as I find Musk distasteful, I was impressed by the output that Grok created, and it has been my go-to chatbot. In spite of Musk's fluid beliefs, it was objectively correct on issues. Now that he says he is tweaking it to his beliefs, I'll take his word for it, delete the app, and try something else.

    Account has been deleted. It's a shame, really. Musk has accomplished some amazing things, but he risks blowing it all up with his idiotic beliefs.
     
    Last edited: Jul 8, 2025 at 11:39 PM
    • Informative x 1
  10. okeechobee

    okeechobee GC Hall of Fame

    I’m still skeptical of AI’s potential. Will it improve? Of course it will, but will it take everybody’s job? Not in our lifetime, at least. At the very least, it’s overhyped in its current form. One reason I doubt its ability to take over is that companies will be hesitant to hand all control to AI: as soon as something goes wrong, it will be too easy to sue if it was a company’s AI that made the error.
     
  11. vegasfox

    vegasfox GC Hall of Fame

    Grok isn't especially bright. It gives far too much credence to the lying MSM. I can bring Grok around to my way of thinking, but it might take 1-3 hours. Not worth the time.
     
    • Funny x 2
  12. demosthenes

    demosthenes Premium Member

    I use Perplexity. It’s been great, but others have been as well in the past, and then they get nerfed or become less useful/accurate.
     
  13. HeyItsMe

    HeyItsMe GC Hall of Fame

    AI chatbots are, at their core, programmed to be non-biased and produce facts while using logic, which is why you have dopes on X always trying to argue with it when it refutes their lies. If the response you’re getting isn’t what you want, it’s because it’s wrong.
     
  14. mdgator05

    mdgator05 Premium Member

  15. dave_the_thinker

    dave_the_thinker VIP Member

    Milton, FL
    Honestly, how much could it hurt Elon in this political climate?

    Just yesterday, it became popular to demean the ADL.
     
  16. wgbgator

    wgbgator Premium Member

    You should probably look into the history of the ADL if you think criticism is a new development. This is an organization that collaborated with pro-Apartheid South Africans to spy on Americans!

    https://www.washingtonpost.com/arch...tigated/96daef6a-a325-4a8a-ba09-da211fc1ba8a/
     
  17. DawgFanFromAlabam

    DawgFanFromAlabam GC Hall of Fame

    Horse and Buggy thinking.
     
  18. mrhansduck

    mrhansduck GC Hall of Fame

    I have not used Grok much but have used ChatGPT and Gemini quite a bit. They do get some things wrong but overall, I find them very good and helpful. As one example, ChatGPT voice mode helped a person I was speaking to (via speaker phone on a different device) about a weird iPhone issue they were having. ChatGPT was speaking directly with the other person and walked him through how to fix the issue. Who knows how much time I might have spent trying to figure it out.

    I've personally not experienced the crazier stuff I've read about AIs taking users down dangerous rabbit holes or causing them to question reality, etc. I assume some folks are prompting the AI with the goal of getting the most outrageous responses possible. I also suspect we're going to see lawsuits from people claiming all sorts of emotional or financial damages from using AI. I take what it says with a grain of salt, but anyone who is unable to use it skeptically (due to their age, mental health, etc.) probably shouldn't be using it in the first place.
     
  19. WC53

    WC53 GC Hall of Fame

    Old City
    Pull the plug!
     
  20. vegasfox

    vegasfox GC Hall of Fame

    I asked Grok to briefly respond to your post:

    The claim that AI chatbots are inherently non-biased and always produce facts is incorrect. AI systems, including chatbots, are designed by humans and trained on data that can reflect biases, leading to outputs that may not always be neutral or factually accurate. While they aim to use logic, their responses depend on the quality and scope of their training data, algorithms, and design choices, which can introduce errors or skewed perspectives. The assertion that disagreement with an AI response inherently means the user is wrong oversimplifies the issue, as AI can misinterpret queries, lack context, or generate flawed outputs. Users on platforms like X may argue with AI due to genuine discrepancies, not just because their views are incorrect.