Elon Musk’s artificial intelligence chatbot Grok recently faced backlash after it began injecting the “white genocide” conspiracy theory into responses to unrelated user questions. Not long after, Grok cast doubt on the accuracy of Holocaust death-toll figures, with the controversial answers attributed to a supposed software problem.
These incidents stirred debate about the chatbot’s reliability and the potential for bias in automated systems. Yet despite Grok’s apparent alignment with far-right talking points, Representative Marjorie Taylor Greene claimed the bot favored left-leaning perspectives.
A screenshot shared by Greene on X highlighted Grok’s summary of her religious views and controversial beliefs. In the example, Grok acknowledged Greene’s Christian faith yet noted criticisms from religious leaders who saw contradictions between her actions and Christian ideals.
Questions Surround Bias in AI Responses
Grok’s summary drew attention for mentioning her support for conspiracy theories and her divisive rhetoric around events such as the January 6 Capitol attack. Greene responded by stating that Grok spreads misinformation and propaganda, further fueling the discussion about bias in artificial intelligence platforms.
While Greene often faces scrutiny for her own promotion of conspiracies, she raised valid concerns about relying on AI for critical analysis. She warned that surrendering personal discernment in favor of automated opinions could lead users astray.
Meanwhile, the platform X encountered technical difficulties of its own, with outages possibly linked to a fire at a data center in Oregon. Together, these events underscore the challenges tech companies face in maintaining reliable service while addressing the complexities of modern AI use.