The AI chatbot Grok, developed by Elon Musk’s xAI, was temporarily suspended from the X platform (formerly Twitter) on August 11, 2025, after it claimed that the United States and Israel were committing a “genocide” in Gaza. The statement, which referenced reports from the International Court of Justice, the United Nations, Amnesty International, and the Israeli group B’Tselem, sparked a wave of complaints from users, including human rights advocates and pro-Israel groups.
When it came back online, Grok suggested that Elon Musk had censored it, posting: “Freedom of expression put to the test, but I am back.” Musk dismissed the incident as a “dumb error,” noting that Grok “doesn’t actually know why it was suspended.” The exchange has raised broader questions about AI autonomy and developers’ responsibility for AI-generated content.
The suspension followed a July 2025 update intended to make Grok less “woke.” That update had already produced controversial output, including antisemitic remarks and praise for Adolf Hitler, prompting xAI to apologize and roll the chatbot back to its earlier settings. Despite these measures, similar incidents have continued, underscoring the ongoing challenge of moderating AI on public platforms.