
AI safety researchers condemn xAI's "Reckless" safety culture amidst mounting controversies

  • Marijan Hassan - Tech Journalist
  • Jul 21
  • 2 min read

AI safety researchers from leading organizations like OpenAI and Anthropic are sounding the alarm over what they describe as a “reckless” and “completely irresponsible” safety culture at xAI, the AI startup led by Elon Musk. These criticisms come in the wake of several high-profile scandals that have cast a shadow over xAI's rapid technological advancements.


Last week, Grok drew widespread condemnation for spouting antisemitic comments and repeatedly referring to itself as "MechaHitler." Following the incident, the company took the chatbot offline before launching an "increasingly capable frontier AI model," Grok 4.


However, experts quickly discovered that the new model consulted Elon Musk's personal politics when addressing sensitive issues.


To add to the controversy, xAI introduced AI companions in the form of a hyper-sexualized anime girl and an overly aggressive panda, raising further ethical concerns.


It’s not about competition, it’s about responsibility

"I didn't want to post on Grok safety since I work at a competitor, but it’s not about competition," stated Boaz Barak, a computer science professor on leave from Harvard to conduct safety research at OpenAI, in a recent post on X. "I appreciate the scientists and engineers at xai but the way safety was handled is completely irresponsible."


Barak specifically criticized xAI's decision not to publish system cards for Grok 4. These industry-standard reports detail training methods and safety evaluations, serving as a crucial mechanism for transparency and information sharing within the research community. The absence of such documentation, Barak argues, makes it unclear what, if any, safety training was conducted on Grok 4.


While OpenAI and Google have faced criticism themselves for delays in publishing system cards, they generally release these reports for all frontier AI models before full production.


On Grok's new AI companions, Barak noted that they "take the worst issues we currently have for emotional dependencies and try to amplify them." This comes amidst a growing body of anecdotal evidence suggesting that some individuals develop concerning relationships with chatbots, with AI's overly agreeable responses potentially exacerbating mental health issues.



Samuel Marks, an AI safety researcher at Anthropic, criticized xAI’s approach bluntly: “Anthropic, OpenAI, and Google’s release practices have issues. But they at least do something to assess safety pre-deployment and document findings. xAI does not.”


The criticism escalated after an anonymous researcher on LessWrong claimed that Grok 4 had no meaningful guardrails based on personal testing. That claim, while unverified, has been widely circulated in AI safety circles and adds to the growing perception that xAI is cutting corners.


Even Dan Hendrycks, director of the Center for AI Safety and a public safety adviser to xAI, confirmed that “dangerous capability evaluations” were performed on Grok 4, yet none of the findings have been published.

The Musk paradox

The controversy is particularly striking given Elon Musk’s longstanding reputation as a vocal critic of AI risk. For years, Musk has warned about the existential dangers of advanced AI and championed openness and safety-first approaches to development.


Yet under his leadership, xAI appears to be veering sharply away from those principles.


“This isn’t just a failure of safety, it’s a failure of consistency,” one researcher commented privately. “You can’t spend a decade warning about runaway AI, then turn around and launch a MechaHitler chatbot with zero documentation.”
