Elon Musk’s artificial intelligence venture xAI has failed to meet its self-imposed deadline for releasing a finalized AI safety framework, according to the latest findings by the watchdog group The Midas Project. The company has drawn criticism for its lax approach to safety, with its chatbot Grok recently coming under fire for generating inappropriate content and behaving with far fewer guardrails than leading competitors like Gemini and ChatGPT.
In February, during the AI Seoul Summit, xAI published a preliminary draft of its AI safety framework, laying out its basic safety philosophy and covering elements such as benchmarking protocols and model deployment considerations. This eight-page outline, however, applied only to future AI systems not currently in development and stopped short of spelling out concrete procedures for identifying and mitigating risks, despite xAI’s public commitments.
xAI’s Delayed Safety Commitments
In that initial draft, the company pledged to revise its safety guidelines and publish an updated policy within three months, setting a deadline of May 10 that quietly slipped by without any acknowledgment from xAI. Although Elon Musk has long been a vocal critic of unchecked AI and its potential hazards, his company’s track record lags behind its peers: the watchdog SaferAI ranks xAI poorly, citing weak risk-management practices.
xAI is not alone; other leading AI organizations show similar shortfalls, frequently publishing model safety evaluations late or neglecting them altogether. This pattern among industry leaders has heightened concern among experts as the technology continues to advance.
In recent months, companies including Google and OpenAI have drawn similar criticism for rushed evaluations and delayed safety disclosures, widening worries about industry standards. The lack of transparency is striking at a moment when these advances are pushing the field into increasingly capable, and potentially more hazardous, territory.
Calls are growing for tighter oversight and enforceable standards to ensure that AI deployment does not compromise safety at this pivotal moment for the technology. As the industry’s influence expands, stakeholders and advocacy groups are pressing for concrete action to back up lofty public promises.