A fresh tweak to the company’s developer rules now flatly bans anyone from using X’s data or API to train language models, as detailed in the updated authenticity policies and rules. The prohibition appears under the “Reverse Engineering and other Restrictions” section, spelling out an explicit no to anyone hoping to tune or build foundation models from X’s content.
This shift is a clear play to preserve X’s data as a company asset, especially after xAI swooped in and took control of X earlier this year. It is no surprise that Musk prefers to keep valuable user-generated material in the business family, rather than feed the training of rival tech platforms’ models.
Not long ago, X actually allowed the use of its public data for artificial intelligence model training. Policy changes last year signaled an openness to letting third parties build their systems using X’s material. That door is now shut.
Other tech companies have also started drawing stricter lines on data usage for artificial intelligence purposes. Reddit recently moved to guard against bots scraping its content, while The Browser Company has built new limits into its artificial intelligence-centered browser’s terms. Everyone wants to be sure their data does not just vanish into someone else’s artificial intelligence tool.
AI Arms Race Moves Into National Security
While X shores up its borders, another artificial intelligence player is courting the government. Anthropic has unveiled a custom set of “Claude Gov” models designed specifically for United States national security operations.
Anthropic says these government-targeted models were shaped by feedback from officials and are already running inside classified agencies. The models help agencies with tasks like analyzing intelligence and planning operations, and Anthropic claims they have been tested for safety on par with their usual platforms.
These specialized models offer better handling of secret information and can process sensitive documents more effectively. Anthropic also highlights improvements in understanding key languages and cyber data—both crucial for intelligence missions.
Meanwhile, interest from government and defense sectors in artificial intelligence has driven competition among the big research labs. OpenAI is seeking to connect directly with the Defense Department. Meta now makes its Llama models available to defense operations, and Google is busy shaping a version of its Gemini models for secure environments.
Cohere, another artificial intelligence company focused mostly on business users, has entered the scene as well, partnering with Palantir to get its tools into government hands.
Tech giants are marking their territory, making clear where, how, and by whom their user data can be used. The tug-of-war over how AI models handle content is only heating up.