Cloudflare has just announced fresh updates for Cloudflare One, aiming to give businesses more control as they dive deeper into AI-powered tools and apps.
Central to the update is visibility, delivered by a tool called the Shadow AI Report, which lets security teams analyze traffic to spot unsanctioned AI usage, patterns and risks fast.
Across sectors as different as finance and design, workers have turned to AI to speed up projects and tackle routine tasks, sometimes without ever talking to their company’s security experts.
This wide adoption, while exciting, makes it much easier for sensitive information to slip out the door by accident or for engineers to spin up new AI-driven services that might not align with strict security requirements.
Stronger Barriers Against Shadow AI
Enter Cloudflare’s new layer called AI Security Posture Management, built specifically to guard against the hazards that come from unapproved or unchecked AI experiments inside organizations.
Security teams can now automatically flag, review and control which AI tools get used and how, cutting off risky behavior before it becomes a crisis.
Cloudflare Gateway handles these security policies in real time, locking down the flow of information at the network edge so remote and hybrid workers are just as safe as their office-based colleagues.
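To make the idea concrete, an edge-side policy check of this kind can be pictured roughly as in the sketch below. This is an illustrative assumption only, written in a Cloudflare Workers style; the AI_APP_POLICY table, category decisions and default-deny behavior are hypothetical and do not reflect Cloudflare Gateway's actual configuration or API.

```typescript
// Illustrative sketch: a Worker-style handler that classifies outbound
// requests to AI services and applies an allow/isolate/block decision.
// The AI_APP_POLICY table and its entries are hypothetical examples.

type Decision = "allow" | "isolate" | "block";

// Hypothetical policy table mapping AI service hostnames to a decision.
const AI_APP_POLICY: Record<string, Decision> = {
  "approved-ai.example.com": "allow",     // sanctioned AI tool
  "newer-ai.example.com": "isolate",      // allowed, but kept at arm's length
  "unvetted-ai.example.com": "block",     // unapproved "shadow AI" service
};

export default {
  async fetch(request: Request): Promise<Response> {
    const host = new URL(request.url).hostname;
    // Default-deny anything not explicitly reviewed by the security team.
    const decision = AI_APP_POLICY[host] ?? "block";

    if (decision === "block") {
      return new Response("Blocked by AI usage policy", { status: 403 });
    }
    // "allow" and "isolate" pass through (isolation handling omitted here).
    return fetch(request);
  },
};
```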
Instead of totally banning AI at work, Cloudflare is letting teams set rules at the prompt level with a feature called AI Prompt Protection.
This means if someone tries to feed sensitive data, such as proprietary code, into an untrusted AI service, they get a warning or are blocked from continuing.
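Conceptually, a prompt-level check like this boils down to inspecting what a user is about to send and deciding whether to allow, warn or block. The sketch below is an assumption for illustration; the detection patterns, the checkPrompt function and the trusted/untrusted distinction are hypothetical and are not Cloudflare's AI Prompt Protection implementation.

```typescript
// Conceptual sketch of a prompt-level sensitive-data check.
// Patterns, names and thresholds are illustrative assumptions.

type PromptVerdict = { action: "allow" | "warn" | "block"; reason?: string };

// Hypothetical detectors for data that should not leave the company:
// API keys, private-key blocks, and large chunks of pasted source code.
const SENSITIVE_PATTERNS: Array<{ name: string; pattern: RegExp }> = [
  { name: "API key", pattern: /\b(sk|api|key)[-_][A-Za-z0-9]{16,}\b/i },
  { name: "private key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
  { name: "source code", pattern: /\b(class|def|function|#include)\b[\s\S]{200,}/ },
];

function checkPrompt(prompt: string, destinationTrusted: boolean): PromptVerdict {
  for (const { name, pattern } of SENSITIVE_PATTERNS) {
    if (pattern.test(prompt)) {
      // Sensitive content headed to an untrusted AI service is blocked;
      // a trusted destination only triggers a warning to the user.
      return destinationTrusted
        ? { action: "warn", reason: `${name} detected in prompt` }
        : { action: "block", reason: `${name} detected in prompt` };
    }
  }
  return { action: "allow" };
}

// Example: pasting a private key into an unapproved AI chat gets blocked.
console.log(checkPrompt("-----BEGIN RSA PRIVATE KEY-----\nMIIE...", false));
```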
Matthew Prince, Cloudflare’s CEO and co-founder, explained, “Cloudflare is the best place to help any business roll out AI securely. The world’s most innovative companies want to pull the AI lever to move, build and scale fast, without sacrificing security. We are in a unique position to help power that innovation, and help bring AI to all businesses safely.”
New controls also shine a light on how AI tools interact outside the company’s walls.
With Zero Trust MCP (Model Context Protocol) Server Control, all requests from AI models or apps to external tools are gathered on a single dashboard, making oversight much simpler and letting IT teams set limits at both the user and server level.
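In practice, "limits at both the user and server level" amounts to two gates: is the MCP server approved at all, and is this particular user allowed to call this particular tool on it. The sketch below illustrates that idea under stated assumptions; the McpPolicy shape, server names and evaluate function are hypothetical and are not Cloudflare's MCP Server Control API.

```typescript
// Conceptual sketch of user- and server-level limits on MCP tool calls.
// All names and the policy structure here are illustrative assumptions.

interface McpRequest {
  user: string;    // identity of the employee or AI agent making the call
  server: string;  // MCP server the model wants to reach
  tool: string;    // specific tool exposed by that server
}

interface McpPolicy {
  allowedServers: Set<string>;               // org-wide server allowlist
  userToolGrants: Map<string, Set<string>>;  // per-user "server:tool" grants
}

function evaluate(req: McpRequest, policy: McpPolicy): "allow" | "deny" {
  // Server-level control: the MCP server itself must be approved.
  if (!policy.allowedServers.has(req.server)) return "deny";

  // User-level control: this user must hold a grant for this server:tool pair.
  const grants = policy.userToolGrants.get(req.user) ?? new Set<string>();
  return grants.has(`${req.server}:${req.tool}`) ? "allow" : "deny";
}

// Example: an engineer may search an internal docs MCP server,
// but a call to an unapproved payments server is denied.
const policy: McpPolicy = {
  allowedServers: new Set(["docs-mcp"]),
  userToolGrants: new Map([["alice", new Set(["docs-mcp:search"])]]),
};
console.log(evaluate({ user: "alice", server: "docs-mcp", tool: "search" }, policy));      // "allow"
console.log(evaluate({ user: "alice", server: "payments-mcp", tool: "refund" }, policy));  // "deny"
```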
Ultimately, these upgrades aim to protect data privacy and security while clearing a safer path for teams who want to get creative with generative AI.