While the assembly’s attention was drawn to ongoing crises in places like Palestine and Sudan, a determined group of advocates, including Nobel laureate Maria Ressa, made sure that the dangers of AI remained on the agenda. Ressa stood before representatives and urged countries to draw what she called “AI red lines,” arguing that the world must move quickly to eliminate what she described as “universally unacceptable risks” posed by unchecked artificial intelligence.
Later, members of the Security Council debated whether it was finally time to set concrete global standards for AI, especially as new autonomous weapons and nuclear capabilities emerge. The conversation stretched on for over three hours as diplomats returned, in different words, to the same message: AI is no longer science fiction. It is woven into daily life, and the absence of global rules leaves the door open for chaos.
Belarus, in a noteworthy intervention, sounded the alarm about a growing gap between nations able to harness the potential of AI and those at risk of being left behind. “There is a new curtain being created, not ideological this time, but technological,” argued the Belarusian representative. The country warned that such division risks pushing parts of the world into “an era of neocolonialism.”
Global Governance and National Efforts Collide
The UN also charted a roadmap for global debate with the launch of new artificial intelligence governance mechanisms within the organization. Here, UN Secretary-General António Guterres signaled a new level of inclusivity, saying, “For the first time, every country will have a seat at the table of AI.” He shared that the UN would soon create an International Independent Scientific Panel on AI, and that a global fund for AI capacity development was in the works.
Around the world, nations are competing to capture AI’s economic benefits. Countries like the United Arab Emirates, Bhutan, and China touted domestic successes in AI innovation. Other voices, including policymakers, academics, and industry leaders from places such as Nigeria, pressed for urgent action to tackle inequality and ensure the technology benefits everyone.
Against this international backdrop, the state of California is pushing forward on its own terms. Last year, an ambitious AI safety bill was vetoed for being too restrictive, but lawmakers responded with a revised blueprint known as SB 53. Governor Gavin Newsom, speaking in New York, offered his support, remarking, “We worked with industry, but we didn’t submit to industry.”
SB 53 stands out because it requires top AI companies to publish safety plans and report any safety breaches, while protecting whistleblowers inside the industry. Sacha Haworth, from the Tech Oversight Project, called these protections “tremendous” and emphasized that compromise is part of progress. “You want to push the envelope as far as you can, but you can’t give up on incrementalism,” she said.
Globally, there remains uncertainty about how much power international gatherings really wield when major players in Silicon Valley continue to shape the future of AI largely outside of UN oversight. The world, it seems, is still searching for the balance between innovation and the safeguards needed to make sure AI works for everyone.