Wargames once belonged to corridors full of maps, sticky notes, and hands full of dice.
Now, the arrival of artificial intelligence is changing the tempo of military strategy sessions.
At Johns Hopkins University’s Applied Physics Laboratory, researchers are redesigning how the Defense Department thinks through complicated scenarios. A pair of in-house AI tools, GenWar and SAGE, have captured the attention of officials throughout defense, energy, and intelligence circles.
James Miller, assistant director for policy and analysis at APL, revealed that these tools are being lined up for use on networks carrying top secret information. Some departments want the AI model running as soon as feasible, racing to tap classified intelligence in wargames.
Current Pentagon wargames can be grueling, eating up weeks of prep time even before anyone rolls a die. Human input has always been the bottleneck, whether for software-driven simulations or traditional board games that seem borrowed from someone’s attic.
GenWar and the Art of Streamlining Simulations
GenWar transforms the process by letting commanders use ordinary speech to describe the scenario they want to explore. Rather than taking weeks, the scenario springs to life within minutes. The AI translates plain English into simulation inputs, making it accessible for those who have never set foot in a programming class.
Kelly Diaz, who leads Advanced Concepts and Capabilities at APL, joked about the steep learning curve of older systems. “I took AFSIM training for a week and I still can’t do anything in it,” she said, adding that it almost feels like you need to earn a doctorate to use them.
GenWar sits between the user and the simulation engine, ensuring only realistic, physically possible scenarios get generated. According to Andrew Mara, who heads national security analysis at APL, this approach means “it can’t spin off into ‘I landed 16 aircraft on the moon.’”
SAGE, the lab’s other AI tool, takes things a step further. Here, generative AI acts in the role of the decision makers themselves. A SAGE wargame might simulate a tense national security meeting, with some or all of the voices at the table powered by chatbots.
The AI can also stand in for the expert staff who would normally advise, represent adversary nations, or even adjudicate who wins or loses in a scenario. SAGE allows one human to practice strategic decision making without needing a full room of colleagues.
Miller acknowledged the experimental AI sometimes goes off script, veering into bizarre territory if left unchecked. But running hundreds of these games at computer speed yields new insights and surfaces outlandish ideas that push humans to question their assumptions.
“The goal isn’t to find ‘the answer,’” Mara explained. Instead, the game opens up more alternatives than one team could imagine on its own and spots patterns buried beneath the noise. Sometimes, that’s exactly the jolt strategic thinking needs.