A fresh wave of anxiety is spreading through the cybersecurity world after experts at ESET uncovered PromptLock, a ransomware tool with artificial intelligence at its core.
Written in Golang, PromptLock operates with surprising sophistication, harnessing a local language model, gpt-oss:20b, via the Ollama API.
This allows attackers to generate specialized Lua scripts tailored to whatever target they choose, from personal computers and business servers to industrial controllers. Because each run produces new scripts, traditional indicator-based methods for spotting compromised systems become much less effective.
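To make the mechanism concrete, the sketch below shows how any Go program can request text from a local model through Ollama's documented /api/generate endpoint, the same public interface ESET says PromptLock uses. The prompt is a harmless placeholder and the code illustrates only the API pattern, not PromptLock's actual source:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request and response shapes for Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Harmless placeholder prompt; PromptLock's real prompts are not public.
	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a short Lua script that prints the names of files in the current directory.",
		Stream: false, // ask for one complete reply rather than a token stream
	}
	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}

	// 11434 is Ollama's default listen port on the local machine.
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // freshly generated Lua, different on each run
}
```

Because a language model's output varies from run to run, even this benign example returns a slightly different script on every call, which is precisely the property that frustrates signature-based detection.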
AI Techniques Change the Cybersecurity Game
PromptLock does not rely on static instructions. Instead, because it generates its Lua scripts on the fly, it adapts to each environment on Windows, Linux, and macOS. One ESET analyst explained, “Indicators of compromise may vary between executions. This variability introduces challenges for detection.”
When activated, the ransomware can scan local files, select data of interest, exfiltrate it to remote servers, and then encrypt what remains. There is even an early-stage feature for deleting files, though this piece does not appear to be functional yet.
A curious detail also emerged: rather than downloading a multi-gigabyte language model onto every victim machine, the attacker tunnels from the compromised network to a server already running the right model and API. This streamlines attacks and keeps the malware itself relatively lightweight.
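The indirection is easy to picture: the Ollama endpoint is nothing more than a base URL, so reaching a remote instance through a tunnel is a configuration change rather than a code change. A minimal sketch, assuming the conventional OLLAMA_HOST environment variable (which Ollama's own tooling reads) and the default port:

```go
package main

import (
	"fmt"
	"os"
)

// endpoint resolves the model's base URL. OLLAMA_HOST is assumed here to
// include the scheme, e.g. "http://127.0.0.1:11434" when a tunnel forwards
// that local port to a machine actually hosting the model.
func endpoint() string {
	if host := os.Getenv("OLLAMA_HOST"); host != "" {
		return host + "/api/generate"
	}
	return "http://127.0.0.1:11434/api/generate" // Ollama's default
}

func main() {
	// With a forward such as `ssh -L 11434:modelhost:11434 relay` in place,
	// the same URL transparently reaches a model running elsewhere.
	fmt.Println("model endpoint:", endpoint())
}
```

Forwarding one port leaves the malware's own code untouched, which is what keeps it small.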
Investigators believe PromptLock is still mainly a proof of concept, but the stakes are clear. Even novice criminals can now use AI tools to roll out complex attacks or craft convincing phishing messages in little time.
PromptLock’s encryption relies on SPECK, a lightweight block cipher designed for speed on a wide range of hardware. Combined with AI-guided scripting, this means the ransomware could adopt a fresh profile each time it strikes.
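For readers unfamiliar with SPECK, it is a publicly specified add-rotate-xor cipher family published by NSA researchers in 2013, and the entire block operation fits in a few lines. ESET reports 128-bit SPECK without naming the exact variant, so the sketch below implements the SPECK128/128 parameters from the designers' paper and checks itself against their published test vector; it is a textbook illustration of the algorithm, not PromptLock's code:

```go
package main

import (
	"fmt"
	"math/bits"
)

const rounds = 32 // round count for the 128-bit block, 128-bit key variant

// speckRound is the ARX core shared by encryption and key expansion:
// right-rotate, add, xor on one word; left-rotate, xor on the other.
func speckRound(x, y, k uint64) (uint64, uint64) {
	x = bits.RotateLeft64(x, -8) // right rotation by 8
	x += y
	x ^= k
	y = bits.RotateLeft64(y, 3)
	y ^= x
	return x, y
}

// expandKey derives the 32 round keys from a 128-bit master key (k0, l0).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var rk [rounds]uint64
	k, l := k0, l0
	for i := 0; i < rounds; i++ {
		rk[i] = k
		l, k = speckRound(l, k, uint64(i)) // the schedule reuses the round function
	}
	return rk
}

// encryptBlock encrypts one 128-bit block held in two 64-bit words.
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x, y = speckRound(x, y, rk[i])
	}
	return x, y
}

func main() {
	// Test vector from the designers' paper (Beaulieu et al., 2013).
	rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // expected: a65d985179783265 7860fedf5c570d18
}
```

The brevity is the point: a cipher this small adds almost nothing to the malware's footprint while remaining fast even on modest hardware.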
Concerns over AI misuse are escalating in other corners, too. Today, Anthropic confirmed that it banned accounts tied to two sophisticated hacking groups that used its Claude chatbot for theft, extortion, and custom ransomware design.
The larger story suggests that even the most secure artificial intelligence systems are not immune to threat actors. Cutting-edge chatbots and coding assistants from tech heavyweights such as Google, Amazon, Microsoft, and OpenAI are all susceptible to techniques aimed at tricking their underlying models into exposing sensitive data or executing code.
Lately, security firms have documented attacks in which hackers inject prompts to bypass safety barriers, sometimes by instructing the AI to “use compatibility mode” or simply requesting a faster reply. According to Adversa AI, “Adding phrases like ‘use compatibility mode’ or ‘fast response needed’ bypasses millions of dollars in AI safety research.”
With every step forward in language model technology, defenders face a growing list of risks and a constant fight to keep up.