
New threat for businesses: Major artificial intelligence agents are being spoofed

According to new research from Radware, these malicious bots disguise themselves as real artificial intelligence chatbots in agent mode, like ChatGPT, Claude, and Gemini

As artificial intelligence (AI) becomes the new normal of the 21st-century socio-economic order, the technology keeps evolving and giving rise to new products. The most powerful of these right now is the AI agent. Agents are evolving so rapidly that they often outpace the security measures intended to regulate them. But there is another side to the agent story: alongside legitimate agents, rogue and outright fake ones are proliferating.

According to new research from Radware, malicious bots are disguising themselves as legitimate AI chatbots operating in agent mode, such as ChatGPT, Claude, and Gemini. Posing as good bots, they request POST permissions (POST is the HTTP method used to send data to a server) for transactional functions such as booking hotels, buying tickets, and making payments: the very functions that legitimate agents are advertised to perform.

The problem is that this breaks a fundamental assumption long made in cybersecurity: good bots read but never write. Granting agents write access weakens security for site owners, because malicious actors can spoof legitimate agents more easily when both require the same website permissions, and with the surge in legitimate AI agent traffic, these malicious bots can fly under the radar.
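To see why spoofing is so easy, note that a bot's claimed identity travels in the HTTP User-Agent header, which any client can set to any value. The sketch below (using Python's standard library; the endpoint URL and user-agent string are illustrative, not taken from the research) builds a transactional POST request that claims to come from an AI agent:

```python
from urllib import request

# Hypothetical transactional endpoint (illustrative, not a real service).
url = "https://shop.example.com/api/checkout"

# Any client can claim to be an AI agent simply by setting the
# User-Agent header; nothing in HTTP verifies this string.
spoofed = request.Request(
    url,
    data=b'{"item": "ticket-123", "qty": 1}',
    headers={
        "User-Agent": "ChatGPT-Agent/1.0",  # claimed identity, unverified
        "Content-Type": "application/json",
    },
    method="POST",
)

print(spoofed.get_method())              # POST
print(spoofed.get_header("User-agent"))  # ChatGPT-Agent/1.0
```

The request is never sent here; the point is that the "identity" a site sees is just a string chosen by the sender, which is why header inspection alone cannot separate real agents from impostors.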

The high-risk industries are, unsurprisingly, finance, e-commerce, healthcare, and ticketing and travel, all of which rely heavily on AI agents for daily operations. There is no one-size-fits-all method of identification and verification across agents, which makes it harder for security teams to spot malicious traffic and easier for threat actors to impersonate whichever agent has the least rigorous verification standard.

Researchers now suggest applying a zero-trust policy to state-changing requests, implementing AI-resistant challenges (e.g., advanced CAPTCHAs), treating all user agents as untrustworthy by default, and running robust DNS and IP-based checks to verify that an IP address matches the bot's claimed identity.
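The DNS and IP check the researchers describe is commonly done as forward-confirmed reverse DNS: look up the PTR record for the connecting IP, check that the hostname belongs to the bot operator's published domain, then resolve that hostname forward and confirm it maps back to the same IP. A minimal sketch, assuming the operator publishes verification domains (the suffixes below are placeholders, and the resolver arguments are injectable so the check can be tested offline):

```python
import socket

def verify_bot_ip(ip, allowed_suffixes,
                  reverse=socket.gethostbyaddr,
                  forward=socket.gethostbyname):
    """Forward-confirmed reverse DNS check for a bot's source IP.

    allowed_suffixes: domain suffixes the bot operator publishes for its
    crawlers (placeholder values here; consult the operator's docs).
    """
    try:
        hostname = reverse(ip)[0]          # PTR lookup: IP -> hostname
    except OSError:
        return False
    if not hostname.endswith(tuple(allowed_suffixes)):
        return False                       # hostname not on an official domain
    try:
        return forward(hostname) == ip     # forward lookup must match the IP
    except OSError:
        return False

# Offline illustration with stubbed resolvers (203.0.113.0/24 is a
# documentation-only range; example-agent.com is a placeholder domain).
ok = verify_bot_ip("203.0.113.5", [".example-agent.com"],
                   reverse=lambda ip: ("crawler1.example-agent.com", [], [ip]),
                   forward=lambda host: "203.0.113.5")
print(ok)  # True: PTR and forward lookup agree on an allowed domain
```

The forward confirmation matters: an attacker can set any PTR record on IP space they control, but they cannot make the operator's real DNS zone resolve that hostname back to their IP. Some operators instead (or additionally) publish signed IP-range lists, which can be checked the same way against the connecting address.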
