Friday, 15 May 2026
BTC ... DFM ... Brent ...
AI & Tech

Google Issues Cyber Warning as AI-on-AI Defence Becomes a Real Market

A public warning from Google that attackers are already weaponising AI lands alongside Anthropic's Claude Mythos preview, marking a new phase in cybersecurity strategy.

The shape of the AI cybersecurity market has continued to change rapidly. Google has issued a public warning that attackers are actively using AI to break into computers, citing both prompt injection and other forms of model-mediated intrusion. Anthropic, separately, has previewed a model named Claude Mythos that focuses on identifying weaknesses and security flaws within software, while OpenAI has launched GPT-5.5-Cyber for vetted defenders.

The signal across these announcements is consistent. AI is no longer only the subject of cybersecurity policy and compliance debate. It is now an operational tool deployed by both sides of the line, and the security architecture of large enterprises is being rebuilt with that assumption in mind.

What the attackers are doing

The attacker playbook is evolving along several tracks. Phishing has become substantially harder to detect, with AI-generated content that is fluent in local languages and tailored to specific corporate cultures. Reconnaissance has been automated, allowing attacker groups to map larger targets faster than they could before. Prompt-injection attacks against enterprise agents have moved from research labs into the wild, with documented cases of stolen credentials, exfiltrated data and unauthorised tool use.

Defenders are working with their own AI advantage. Detection systems explicitly tuned for AI-mediated behaviour have begun shipping, both from established security vendors and from a wave of well-funded startups. Incident response playbooks now routinely include sections on contained model behaviour and on the forensic evidence that AI agents generate.

Regional posture

UAE and Saudi cyber agencies have stepped up their advisory output in line with the global pattern. Both countries operate national CERTs that have begun publishing AI-specific guidance, and the largest banks and telecoms have moved AI risk into the top tier of their security committees. Bug bounty programmes have started rewarding researchers who can demonstrate plausible AI-mediated attack paths.

For corporate security functions, the practical priority list is short. Visibility into AI use, both authorised and shadow. Logging that captures agent actions in a way that can be reconstructed under audit. Drills that practice the response to a model-mediated incident, not only to a traditional intrusion. And vendor diligence that asks how each AI provider would respond to a security incident in its own infrastructure.

What it changes

The market is unlikely to settle into a stable equilibrium soon. New attack patterns will emerge from each capability release. The pace of change is uncomfortable for traditional security operating models, which rely on a steady catalogue of techniques and tactics. The vendors and corporate security teams that succeed in 2026 will probably be those that treat AI security as a continuous discipline rather than a project.

Emirates Insight
Limited Feature Spots
Get Featured. Get Seen.

Position your brand in front of founders, decision makers and professionals across the UAE.

Apply to Get Featured
Advertise on Emirates Insight

Newsletter

The Gulf in your inbox, every morning.