Print blog article
Blog
Zero Trust Revisited in the Age of AI
Zero Trust “trust nothing, verify everything” is challenged in the AI era. Deepfakes, AI-driven phishing, and sophisticated attacks make verification harder. This blog explores how Zero Trust can evolve and how AI can actually strengthen it.
25 March 2026 | 3 minutes read
In 2010, we were introduced to Zero Trust. Zero Trust is not a technology, product, or service that you simply implement. It is a cybersecurity strategy based on the principle of trusting nothing, no user, device, network segment, or application and verifying everything. “Never trust, always verify.” This means every access request is validated. The core assumption is that both internal and external networks can potentially be compromised.
In today’s AI-driven world, generative AI makes it increasingly difficult to determine with certainty whether something or someone can be trusted. In this way, AI strikes at the very heart of Zero Trust: the principle of trusting nothing and verifying everything. What if the video you are reviewing appears authentic, but is actually a deepfake? This creates significant risks for organizations. A well-known example is the case at Arup, where an employee believed they were executing a legitimate $25 million transaction. The employee thought they were communicating with management, but it turned out to be a deepfake video.
Another example of an AI-related attack is prompt injection, where a copilot is manipulated by injecting malicious instructions into a prompt. This allows attackers to misuse the copilot to retrieve sensitive information.
There are several AI-related risks that force organizations to rethink their Zero Trust policies, including:
- Deepfakes
- AI-powered phishing
- Shadow access via AI agents
- Prompt injection and data leakage
- Real-time voice bots
These developments put significant pressure on Zero Trust. However, this does not mean Zero Trust is obsolete. On the contrary, it can evolve alongside the changes brought by the modern cyber landscape in the AI era. To do so effectively, Zero Trust must go beyond technology it needs to incorporate behavioral context. For example, if a user suddenly downloads 10 GB of data, that should stand out and trigger action.
AI agents should have their own identity profiles with least privilege access, ensuring they do not have the same permissions as human users. This also enables separate auditing of AI agents, similar to how APIs and microservices are already handled. The next step is to treat AI agents in the same structured way.
What is AI and why does it make Zero Trust more complex?
Artificial Intelligence (AI) refers to technology that enables systems to learn, reason, and perform tasks that typically require human intelligence. Generative AI, such as ChatGPT, can write text, create images, and even hold conversations. This technology enables the creation of highly convincing fake content, such as deepfake videos or phishing messages that are difficult to distinguish from real ones.
These sophisticated attacks undermine traditional Zero Trust principles, as it becomes increasingly difficult to reliably verify identity and intent. AI can simulate legitimate interactions, requiring security measures to become even smarter and more adaptive.
Checklist: Adapting Zero Trust for the AI Era
- Identify and register AI agents as separate digital entities with their own access rights
- Implement least privilege access: grant AI access only to strictly necessary data and systems
- Continuously monitor the behavior of users and AI agents, focusing on anomalies in data flow, timing, and volume
- Use multi-channel authentication and verification for sensitive actions (e.g., email plus phone confirmation)
- Filter and validate all AI system inputs to prevent prompt injection and data leakage
- Audit and log all AI agent activities separately and analyze them regularly with advanced security tools
- Train employees and security teams on emerging AI risks and how to recognize and report them
- Regularly update your Zero Trust policy with a focus on AI-related threats and technological developments
Finally, it is important to emphasize that your Zero Trust policy should never rely on a single channel. As seen in the $25 million deepfake example, an additional verification step such as email or phone confirmation could have prevented the incident.
Zero Trust remains highly relevant today. It continues to be a reliable security framework to build upon. AI requires us to extend the Zero Trust model with additional controls, context, and intelligence so it remains effective in a world where threats are faster, smarter, and more convincing.
At the same time, AI can actually strengthen Zero Trust. When used strategically, AI can make Zero Trust frameworks faster and smarter. For example, AI can help detect anomalous behavior in real time, accelerate incident response, and dynamically assess user risk. It also plays an increasingly important role in defending against social engineering attacks, such as phishing and deepfakes. By leveraging this intelligence effectively, organizations can make their Zero Trust approach not only more robust, but also future-proof.
As a security professional, I believe that in the AI era, we should not cling to traditional security thinking. Now more than ever, it is essential to stay adaptive and continuously evolve our approach in response to the changes AI brings. AI forces us to look more closely at behavior, intent, and context and that is exactly where new opportunities lie. Opportunities to see AI not only as a threat, but as a powerful tool to make Zero Trust smarter and ready for the future.
Source:
https://www.weforum.org/stories/2025/02/deepfake-ai-cybercrime-arup/
https://csrc.nist.gov/pubs/sp/800/207/final
https://www.cybersecuritydive.com/news/majority-businesses-zero-trust-gartner/713856/
https://www.checkpoint.com/cyber-hub/cyber-security/what-is-ai-cyber-security/
