Securing AI Platforms: Humans in the Loop vs. Humans on the Loop
AI is everywhere. New AI tools are rapidly becoming embedded in enterprise platforms, from security monitoring and data analytics to productivity tools and autonomous agents. While these technologies improve efficiency and innovation, they also introduce a new category of risks related to data sovereignty, model integrity, and automated decision-making.
How do we secure AI platforms? CISOs and security leaders must rethink traditional cybersecurity controls and establish clear governance over how AI systems use data and how humans supervise automated decisions.
New security challenges
AI systems operate differently from traditional IT systems. Instead of following predefined logic, they learn patterns from data and generate outputs dynamically. This creates a new attack surface that includes risks such as:
- Prompt manipulation, where attackers influence AI behavior through carefully crafted inputs
- Data leakage, where sensitive information is exposed through AI interactions
- Model poisoning, where malicious data affects model performance or decisions
- Opaque decision-making, making it difficult to explain how AI arrived at a conclusion
These risks are amplified when organisations rely on external AI platforms or cloud-based models that process enterprise data outside the organisation’s direct control.
At the same time, governance frameworks such as the General Data Protection Regulation and the EU AI Act increasingly emphasize transparency, accountability, and oversight of AI systems.
Human oversight in AI governance
One of the central questions in AI governance is how humans remain involved in decision-making processes powered by AI. Two models are widely used to manage this relationship: Human-in-the-Loop and Human-on-the-Loop. They define how AI systems interact with human decision-makers and how organisations maintain control over automated processes.
Human-in-the-Loop: direct decision control
In a Human-in-the-Loop (HITL) model, humans actively review or approve AI-generated outputs before the system takes action. The AI tool assists decision-making, but final authority remains with a human operator.
This model is commonly used in environments where decisions carry significant risk or regulatory implications. Examples include fraud detection systems that flag suspicious transactions for human review, AI-assisted medical diagnostics that require physician validation, or security systems that recommend actions but rely on analysts for approval.
The advantage of Human-in-the-Loop systems is clear: they provide strong accountability and oversight. Humans remain responsible for decisions, reducing the risk of unintended consequences from automated processes. However, this approach also has limitations. As AI systems scale and operate in real time, requiring human approval for every action can slow down operations and reduce efficiency.
Human-on-the-Loop: supervising autonomous systems
As AI platforms become more capable, many organisations move toward a Human-on-the-Loop (HOTL) model. In this approach, AI systems operate autonomously while humans supervise the system and intervene when necessary. Instead of approving each decision, humans monitor system behavior and step in if anomalies or risks arise.
Examples include AI-driven cybersecurity systems that automatically detect and respond to threats, autonomous infrastructure management platforms that optimize cloud resources, or AI agents that perform operational workflows across multiple systems.
Human-on-the-Loop systems allow organisations to scale automation and respond faster to dynamic environments. Yet, they also introduce new challenges. If oversight mechanisms are weak, organisations risk losing visibility into how decisions are made or how data is used within AI systems. Maintaining effective supervision requires robust monitoring, transparency, and clear governance policies.
Resilience in the age of AI
Not every AI decision requires the same level of human oversight. High-risk scenarios, such as decisions affecting safety or critical infrastructure, often require a Human-in-the-Loop approach. Operational automation, on the other hand, may benefit from Human-on-the-Loop supervision. The most resilient organisations adopt a hybrid governance model, combining both approaches depending on the risk level of the AI application.
In an increasingly automated world, the question is not whether AI will make decisions, but how organisations design the loop between humans, data, and machines to maintain control and resilience.
Join the conversation
These topics will be explored further during the upcoming session Highly Resilient Organisations Program - AI-Driven Security & Autonomous Agents on Tuesday April 24. This session explores how to secure AI platforms and maintain enterprise control over critical data assets.
This session is part of the Highly Resilient Organisations Program led by Research Supervisor Suzanne Janse, Lecturer and Research Supervisor at Erasmus School of Accounting and Assurance.
Do you want to join? Register here.


