<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mathis SIGIER, Author</title>
	<atom:link href="https://www.riskinsight-wavestone.com/en/author/mathis-sigier/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.riskinsight-wavestone.com/en/author/mathis-sigier/</link>
	<description>The cybersecurity &#38; digital trust blog by Wavestone&#039;s consultants</description>
	<lastBuildDate>Thu, 09 Apr 2026 08:51:18 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.riskinsight-wavestone.com/wp-content/uploads/2024/02/Blogs-2024_RI-39x39.png</url>
	<title>Mathis SIGIER, Author</title>
	<link>https://www.riskinsight-wavestone.com/en/author/mathis-sigier/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Securing AI Agents: Why IAM Becomes Central</title>
		<link>https://www.riskinsight-wavestone.com/en/2026/04/securing-ai-agents-why-iam-becomes-central/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2026/04/securing-ai-agents-why-iam-becomes-central/#respond</comments>
		
		<dc:creator><![CDATA[Mathis SIGIER]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 08:51:16 +0000</pubDate>
				<category><![CDATA[Cyberrisk Management & Strategy]]></category>
		<category><![CDATA[Digital Identity]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI agents]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[IAM]]></category>
		<category><![CDATA[identity and access management]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=29632</guid>

					<description><![CDATA[<p>The rise of AI agents is redefining enterprise security   Artificial intelligence has now become a structuring lever for companies: 70%¹ have already placed it at the heart of their strategy. So far, most deployments relied on conversational assistants capable...</p>
<p>The article <a href="https://www.riskinsight-wavestone.com/en/2026/04/securing-ai-agents-why-iam-becomes-central/">Securing AI Agents: Why IAM Becomes Central</a> first appeared on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 style="text-align: justify;">The rise of AI agents is redefining enterprise security</h2>
<p> </p>
<p style="text-align: justify;">Artificial intelligence has now become a structuring lever for companies: 70%<a href="https://www.wavestone.com/en/insight/global-ai-survey-2025-ai-adoption/" target="_blank" rel="noopener">¹</a> have already placed it at the heart of their strategy. So far, most deployments relied on conversational assistants capable of returning information—sometimes enriched with internal data—but whose interactions with the information system (IS) remained limited.</p>
<p style="text-align: justify;">A major shift is now underway with the emergence of agentic AI. Unlike simple chatbots, AI agents do not merely answer questions; they reason, decide to call tools, and trigger actions. They may send an email, schedule a meeting, update a record, initiate a transaction, or soon, carry out even more sensitive operations. Their promise in terms of automation is substantial—and so is their potential impact on the attack surface of the IS.</p>
<p style="text-align: justify;">Because once an AI system acts, central questions arise: on whose behalf is it acting, with which permissions, on what perimeter, and under whose control?</p>
<p style="text-align: justify;">Those questions are even more critical given the rapid evolution of use cases: 51%<a href="https://www.pagerduty.com/resources/ai/learn/companies-expecting-agentic-ai-roi-2025/" target="_blank" rel="noopener">²</a> of organizations have already deployed an AI agent for employees, while 59%<a href="https://cybernews.com/ai-news/ai-shadow-use-workplace-survey/" target="_blank" rel="noopener">³</a> of workers acknowledge using non‑approved AI agents. Beyond individual usage, each business unit may be tempted to deploy its own agents to fulfill local needs. This fuels a form of agentic Shadow IT, where agents multiply in a fragmented way, with heterogeneous architectures, variable controls, and frequently incomplete governance.</p>
<p style="text-align: justify;">In this context, Identity and Access Management (IAM) must return to the center of the security strategy. Every piece of data an agent can access, every resource it can modify, and every action it can execute must fall under centralized access control, with traceability and a governance framework.</p>
<p style="text-align: justify;">This article analyzes the security of AI agents through the IAM lens—not as one brick among others, but as a structural safeguard required to frame their usage and sustainably protect the information system.</p>
<p> </p>
<h2 style="text-align: justify;">From conversational assistants to AI agents: how they interact with the IS</h2>
<p> </p>
<h3 style="text-align: justify;">How can an AI agent act on an application?</h3>
<p style="text-align: justify;">The ability of an AI agent to interact with enterprise applications relies on the emergence of new protocols, among which the Model Context Protocol (MCP) is gaining prominence. This type of protocol enables an AI agent to communicate with third‑party applications through an intermediate layer, often implemented as an MCP server.</p>
<p style="text-align: justify;">The MCP server acts as an exposure and orchestration component. It receives requests generated by the model, translates them into executable calls, and forwards them to the application’s API. To achieve this, the MCP server provides the model with tools, describing the actions it is authorized to invoke. Once the server is declared in the conversational interface or agent environment, the model can decide—based on user intent and its own reasoning—to call one or several of these tools.</p>
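<p style="text-align: justify;">To make the tool mechanism concrete, here is a minimal, hypothetical sketch of how an MCP-style server could describe a tool it exposes to the model. The tool name, fields, and schema shape are illustrative, not taken from any specific vendor implementation.</p>

```python
# Hypothetical sketch of an MCP-style tool declaration: the server
# advertises each tool with a name, a description the model reads when
# deciding whether to call it, and a JSON Schema constraining the
# arguments the model is allowed to send.
calendar_tool = {
    "name": "calendar_create_event",
    "description": "Create a calendar event for the authenticated user.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "participants": {"type": "array", "items": {"type": "string"}},
            "start": {"type": "string", "format": "date-time"},
            "subject": {"type": "string"},
        },
        "required": ["participants", "start", "subject"],
    },
}

def list_tools() -> list[dict]:
    """Answer the model's tool-discovery request with the exposed tools."""
    return [calendar_tool]
```

<p style="text-align: justify;">Once this declaration is visible to the agent environment, the model can choose to invoke the tool based on its own reasoning, which is precisely why the schema should be as narrow as the use case allows.</p>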
<p style="text-align: justify;">From a security perspective, this raises a key question: how is the end‑user authenticated, and how is this identity propagated—or not—to downstream services? In modern architectures, user authentication typically relies on OpenID Connect (OIDC), while API access authorization relies on OAuth 2.x through access tokens. The challenge for an agent is to ensure that tool invocations and API calls occur through a controlled delegation model.</p>
<p style="text-align: justify;">Is the agent acting with its own rights, with the user’s rights, or through a hybrid mechanism?</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-29634" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/IAMxIAPicture1-ENG.png" alt="Mechanism of tools called by the MCP server" width="624" height="358" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/IAMxIAPicture1-ENG.png 624w, https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/IAMxIAPicture1-ENG-333x191.png 333w, https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/IAMxIAPicture1-ENG-68x39.png 68w, https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/IAMxIAPicture1-ENG-120x70.png 120w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p style="text-align: justify;">Let’s illustrate this with a real-world use case: scheduling a meeting. The user asks: “Schedule a meeting with the team tomorrow at 10 a.m.” The AI agent interprets the request and uses the “Calendar” tool exposed by the MCP server. It sends the minimal structured request (participants, date, time, subject). The MCP server then calls the enterprise calendar API to create the event.</p>
<p style="text-align: justify;">The mechanism seems simple. In practice, it represents a major shift: the model is no longer a passive assistant but an active intermediary between human intention and technical execution.</p>
<p> </p>
<h3 style="text-align: justify;">An inherently opaque operating model</h3>
<p style="text-align: justify;">This architecture introduces an immediate security difficulty: in many cases, the integration layer only has partial visibility over the originating context. It receives a structured request but not the full initial prompt, the model’s internal reasoning, or why it selected a specific tool. The IS therefore sees an action without necessarily being able to reconstruct the chain linking user demand, agent reasoning, tool invocation, and final effect.</p>
<p style="text-align: justify;">This loss of context becomes even more problematic when the API call is made using an OAuth token: depending on the architecture, the target service may only see a technical identity (service account / application) rather than the real end‑user. This undermines attribution, abuse detection, and the ability to apply conditional policies differentiating human and agentic actions.</p>
<p style="text-align: justify;">In other words, the agent interacts with the IS in a partially opaque manner, breaking with traditional application patterns and complicating real‑time control, auditing, and accountability.</p>
<p> </p>
<h3 style="text-align: justify;">A fast‑emerging technology introducing new security challenges</h3>
<p style="text-align: justify;">AI agents introduce new use cases—and new risks—that must be addressed at the IAM level. Four challenges stand out.</p>
<p> </p>
<h4 style="text-align: justify;">Challenge 1: Inventory of AI agents</h4>
<p style="text-align: justify;">Most organizations lack a comprehensive inventory of deployed agents and the tools they connect to.</p>
<p style="text-align: justify;">This lack of visibility arises from two factors:</p>
<ul style="text-align: justify;">
<li>usage often develops outside traditional governance processes;</li>
<li>integration modalities are heterogeneous (MCP, proprietary connectors, local code execution, platform‑native features, etc.).</li>
</ul>
<p style="text-align: justify;">The issue is not only inventorying the agents themselves but understanding their entire execution chain: interface, exposed tools, target applications, accounts used, data processed, and flows generated. Without visibility, no meaningful governance is possible.</p>
<p> </p>
<h4 style="text-align: justify;">Challenge 2: Attribute and govern AI agent permissions</h4>
<p style="text-align: justify;">Traditional IAM systems often lack a native, standardized object to represent an AI agent as a fully governable non‑human identity.</p>
<p style="text-align: justify;">As a result, integration layers are registered as technical apps or service accounts. This leads to well‑known risks: excessive privileges, poor separation of duties, coarse controls, and inability to distinguish a human action from an agentic action.</p>
<p style="text-align: justify;">The risk becomes substantial as the agent may become a privileged indirect access vector into the IS.</p>
<p> </p>
<h4 style="text-align: justify;">Challenge 3: Authenticate AI agents</h4>
<p style="text-align: justify;">Authentication presents the third challenge, on two distinct levels. First, the end user must be properly authenticated to ensure that the agent is not operating without an identity. But the agent itself—or at the very least the component acting on its behalf—must also be authenticated so that specific policies, appropriate restrictions, and proportionate oversight requirements can be applied to it.</p>
<p style="text-align: justify;">This dual requirement is unprecedented in its complexity: with AI agents, the system must simultaneously manage the identity of the requester, the identity of the executing system, and the precise relationship between the two.</p>
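<p style="text-align: justify;">One existing way to express this requester/executor relationship inside a single credential is the nested "act" (actor) claim defined by OAuth 2.0 Token Exchange (RFC 8693). The claim values below are illustrative, not from any real deployment.</p>

```python
# Illustrative access-token claims expressing delegation: the subject is
# the human requester, while the nested "act" (actor) claim from OAuth 2.0
# Token Exchange (RFC 8693) identifies the agent executing on their behalf.
token_claims = {
    "sub": "alice@example.com",         # identity of the requester
    "scope": "calendar.events.create",  # narrowed to the task at hand
    "exp": 1767225600,                  # short lifetime
    "act": {
        "sub": "agent:meeting-scheduler",  # identity of the executing agent
    },
}

def acting_party(claims: dict) -> str:
    """Return the executing identity if the token carries delegation."""
    return claims.get("act", {}).get("sub", claims["sub"])
```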
<p> </p>
<h4 style="text-align: justify;">Challenge 4: Trace agent‑driven actions</h4>
<p style="text-align: justify;">The final challenge is that of traceability. In many current architectures, logs primarily allow us to observe the technical call sent to the target service. However, it remains difficult to reliably reconstruct:</p>
<ul style="text-align: justify;">
<li>which user originated the request;</li>
<li>which agent decided to execute it;</li>
<li>the business context;</li>
<li>the intermediate reasoning steps.</li>
</ul>
<p style="text-align: justify;">This lack of auditability undermines detection, investigation, and accountability. When a sensitive action is triggered, it must be possible to determine whether it resulted from a legitimate instruction, a misinterpretation, an autonomous deviation, an abuse of privilege, or a compromise of the input context—for example, through a prompt injection attack.</p>
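<p style="text-align: justify;">As a sketch, an audit record designed with these questions in mind would capture every link in the chain explicitly. The field names below are illustrative, not a standard log schema.</p>

```python
import datetime
import json

def audit_record(user: str, agent: str, tool: str, args: dict, outcome: str) -> str:
    """Hypothetical audit-log entry tying an agent-driven action back to
    the originating user, the deciding agent, the tool invoked, the
    business context, and the final result."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,        # which user originated the request
        "agent": agent,      # which agent decided to execute it
        "tool": tool,        # what was invoked
        "arguments": args,   # business context of the call
        "outcome": outcome,
    }
    return json.dumps(record)
```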
<p> </p>
<h2 style="text-align: justify;">IAM as the reference framework for securing AI agents</h2>
<p> </p>
<h3 style="text-align: justify;">Core IAM principles remain unchanged</h3>
<p style="text-align: justify;">In light of this transformation, one point must be made clear: the fundamentals of IAM do not disappear with agent-based AI. On the contrary, they become essential once again.</p>
<p style="text-align: justify;">A well-managed information system is based on a few simple and robust principles:</p>
<ul style="text-align: justify;">
<li>centralize authentication via a reference IdP;</li>
<li>avoid generic accounts when nominative identities are possible;</li>
<li>enforce least privilege;</li>
<li>govern entitlements over time;</li>
<li>ensure robust logs;</li>
<li>clearly separate roles and execution perimeters.</li>
</ul>
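<p style="text-align: justify;">The least-privilege principle above can be reduced to a deliberately strict check, shown here as an illustrative sketch (identities and permission names are hypothetical):</p>

```python
# Minimal sketch of a least-privilege check: an identity may invoke an
# action only if that exact permission was granted; no wildcard fallback.
GRANTS = {
    "agent:meeting-scheduler": {"calendar.events.create"},
    "alice@example.com": {"calendar.events.create", "mail.send"},
}

def is_allowed(identity: str, permission: str) -> bool:
    """Deny by default: unknown identities hold no permissions at all."""
    return permission in GRANTS.get(identity, set())
```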
<p style="text-align: justify;">AI agents do not invalidate these principles—they expose existing weaknesses and require adapting the IAM execution model to a new class of digital actors.</p>
<p> </p>
<h3 style="text-align: justify;">A four‑step security trajectory</h3>
<p> </p>
<h4>1. Inventory use cases and agents</h4>
<p style="text-align: justify;">Identify:</p>
<ul style="text-align: justify;">
<li>deployed agents,</li>
<li>environments,</li>
<li>tools,</li>
<li>target apps,</li>
<li>accounts and tokens,</li>
<li>accessible data.</li>
</ul>
<p style="text-align: justify;">This inventory exercise is not merely a secondary documentation task; it is a prerequisite for any coherent access control policy. To carry it out, commercial tools are emerging, such as Microsoft’s Agent 365 solution.</p>
<p> </p>
<h4>2. Introduce a dedicated identity type for AI agents</h4>
<p style="text-align: justify;">The second step involves recognizing AI agents as a specific category of non-human entities. This classification is essential because it enables the implementation of differentiated policies: prohibitions on certain actions, restrictions to specific areas, requirements for prior approval, enhanced monitoring, or conditional restrictions.</p>
<p style="text-align: justify;">This distinction is fundamental. A traditional application does not have the same level of autonomy, nor the same risk profile, as an AI agent capable of selecting a tool on its own, chaining together multiple actions, or reacting to an ambiguous context. IAM must therefore be able to determine not only who is acting, but also how the system is acting.</p>
<p style="text-align: justify;">For example, a user may have the right to send an email or create a change request. This does not mean that an agent can execute this action without safeguards. Depending on the sensitivity of the process, a dedicated policy may require human validation, a restricted scope, or a complete prohibition.</p>
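<p style="text-align: justify;">The email example can be sketched as a policy table that treats the user's right as necessary but not sufficient for an agent. Action names and policy outcomes below are purely illustrative.</p>

```python
# Sketch of a differentiated policy: the same action is evaluated
# differently depending on whether a human or an AI agent performs it.
AGENT_POLICIES = {
    "mail.send": "require_human_approval",
    "finance.transfer": "forbidden",
    "calendar.events.create": "allowed",
}

def evaluate(actor_type: str, action: str, user_has_right: bool) -> str:
    """Return the decision for an action, given who (or what) performs it."""
    if not user_has_right:
        return "denied"  # the delegating user lacks the right
    if actor_type == "human":
        return "allowed"
    # For agents, the user's right is necessary but not sufficient:
    # unknown actions default to requiring human approval.
    return AGENT_POLICIES.get(action, "require_human_approval")
```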
<p> </p>
<h4 style="text-align: justify;">3. Link authentication and rights to a central IdP + the end‑user</h4>
<p style="text-align: justify;">The third step involves bringing authentication under the purview of a central identity provider, so that access rights are managed consistently. The goal is twofold: to prevent the uncontrolled use of over-privileged technical accounts, and to ensure that the agent operates, as much as possible, within the limits of the permissions held by the user who initiated the request.</p>
<p style="text-align: justify;">This does not mean that the agent must be transparent from a security standpoint. On the contrary, the challenge is to apply a logic such as: “even if the user has the right, the agent does not necessarily have the right to do so alone, in any context, and without additional oversight.”</p>
<p> </p>
<h4 style="text-align: justify;">4. Introduce human approval for certain agent‑initiated actions</h4>
<p style="text-align: justify;">Securing AI agents cannot rely solely on authentication and authorization. It also requires defining the acceptable level of autonomy based on the criticality of the actions in question.</p>
<p style="text-align: justify;">Three models are typically distinguished:</p>
<p style="text-align: justify;"><strong>Human</strong><strong>‑in</strong><strong>‑the</strong><strong>‑loop</strong></p>
<p style="text-align: justify;">This is the most secure mode. The agent prepares the action, but its execution is contingent upon explicit validation. This approach should be prioritized for sensitive operations: financial transactions, changes to permissions, external communications on behalf of the company, access to sensitive data, actions with irreversible consequences, etc.</p>
<p style="text-align: justify;">Its key advantage is that final validation is handled by a control interface independent of the agent’s reasoning. Even if the model has been influenced, manipulated, or simply deceived, the user or operator retains control over the decision.</p>
<p style="text-align: justify;"><strong>Human</strong><strong>‑over</strong><strong>‑the</strong><strong>‑loop</strong></p>
<p style="text-align: justify;">In this model, humans do not approve each action individually but oversee the execution and retain the ability to interrupt the process immediately. This approach may be suitable for frequent, well-defined, low-risk processes, provided that monitoring is effective and the shutdown mechanism is fully operational.</p>
<p style="text-align: justify;"><strong>Human</strong><strong>‑out</strong><strong>‑of</strong><strong>‑the</strong><strong>‑loop</strong></p>
<p style="text-align: justify;">Here, the agent operates autonomously without immediate human intervention. This level of autonomy should only be considered for very low-criticality use cases, in strictly bounded environments with limited scopes of action, robust compensatory control mechanisms, and explicit tolerance for residual risk.</p>
<p style="text-align: justify;">For a CISO, the logic is simple: the greater the business, regulatory, or security impact, the closer the human oversight must be to the execution.</p>
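<p style="text-align: justify;">That proportionality rule can be sketched as a simple mapping from action criticality to the three oversight modes above; the criticality levels are illustrative and would need to be defined per organization.</p>

```python
# Sketch mapping action criticality to the required human-oversight mode;
# the levels and their assignments are illustrative, not prescriptive.
def oversight_model(criticality: str) -> str:
    """Return the human-oversight mode required for an agent action."""
    return {
        "high": "human-in-the-loop",      # explicit approval before execution
        "medium": "human-over-the-loop",  # monitored, interruptible execution
        "low": "human-out-of-the-loop",   # autonomous, with compensating controls
    }[criticality]
```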
<p> </p>
<h2 style="text-align: justify;">A clear target state—still constrained by several limitations</h2>
<p> </p>
<h3 style="text-align: justify;">Functional obstacles</h3>
<p style="text-align: justify;">The target security model can be clearly defined. Its implementation, however, encounters several major functional obstacles.</p>
<p style="text-align: justify;">The first obstacle concerns the lack of granular authorization mechanisms. Today, a user may want to ask an agent to perform a precise action on a precise resource. Yet available mechanisms often require permissions that are far broader than necessary. Processing an email may require opening access to an entire mailbox; scheduling a meeting may imply extended access to the user’s full calendar; interacting with a repository may require read or write permissions far beyond the expressed need. This mismatch is particularly problematic in an agentic context. Because an AI is inherently non‑deterministic in the way it selects and chains actions, overly broad access rights mechanically become a disproportionate risk. Secure adoption therefore requires moving toward finer‑grained, contextualized, temporary authorization mechanisms, proportionate to the specific request being made.</p>
<p style="text-align: justify;">The second obstacle concerns authentication and identity propagation. In many cases, current architectures still rely on technical accounts, shared secrets, or authentication mechanisms that fall short of mature IAM governance standards. The target state, in contrast, requires that each action be explicitly linked to (i) the user originating the request, and (ii) the fact that this action was executed by an agent — which implies distinguishing between the identity of the initiator and the identity of the executing system, while documenting the delegation relationship between the two. In practice, this refers to controlled delegation mechanisms such as OAuth “On-Behalf-Of (OBO)” flows: the agent (or its orchestration layer) calls an API while carrying an authorization derived from the user, but with additional constraints (limited scope, reduced duration, contextual checks, conditional access policies). The objective is to reduce reliance on over‑privileged technical accounts while preserving a usable chain of accountability. At this stage, however, the market does not yet offer a fully homogeneous and interoperable model that covers authentication, fine‑grained authorization, traceability, and agent governance at scale.</p>
<p style="text-align: justify;">A final foundational obstacle is traceability: every action must be linked explicitly to a clear and intelligible chain of responsibility. Without this capability, there can be no robust auditability, no effective control, and no defendable governance in front of business stakeholders, auditors, or regulators. And this obviously comes at a cost for SIEM platforms…</p>
<p> </p>
<h3 style="text-align: justify;">A fragmented market complicating security</h3>
<p style="text-align: justify;">From the perspective of enterprises, the difficulty is not only technical: it also relates to the overall maturity of the market. Agentic capabilities are proliferating faster than the security and governance standards needed to frame them in a consistent way. As a result, organizations must deal with heterogeneous solutions, in which identity models, audit capabilities, and control mechanisms vary significantly from one vendor to another.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-29636" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/Picture2ENG.png" alt="Responsibility in MCP actions" width="624" height="422" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/Picture2ENG.png 624w, https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/Picture2ENG-282x191.png 282w, https://www.riskinsight-wavestone.com/wp-content/uploads/2026/04/Picture2ENG-58x39.png 58w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p> </p>
<h3 style="text-align: justify;">Will MCP become the standard?</h3>
<p style="text-align: justify;">Some vendors expose their applications through MCP servers or comparable mechanisms, while others favor more closed, native integrations within their own ecosystems. In practice, there is still no fully homogeneous framework that satisfactorily covers authentication, authorization, traceability, governance, and the nomenclature of exposed capabilities.</p>
<p style="text-align: justify;">Two trajectories can be envisioned:</p>
<ul style="text-align: justify;">
<li>The first would be convergence toward a standardized foundation enabling interoperability across agents, tools, and platforms. Such evolution would facilitate large‑scale deployment, improve user experience, and enable more coherent enterprise‑wide governance.</li>
<li>The second would be persistent fragmentation. In this scenario, each vendor would continue to favor its own mechanisms, security objects, and integration models. The consequences for organizations would be significant: multiplication of blind spots, heterogeneous controls, difficulty centralizing supervision, and practical impossibility of applying a homogeneous IAM policy across the entire agentic perimeter.</li>
</ul>
<p style="text-align: justify;">In the short term, market signals point toward co‑existence: interoperability initiatives are emerging, but major vendors continue to build logically integrated ecosystems. For CISOs, this means thinking not only “tool by tool” but also in terms of the ability to govern a portfolio of agents spanning multiple vendors.</p>
<p> </p>
<h3 style="text-align: justify;">Toward enterprise AI agent registries</h3>
<p style="text-align: justify;">The rise of AI agents justifies the emergence of a new governance object: the AI agent registry. Because an agent is an autonomous system capable of triggering actions, it can no longer be treated as an invisible application component. It must be identified, qualified, assigned an owner, embedded in a lifecycle, evaluated according to its scope of action, and subjected to specific rules.</p>
<p style="text-align: justify;">Such a registry must ultimately be able to answer several fundamental questions:</p>
<ul style="text-align: justify;">
<li>Which agents exist within the organization?</li>
<li>Who is responsible for them?</li>
<li>In which environment do they operate?</li>
<li>Which tools and which data do they have access to?</li>
<li>Which authentication mechanisms do they use?</li>
<li>Which human validations are required?</li>
<li>Which logs do they produce?</li>
<li>When must they be reviewed, requalified, suspended, or retired?</li>
</ul>
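<p style="text-align: justify;">As an illustrative sketch, such a registry entry could be modeled as a structured record whose fields mirror the questions above; all field names and defaults are assumptions, not an established schema.</p>

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistryEntry:
    """Hypothetical AI-agent registry record answering the questions above."""
    agent_id: str
    owner: str                                       # accountable person or team
    environment: str                                 # where the agent operates
    tools: list = field(default_factory=list)        # tools it can invoke
    data_scopes: list = field(default_factory=list)  # data it can access
    auth_mechanism: str = "oidc"                     # how it authenticates
    approvals_required: list = field(default_factory=list)  # human validations
    log_sinks: list = field(default_factory=list)    # where its logs go
    review_due: str = ""                             # next requalification date
    status: str = "active"                           # active / suspended / retired
```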
<p style="text-align: justify;">Some identity providers are beginning to introduce capabilities dedicated to this new category of non‑human identities. This is an important signal. But market maturity remains early, and governance cannot be outsourced entirely to vendors. The real issue is fundamentally organizational: defining a model of responsibility, control, and security that is adapted to the growing autonomy of AI systems.</p>
<p> </p>
<h2 style="text-align: justify;">When should organizations address IAM for AI agents? Right now.</h2>
<p> </p>
<p style="text-align: justify;">The rise of AI agents marks a major evolution in the transformation of information systems. By shifting from a logic of assistance to a logic of action, these systems fundamentally reshape security concerns: the challenge is no longer limited to controlling the data an AI can access, but also the <strong>actions it can execute</strong>, the <strong>privileges it leverages</strong>, and the <strong>responsibilities it triggers</strong>.</p>
<p style="text-align: justify;">In this context, <strong>IAM becomes a structuring pillar</strong>. It provides the foundation needed to <strong>make agents visible</strong>, <strong>control their entitlements</strong>, <strong>trace their actions</strong>, and <strong>define the conditions under which their autonomy can be accepted</strong>. In other words, securing AI agents cannot rely on peripheral measures: it requires an integrated governance approach that combines identity, access control, supervision, and human validation.</p>
<p style="text-align: justify;">For organizations, the objective is not to slow down the adoption of agentic AI, but <strong>to frame it within a sustainable trust model</strong>. This means making structural decisions today: mapping use cases, integrating agents into IAM frameworks, distinguishing human and non‑human identities, adapting authorization policies, and defining safeguards proportionate to the criticality of the actions delegated.</p>
<p style="text-align: justify;">As architectures become standardized and market offerings mature, the organizations best prepared will be those that treat AI agents <strong>not as simple innovative assistants</strong>, but as <strong>new actors of the information system</strong>, subject to the same requirements of security, traceability, and governance as any other critical component.</p>
<p style="text-align: justify;">The question is therefore no longer whether AI agents will find their place in the enterprise, but <strong>under what</strong> <strong>conditions of control</strong>. For CISOs, the matter is clear: the ability to industrialize agentic AI will depend less on the performance of the models than on the <strong>robustness of the IAM and governance framework</strong> put in place to supervise them.</p>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;">If you, too, are questioning how to manage access for AI agents or wish to deepen the security of these emerging use cases, we would be delighted to connect. Feel free to reach out to share your challenges or to explore together potential approaches tailored to your context.</p>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;"> </p>
<ol style="text-align: justify;">
<li>Wavestone (2025) <em>AI Adoption and Its Paradoxes: Global AI Survey 2025</em>. Available at: <a href="https://www.wavestone.com/en/insight/global-ai-survey-2025-ai-adoption/">Global AI Survey 2025 | Wavestone</a> (Accessed: 2 January 2026).</li>
<li>PagerDuty (2025) <em>More than Half of Companies (51%) Already Deployed AI Agents</em>. PagerDuty, March 2025. Available at: <a href="https://www.pagerduty.com/resources/ai/learn/companies-expecting-agentic-ai-roi-2025/">2025 Agentic AI ROI Survey Results</a> (Accessed: 2 January 2026).</li>
<li>Cybernews (2025) <em>Unapproved AI Tools in the Workplace</em>. September 2025. Available at: <a href="https://cybernews.com/ai-news/ai-shadow-use-workplace-survey/">https://cybernews.com/ai-news/ai-shadow-use-workplace-survey/</a> (Accessed: 2 January 2026).</li>
</ol>




<p>The article <a href="https://www.riskinsight-wavestone.com/en/2026/04/securing-ai-agents-why-iam-becomes-central/">Securing AI Agents: Why IAM Becomes Central</a> first appeared on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2026/04/securing-ai-agents-why-iam-becomes-central/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
