<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Valentine Tauzin, Author</title>
	<atom:link href="https://www.riskinsight-wavestone.com/en/author/valentine-tauzin/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.riskinsight-wavestone.com/en/author/valentine-tauzin/</link>
	<description>The cybersecurity &#38; digital trust blog by Wavestone&#039;s consultants</description>
	<lastBuildDate>Wed, 08 Jan 2025 16:45:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.riskinsight-wavestone.com/wp-content/uploads/2024/02/Blogs-2024_RI-39x39.png</url>
	<title>Valentine Tauzin, Author</title>
	<link>https://www.riskinsight-wavestone.com/en/author/valentine-tauzin/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>DORA – The Challenges of Digital Resilience in the Financial Sector by 2025</title>
		<link>https://www.riskinsight-wavestone.com/en/2025/01/dora-the-challenges-of-digital-resilience-in-the-financial-sector-by-2025/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2025/01/dora-the-challenges-of-digital-resilience-in-the-financial-sector-by-2025/#respond</comments>
		
		<dc:creator><![CDATA[Valentine Tauzin]]></dc:creator>
		<pubDate>Wed, 08 Jan 2025 16:45:14 +0000</pubDate>
				<category><![CDATA[Cyber for Financial Services]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[DORA]]></category>
		<category><![CDATA[finance]]></category>
		<category><![CDATA[Operational Resilience]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=25079</guid>

					<description><![CDATA[<p>The Digital Operational Resilience Act (DORA) is a European regulation designed to enhance the resilience of financial entities against IT and cybersecurity risks. Its ambitious objective is to improve organizations’ ability to anticipate and manage crises while optimizing their operational...</p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2025/01/dora-the-challenges-of-digital-resilience-in-the-financial-sector-by-2025/">DORA – The Challenges of Digital Resilience in the Financial Sector by 2025</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">The Digital Operational Resilience Act (DORA) is a European regulation designed to enhance the resilience of financial entities against IT and cybersecurity risks. Its ambitious objective is to improve organizations’ ability to anticipate and manage crises while optimizing their operational resilience.</p>
<p style="text-align: justify;">To learn more about the regulation’s details, you can refer to this article: <a href="https://www.riskinsight-wavestone.com/en/2020/12/decrypting-dora-what-does-it-mean-for-resilience-of-financial-organisations/">What does DORA mean for Resilience of financial organisations?</a></p>
<p style="text-align: justify;">The key deadline of January 17, 2025, marks the theoretical compliance date for financial entities. It also signals the beginning of supervisory operations by regulatory authorities.</p>
<p style="text-align: justify;">In this context, <strong>Damien LACHIVER</strong> and <strong>Etienne BOUET</strong>, Senior Managers at Wavestone and experts in DORA compliance, with extensive experience supporting CAC40 entities, share their insights into the practical challenges and opportunities brought by this regulation, as well as the regulators&#8217; expectations and essential actions for effective preparation.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>How does DORA go beyond mere regulatory compliance?</u></strong></h4>
<p style="text-align: justify;"><strong>E.BOUET:</strong> DORA should not be seen merely as a compliance exercise. Yes, there are regulatory requirements to meet, but the real challenge lies in building resilience. The question to ask is: how can compliance with DORA effectively enhance operational resilience? This connection is not always straightforward. For instance, gap analyses or cybersecurity audits often reveal vulnerabilities, and compliance alone is insufficient if it doesn’t come with genuine improvements in resilience.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Many entities are still focused on compliance since DORA addresses areas already well established, such as cybersecurity, business continuity, and IT risk management. Large organizations, in particular, already benefit from high compliance levels due to decades of experience.</p>
<p style="text-align: justify;">However, beyond this compliance phase, it is crucial to shift towards remediation and anticipation, implementing initiatives that will not be fundamentally different from the historical programs already initiated. The real focus should be on identifying new scenarios or solutions that can strengthen resilience.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What are the critical scenarios to consider for improving resilience?</u></strong></h4>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Two major scenarios require significant attention and investment:</p>
<ul style="text-align: justify;">
<li><strong>Total loss of internal IT systems:</strong> how can information systems be restored and fully rebuilt after a large-scale cyberattack?</li>
<li><strong>The sudden loss of a critical third party:</strong> what happens if I lose a partner or service provider whose operational disruption has a significant structural impact on my business?</li>
</ul>
<p style="text-align: justify;"><strong>E.BOUET:</strong> The growing dependence on third parties has not yet been fully recognized as a major risk. The associated scenarios are not sufficiently integrated into strategic priorities, leading to a lack of investment in preparedness.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>Will financial entities be ready by January 17, 2025?</u></strong></h4>
<p style="text-align: justify;"><strong>E.BOUET:</strong> It is unlikely that all companies will be fully ready by January. The market as a whole faces delays, although significant progress has been made. For instance, most of the normative documents required for compliance have been finalized, and priorities have been aligned with risk management needs.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Indeed, January 17, 2025, will mark more of a milestone than a conclusion. Most operational projects, such as third-party management, remain to be addressed and will require ongoing effort.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What are the main challenges in implementing DORA?</u></strong></h4>
<p style="text-align: justify;"><strong>E.BOUET:</strong> Initially, the main challenge was mobilizing a wide range of stakeholders: cybersecurity, risk management, procurement, legal, business, IT… While the topics addressed by DORA were already familiar to these teams, the regulation raises expectations and introduces additional requirements to roles that are already well-defined.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Historically, these areas have often been handled in a fragmented, siloed manner. However, DORA demands significant and measurable progress in resilience, which requires a more coherent and integrated approach. Today, two key priorities stand out:</p>
<ul style="text-align: justify;">
<li><strong>Third-party management</strong>, which represents a massive challenge.</li>
<li><strong>Threat-Led Penetration Testing (TLPT)</strong>, an ambitious but complex new requirement.</li>
</ul>
<p> </p>
<h4 style="text-align: justify;"><strong><u>Why is third-party management such a significant challenge?</u></strong></h4>
<p style="text-align: justify;"><strong>E.BOUET:</strong> Third-party risk management (TPRM) is one of the key challenges posed by DORA. Third parties are everywhere, but they are often poorly managed. It’s not always clear whether they are critical or not, and relationships often lack proper structure. Managing reliance on critical third parties is common sense, but it goes far beyond contractual arrangements: organizations need to identify their third parties, assess their criticality, and manage this dependency operationally, which remains a challenge for many.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Historically, this has been a neglected area, often handled in silos by procurement, cybersecurity, business continuity, and other functions. A comprehensive view of third-party risks is lacking. DORA’s aim is precisely to move beyond this fragmented approach and build a cohesive end-to-end management framework throughout the contract lifecycle.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What does “testing exit strategies” with critical third parties mean?</u></strong></h4>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Testing exit strategies means anticipating how an organization would respond if a third party’s services were interrupted, whether voluntarily or involuntarily. For example, in the case of a cyberattack on a service provider, it may be necessary to sever the relationship to protect the organization’s own information systems.</p>
<p style="text-align: justify;"><strong>E.BOUET:</strong> Tabletop exercises help assess reliance on third parties and theoretically simulate the procedures to follow in different scenarios. They also encourage organizations to rethink their relationships with certain providers, particularly those unable to align with DORA’s requirements.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What makes TLPT (<em>Threat-Led Penetration Testing</em>) a specific challenge?</u></strong></h4>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>TLPT is one of the key innovations introduced by DORA. It involves threat-led penetration tests framed by the DORA regulation and the TIBER framework, as adapted by national authorities. While the theoretical framework is well-defined, practical implementation remains challenging, as these tests are not yet common in the financial sector. Their limited frequency (one test every three years) and the regulator&#8217;s limited resources reduce the immediate urgency, but they are crucial for strengthening resilience.</p>
<p style="text-align: justify;"><strong>E.BOUET:</strong> These tests still raise many questions, as they require a new approach for some players, especially those less experienced with this type of exercise. Currently, we are in a waiting phase, with a few dry-run initiatives underway. The actual implementation will depend on the regulator&#8217;s planning and the lessons learned from the first fully executed TLPTs in the coming months.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>How can DORA transform IT risk governance?</u></strong></h4>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>DORA promotes a unified approach to IT risk management by breaking down silos between various functions, such as cybersecurity, business continuity, and procurement. This involves:</p>
<ul style="text-align: justify;">
<li><strong>Harmonizing key terminologies and concepts</strong> (for example, ensuring that the concept of criticality is understood consistently across all functions) to streamline and improve interactions with business units.</li>
<li><strong>Implementing structural changes</strong> (such as adopting a CSO model – Chief Security Officer) to establish unified governance across functions, enabling more effective and coherent decision-making.</li>
</ul>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What are the concrete requirements to comply with DORA by January 17, 2025, and beyond?</u></strong></h4>
<p style="text-align: justify;"><strong>E.BOUET: </strong>The first major expectation for January 17 is the ability to identify a major incident according to DORA’s criteria and notify the regulator. This requires well-defined operational processes to ensure rapid detection and reporting. This requirement is attainable, given the experience of IT and security teams in a sector accustomed to managing critical incidents.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>Then, by April 30, 2025, financial entities will need to produce a register of information on their third parties. I believe organizations will be able to provide such a register by this date. However, additional work will likely be needed to improve its quality and completeness.</p>
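<p style="text-align: justify;">As a purely illustrative sketch, such a register can start as a structured list of provider records. The field names below are assumptions made for the example, not the official register templates defined by the European Supervisory Authorities:</p>

```python
from dataclasses import dataclass

# Illustrative entry for a register of information on ICT third parties.
# Field names are assumptions for this sketch; the real register follows
# templates defined by the European Supervisory Authorities (ESAs).
@dataclass
class ThirdPartyEntry:
    provider: str
    service: str
    supports_critical_function: bool
    contract_end: str  # ISO 8601 date

register = [
    ThirdPartyEntry("ExampleCloud", "IaaS hosting", True, "2026-12-31"),
    ThirdPartyEntry("ExamplePayroll", "HR SaaS", False, "2025-06-30"),
]

# A first quality check: list the providers that support a critical or
# important function, since these demand the closest oversight.
critical = [e.provider for e in register if e.supports_critical_function]
```

<p style="text-align: justify;">Improving the quality and completeness of such a register, as noted above, is then largely a matter of enriching and validating each record over time.</p>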
<p style="text-align: justify;"><strong>E.BOUET: </strong>Finally, throughout 2025, what matters is demonstrating that entities are making progress. Regulators expect projects to be initiated, identified gaps to be gradually addressed, and tangible advancements to be made. The key is to have a clear and structured roadmap to meet DORA’s expectations.</p>
<p> </p>
<h4 style="text-align: justify;"><strong><u>What are the long-term benefits expected from DORA?</u></strong></h4>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>DORA has the potential to create a virtuous cycle by strengthening risk management, business alignment, and operational resilience within the sector. It encourages entities to go beyond compliance and integrate these priorities into their overall strategy.</p>
<p style="text-align: justify;"><strong>E.BOUET: </strong>One key aspect is the reaffirmed responsibility of executive leadership. Their involvement, particularly through regular risk validation, enhances overall awareness and drives the investments necessary to improve resilience.</p>
<p style="text-align: justify;"><strong>D.LACHIVER: </strong>This connection between operational teams and leadership aligns strategic and operational priorities, fostering a culture of continuous improvement. It also empowers IT risk teams and supports the transformation of organizations toward greater digital resilience.</p>
<p> </p>
<p style="text-align: justify;">For any support in achieving DORA compliance, you can contact:</p>
<ul style="text-align: justify;">
<li><a href="mailto:damien.lachiver@wavestone.com">damien.lachiver@wavestone.com</a></li>
<li><a href="mailto:etienne.bouet@wavestone.com">etienne.bouet@wavestone.com</a></li>
</ul>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2025/01/dora-the-challenges-of-digital-resilience-in-the-financial-sector-by-2025/">DORA – The Challenges of Digital Resilience in the Financial Sector by 2025</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2025/01/dora-the-challenges-of-digital-resilience-in-the-financial-sector-by-2025/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI: Discover the 5 most frequent questions asked by our clients!</title>
		<link>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/#respond</comments>
		
		<dc:creator><![CDATA[Valentine Tauzin]]></dc:creator>
		<pubDate>Wed, 08 Nov 2023 11:00:00 +0000</pubDate>
				<category><![CDATA[Cyberrisk Management & Strategy]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[attacks]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[Regulations]]></category>
		<category><![CDATA[risks]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=21818</guid>

					<description><![CDATA[<p>The dawn of generative Artificial Intelligence (GenAI) in the corporate sphere signals a turning point in the digital narrative. It is exemplified by pioneering tools like OpenAI’s ChatGPT (which found its way into Bing as “Bing Chat”, leveraging the GPT-4...</p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/">AI: Discover the 5 most frequent questions asked by our clients!</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">The dawn of generative Artificial Intelligence (GenAI) in the corporate sphere signals a turning point in the digital narrative. It is exemplified by pioneering tools like OpenAI’s ChatGPT (which found its way into Bing as “Bing Chat”, leveraging the GPT-4 language model) and Microsoft 365’s Copilot. These technologies have graduated from being mere experimental subjects or media fodder. Today, they lie at the heart of businesses, redefining workflows and outlining the future trajectory of entire industries.</p>
<p style="text-align: justify;">While there have been significant advancements, there are also challenges. For instance, Samsung employees exposed sensitive data on ChatGPT (the entire source code of a database download program)<a href="#_ftn1" name="_ftnref1">[1]</a>. Compounding these challenges, ChatGPT [OpenAI] itself underwent a security breach that affected over 100,000 users between June 2022 and May 2023, with the compromised credentials now being traded on the dark web<a href="#_ftn2" name="_ftnref2">[2]</a>.</p>
<p style="text-align: justify;">At this digital crossroads, it’s no wonder that there’s both enthusiasm and caution about embracing the potential of generative AI. Given these complexities, it’s understandable that many grapple with determining the optimal approach to AI. With that in mind, this article aims to address the most representative questions asked by our clients.</p>
<h2 style="text-align: justify;"><span style="color: #732196;">Question 1: Is Generative AI just a buzz?</span></h2>
<p style="text-align: justify;">AI is a collection of theories and techniques implemented with the aim of creating machines capable of simulating the cognitive functions of human intelligence (vision, writing, moving&#8230;). A particularly captivating subfield of AI is “Generative AI”. This can be defined as a discipline that employs advanced algorithms, including artificial neural networks, to <strong>autonomously craft content</strong>, whether it’s text, images, or music. Far beyond the basic banking chatbot that answers routine questions, GenAI not only mimics human capabilities in a remarkable way but, in some cases, enhances them.</p>
<p style="text-align: justify;">Our observation of the market: the reach of generative AI is broad and profound. It contributes to diverse areas such as content creation, data analysis, decision-making, customer support and even cybersecurity (for example, by identifying abnormal data patterns to counter threats). We’ve observed three fields where GenAI is particularly useful.</p>
<p> </p>
<p style="text-align: justify;"><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-21820" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1.png" alt="" width="605" height="341" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1-339x191.png 339w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1-69x39.png 69w" sizes="(max-width: 605px) 100vw, 605px" /></p>
<h3>Marketing and customer experience personalisation</h3>
<p style="text-align: justify;">GenAI offers insights into customer behaviours and preferences. By analysing data patterns, it allows businesses to craft tailored messages and visuals, enhancing engagement, and ensuring personalized interactions.</p>
<h3>No-code solutions and enhanced customer support</h3>
<p style="text-align: justify;">In today’s rapidly changing digital world, the ideas of no-code solutions and improved customer service are increasingly at the forefront. Bouygues Telecom is a good example of leveraging advanced tools. They are actively analysing voice interactions from recorded conversations between advisors and customers, aiming to improve customer relationships<a href="#_ftn3" name="_ftnref3">[3]</a>. On a similar note, Tesla employs the AI tool “<a href="https://www.youtube.com/watch?v=1mP5e5-dujg">Air AI</a>” for seamless customer interaction, handling sales calls with potential customers, even going so far as to schedule test drives.</p>
<p style="text-align: justify;">As for coding, an interesting experiment from one of our clients stands out. Involving 50 developers, the test found that 25% of the AI-generated code suggestions were accepted, leading to a significant 10% boost in productivity. It is still too early to draw conclusions about the actual efficiency of GenAI for coding, but the first results are promising and should improve. However, the intricate issue of intellectual property rights concerning AI-generated code continues to be a topic of discussion.</p>
<h3>Document monitoring and research tool</h3>
<p style="text-align: justify;">Using AI as a research tool can help save hours in domains where the regulatory and documentary corpus is very extensive (e.g., the financial sector). At Wavestone, we internally developed two AI tools. The first, CISO GPT, allows users to ask specific security questions in their native language. Once a question is asked, the tool scans through extensive security documentation, efficiently extracting and presenting relevant information. The second, a Library and credential GPT, provides specific CVs of Wavestone employees, as well as references from previous engagements, for the writing of commercial proposals.</p>
<p style="text-align: justify;">However, while tools like ChatGPT (which draws data from public databases) are undeniably beneficial, the game-changing potential emerges when companies tap into their proprietary data. For this, companies need to implement GenAI capabilities internally or setup systems that ensure the protection of their data (cloud-based solution like Azure OpenAI or proprietary models). <strong>From our standpoint, GenAI is worth more than just the buzz around it and is here to stay. </strong>There are real business applications and true added value, but also security risks. Your company needs to kick-off the dynamic to be able to implement GenAI projects in a secure way.</p>
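<p style="text-align: justify;">The retrieval step behind a documentation assistant of this kind can be sketched in a few lines. This toy version simply scores documents by keyword overlap with the question; a real system would use embeddings and an LLM, and the file names and contents below are invented for the illustration:</p>

```python
# Toy corpus standing in for an internal security documentation base.
# File names and contents are invented for the sketch.
docs = {
    "password-policy.md": "passwords must be rotated and stored hashed",
    "incident-process.md": "report major incidents to the regulator within deadlines",
}

def best_match(question: str) -> str:
    """Return the document whose text shares the most words with the
    question -- a crude stand-in for embedding-based retrieval."""
    q_words = set(question.lower().split())
    def overlap(item):
        _, text = item
        return len(q_words & set(text.lower().split()))
    return max(docs.items(), key=overlap)[0]
```

<p style="text-align: justify;">Running proprietary documents through such a pipeline only makes sense once the hosting guarantees described above are in place.</p>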
<p> </p>
<h2 style="text-align: justify;"><span style="color: #9727b3;"><span style="color: #732196;">Question 2: What is the market reaction to the use of ChatGPT?</span></span></h2>
<p style="text-align: justify;">To delve deeper into the perspective of those at the forefront of cybersecurity, we asked our clients’ CISOs for their opinions on the implications and opportunities of GenAI. The following graph illustrates their responses.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-21822" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2.png" alt="" width="601" height="279" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2.png 601w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2-411x191.png 411w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2-71x33.png 71w" sizes="(max-width: 601px) 100vw, 601px" /></p>
<p style="text-align: justify;">Based on our survey, the feedback from the CISOs can be grouped into three distinct categories:</p>
<h3>The Pragmatists (65%)</h3>
<p style="text-align: justify;">Most of our respondents recognize the potential data leakage risks of ChatGPT, but they equate them to the risks encountered during exchanges on platforms such as Stack Overflow (for developers). They believe that the risk of data leaks hasn’t significantly changed with ChatGPT. However, the current buzz justifies dedicated awareness campaigns to emphasize the importance of not sharing company-specific or sensitive data.</p>
<h3>The Visionaries (25%)</h3>
<p style="text-align: justify;">A quarter of the respondents view ChatGPT as a ground-breaking tool. They’ve noticed its adoption in departments such as communication and legal. They’ve taken proactive steps to understand its use (which data, which use cases) and have subsequently established a set of guidelines, a more collaborative approach to defining a use-case framework.</p>
<h3>The Sceptics (10%)</h3>
<p style="text-align: justify;">A segment of the market has reservations about ChatGPT. To them, it’s a tool that is too easy to misuse, receives excessive media attention and carries risks that vary across business sectors. Depending on your activity, this stance can be relevant if the risk of data leakage and loss of intellectual property is judged too high compared to the potential benefits.</p>
<p> </p>
<h2><span style="color: #9727b3;"><span style="color: #732196;">Question 3: What are the risks of Generative AI?</span></span></h2>
<p style="text-align: justify;">In evaluating the diverse perspectives on generative AI within organizations, we’ve classified the concerns into four distinct categories of risks, presented from the least severe to the most critical:</p>
<h3>Content alteration and misrepresentation</h3>
<p style="text-align: justify;">Organizations using generative AI must safeguard the integrity of their integrated systems. When AI is maliciously tampered with, it can distort genuine content, leading to misinformation. This can produce biased outputs, undermining the reliability and effectiveness of AI-driven solutions. Specifically, for the Large Language Models (LLMs) that power GenAI tools, prompt injection is a notable concern. To mitigate this, organizations should:</p>
<ol style="text-align: justify;">
<li>Develop a malicious input classification system that assesses the legitimacy of a user’s input, ensuring that only genuine prompts are processed.</li>
<li>Limit the size and change the format of user inputs. By adjusting these parameters, the chances of successful prompt injection are significantly reduced.</li>
</ol>
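<p style="text-align: justify;">The two mitigations above can be sketched as a minimal input-validation layer. The pattern list and length limit below are illustrative assumptions; a production classifier would be far more sophisticated than a few regular expressions:</p>

```python
import re
from typing import Optional

# Hypothetical deny-list of phrasings seen in prompt-injection attempts;
# the patterns are assumptions chosen for this sketch.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"you are now",
]

MAX_INPUT_CHARS = 500  # assumed limit; tune per use case

def is_suspicious(user_input: str) -> bool:
    """Step 1: crude malicious-input classification."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def sanitize(user_input: str) -> str:
    """Step 2: limit the size and normalize the format of the input."""
    text = user_input.strip()[:MAX_INPUT_CHARS]
    return " ".join(text.split())  # collapse whitespace tricks

def accept_prompt(user_input: str) -> Optional[str]:
    """Return a cleaned prompt, or None if the input is rejected."""
    if is_suspicious(user_input):
        return None
    return sanitize(user_input)
```

<p style="text-align: justify;">In practice, such filtering is one layer among several; it reduces the attack surface but cannot guarantee that no injection gets through.</p>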
<h3>Deceptive and manipulative threats</h3>
<p style="text-align: justify;">Even if an organization decides to prohibit the use of generative AI, it must remain vigilant about the potential surge in phishing, scams and deepfake attacks. While one might argue that these threats have been around in the cybersecurity realm for some time, the introduction of generative AI intensifies both their frequency and sophistication.</p>
<p style="text-align: justify;">This potential is vividly illustrated through a range of compelling examples. For instance, Deutsche Telekom released an awareness <a href="https://www.youtube.com/watch?v=F4WZ_k0vUDM">video</a> that demonstrates the ability, by using GenAI, to age a young girl’s image from photos/videos available on social media.</p>
<p style="text-align: justify;">Furthermore, HeyGen is a generative AI software capable of dubbing <a href="https://www.youtube.com/watch?v=gQYm_aia5No">videos</a> into multiple languages while retaining the original voice. It’s now feasible to hear Donald Trump articulating in French or Charles de Gaulle conversing in Portuguese.</p>
<p style="text-align: justify;">These instances highlight the potential for attackers to use these tools to mimic a CEO’s voice, create convincing phishing emails, or produce realistic video deepfakes, intensifying detection and defence challenges.</p>
<p style="text-align: justify;">For more information on the use of GenAI by cybercriminals, consult the dedicated RiskInsight <a href="https://www.riskinsight-wavestone.com/en/2023/10/the-industrialization-of-ai-by-cybercriminals-should-we-really-be-worried/">article</a>.</p>
<h3>Data confidentiality and privacy concerns</h3>
<p style="text-align: justify;">If organizations choose to allow the use of generative AI, they must consider that the vast data processing capabilities of this technology can pose unintended confidentiality and privacy risks. First, while these models excel in generating content, they might leak sensitive training data or replicate copyrighted content.</p>
<p style="text-align: justify;">Furthermore, concerning data privacy rights, if we examine ChatGPT’s privacy policy, the chatbot can gather information such as account details, identification data extracted from your device or browser, and information entered in the chatbot (which can be used to train the generative AI)<a href="#_ftn4" name="_ftnref4">[4]</a>. According to article 3 (a) of OpenAI’s general terms and conditions, input and output belong to the user. However, since these data are stored and recorded by OpenAI, this poses risks related to intellectual property and potential data breaches (as previously noted in the Samsung case). Such risks can have a significant reputational and commercial impact on your organization.</p>
<p style="text-align: justify;">Precisely for these reasons, OpenAI developed the ChatGPT Business subscription, which provides enhanced control over organizational data (such as AES-256 encryption for data at rest, TLS 1.2+ for data in transit, SSO SAML authentication, and a dedicated administration console)<a href="#_ftn5" name="_ftnref5">[5]</a>. But in reality, it&#8217;s all about the trust you have in your provider and the respect of contractual commitments. Additionally, there&#8217;s the option to develop or train internal AI models using one&#8217;s own data for a more tailored solution.</p>
<h3>Model vulnerabilities and attacks</h3>
<p style="text-align: justify;">As more organizations use machine learning models, it’s crucial to understand that these models aren’t foolproof. They can face threats that affect their reliability, accuracy or confidentiality, as explained in the following section.</p>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #9727b3;"><span style="color: #732196;">Question 4: How can an AI model be attacked?</span></span></h2>
<p style="text-align: justify;">AI introduces added complexities atop existing network and infrastructure vulnerabilities. It’s crucial to note that these complexities are not specific to generative AI, but they are present in various AI models. Understanding these attack models is essential to reinforcing defences and ensuring the secure deployment of AI. There are three main attack models (non-exhaustive list):</p>
<p style="text-align: justify;">For detailed insights on vulnerabilities in Large Language Models and generative AI, refer to the <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v05.pdf">“OWASP Top 10 for LLM”</a> by the Open Web Application Security Project (OWASP).</p>
<h3>Evasion attacks</h3>
<p style="text-align: justify;">These attacks target AI by manipulating the inputs of machine learning algorithms to introduce minor disturbances that result in significant alterations to the outputs. Such manipulations can cause the AI model to classify inaccurately or overlook certain inputs. A classic example would be altering signs to deceive AI self-driving cars (causing a “stop” sign to be identified as a “priority” sign). However, evasion attacks can also apply to facial recognition. One might use subtle makeup patterns, strategically placed stickers, special glasses, or specific lighting conditions to confuse the system, leading to misidentification.</p>
<p style="text-align: justify;">Moreover, evasion attacks extend beyond visual manipulation. In voice command systems, attackers can embed malicious commands within regular audio content in such a way that they’re imperceptible to humans but recognizable by voice assistants. For instance, researchers have demonstrated adversarial audio techniques targeting speech recognition systems, like those in voice-activated smart speaker systems such as Amazon’s Alexa. In one scenario, a seemingly ordinary song or commercial could contain a concealed command instructing the voice assistant to make an unauthorized purchase or divulge personal information, all without the user’s awareness<a href="#_ftn6" name="_ftnref6">[6]</a>.</p>
<h3>Poisoning</h3>
<p style="text-align: justify;">Poisoning is a type of attack in which the attacker altered data or model to modify the ML algorithm’s behaviour in a chosen direction (e.g to sabotage its results, to insert a backdoor). It is as if the attacker conditioned the algorithm according to its motivations. Such attacks are also called causative attacks.</p>
<p style="text-align: justify;">In line with this definition, attackers use causative attacks to guide a machine learning algorithm towards their intended outcome. They introduced malicious samples into the training dataset, leading the algorithm to behave in unpredictable ways. A notorious example is Microsoft’s chatbot, TAY, that was unveiled on Twitter in 2016. Designed to emulate and converse with American teenagers, it soon began acting like a far-right activist<a href="#_ftn7" name="_ftnref7">[7]</a>. This highlights the fact that, in their early learning stages, AI systems are susceptible to the data they encounter. 4Chan users intentionally poisoned TAY’s data with their controversial humour and conversations.</p>
<p style="text-align: justify;">However, data poisoning can also be unintentional, stemming from biases inherent in the data sources or the unconscious prejudices of those curating the datasets. This became evident when early facial recognition technology had difficulties identifying darker skin tones. This underscores the need for diverse and unbiased training data to guard against both deliberate and inadvertent data distortions.</p>
<p style="text-align: justify;">Finally, the proliferation of open-source AI algorithms online, such as those on platforms like Hugging Face, presents another risk. Malicious actors could modify and poison these algorithms to favour specific biases, leading unsuspecting developers to inadvertently integrate tainted algorithms into their projects, further perpetuating biases or malicious intents.</p>
<h3>Oracle attacks</h3>
<p style="text-align: justify;">This type of attack involves probing a model with a sequence of meticulously designed inputs while analysing the outputs. Through the application of diverse optimization strategies and repeated querying, attackers can deduce confidential information, thereby jeopardizing both user privacy, overall system security, or internal operating rules.</p>
<p style="text-align: justify;">A pertinent example is the case of Microsoft’s AI-powered Bing chatbot. Shortly after its unveiling, a Stanford student, Kevin Liu, exploited the chatbot using a prompt injection attack, leading it to reveal its internal guidelines and code name “Sidney”, even though one of the fundamental internal operating rules of the system was to never reveal such information<a href="#_ftn8" name="_ftnref8">[8]</a>.</p>
<p style="text-align: justify;">A previous RiskInsight <a href="https://www.riskinsight-wavestone.com/en/2023/06/attacking-ai-a-real-life-example/">article</a> showed an example of Evasion and Oracle attacks and explained other attack models that are not specific to AI, but that are nonetheless an important risk for these technologies.</p>
<p> </p>
<h2 style="text-align: justify;"><span style="color: #732196;">Question 5: What is the status of regulations? How is generative AI regulated?</span></h2>
<p style="text-align: justify;">Since our <a href="https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/">2022 article</a>, there has been significant development in AI regulations across the globe.</p>
<h3 style="text-align: justify;">EU</h3>
<p style="text-align: justify;">The EU’s digital strategy aims to regulate AI, ensuring its innovative development and use, as well as the safety and fundamental rights of individuals and businesses regarding AI. On June 14, 2023, the European Parliament adopted and amended the proposal for a regulation on Artificial Intelligence, categorizing AI risks into four distinct levels: unacceptable, high, limited, and minimal<a href="#_ftn9" name="_ftnref9">[9]</a>.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-21824" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3.png" alt="" width="605" height="322" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3-359x191.png 359w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3-71x39.png 71w" sizes="(max-width: 605px) 100vw, 605px" /></p>
<h3 style="text-align: justify;">US</h3>
<p style="text-align: justify;">The White House Office of Science and Technology Policy, guided by diverse stakeholder insights, presented the “Blueprint for an AI Bill of Rights”<a href="#_ftn10" name="_ftnref10">[10]</a>. Although non-binding, it underscores a commitment to civil rights and democratic values in AI’s governance and deployment.</p>
<h3 style="text-align: justify;">China</h3>
<p style="text-align: justify;">China’s Cyberspace Administration, considering rising AI concerns, proposed the Administrative Measures for Generative Artificial Intelligence Services. Aimed at securing national interests and upholding user rights, these measures offer a holistic approach to AI governance. Additionally, the measures seek to mitigate potential risks associated with Generative AI services, such as the spread of misinformation, privacy violations, intellectual property infringement, and discrimination. However, its territorial reach might pose challenges for foreign AI service providers in China<a href="#_ftn11" name="_ftnref11">[11]</a>.</p>
<h3 style="text-align: justify;">UK</h3>
<p style="text-align: justify;">The United Kingdom is charting a distinct path, emphasizing a pro-innovation approach in its National AI Strategy. The Department for Science, Innovation &amp; Technology released a white paper titled “AI Regulation: A Pro-Innovation Approach”, with a focus on fostering growth through minimal regulations and increased AI investments. The UK framework doesn’t prescribe rules or risk levels to specific sectors or technologies. Instead, it focuses on regulating the outcomes AI produces in specific applications. This approach is guided by five core principles: safety &amp; security, transparency, fairness, accountability &amp; governance, and contestability &amp; redress<a href="#_ftn12" name="_ftnref12">[12]</a>.</p>
<h3 style="text-align: justify;">Frameworks</h3>
<p style="text-align: justify;">Besides formal regulations, there are several guidance documents, such as NIST’s AI Risk Management Framework and ISO/IEC 23894, that provide recommendations to manage AI-associated risks. They focus on criteria aimed at trusting the algorithms in fine, and this is not just about cybersecurity! It’s about trust.</p>
<p> </p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-21826" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4.png" alt="" width="605" height="340" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4-340x191.png 340w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4-69x39.png 69w" sizes="auto, (max-width: 605px) 100vw, 605px" /></p>
<p> </p>
<p style="text-align: justify;">With such a broad regulatory landscape, organizations might feel overwhelmed. To assist, we suggest focusing on key considerations when integrating AI into operations, in order to setup the roadmap towards being compliant.</p>
<ul style="text-align: justify;">
<li><strong>Identify all existing AI systems</strong> within the organization and establish a procedure/protocol to identify new AI endeavours.</li>
<li><strong>Evaluate AI systems</strong> using criteria derived from reference frameworks, such as NIST.</li>
<li><strong>Categorize AI systems according to the AI Act’s classification</strong> (unacceptable, high, limited, or minimal).</li>
<li><strong>Determine the tailored risk management approach</strong> for each category.</li>
</ul>
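<p style="text-align: justify;">The inventory-and-categorization steps above can be sketched as a simple data structure. The system names, owners, and use cases below are purely hypothetical examples, and the risk levels follow the AI Act classification described earlier.</p>

```python
from dataclasses import dataclass
from enum import Enum

class AIActRisk(Enum):
    """Risk levels from the EU AI Act classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    owner: str
    use_case: str
    risk: AIActRisk

# Hypothetical inventory entries, for illustration only.
inventory = [
    AISystem("support-chatbot", "Customer Care", "generative Q&A", AIActRisk.LIMITED),
    AISystem("cv-screening", "HR", "candidate ranking", AIActRisk.HIGH),
    AISystem("spam-filter", "IT", "email triage", AIActRisk.MINIMAL),
]

# Group systems by risk category to drive a tailored risk-management
# approach per category.
by_risk = {}
for system in inventory:
    by_risk.setdefault(system.risk, []).append(system.name)

print(by_risk[AIActRisk.HIGH])  # ['cv-screening']
```

<p style="text-align: justify;">Even a lightweight register like this gives the organization a single place to attach evaluation criteria and the risk-management approach each category requires.</p>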
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #732196;">Bonus Question: This being said, what can I do right now?</span></h2>
<p style="text-align: justify;">As the digital landscape evolves, Wavestone emphasizes a comprehensive approach to generative AI integration. We advocate that every AI deployment undergo a rigorous sensitivity analysis, ranging from outright prohibition to guided implementation and stringent compliance. For systems classified as high risk, it’s paramount to apply a detailed risk analysis anchored in the standards set by ENISA and NIST. While AI introduces a sophisticated layer, foundational IT hygiene should never be side lined. We recommend the following approach:</p>
<ul style="text-align: justify;">
<li><span style="color: #732196;"><strong><em>Pilot &amp; Validate:</em></strong></span> Begin by gauging the transformative potential of generative AI within your organizational context. Moreover, it’s essential to understand the tools at your disposal, navigate the array of available choices, and make informed decisions based on specific needs and use cases.</li>
<li><span style="color: #732196;"><strong><em>Strategic Insight:</em></strong> </span>Based on our client CISO survey, ascertain your ideal AI adoption intensity. Do you resonate with the 10%, 65% or 25% adoption benchmarks shared by your industry peers?</li>
<li><span style="color: #732196;"><strong><em>Risk Mitigation: </em></strong></span>Ground your strategy in a comprehensive risk assessment, proportional to your intended adoption intensity.</li>
<li><span style="color: #732196;"><strong><em>Policy Formulation:</em> </strong></span>Use your risk-benefit analysis as a foundation to craft AI policies that are both robust and agile.</li>
<li><span style="color: #732196;"><strong><em>Continuous Learning &amp; Regulatory Vigilance:</em> </strong></span>Maintain an unwavering commitment to staying updated with the evolving regulatory landscape. Both locally and globally, it’s crucial to stay informed about the latest tools, attack methods, and defensive strategies.</li>
</ul>
<p style="text-align: justify;"><a href="#_ftnref1" name="_ftn1">[1]</a>  <a href="https://www.rfi.fr/fr/technologies/20230409-des-donn%C3%A9es-sensibles-de-samsung-divulgu%C3%A9s-sur-chatgpt-par-des-employ%C3%A9s">Des données sensibles de Samsung divulgués sur ChatGPT par des employés (rfi.fr)</a></p>
<p style="text-align: justify;"><a href="#_ftnref2" name="_ftn2">[2]</a> <a href="https://www.phonandroid.com/chatgpt-100-000-comptes-pirates-se-retrouvent-en-vente-sur-le-dark-web.html">https://www.phonandroid.com/chatgpt-100-000-comptes-pirates-se-retrouvent-en-vente-sur-le-dark-web.html</a></p>
<p style="text-align: justify;"><a href="#_ftnref3" name="_ftn3">[3]</a> <a href="https://www.cio-online.com/actualites/lire-bouygues-telecom-mise-sur-l-ia-generative-pour-transformer-sa-relation-client-14869.html">Bouygues Telecom mise sur l&#8217;IA générative pour transformer sa relation client (cio-online.com)</a></p>
<p style="text-align: justify;"><a href="#_ftnref4" name="_ftn4">[4]</a> <a href="https://www.bitdefender.fr/blog/hotforsecurity/quelles-donnees-chat-gpt-collecte-a-votre-sujet-et-pourquoi-est-ce-important-pour-votre-confidentialite-numerique/">Quelles données Chat GPT collecte à votre sujet et pourquoi est-ce important pour votre vie privée en ligne ? (bitdefender.fr)</a></p>
<p style="text-align: justify;"><a href="#_ftnref5" name="_ftn5">[5]</a> <a href="https://www.lemondeinformatique.fr/actualites/lire-openai-lance-un-chatgpt-plus-securise-pour-les-entreprises-91387.html">OpenAI lance un ChatGPT plus sécurisé pour les entreprises &#8211; Le Monde Informatique</a></p>
<p style="text-align: justify;"><a href="#_ftnref6" name="_ftn6">[6]</a> <a href="https://ieeexplore.ieee.org/document/8747397">Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System | IEEE Journals &amp; Magazine | IEEE Xplore</a></p>
<p style="text-align: justify;"><a href="#_ftnref7" name="_ftn7">[7]</a> <a href="https://www.washingtonpost.com/news/the-intersect/wp/2016/03/25/not-just-tay-a-recent-history-of-the-internets-racist-bots/">Not just Tay: A recent history of the Internet’s racist bots &#8211; The Washington Post</a></p>
<p style="text-align: justify;"><a href="#_ftnref8" name="_ftn8">[8]</a> <a href="https://www.phonandroid.com/microsoft-comment-un-etudiant-a-oblige-lia-de-bing-a-reveler-ses-secrets.html">Microsoft : comment un étudiant a obligé l&#8217;IA de Bing à révéler ses secrets (phonandroid.com)</a></p>
<p style="text-align: justify;"><a href="#_ftnref9" name="_ftn9">[9]</a> <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf">Artificial intelligence act (europa.eu)</a></p>
<p style="text-align: justify;"><a href="#_ftnref10" name="_ftn10">[10]</a> <a href="https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf">https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf</a></p>
<p style="text-align: left;"><a href="#_ftnref11" name="_ftn11">[11]</a> <a href="https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/">https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/</a></p>
<p style="text-align: justify;"><a href="#_ftnref12" name="_ftn12">[12]</a> <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper">A pro-innovation approach to AI regulation &#8211; GOV.UK (www.gov.uk)</a></p>
<p style="text-align: justify;"> </p>


<p>The article <a href="https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/">AI: Discover the 5 most frequent questions asked by our clients!</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
