<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Regulations - RiskInsight</title>
	<atom:link href="https://www.riskinsight-wavestone.com/en/tag/regulations/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.riskinsight-wavestone.com/en/tag/regulations/</link>
	<description>The cybersecurity &#38; digital trust blog by Wavestone&#039;s consultants</description>
	<lastBuildDate>Wed, 09 Jul 2025 13:47:01 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://www.riskinsight-wavestone.com/wp-content/uploads/2024/02/Blogs-2024_RI-39x39.png</url>
	<title>Regulations - RiskInsight</title>
	<link>https://www.riskinsight-wavestone.com/en/tag/regulations/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Navigating Cybersecurity Compliance: Managing the Complexity of Expanding Regulatory Layers</title>
		<link>https://www.riskinsight-wavestone.com/en/2025/07/navigating-cybersecurity-compliance-managing-the-complexity-of-expanding-regulatory-layers/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2025/07/navigating-cybersecurity-compliance-managing-the-complexity-of-expanding-regulatory-layers/#respond</comments>
		
		<dc:creator><![CDATA[Perrine Viard]]></dc:creator>
		<pubDate>Wed, 09 Jul 2025 12:45:43 +0000</pubDate>
				<category><![CDATA[Digital Compliance]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[Cyber compliance]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[règlementation]]></category>
		<category><![CDATA[Regulations]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=26592</guid>

					<description><![CDATA[<p>Cybersecurity regulations have been multiplying since the 2010s, and this trend continues, driven by the intensification of threats, the rapid rise of new technologies, the growing dependence of businesses on IT, and an unstable geopolitical context. While this trend aims...</p>
<p>The article <a href="https://www.riskinsight-wavestone.com/en/2025/07/navigating-cybersecurity-compliance-managing-the-complexity-of-expanding-regulatory-layers/">Navigating Cybersecurity Compliance: Managing the Complexity of Expanding Regulatory Layers</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">Cybersecurity regulations have been multiplying since the 2010s, and this trend continues, driven by the intensification of threats, the rapid rise of new technologies, the growing dependence of businesses on IT, and an unstable geopolitical context. While this trend aims to better protect economic actors and critical infrastructures, it also creates increasing complexity for companies, particularly those with a significant international footprint, which must navigate a patchwork of often heterogeneous regulations. In this context, more than 76% of CISOs believe that the fragmentation of regulations across jurisdictions significantly affects their organizations&#8217; ability to maintain compliance<a href="#_ftn1" name="_ftnref1">[1]</a>.</p>
<p style="text-align: justify;">In this article, we review the latest cybersecurity regulatory updates and the challenges they pose, and we propose two approaches to best manage the accumulation of regulations.</p>
<p> </p>
<h2 style="text-align: justify;">Current landscape: A continuing proliferation of cybersecurity regulations</h2>
<p> </p>
<h3 style="text-align: justify;">In Europe, a strengthening of cybersecurity laws and an expansion of scope</h3>
<p> </p>
<p style="text-align: justify;">In recent years, <strong>the European Union has continued its regulatory momentum</strong> in cybersecurity and resilience, following the implementation of structuring regulations such as DORA, NIS2, CRA, and the AI Act. These regulations also concern a larger number of actors, particularly with an extension of the regulated sectors.</p>
<p style="text-align: justify;">The first is the <strong>DORA regulation</strong>. Entered into force in January 2025, it imposes obligations on financial entities to strengthen their digital resilience, focusing on four main areas: ICT risk management, incident management, operational resilience testing, and ICT service provider risk management.</p>
<p style="text-align: justify;">The <strong>NIS2 directive</strong>, which came into force in October 2024, expands the objectives and scope of NIS1. It now applies to two types of entities:</p>
<ul style="text-align: justify;">
<li><strong>Essential Entities (EE) &#8211; </strong>previously known as Operators of Essential Services (OES) in NIS1. However, the list of applicable sectors has significantly expanded.</li>
<li><strong>Important Entities (IE) &#8211;</strong> this new category aims to support the development of digital uses in society. It includes, for example, the manufacturing sector of IT equipment. IEs are considered less critical than EEs, so the obligations imposed on them at the national level will be less stringent.</li>
</ul>
<p style="text-align: justify;">Meanwhile, the EU also adopted the <strong>Directive on the Resilience of Critical Entities (REC)</strong>, also effective from October 2024. It requires critical infrastructure operators to implement measures to prevent, protect against, and manage risks, ensuring continuity of vital services essential to the Union’s economic and social stability.</p>
<p style="text-align: justify;">The <strong>NIS2 and REC directives</strong> had to be transposed into national laws by <strong>17 October 2024</strong>. As of now, only a few Member States have completed this process. In France, following a first vote in the Senate on 12 March 2025, the bill is now before the National Assembly, with a public session scheduled for mid-September.</p>
<p style="text-align: justify;">To further address cybersecurity risks linked to digital products, the EU adopted the <strong>Cyber Resilience Act</strong>, effective since 10 December 2024. This regulation applies to both standard digital products (e.g. consumer devices, smart cities) and critical digital products (e.g. firewalls, industrial control systems). It requires these to be free of known vulnerabilities, properly documented, and subject to structured vulnerability management.</p>
<p style="text-align: justify;">Outside the EU, the <strong>United Kingdom</strong> has also strengthened its regulatory framework. Faced with rising cyberattacks on critical sectors like the NHS and Ministry of Defence and recognizing a lag in legislative adaptation, the UK government presented the <strong>Cyber Security and Resilience Bill</strong> in April 2025. The bill draws inspiration from NIS2 and aims to boost national resilience against growing cyber threats.</p>
<p> </p>
<h3 style="text-align: justify;">A similar dynamic in Asia</h3>
<p> </p>
<p style="text-align: justify;">Cybersecurity regulations have also been strengthened in Asia in recent years, particularly in China and Hong Kong.</p>
<p style="text-align: justify;"><strong>In China</strong>, the <strong>Network Data Security Management Regulations</strong> came into effect on January 1<sup>st</sup>, 2025. It complements, clarifies, and extends the obligations arising from previous regulations (CSL, DSL, PIPL). It covers all <strong>electronic data processed via networks, including non-personal data</strong>, and is structured around three main axes:</p>
<ul style="text-align: justify;">
<li>The protection of personal data, with a focus on explicit consent, transferability, and transparency;</li>
<li>The management of important data, requiring their identification, documentation, and security;</li>
<li>The accountability of large digital platforms, subject to enhanced obligations in terms of governance, transparency, and algorithmic ethics.</li>
</ul>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;"><strong>In Hong Kong</strong>, a new measure aimed at strengthening the security of critical infrastructure was adopted on March 19<sup>th</sup>, 2025, and is set to come into effect on January 1<sup>st</sup>, 2026. The main requirements of the Computer Systems Bill are centered around four themes: an enhanced <strong>organizational structure</strong> (local presence, cybersecurity unit, change reporting), <strong>threat prevention</strong> (security plan, annual assessment, audit), <strong>incident management</strong> (rapid notification, response plan, written report), and <strong>reporting obligations</strong> to the authorities.</p>
<p> </p>
<h3 style="text-align: justify;">Divergent approaches between the European Union and the United States, complicating compliance management </h3>
<p> </p>
<h5 style="text-align: justify;">A. Weakening of the PCLOB: What future for data transfers between the EU and the United States? </h5>
<p> </p>
<p style="text-align: justify;">The agreements for the transfer of personal data between the EU and the United States have experienced several disruptions, marked by the Schrems I and Schrems II rulings, which successively invalidated the transatlantic agreements for failing to meet the requirements set by the CJEU. Then, in 2023, the European Commission adopted the Data Privacy Framework (DPF), intended to re-establish a compliant legal framework, relying notably on the PCLOB, an independent body responsible for overseeing U.S. intelligence practices.</p>
<p style="text-align: justify;">However, on January 27<sup>th</sup>, 2025, the Trump administration revoked several members of the PCLOB, rendering the body inoperative. This decision undermines the validity of the DPF, pushing companies to revert to Transfer Impact Assessments (TIA), which are complex, costly, and legally uncertain.</p>
<p> </p>
<p><img fetchpriority="high" decoding="async" class="aligncenter wp-image-26603 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2025/07/Capture-decran-2025-07-09-154612.png" alt="" width="1165" height="619" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2025/07/Capture-decran-2025-07-09-154612.png 1165w, https://www.riskinsight-wavestone.com/wp-content/uploads/2025/07/Capture-decran-2025-07-09-154612-359x191.png 359w, https://www.riskinsight-wavestone.com/wp-content/uploads/2025/07/Capture-decran-2025-07-09-154612-71x39.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2025/07/Capture-decran-2025-07-09-154612-768x408.png 768w" sizes="(max-width: 1165px) 100vw, 1165px" /></p>
<p> </p>
<p style="text-align: center;"><em>Historical Overview of EU-US Relations in Personal Data Transfers</em></p>
<p> </p>
<p style="text-align: justify;">An invalidation of the DPF would once again raise questions about the legal framework for personal data transfers between the EU and the United States. In this context of legal instability, a sustainable solution might emerge from technology rather than law. One such example could be homomorphic encryption, which, although not yet fully mature, represents a promising avenue for ensuring data security, provided that sovereign European solutions are developed.</p>
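<p style="text-align: justify;">To make the homomorphic property concrete, here is a minimal sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a third party can compute on data it cannot read. The primes below are deliberately tiny and the code is illustrative only; any real deployment would use large keys and a vetted library.</p>

```python
import math
import random

# Toy Paillier keypair (demo-sized primes, far too small for real use)
p, q = 1789, 1907
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
g = n + 1                      # standard simple choice of generator
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Additive homomorphism: the product of ciphertexts decrypts to the sum
c1, c2 = encrypt(41), encrypt(1)
assert decrypt((c1 * c2) % n2) == 41 + 1
```

<p style="text-align: justify;">In a transatlantic transfer scenario, data could in principle be processed abroad in this encrypted form while the decryption keys remain under European control.</p>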
<p> </p>
<h5 style="text-align: justify;">B. Divergent Approaches to Regulating Artificial Intelligence</h5>
<p> </p>
<p style="text-align: justify;">In recent years, artificial intelligence has experienced rapid growth, bringing with it new cybersecurity risks and threats. To address these challenges, the European Union and the United States have adopted opposing regulatory approaches.</p>
<p style="text-align: justify;">The European Union has chosen to implement regulations to govern the development of artificial intelligence. <strong>The AI Act</strong> was adopted in May 2024, imposing security measures to be implemented according to the risk levels of the systems.</p>
<p style="text-align: justify;">The United States, on the other hand, is focusing on a strategy centered on technological competitiveness and industrial sovereignty, with minimal regulation. This approach was formalized with <strong>Executive Order 14179</strong> on January 23<sup>rd</sup>, 2025, titled &#8220;<strong>Removing Barriers to American Leadership in Artificial Intelligence</strong>&#8221;. This order mandates the development of an action plan to strengthen the United States&#8217; dominant position in artificial intelligence. It also repeals measures deemed restrictive to innovation and aims to eliminate any ideological bias or social agenda in the development of AI systems.</p>
<p> </p>
<h2 style="text-align: justify;">In this context of strengthening regulations, what approach should be adopted to manage the accumulation of regulations?</h2>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;">The dynamic of strengthening international regulations contributes to a layering of multiple regulations, complicating compliance management, especially for companies with a significant international footprint. Faced with this complexity, two main approaches can be considered, depending on the context, organization, and international footprint of the companies.</p>
<p> </p>
<h3 style="text-align: justify;">Centralized Approach </h3>
<p> </p>
<p style="text-align: justify;">The first approach is <strong>based on the development of a global framework of measures</strong>. This framework can be based on recognized international standards such as ISO/IEC 27001 or NIST CSF 2.0, or on a regulation deemed key and particularly comprehensive. All applicable regulations are then <strong>mapped to this framework</strong>, ensuring a cross-cutting coverage of obligations through a <strong>single standard</strong>.</p>
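<p style="text-align: justify;">As a rough illustration, such a mapping can be kept as a simple machine-readable structure. The control and requirement identifiers below are hypothetical placeholders (not the actual numbering of the cited texts): each framework control lists the obligations it covers, and inverting the mapping shows which controls evidence a given requirement.</p>

```python
# Minimal sketch of a regulation-to-framework mapping (hypothetical IDs).
# Each applicable regulation's requirements are mapped onto one internal
# framework, so a single control can evidence several obligations at once.
FRAMEWORK_MAPPING = {
    "ISO27001-A.5.24": ["DORA-Art17", "NIS2-Art23"],     # incident management
    "ISO27001-A.5.19": ["DORA-Art28"],                   # supplier relationships
    "ISO27001-A.8.8":  ["NIS2-Art21", "CRA-AnnexI-2"],   # vulnerability handling
}

def coverage_by_regulation(mapping: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert the mapping: for each regulatory requirement, list the controls covering it."""
    inverted: dict[str, list[str]] = {}
    for control, requirements in mapping.items():
        for req in requirements:
            inverted.setdefault(req, []).append(control)
    return inverted

print(coverage_by_regulation(FRAMEWORK_MAPPING)["NIS2-Art21"])
```

<p style="text-align: justify;">Keeping the mapping in one place also makes it straightforward to spot requirements that no control covers yet when a new regulation is added.</p>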
<p style="text-align: justify;">Responsibility for implementing compliance measures lies with central or local teams, depending on the nature of the measures, always with strong control at the central level.</p>
<p style="text-align: justify;">This approach is particularly suitable for companies with a <strong>centralized organization and information system</strong>, and with a <strong>limited international footprint</strong>.</p>
<p> </p>
<h3 style="text-align: justify;">Decentralized Approach </h3>
<p> </p>
<p style="text-align: justify;">The second approach favors a <strong>decentralized organization</strong> of compliance, relying on local teams. In this framework, a <strong>global regulatory framework</strong> is defined at the central level, which constitutes a <strong>minimum compliance base for all regions</strong>. It generally covers <strong>85 to 90%</strong> of the requirements of all regulations that can be found at the local level.</p>
<p style="text-align: justify;">However, in this approach, the aim is not to complete the global framework based on the analysis of all local regulations. The <strong>responsibility for adjusting to local or regional</strong> requirements lies with local CISOs, who ensure compliance with local measures, particularly the 10 to 15% of measures not covered in the global framework. This organization <strong>allows for differentiated implementation according to regions</strong>, while maintaining a central normative framework.</p>
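<p style="text-align: justify;">The division of responsibility can be sketched as a simple set difference, again with hypothetical requirement identifiers: whatever the central base covers is handled globally, and the remainder is the 10 to 15% owned by the local CISO.</p>

```python
# Sketch of the global-base / local-delta split (hypothetical requirement IDs).
# The central framework covers most requirements; each local CISO owns the rest.
GLOBAL_BASE = {"IR-01", "IR-02", "TPRM-01", "VULN-01", "BCP-01", "LOG-01"}

def local_delta(local_requirements: set[str]) -> set[str]:
    """Requirements a local CISO must address beyond the central framework."""
    return local_requirements - GLOBAL_BASE

# e.g. a jurisdiction adding data-residency and local-reporting rules
apac = {"IR-01", "TPRM-01", "LOG-01", "LOCAL-DATA-RES", "LOCAL-REPORT-72H"}
print(sorted(local_delta(apac)))   # the share handled locally
```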
<p style="text-align: justify;">This model is particularly suited to decentralized structures, characterized by strong local autonomy and an extensive international footprint. It offers greater agility in the face of regulatory changes, relying on a fine understanding of national contexts, while reducing the central management burden.</p>
<p> </p>
<h3 style="text-align: justify;">Practical Case of Supporting a Client with a Strong International Presence </h3>
<p> </p>
<p style="text-align: justify;">A recently implemented cybersecurity program within an international group illustrates a decentralized approach with strong group control.</p>
<p style="text-align: justify;">The <strong>compliance framework, defined by the headquarters, is based on security objectives founded on threat scenarios</strong> and relies on a common foundation integrating the main applicable regulations. This <strong>foundation</strong> <strong>is structured from a multi-framework matrix</strong> (DORA, NIS2, ISO 27001). <strong>Local entities ensure the operational deployment</strong> of the measures defined at the group level, as well as their internal control, under the coordination of a local CISO responsible for consolidating information and ensuring its reporting. The system also provides for <strong>local adjustment capabilities</strong>, allowing feedback on the central strategy, particularly to avoid potential contradictions with local regulations.</p>
<p style="text-align: justify;">The <strong>group CISO plays a transversal supervisory role</strong>. They verify that the requirements defined at the central level are well taken into account by the local CISOs, even though the latter are responsible for their implementation. They also ensure that the deployed systems are aligned with both group requirements and local regulations. Their role is not to challenge local choices but to <strong>verify their coherence with the global framework</strong>.</p>
<p style="text-align: justify;">In <strong>terms of control governance</strong>, each regulatory requirement, whether local or group-originated, is associated with a specific control. Clear governance between the group and local levels is therefore essential to manage a coherent control catalog, avoid redundancies, and ensure good articulation in the compliance system.</p>
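<p style="text-align: justify;">One concrete check in such a governance process is flagging redundant controls. The sketch below (with hypothetical control and requirement identifiers) marks as merge candidates any two controls in the consolidated catalog that cover exactly the same set of requirements.</p>

```python
# Sketch of a redundancy check on a group/local control catalog
# (hypothetical control and requirement IDs). Controls covering the exact
# same requirements are merge candidates for a coherent catalog.
CATALOG = {
    "CTL-GRP-007": {"NIS2-Art21", "DORA-Art9"},
    "CTL-FR-012":  {"NIS2-Art21", "DORA-Art9"},   # local duplicate of CTL-GRP-007
    "CTL-GRP-019": {"DORA-Art17"},
}

def merge_candidates(catalog: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Pairs of controls whose requirement coverage is identical."""
    controls = sorted(catalog)
    return [
        (a, b)
        for i, a in enumerate(controls)
        for b in controls[i + 1:]
        if catalog[a] == catalog[b]
    ]

print(merge_candidates(CATALOG))
```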
<p style="text-align: justify;">This model ensures a <strong>homogeneous security foundation while preserving the flexibility needed to adapt to local regulations.</strong> However, it also has certain limitations. Its centralized structure, while ensuring overall coherence, introduces<strong> some complexity in daily management</strong>, particularly when it comes to evolving the system or quickly integrating new regulatory requirements.</p>
<p> </p>
<h3 style="text-align: justify;">Possibility of Decoupling Information Systems </h3>
<p> </p>
<p style="text-align: justify;">Beyond these approaches, some companies choose to decouple their information systems. This decision <strong>is made in a context where geopolitical tensions increasingly influence cybersecurity strategies</strong>. In this context, the growing importance of sovereignty and protectionism in cybersecurity regulations creates contradictions between regulations, making it difficult, if not impossible, to ensure the compliance of a single information system with regulations from different geographic areas.</p>
<p style="text-align: justify;">Decoupling addresses these issues <strong>by providing dedicated infrastructures, applications, and teams for different geographic areas</strong>, typically the US, EU, and Asia, with<strong> strict filtering between zones</strong>.</p>
<p> </p>
<h2 style="text-align: justify;">Towards a Phase of Consolidation and Rationalization? </h2>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;">In this context, we seem to be heading towards a phase of <strong>regulatory consolidation</strong>, with the implementation of recently adopted texts and a slowdown in the publication of new regulations. However, further developments could still occur to account for the emergence of new technologies, particularly quantum computing.</p>
<p style="text-align: justify;">Moreover, in the face of increasing regulatory complexity in the EU, the European Commission seems to be initiating a new phase of <strong>rationalization</strong>, aiming to lighten certain obligations deemed unsuitable. This desire for rationalization is notably reflected in a targeted project to ease GDPR requirements for SMEs.</p>
<p style="text-align: justify;">Another avenue for <strong>simplification</strong> involves the establishment of mutual <strong>recognition mechanisms</strong> between regulations in different countries. Regulatory compliance for companies could then be simplified, provided that states explicitly integrate this logic into their national regulations. France, for example, is considering integrating this mechanism into the bill on the resilience of critical infrastructures and the strengthening of cybersecurity. However, mutual recognition could lead to a risk of regulatory dumping: some companies might choose the least stringent frameworks to reduce the cost and complexity of compliance, to the detriment of security.</p>
<p style="text-align: justify;">This principle is not entirely new: the GDPR already recognizes third countries as having an &#8220;adequate&#8221; level of protection (e.g., Japan, Canada, Argentina), thus facilitating data transfers with these countries.</p>
<p style="text-align: justify;"><a href="#_ftnref1" name="_ftn1">[1]</a> https://www.weforum.org/publications/global-cybersecurity-outlook-2025/</p>
<p>The article <a href="https://www.riskinsight-wavestone.com/en/2025/07/navigating-cybersecurity-compliance-managing-the-complexity-of-expanding-regulatory-layers/">Navigating Cybersecurity Compliance: Managing the Complexity of Expanding Regulatory Layers</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2025/07/navigating-cybersecurity-compliance-managing-the-complexity-of-expanding-regulatory-layers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI: Discover the 5 most frequent questions asked by our clients!</title>
		<link>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/#respond</comments>
		
		<dc:creator><![CDATA[Florian Pouchet]]></dc:creator>
		<pubDate>Wed, 08 Nov 2023 11:00:00 +0000</pubDate>
				<category><![CDATA[Cyberrisk Management & Strategy]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[attacks]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[Regulations]]></category>
		<category><![CDATA[risks]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=21818</guid>

					<description><![CDATA[<p>The dawn of generative Artificial Intelligence (GenAI) in the corporate sphere signals a turning point in the digital narrative. It is exemplified by pioneering tools like OpenAI’s ChatGPT (which found its way into Bing as “Bing Chat”, leveraging the GPT-4...</p>
<p>The article <a href="https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/">AI: Discover the 5 most frequent questions asked by our clients!</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">The dawn of generative Artificial Intelligence (GenAI) in the corporate sphere signals a turning point in the digital narrative. It is exemplified by pioneering tools like OpenAI’s ChatGPT (which found its way into Bing as “Bing Chat”, leveraging the GPT-4 language model) and Microsoft 365’s Copilot. These technologies have graduated from being mere experimental subjects or media fodder. Today, they lie at the heart of businesses, redefining workflows and outlining the future trajectory of entire industries.</p>
<p style="text-align: justify;">While there have been significant advancements, there are also challenges. For instance, Samsung’s sensitive data was exposed on ChatGPT by employees (the entire source code of a database download program)<a href="#_ftn1" name="_ftnref1">[1]</a>. Compounding these challenges, ChatGPT (OpenAI) itself underwent a security breach that affected over 100,000 users between June 2022 and May 2023, with those compromised credentials now being traded on the dark web<a href="#_ftn2" name="_ftnref2">[2]</a>.</p>
<p style="text-align: justify;">At this digital crossroads, it’s no wonder that there’s both enthusiasm and caution about embracing the potential of generative AI. Given these complexities, it’s understandable why many grapple with determining the optimal approach to AI. With that in mind, this article aims to address the most representative questions asked by our clients.</p>
<h2 style="text-align: justify;"><span style="color: #732196;">Question 1: Is Generative AI just a buzz?</span></h2>
<p style="text-align: justify;">AI is a collection of theories and techniques implemented with the aim of creating machines capable of simulating the cognitive functions of human intelligence (vision, writing, moving&#8230;). A particularly captivating subfield of AI is “Generative AI”. This can be defined as a discipline that employs advanced algorithms, including artificial neural networks, to <strong>autonomously craft content</strong>, whether it’s text, images, or music. Far beyond the basic banking chatbot answering your questions, GenAI not only mimics human capabilities in a remarkable way but, in some cases, enhances them.</p>
<p style="text-align: justify;">Our observation on the market: the reach of generative AI is broad and profound. It contributes to diverse areas such as content creation, data analysis, decision-making, customer support and even cybersecurity (for example, by identifying abnormal data patterns to counter threats). We’ve observed three fields where GenAI is particularly useful.</p>
<p> </p>
<p style="text-align: justify;"><img decoding="async" class="aligncenter size-full wp-image-21820" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1.png" alt="" width="605" height="341" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1-339x191.png 339w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture1-69x39.png 69w" sizes="(max-width: 605px) 100vw, 605px" /></p>
<p> </p>
<h3>Marketing and customer experience personalisation</h3>
<p style="text-align: justify;">GenAI offers insights into customer behaviours and preferences. By analysing data patterns, it allows businesses to craft tailored messages and visuals, enhancing engagement, and ensuring personalized interactions.</p>
<h3>No-code solutions and enhanced customer support</h3>
<p style="text-align: justify;">In today’s rapidly changing digital world, the ideas of no-code solutions and improved customer service are increasingly at the forefront. Bouygues Telecom is a good example of a company leveraging advanced tools: it actively analyses voice interactions from recorded conversations between advisors and customers, aiming to improve customer relationships<a href="#_ftn3" name="_ftnref3">[3]</a>. On a similar note, Tesla employs the AI tool “<a href="https://www.youtube.com/watch?v=1mP5e5-dujg">Air AI</a>” for seamless customer interaction, handling sales calls with potential customers, even going so far as to schedule test drives.</p>
<p style="text-align: justify;">As for coding, an interesting experiment by one of our clients stands out: in a test involving 50 developers, 25% of the AI-generated code suggestions were accepted, leading to a significant 10% boost in productivity. It is still early to conclude on the actual efficiency of GenAI for coding, but the first results are promising and should improve further. However, the intricate issue of intellectual property rights concerning AI-generated code continues to be a topic of discussion.</p>
<h3>Document monitoring and research tool</h3>
<p style="text-align: justify;">Using AI as a research tool can help save hours in domains where regulatory and documentary corpus are very extensive (e.g.: financial sector). At Wavestone, we internally developed two AI tools. The first, CISO GPT, allows users to ask specific security questions in their native language. Once a question is asked, the tool scans through extensive security documentation, efficiently extracting and presenting relevant information. The second one, a Library and credential GPT, provides specific CVs from Wavestone employees, as well as references from previous engagements for the writing of commercial proposals.</p>
<p style="text-align: justify;">However, while tools like ChatGPT (which draws data from public databases) are undeniably beneficial, the game-changing potential emerges when companies tap into their proprietary data. For this, companies need to implement GenAI capabilities internally or set up systems that ensure the protection of their data (cloud-based solutions like Azure OpenAI, or proprietary models). <strong>From our standpoint, GenAI is worth more than just the buzz around it and is here to stay. </strong>There are real business applications and true added value, but also security risks. Your company needs to start building this capability now to be able to implement GenAI projects in a secure way.</p>
<p> </p>
<h2 style="text-align: justify;"><span style="color: #9727b3;"><span style="color: #732196;">Question 2: What is the market reaction to the use of ChatGPT?</span></span></h2>
<p style="text-align: justify;">To delve deeper into the perspective of those at the forefront of cybersecurity, we asked our clients’ CISOs for their opinions on the implications and opportunities of GenAI. The following graph illustrates their responses.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-21822" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2.png" alt="" width="601" height="279" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2.png 601w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2-411x191.png 411w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture2-71x33.png 71w" sizes="(max-width: 601px) 100vw, 601px" /></p>
<p style="text-align: justify;">Based on our survey, the feedback from the CISOs can be grouped into three distinct categories:</p>
<h3>The Pragmatists (65%)</h3>
<p style="text-align: justify;">Most of our respondents recognize the potential data leakage risks with ChatGPT, but they equate them to the risks already encountered during exchanges on platforms such as Stack Overflow (for developers). They believe that the risk of data leaks hasn’t significantly changed with ChatGPT. However, the current buzz justifies dedicated awareness campaigns emphasizing the importance of not submitting company-specific or sensitive data.</p>
<h3>The Visionaries (25%)</h3>
<p style="text-align: justify;">A quarter of the respondents view ChatGPT as a ground-breaking tool. They’ve noticed its adoption in departments such as communication and legal. They’ve taken proactive steps to understand its use (which data, which use cases) and have subsequently established a set of guidelines, taking a more collaborative approach to defining a use-case framework.</p>
<h3>The Sceptics (10%)</h3>
<p style="text-align: justify;">A segment of the market has reservations about ChatGPT. To them, it’s a tool that’s too easy to misuse, receives excessive media attention, and carries inherent risks. Depending on the business sector and activity, this stance can be justified when the risk of data leakage and loss of intellectual property outweighs the potential benefits.</p>
<p> </p>
<h2><span style="color: #9727b3;"><span style="color: #732196;">Question 3: What are the risks of Generative AI?</span></span></h2>
<p style="text-align: justify;">In evaluating the diverse perspectives on generative AI within organizations, we’ve classified the concerns into four distinct categories of risks, presented from the least severe to the most critical:</p>
<h3>Content alteration and misrepresentation</h3>
<p style="text-align: justify;">Organizations using generative AI must safeguard the integrity of their integrated systems. When AI is maliciously tampered with, it can distort genuine content, leading to misinformation. This can produce biased outputs, undermining the reliability and effectiveness of AI-driven solutions. Specifically, for Large Language Models (LLMs) like GenAI, there’s a notable concern of prompt injections. To mitigate this, organizations should:</p>
<ol style="text-align: justify;">
<li>Develop a malicious input classification system that assesses the legitimacy of a user’s input, ensuring that only genuine prompts are processed.</li>
<li>Limit the size and change the format of user inputs. By adjusting these parameters, the chances of successful prompt injection are significantly reduced.</li>
</ol>
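<p style="text-align: justify;">As a minimal sketch of these two mitigations (the deny-list patterns, size limit, and delimiter format below are illustrative assumptions, not a production-grade filter):</p>

```python
import re

# Illustrative deny-list of phrases often seen in prompt-injection attempts
# (hypothetical patterns, to be tuned to the actual threat model).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal your (system )?prompt",
]

MAX_INPUT_CHARS = 500  # example size limit

def screen_user_input(text: str) -> str:
    """Reject oversized or suspicious inputs, then reformat the rest."""
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds allowed size")
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("input flagged as potential prompt injection")
    # Changing the format: wrap user text in delimiters so downstream
    # instructions can treat it strictly as data, not as instructions.
    return f"<user_input>{text}</user_input>"
```

<p style="text-align: justify;">A real deployment would pair such filtering with a trained malicious-input classifier rather than fixed patterns.</p>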
<h3>Deceptive and manipulative threats</h3>
<p style="text-align: justify;">Even if an organization decides to prohibit the use of generative AI, it must remain vigilant about the potential surge in phishing, scams and deepfake attacks. While one might argue that these threats have been around in the cybersecurity realm for some time, the introduction of generative AI intensifies both their frequency and sophistication.</p>
<p style="text-align: justify;">This potential is vividly illustrated through a range of compelling examples. For instance, Deutsche Telekom released an awareness <a href="https://www.youtube.com/watch?v=F4WZ_k0vUDM">video</a> that demonstrates the ability, by using GenAI, to age a young girl’s image from photos/videos available on social media.</p>
<p style="text-align: justify;">Furthermore, HeyGen is a generative AI software capable of dubbing <a href="https://www.youtube.com/watch?v=gQYm_aia5No">videos</a> into multiple languages while retaining the original voice. It’s now feasible to hear Donald Trump articulating in French or Charles de Gaulle conversing in Portuguese.</p>
<p style="text-align: justify;">These instances highlight the potential for attackers to use these tools to mimic a CEO’s voice, create convincing phishing emails, or produce realistic video deepfakes, intensifying detection and defence challenges.</p>
<p style="text-align: justify;">For more information on the use of GenAI by cybercriminals, consult the dedicated RiskInsight <a href="https://www.riskinsight-wavestone.com/en/2023/10/the-industrialization-of-ai-by-cybercriminals-should-we-really-be-worried/">article</a>.</p>
<h3>Data confidentiality and privacy concerns</h3>
<p style="text-align: justify;">If organizations choose to allow the use of generative AI, they must consider that the vast data processing capabilities of this technology can pose unintended confidentiality and privacy risks. First, while these models excel in generating content, they might leak sensitive training data or replicate copyrighted content.</p>
<p style="text-align: justify;">Furthermore, concerning data privacy rights, if we examine ChatGPT’s privacy policy, the chatbot can gather information such as account details, identification data extracted from your device or browser, and information entered in the chatbot (that can be used to train the generative AI)<a href="#_ftn4" name="_ftnref4">[4]</a>. According to article 3 (a) of OpenAI’s general terms and conditions, input and output belong to the user. However, since these data are stored and recorded by Open AI, it poses risks related to intellectual property and potential data breaches (as previously noted in the Samsung case). Such risks can have significant reputational and commercial impact on your organization.</p>
<p style="text-align: justify;">Precisely for these reasons, OpenAI developed the ChatGPT Business subscription, which provides enhanced control over organizational data (such as AES-256 encryption for data at rest, TLS 1.2+ for data in transit, SSO SAML authentication, and a dedicated administration console)<a href="#_ftn5" name="_ftnref5">[5]</a>. But in reality, it&#8217;s all about the trust you have in your provider and the respect of contractual commitments. Additionally, there&#8217;s the option to develop or train internal AI models using one&#8217;s own data for a more tailored solution.</p>
<h3>Model vulnerabilities and attacks</h3>
<p style="text-align: justify;">As more organizations use machine learning models, it’s crucial to understand that these models aren’t fool proof. They can face threats that affect their reliability, accuracy or confidentiality, as it will be explained in the following section.</p>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #9727b3;"><span style="color: #732196;">Question 4: How can an AI model be attacked?</span></span></h2>
<p style="text-align: justify;">AI introduces added complexities atop existing network and infrastructure vulnerabilities. It’s crucial to note that these complexities are not specific to generative AI, but they are present in various AI models. Understanding these attack models is essential to reinforcing defences and ensuring the secure deployment of AI. There are three main attack models (non-exhaustive list):</p>
<p style="text-align: justify;">For detailed insights on vulnerabilities in Large Language Models and generative AI, refer to the <a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v05.pdf">“OWASP Top 10 for LLM”</a> by the Open Web Application Security Project (OWASP).</p>
<h3>Evasion attacks</h3>
<p style="text-align: justify;">These attacks target AI by manipulating the inputs of machine learning algorithms to introduce minor disturbances that result in significant alterations to the outputs. Such manipulations can cause the AI model to classify inaccurately or overlook certain inputs. A classic example would be altering signs to deceive AI self-driving cars (have identify a “stop” sign into a “priority” sign). However, evasion attacks can also apply to facial recognition. One might use subtle makeup patterns, strategically placed stickers, special glasses, or specific lighting conditions to confuse the system, leading to misidentification.</p>
<p style="text-align: justify;">Moreover, evasion attacks extend beyond visual manipulation. In voice command systems, attackers can embed malicious commands within regular audio content in such a way that they’re imperceptible to humans but recognizable by voice assistants. For instance, researchers have demonstrated adversarial audio techniques targeting speech recognition systems, like those in voice-activated smart speaker systems such as Amazon’s Alexa. In one scenario, a seemingly ordinary song or commercial could contain a concealed command instructing the voice assistant to make an unauthorized purchase or divulge personal information, all without the user’s awareness<a href="#_ftn6" name="_ftnref6">[6]</a>.</p>
<h3>Poisoning</h3>
<p style="text-align: justify;">Poisoning is a type of attack in which the attacker altered data or model to modify the ML algorithm’s behaviour in a chosen direction (e.g to sabotage its results, to insert a backdoor). It is as if the attacker conditioned the algorithm according to its motivations. Such attacks are also called causative attacks.</p>
<p style="text-align: justify;">In line with this definition, attackers use causative attacks to guide a machine learning algorithm towards their intended outcome. They introduced malicious samples into the training dataset, leading the algorithm to behave in unpredictable ways. A notorious example is Microsoft’s chatbot, TAY, that was unveiled on Twitter in 2016. Designed to emulate and converse with American teenagers, it soon began acting like a far-right activist<a href="#_ftn7" name="_ftnref7">[7]</a>. This highlights the fact that, in their early learning stages, AI systems are susceptible to the data they encounter. 4Chan users intentionally poisoned TAY’s data with their controversial humour and conversations.</p>
<p style="text-align: justify;">However, data poisoning can also be unintentional, stemming from biases inherent in the data sources or the unconscious prejudices of those curating the datasets. This became evident when early facial recognition technology had difficulties identifying darker skin tones. This underscores the need for diverse and unbiased training data to guard against both deliberate and inadvertent data distortions.</p>
<p style="text-align: justify;">Finally, the proliferation of open-source AI algorithms online, such as those on platforms like Hugging Face, presents another risk. Malicious actors could modify and poison these algorithms to favour specific biases, leading unsuspecting developers to inadvertently integrate tainted algorithms into their projects, further perpetuating biases or malicious intents.</p>
<h3>Oracle attacks</h3>
<p style="text-align: justify;">This type of attack involves probing a model with a sequence of meticulously designed inputs while analysing the outputs. Through the application of diverse optimization strategies and repeated querying, attackers can deduce confidential information, thereby jeopardizing both user privacy, overall system security, or internal operating rules.</p>
<p style="text-align: justify;">A pertinent example is the case of Microsoft’s AI-powered Bing chatbot. Shortly after its unveiling, a Stanford student, Kevin Liu, exploited the chatbot using a prompt injection attack, leading it to reveal its internal guidelines and code name “Sidney”, even though one of the fundamental internal operating rules of the system was to never reveal such information<a href="#_ftn8" name="_ftnref8">[8]</a>.</p>
<p style="text-align: justify;">A previous RiskInsight <a href="https://www.riskinsight-wavestone.com/en/2023/06/attacking-ai-a-real-life-example/">article</a> showed an example of Evasion and Oracle attacks and explained other attack models that are not specific to AI, but that are nonetheless an important risk for these technologies.</p>
<p> </p>
<h2 style="text-align: justify;"><span style="color: #732196;">Question 5: What is the status of regulations? How is generative AI regulated?</span></h2>
<p style="text-align: justify;">Since our <a href="https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/">2022 article</a>, there has been significant development in AI regulations across the globe.</p>
<h3 style="text-align: justify;">EU</h3>
<p style="text-align: justify;">The EU’s digital strategy aims to regulate AI, ensuring its innovative development and use, as well as the safety and fundamental rights of individuals and businesses regarding AI. On June 14, 2023, the European Parliament adopted and amended the proposal for a regulation on Artificial Intelligence, categorizing AI risks into four distinct levels: unacceptable, high, limited, and minimal<a href="#_ftn9" name="_ftnref9">[9]</a>.</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-21824" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3.png" alt="" width="605" height="322" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3-359x191.png 359w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture3-71x39.png 71w" sizes="auto, (max-width: 605px) 100vw, 605px" /></p>
<h3 style="text-align: justify;">US</h3>
<p style="text-align: justify;">The White House Office of Science and Technology Policy, guided by diverse stakeholder insights, presented the “Blueprint for an AI Bill of Rights”<a href="#_ftn10" name="_ftnref10">[10]</a>. Although non-binding, it underscores a commitment to civil rights and democratic values in AI’s governance and deployment.</p>
<h3 style="text-align: justify;">China</h3>
<p style="text-align: justify;">China’s Cyberspace Administration, considering rising AI concerns, proposed the Administrative Measures for Generative Artificial Intelligence Services. Aimed at securing national interests and upholding user rights, these measures offer a holistic approach to AI governance. Additionally, the measures seek to mitigate potential risks associated with Generative AI services, such as the spread of misinformation, privacy violations, intellectual property infringement, and discrimination. However, its territorial reach might pose challenges for foreign AI service providers in China<a href="#_ftn11" name="_ftnref11">[11]</a>.</p>
<h3 style="text-align: justify;">UK</h3>
<p style="text-align: justify;">The United Kingdom is charting a distinct path, emphasizing a pro-innovation approach in its National AI Strategy. The Department for Science, Innovation &amp; Technology released a white paper titled “AI Regulation: A Pro-Innovation Approach”, with a focus on fostering growth through minimal regulations and increased AI investments. The UK framework doesn’t prescribe rules or risk levels to specific sectors or technologies. Instead, it focuses on regulating the outcomes AI produces in specific applications. This approach is guided by five core principles: safety &amp; security, transparency, fairness, accountability &amp; governance, and contestability &amp; redress<a href="#_ftn12" name="_ftnref12">[12]</a>.</p>
<h3 style="text-align: justify;">Frameworks</h3>
<p style="text-align: justify;">Besides formal regulations, there are several guidance documents, such as NIST’s AI Risk Management Framework and ISO/IEC 23894, that provide recommendations to manage AI-associated risks. They focus on criteria aimed at trusting the algorithms in fine, and this is not just about cybersecurity! It’s about trust.</p>
<p> </p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-21826" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4.png" alt="" width="605" height="340" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4.png 605w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4-340x191.png 340w, https://www.riskinsight-wavestone.com/wp-content/uploads/2023/11/Picture4-69x39.png 69w" sizes="auto, (max-width: 605px) 100vw, 605px" /></p>
<p> </p>
<p style="text-align: justify;">With such a broad regulatory landscape, organizations might feel overwhelmed. To assist, we suggest focusing on key considerations when integrating AI into operations, in order to setup the roadmap towards being compliant.</p>
<ul style="text-align: justify;">
<li><strong>Identify all existing AI systems</strong> within the organization and establish a procedure/protocol to identify new AI endeavours.</li>
<li><strong>Evaluate AI systems</strong> using criteria derived from reference frameworks, such as NIST.</li>
<li><strong>Categorize AI systems according to the AI Act’s classification</strong> (unacceptable, high, limited or minimal).</li>
<li><strong>Determine the tailored risk management approach</strong> for each category.</li>
</ul>
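<p style="text-align: justify;">The inventory-and-categorization steps above can be sketched as follows; the classification criteria and actions are simplified stand-ins, not the AI Act&#8217;s actual legal tests.</p>

```python
# Sketch of an AI system inventory mapped to AI Act-style risk levels.
# The criteria and recommended actions are illustrative assumptions.

RISK_ACTIONS = {
    "unacceptable": "prohibit the use case",
    "high": "apply full risk analysis and conformity controls",
    "limited": "apply transparency obligations",
    "minimal": "monitor and document",
}

def categorize(system: dict) -> str:
    """Toy decision rules standing in for the regulation's real criteria."""
    if system.get("social_scoring"):
        return "unacceptable"
    if system.get("safety_critical") or system.get("processes_biometrics"):
        return "high"
    if system.get("interacts_with_users"):
        return "limited"
    return "minimal"

inventory = [
    {"name": "HR screening model", "safety_critical": True},
    {"name": "support chatbot", "interacts_with_users": True},
    {"name": "spam filter"},
]

# One risk-management action per identified system: the compliance roadmap.
roadmap = {s["name"]: RISK_ACTIONS[categorize(s)] for s in inventory}
```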
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #732196;">Bonus Question: This being said, what can I do right now?</span></h2>
<p style="text-align: justify;">As the digital landscape evolves, Wavestone emphasizes a comprehensive approach to generative AI integration. We advocate that every AI deployment undergo a rigorous sensitivity analysis, ranging from outright prohibition to guided implementation and stringent compliance. For systems classified as high risk, it’s paramount to apply a detailed risk analysis anchored in the standards set by ENISA and NIST. While AI introduces a sophisticated layer, foundational IT hygiene should never be side lined. We recommend the following approach:</p>
<ul style="text-align: justify;">
<li><span style="color: #732196;"><strong><em>Pilot &amp; Validate:</em></strong></span> Begin by gauging the transformative potential of generative AI within your organizational context. Moreover, it’s essential to understand the tools at your disposal, navigate the array of available choices, and make informed decisions based on specific needs and use cases.</li>
<li><span style="color: #732196;"><strong><em>Strategic Insight:</em></strong> </span>Based on our client CISO survey, ascertain your ideal AI adoption intensity. Do you resonate with the 10%, 65% or 25% adoption benchmarks shared by your industry peers?</li>
<li><span style="color: #732196;"><strong><em>Risk Mitigation: </em></strong></span>Ground your strategy in a comprehensive risk assessment, proportional to your intended adoption intensity.</li>
<li><span style="color: #732196;"><strong><em>Policy Formulation:</em> </strong></span>Use your risk-benefit analysis as a foundation to craft AI policies that are both robust and agile.</li>
<li><span style="color: #732196;"><strong><em>Continuous Learning &amp; Regulatory Vigilance:</em> </strong></span>Maintain an unwavering commitment to staying updated with the evolving regulatory landscape. Both locally and globally, it’s crucial to stay informed about the latest tools, attack methods, and defensive strategies.</li>
</ul>
<p style="text-align: justify;"><a href="#_ftnref1" name="_ftn1">[1]</a>  <a href="https://www.rfi.fr/fr/technologies/20230409-des-donn%C3%A9es-sensibles-de-samsung-divulgu%C3%A9s-sur-chatgpt-par-des-employ%C3%A9s">Des données sensibles de Samsung divulgués sur ChatGPT par des employés (rfi.fr)</a></p>
<p style="text-align: justify;"><a href="#_ftnref2" name="_ftn2">[2]</a> <a href="https://www.phonandroid.com/chatgpt-100-000-comptes-pirates-se-retrouvent-en-vente-sur-le-dark-web.html">https://www.phonandroid.com/chatgpt-100-000-comptes-pirates-se-retrouvent-en-vente-sur-le-dark-web.html</a></p>
<p style="text-align: justify;"><a href="#_ftnref3" name="_ftn3">[3]</a> <a href="https://www.cio-online.com/actualites/lire-bouygues-telecom-mise-sur-l-ia-generative-pour-transformer-sa-relation-client-14869.html">Bouygues Telecom mise sur l&#8217;IA générative pour transformer sa relation client (cio-online.com)</a></p>
<p style="text-align: justify;"><a href="#_ftnref4" name="_ftn4">[4]</a> <a href="https://www.bitdefender.fr/blog/hotforsecurity/quelles-donnees-chat-gpt-collecte-a-votre-sujet-et-pourquoi-est-ce-important-pour-votre-confidentialite-numerique/">Quelles données Chat GPT collecte à votre sujet et pourquoi est-ce important pour votre vie privée en ligne ? (bitdefender.fr)</a></p>
<p style="text-align: justify;"><a href="#_ftnref5" name="_ftn5">[5]</a> <a href="https://www.lemondeinformatique.fr/actualites/lire-openai-lance-un-chatgpt-plus-securise-pour-les-entreprises-91387.html">OpenAI lance un ChatGPT plus sécurisé pour les entreprises &#8211; Le Monde Informatique</a></p>
<p style="text-align: justify;"><a href="#_ftnref6" name="_ftn6">[6]</a> <a href="https://ieeexplore.ieee.org/document/8747397">Selective Audio Adversarial Example in Evasion Attack on Speech Recognition System | IEEE Journals &amp; Magazine | IEEE Xplore</a></p>
<p style="text-align: justify;"><a href="#_ftnref7" name="_ftn7">[7]</a> <a href="https://www.washingtonpost.com/news/the-intersect/wp/2016/03/25/not-just-tay-a-recent-history-of-the-internets-racist-bots/">Not just Tay: A recent history of the Internet’s racist bots &#8211; The Washington Post</a></p>
<p style="text-align: justify;"><a href="#_ftnref8" name="_ftn8">[8]</a> <a href="https://www.phonandroid.com/microsoft-comment-un-etudiant-a-oblige-lia-de-bing-a-reveler-ses-secrets.html">Microsoft : comment un étudiant a obligé l&#8217;IA de Bing à révéler ses secrets (phonandroid.com)</a></p>
<p style="text-align: justify;"><a href="#_ftnref9" name="_ftn9">[9]</a> <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf">Artificial intelligence act (europa.eu)</a></p>
<p style="text-align: justify;"><a href="#_ftnref10" name="_ftn10">[10]</a> <a href="https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf">https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf</a></p>
<p style="text-align: left;"><a href="#_ftnref11" name="_ftn11">[11]</a> <a href="https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/">https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/</a></p>
<p style="text-align: justify;"><a href="#_ftnref12" name="_ftn12">[12]</a> <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper">A pro-innovation approach to AI regulation &#8211; GOV.UK (www.gov.uk)</a></p>
<p style="text-align: justify;"> </p>


<p>Cet article <a href="https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/">AI: Discover the 5 most frequent questions asked by our clients!</a> est apparu en premier sur <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2023/11/ai-discover-the-5-most-frequent-questions-asked-by-our-clients/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Artificial Intelligence soon to be regulated?</title>
		<link>https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/#respond</comments>
		
		<dc:creator><![CDATA[Morgane Nicolas]]></dc:creator>
		<pubDate>Wed, 22 Jun 2022 15:00:00 +0000</pubDate>
				<category><![CDATA[Cloud & Next-Gen IT Security]]></category>
		<category><![CDATA[Deep-dive]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Regulations]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=18102</guid>

					<description><![CDATA[<p>Since the beginning of its theorisation in the 1950s at the Dartmouth Conference[1] , Artificial Intelligence (AI) has undergone significant development. Today, thanks to advancements and progress in various technological fields such as cloud computing, we find it in various...</p>
<p>Cet article <a href="https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/">Artificial Intelligence soon to be regulated?</a> est apparu en premier sur <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">Since the beginning of its theorisation in the 1950s at the Dartmouth Conference<a href="#_ftn1" name="_ftnref1">[1]</a> , Artificial Intelligence (AI) has undergone significant development. Today, thanks to advancements and progress in various technological fields such as cloud computing, we find it in various everyday uses. AI can compose music, recognise voices, anticipates our needs, drive cars, monitor our health, etc.</p>
<p style="text-align: justify;">Naturally, the development of AI gives rise to many fears. For example, that AI will make innacurate computations leading to accidents and other incidents (autonomous car accidents for example), or that it will lead to a violation of the personal data and could potentially manipulate that data (fear largely fuelled by the scandals surrounding major market players<a href="#_ftn2" name="_ftnref2">[2]</a> ).</p>
<p style="text-align: justify;">In the absence of clear regulations in the field of AI, Wavestone wanted to study, for the purpose of anticipating future needs, who are the actors at the forefront of publishing and developing texts on the framework of AI, what are these texts, the ideas developed in them and what impacts on the security of AI systems can be anticipated.</p>
<h1>AI regulation: the global picture</h1>
<h2>AI legislation</h2>
<p>In the body of texts relating to AI regulation, there are no legislative texts to date <a href="#_ftn3" name="_ftnref1">[3]</a><a href="#_ftn4" name="_ftnref2">[4]</a>. Nevertheless, some texts formalize a set of broad guidelines for developing a normative framework for AI. These include, for example, guidelines/recommendations, strategic plans, and white papers.</p>
<p>They emerge mainly from the United States, Europe, Asia, or major international entities:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18104 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image1b.png" alt="" width="848" height="509" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image1b.png 848w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image1b-318x191.png 318w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image1b-65x39.png 65w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image1b-768x461.png 768w" sizes="auto, (max-width: 848px) 100vw, 848px" /></p>
<p style="text-align: center;"><em>Figure 1 Global overview of AI texts<a href="#_ftn5" name="_ftnref2">[5]</a></em></p>
<p>And their pace has not slowed down in recent years. Since 2019, more and more texts on AI regulation have been produced:</p>
<p> </p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18306 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/2new.png" alt="" width="1005" height="538" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/2new.png 1005w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/2new-357x191.png 357w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/2new-71x39.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/2new-768x411.png 768w" sizes="auto, (max-width: 1005px) 100vw, 1005px" /></p>
<p style="text-align: center;"><em>Figure 2 Chronology of the main texts</em></p>
<h2>Two types of actors carry these texts, with varying perspectives on cybersecurity</h2>
<p style="text-align: justify;">The texts are generally carried by two types of actors:</p>
<ul style="text-align: justify;">
<li>Decision makers. That is, bodies whose objective is to formalise the regulations and requirements that AI systems will have to meet.</li>
<li>Influencers. That is, bodies/organisations that have some authority in the field of AI.</li>
</ul>
<p style="text-align: justify;">At the EU level, decision-makers such as the European Commission or influencers such as ENISA are of key importance in the development of regulations or best practices in the field of AI development.</p>
<p> </p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18308 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/3new.png" alt="" width="918" height="512" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/3new.png 918w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/3new-342x191.png 342w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/3new-71x39.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/3new-768x428.png 768w" sizes="auto, (max-width: 918px) 100vw, 918px" /></p>
<p style="text-align: center;"><em>Figure 3 Key players in Europe</em></p>
<p style="text-align: justify;">In general, the texts address a few different issues. For example, they provide strategies which can be adopted or guidelines on AI ethics. They are addressed to both governments and companies and occasionally target specific sectors such as the banking sector.</p>
<p style="text-align: justify;">From a cyber security point of view, the texts are heterogeneous. The following graph represents the cyber appetence of the texts:  </p>
<p> </p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18310 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/4new.png" alt="" width="971" height="460" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/4new.png 971w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/4new-403x191.png 403w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/4new-71x34.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/4new-768x364.png 768w" sizes="auto, (max-width: 971px) 100vw, 971px" /></p>
<p style="text-align: center;"><em>Figure 4 Text corpus between 2018 and 2021</em></p>
<h1>What the texts say about Cybersecurity</h1>
<p>As shown in Figure 4, a significant number of texts propose requirements related to cybersecurity. This is partly because AI has functional specificities that need to be addressed by cyber requirements. To go into the technical details of the texts, let us reduce AI to one of its most common uses today: Machine Learning (details of how Machine Learning works are provided in <em>Annex I : Machine Learning</em>).</p>
<p>Numerous cyber requirements exist to protect the assets supporting applications that use Machine Learning (ML) throughout the project lifecycle. On a macroscopic scale, these requirements can be categorised into the classic cybersecurity pillars<a href="#_ftn6" name="_ftnref1"><sup>[6]</sup></a> extracted from the NIST Framework<a href="#_ftn7" name="_ftnref2">[7]</a>:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18112 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image5b.png" alt="" width="1431" height="641" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image5b.png 1431w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image5b-426x191.png 426w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image5b-71x32.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image5b-768x344.png 768w" sizes="auto, (max-width: 1431px) 100vw, 1431px" /></p>
<p><a href="#_ftnref6" name="_ftn1"></a></p>
<p style="text-align: center;"><em>Figure 5 Cybersecurity pillars</em></p>
<p>The following diagram shows different texts with their cyber components:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18114 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image6b.png" alt="" width="932" height="474" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image6b.png 932w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image6b-376x191.png 376w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image6b-71x36.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image6b-768x391.png 768w" sizes="auto, (max-width: 932px) 100vw, 932px" /></p>
<p style="text-align: center;"><em>Figure 6 Cyber specificities of some important texts</em></p>
<p style="text-align: justify;">In general, if we cross-reference the results of the Figure 6 with those of the study of all the texts, it appears that three requirements are particularly addressed:</p>
<ul style="text-align: justify;">
<li>Analyse the risks to ML systems, considering their specificities, to identify both &#8220;classical&#8221; and ML-specific security measures. To do this, the following steps should generally be followed:
<ul>
<li>Understand what attackers stand to gain from targeting the ML system.</li>
<li>Identify the sensitivity of the data handled throughout the life cycle of the ML system (e.g. personal, medical, military).</li>
<li>Frame the legal and intellectual-property requirements (for example, who owns the model and the data it manipulates in the case of cloud hosting).</li>
<li>Understand where the supporting assets of applications using Machine Learning are hosted throughout the system&#8217;s life cycle. Some applications may be hosted in the cloud, others on-premises, and the cyber risk strategy should be adjusted accordingly (management of service providers, different flows, etc.).</li>
<li>Understand the architecture and exposure of the model. Some models are more exposed than others to Machine Learning-specific attacks: a publicly exposed model, for example, may be subject to a thorough reconnaissance phase by an attacker (e.g. by submitting inputs and observing the outputs).</li>
<li>Include Machine Learning-specific attacks in the analysis. There are three main types: evasion attacks (which target integrity), oracle attacks (which target confidentiality) and poisoning attacks (which target integrity and availability).</li>
</ul>
</li>
<li>Track and monitor actions. This includes at least two levels:
<ul>
<li>Traceability (log of actions) to allow monitoring of access to resources used by the ML system.</li>
<li>More business-oriented detection rules to check that the system is still performing as expected and, where possible, to detect an ongoing attack.</li>
</ul>
</li>
<li>Establish data governance. As explained in <em>Annex I: Machine Learning</em>, data is the raw material of ML systems. A set of measures should therefore be taken to protect it, such as:
<ul>
<li>Ensure integrity throughout the entire data life cycle.</li>
<li>Secure access to data.</li>
<li>Ensure the quality of the data collected.</li>
</ul>
</li>
</ul>
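<p style="text-align: justify;">As an illustration of the data-governance measures above, the integrity requirement can be implemented by fingerprinting a dataset when it is collected and verifying the fingerprint before each training run. The sketch below is purely illustrative (the function names and record format are ours, not drawn from any of the texts studied):</p>

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Compute a stable SHA-256 fingerprint of a dataset."""
    # sort_keys makes the serialisation deterministic, so the same
    # records always produce the same digest.
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify_integrity(records, expected_digest):
    """Check, before training, that the data has not been altered."""
    return dataset_fingerprint(records) == expected_digest

# At collection time: record the fingerprint alongside the data.
training_data = [{"x": 1.0, "label": 0}, {"x": 2.5, "label": 1}]
expected = dataset_fingerprint(training_data)

# Later, before training: any tampering (e.g. a poisoning attempt
# that injects records) changes the fingerprint.
tampered = training_data + [{"x": 99.0, "label": 1}]
assert verify_integrity(training_data, expected)
assert not verify_integrity(tampered, expected)
```

<p style="text-align: justify;">In practice the reference fingerprint would itself need to be stored and accessed securely, which ties this measure back to the access-control point above.</p>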
<p style="text-align: justify;">It is likely that these points will be present in the first published regulations.</p>
<p> </p>
<h1>The AI Act: will Europe take the lead as it did with the GDPR?</h1>
<p>In the context of this study, we looked more closely at what has been done in the European Union and one text caught our attention.</p>
<p>The claim that there is no legislation yet is only partly true. In 2021, the European Commission published the AI Act<a href="#_ftn8" name="_ftnref1">[8]</a>: a legislative proposal that aims to address the risks associated with certain uses of AI. Its objectives, to quote the document, are to:</p>
<ul>
<li>Ensure that AI systems placed on the EU market and used there are safe and respect existing fundamental rights legislation and EU values.</li>
<li>Ensure legal certainty to facilitate investment and innovation in AI.</li>
<li>Strengthen governance and effective enforcement of existing legislation on fundamental rights and on security requirements for AI systems.</li>
<li>Facilitate the development of a single market for legal, safe, and trustworthy AI applications and prevent market fragmentation.</li>
</ul>
<p>The AI Act is in line with the texts listed above. It adopts a risk-based approach with requirements that depend on the risk levels of AI systems. The regulation thus defines four levels of risk:</p>
<ul>
<li>AI systems with unacceptable risks.</li>
<li>AI systems with high risks.</li>
<li>AI systems with specific risks.</li>
<li>AI systems with minimal risks.</li>
</ul>
<p>Each of these levels is defined precisely in a dedicated article of the legislative proposal, which also sets out the associated requirements.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18116 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image7b.png" alt="" width="923" height="342" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image7b.png 923w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image7b-437x162.png 437w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image7b-71x26.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image7b-768x285.png 768w" sizes="auto, (max-width: 923px) 100vw, 923px" /></p>
<p style="text-align: center;"><em>Figure 7 The risk hierarchy in the IA Act<a href="#_ftn9" name="_ftnref1">[9]</a></em></p>
<p>For high-risk AI systems, the AI Act proposes cyber requirements along the lines of those presented above. For example, using the NIST-inspired categorisation presented in Figure 5, the AI Act proposes the following requirements:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18118 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b.png" alt="" width="3761" height="2420" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b.png 3761w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b-297x191.png 297w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b-61x39.png 61w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b-768x494.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b-1536x988.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image8b-2048x1318.png 2048w" sizes="auto, (max-width: 3761px) 100vw, 3761px" /></p>
<p style="text-align: justify;">Even if the text is only a proposal (it may be adopted within 1 to 5 years), we note that the European Union is taking the lead by proposing a bold regulation to accompany the development of AI, as it is with personal data and the RGPD.</p>
<p> </p>
<h1>What future for AI regulation and cybersecurity?  </h1>
<p style="text-align: justify;">In recent years, numerous texts on the regulation of AI systems have been published. Although there is no legislation to date, the pressure is mounting with numerous texts, such as the AI Act, a European Union proposal, being published. These proposals provide requirements in terms of AI development strategy, ethics and cyber security. For the latter, the requirements mainly concern topics such as cyber risk management, monitoring, governance and data protection. Moreover, it is likely that the first regulations will propose a risk-based approach with requirements adapted according to the level of risk.</p>
<p style="text-align: justify;">In view of its analysis of the situation, Wavestone can only encourage the development of an approach such as that proposed by the AI Act by adopting a risk-based methodology. This means identifying the risks posed by projects and implementing appropriate security measures. This would allow us to get started and avoid having to comply with the law after the fact.</p>
<p> </p>
<h3>Annex I: Machine Learning</h3>
<p style="text-align: justify;">Machine Learning (ML) is defined as the opportunity for systems<a href="#_ftn10" name="_ftnref1">[10]</a> to learn to solve a task using data without being explicitly programmed to do so. Heuristically, an ML system learns to give an &#8220;adequate output&#8221;, e.g. does a scanner image show a tumour, from input data (i.e. the scanner image in our example).</p>
<p style="text-align: justify;">To quote ENISA<a href="#_ftn11" name="_ftnref2"><sup>[11]</sup></a> , the specific features on which Machine Learning is based are the following:</p>
<ul style="text-align: justify;">
<li>The data. It is at the heart of Machine Learning. Data is the raw material consumed by ML systems to learn to solve a task and then to perform it once in production.</li>
<li>A model. That is, a mathematical and algorithmic construct that can be seen as a box with a large set of adjustable parameters used to produce an output from input data. In a phase called learning, the model uses data to learn how to solve a task by automatically adjusting its parameters; once in production, it performs the task using the adjusted parameters.</li>
<li>Specific processes. These cover the entire life cycle of the ML system. They concern, for example, the data (processing it to make it usable) or the parameterisation of the model itself (how the model adjusts its parameters based on the data it uses).</li>
<li>Development tools and environments. For example, many models are trained and then stored directly on cloud platforms, because model computations require significant resources.</li>
<li>Specific roles. New jobs have been created with the rise of Machine Learning, such as the now-famous Data Scientists.</li>
</ul>
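<p style="text-align: justify;">The interplay between data, model and adjustable parameters can be illustrated with a deliberately tiny example: a one-parameter model that fits its parameter to the data during a learning phase, then uses the adjusted parameter in production. This is a toy sketch, not a realistic ML pipeline:</p>

```python
# The data: pairs (input, expected output). Here the underlying
# relationship happens to be y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# The model: y = w * x, where w is the single adjustable parameter.
w = 0.0  # before learning, the parameter is uninformed

# Learning phase: adjust w by gradient descent on the mean squared error.
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # move w in the direction that reduces the error

# Production phase: the task is performed with the adjusted parameter.
def predict(x):
    return w * x

assert abs(w - 2.0) < 1e-3          # the parameter converged to 2
assert abs(predict(5.0) - 10.0) < 1e-2
```

<p style="text-align: justify;">Even this toy example exhibits the dependencies listed above: the learned parameter is only as good as the data it was adjusted on, which is precisely why poisoning attacks and data governance matter.</p>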
<p style="text-align: justify;">Generally, the life cycle of a Machine Learning project can be broken down into the following stages:</p>
<p><a href="#_ftnref10" name="_ftn1"></a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-18120 size-full" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image9b.png" alt="" width="378" height="318" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image9b.png 378w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image9b-227x191.png 227w, https://www.riskinsight-wavestone.com/wp-content/uploads/2022/06/Image9b-46x39.png 46w" sizes="auto, (max-width: 378px) 100vw, 378px" /></p>
<p style="text-align: center;"><em>Figure 8 Life cycle of a Machine Learning project<a href="#_ftn12" name="_ftnref2"><sup>[12]</sup></a></em></p>
<h3>Annex 2: Non-exhaustive list of texts relating to AI and the framework for its development</h3>
<table style="border-style: solid; width: 101.478%; border-color: #000000; background-color: #ffffff;" width="652">
<tbody>
<tr>
<td style="width: 15.8779%;" width="105">
<p>Country or international entities</p>
</td>
<td style="width: 40%;" width="270">
<p>Title of the document<a href="#_ftn13" name="_ftnref1">[13]</a></p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Published by</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>Date of publication</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" rowspan="4" width="105">
<p><strong>France </strong></p>
</td>
<td style="width: 40%;" width="270">
<p>Making sense of AI: for a national and European strategy</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Cédric Villani</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>March 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>National AI Research Strategy</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Ministry of Higher Education, Research and Innovation, Ministry of Economy and Finance, General Directorate of Enterprises, Ministry of Health, Ministry of the Armed Forces, INRIA, DINSIC</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>November 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Algorithms: preventing the automation of discrimination</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Defender of Rights (Défenseur des droits) &#8211; CNIL</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>May 2020</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>AI safety</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>CNIL</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2022</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" rowspan="7" width="105">
<p><strong>Europe</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>Artificial Intelligence for Europe</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>European Commission</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Ethical Guidelines for Trustworthy AI</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>High-Level Expert Group on Artificial Intelligence</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Building confidence in human-centred artificial intelligence</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>European Commission</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Policy and Investment Recommendations for Trustworthy AI</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>High-Level Expert Group on Artificial Intelligence</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>White Paper &#8211; AI: a European approach based on excellence and trust</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>European Commission</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>February 2020</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>AI Act</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>European Commission</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2021</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Securing Machine Learning Algorithms</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>ENISA</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>November 2021</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" width="105">
<p><strong>Belgium</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>AI 4 Belgium</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>AI 4 Belgium Coalition</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>March 2019</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" width="105">
<p><strong>Luxembourg</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>Artificial intelligence: a strategic vision for Luxembourg</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Digital Luxembourg, Government of the Grand Duchy of Luxembourg</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>May 2019</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" rowspan="9" width="105">
<p><strong>United States</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>A Vision for Safety 2.0: Automated Driving Systems</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department of Transportation</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>August 2017</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Preparing for the Future of Transportation: Automated Vehicles 3.0</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department of Transportation</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>October 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department of Defense</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>January 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance our Security and Prosperity</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department of Defense</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>February 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>National Science &amp; Technology Council</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>A Plan for Federal Engagement in Developing Technical Standards and Related Tools</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>NIST (National Institute of Standards and Technology)</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>August 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department of Transportation</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>January 2020</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Aiming for truth, fairness, and equity in your company&#8217;s use of AI</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Federal Trade Commission</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2021</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>AI Risk Management framework: Initial Draft</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>NIST</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>March 2022</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" rowspan="8" width="105">
<p><strong>United Kingdom</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>AI Sector Deal</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department for Business, Energy &amp; Industrial Strategy; Department for Digital, Culture, Media &amp; Sport</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>May 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Data Ethics Framework</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Department for Digital, Culture Media &amp; Sport</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2018</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Intelligent security tools: Assessing intelligent tools for cyber security</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>National Cyber Security Centre</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>April 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Understanding Artificial Intelligence Ethics and Safety</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>The Alan Turing Institute</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Guidelines for AI Procurement</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Office for Artificial Intelligence</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2020</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>A guide to using artificial intelligence in the public sector</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Office for Artificial Intelligence</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>January 2020</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>AI Roadmap</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>UK AI Council</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>January 2021</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>National AI Strategy</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>HM Government</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>September 2021</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" rowspan="2" width="105">
<p><strong>Hong Kong</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>High-level Principles on Artificial Intelligence</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Hong Kong Monetary Authority</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>November 2019</p>
</td>
</tr>
<tr>
<td style="width: 40%;" width="270">
<p>Reshaping banking with Artificial Intelligence</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Hong Kong Monetary Authority</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>December 2019</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" width="105">
<p><strong>OECD</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>Recommendation of the Council on Artificial Intelligence</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>OECD</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>May 2019</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" width="105">
<p><strong>United Nations</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>System-wide Approach and Road map for Supporting Capacity Development on AI</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>UN System Chief Executives Board for Coordination</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>June 2019</p>
</td>
</tr>
<tr>
<td style="width: 15.8779%;" width="105">
<p><strong>Brazil</strong></p>
</td>
<td style="width: 40%;" width="270">
<p>Brazilian Legal Framework for Artificial Intelligence</p>
</td>
<td style="width: 29.6183%;" width="200">
<p>Brazilian Congress</p>
</td>
<td style="width: 42.1374%;" width="76">
<p>September 2021</p>
</td>
</tr>
</tbody>
</table>
<p> </p>
<p> </p>
<p><a href="#_ftnref1" name="_ftn1"></a></p>
<p><a href="#_ftnref1" name="_ftn1">[1]</a> Summer school that brought together scientists such as the famous John McCarthy. However, the origins of AI can be attributed to different researchers. For example, in the literature, names like the computer scientist Alan Turing can also be found.</p>
<p><a href="#_ftnref2" name="_ftn2">[2]</a> For example, Amazon was accused in October 2021 of not complying with Article 22 of the GDPR. For more information: https:<a href="https://www.usine-digitale.fr/article/le-fonctionnement-de-l-algorithme-de-paiement-differe-d-amazon-violerait-le-rgpd.N1154412">//www.usine-digitale.fr/article/le-fonctionnement-de-l-algorithme-de-paiement-differe-d-amazon-violerait-le-rgpd.N1154412</a></p>
<p><a href="#_ftnref3" name="_ftn1">[3]</a> AI does not escape certain laws and regulations such as the RGPD for the countries concerned. We note for example this text from the CNIL: https://www.cnil.fr/fr/intelligence-artificielle/ia-comment-etre-en-conformite-avec-le-rgpd.</p>
<p><a href="#_ftnref4" name="_ftn2">[4]</a> Except for legislative proposals as we shall see later for the European Union. The case of Brazil is not treated in this article.</p>
<p><a href="#_ftnref5" name="_ftn2">[5]</a> This list is not exhaustive. The figures given give orders of magnitude on the main publishers of texts on the development of AI.</p>
<p>The texts on which the study is based are listed in Annex 2.</p>
<p><a href="#_ftnref6" name="_ftn1">[6]</a> We have chosen to merge the identification and protection phase for the purposes of this article.</p>
<p><a href="#_ftnref7" name="_ftn2">[7]</a> National Institute of Standards and Technology (NIST), Framework for improving Critical Infrastructure Cybersecurity, 16 April 2018, available at https://www.nist.gov/cyberframework/framework</p>
<p><a href="#_ftnref8" name="_ftn1">[8]</a> Available at: https:<a href="https://artificialintelligenceact.eu/the-act/">//artificialintelligenceact.eu/the-act/</a></p>
<p><a href="#_ftnref9" name="_ftn1">[9]</a> Loosely based on : Eve Gaumond, Artificial Intelligence Act: What is the European Approach for AI? in Lawfare, June 2021, available at: https:<a href="https://www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai">//www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai</a></p>
<p><a href="#_ftnref10" name="_ftn1">[10]</a> We talk about systems so as not to reduce AI.</p>
<p><a href="#_ftnref11" name="_ftn2">[11]</a><a href="https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges"> https://www.enisa.europa.eu/publications/artificial-intelligence-cybersecurity-challenges</a></p>
<p><a href="#_ftnref12" name="_ftn2">[12]</a><a href="https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms">  https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms</a></p>
<p><a href="#_ftnref13" name="_ftn2">[13]</a> Note that some titles have been translated in English.</p>
<p>This article, <a href="https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/">Artificial Intelligence soon to be regulated?</a>, first appeared on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2022/06/artificial-intelligence-soon-to-be-regulated/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
