<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>european union - RiskInsight</title>
	<atom:link href="https://www.riskinsight-wavestone.com/en/tag/european-union/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.riskinsight-wavestone.com/en/tag/european-union/</link>
	<description>The cybersecurity &#38; digital trust blog by Wavestone&#039;s consultants</description>
	<lastBuildDate>Wed, 26 Jun 2024 10:22:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.riskinsight-wavestone.com/wp-content/uploads/2024/02/Blogs-2024_RI-39x39.png</url>
	<title>european union - RiskInsight</title>
	<link>https://www.riskinsight-wavestone.com/en/tag/european-union/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Cybersecurity at the Heart of the AI Act: Key Elements for Compliance</title>
		<link>https://www.riskinsight-wavestone.com/en/2024/06/cybersecurity-at-the-heart-of-the-ai-act-key-elements-for-compliance/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2024/06/cybersecurity-at-the-heart-of-the-ai-act-key-elements-for-compliance/#respond</comments>
		
		<dc:creator><![CDATA[Perrine Viard]]></dc:creator>
		<pubDate>Wed, 26 Jun 2024 10:22:18 +0000</pubDate>
				<category><![CDATA[Cloud & Next-Gen IT Security]]></category>
		<category><![CDATA[Digital Compliance]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ai act]]></category>
		<category><![CDATA[AIS]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[artificial intelligence act]]></category>
		<category><![CDATA[european union]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=23375</guid>

					<description><![CDATA[<p>Here we are: on May 21, 2024, the European regulation on AI saw the light of day after four years of negotiations. Since February 2020, the European Union (EU) has taken an interest in Artificial Intelligence Systems (AIS) with the publication...</p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2024/06/cybersecurity-at-the-heart-of-the-ai-act-key-elements-for-compliance/">Cybersecurity at the Heart of the AI Act: Key Elements for Compliance</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">Here we are, on May 21, 2024, the European regulations on AI see the light of day after 4 years of negotiations. Since February 2020, the European Union (EU) has been interested in Artificial Intelligence Systems (AIS) with the publication of the first white paper on AI by the European Commission. Four years later, on March 13, 2024, the European Parliament approved the regulation on artificial intelligence (AI Act) by a large majority of 523 votes out of 618 and Europe became the first continent to set clear rules for use of AI.</p>
<p style="text-align: justify;">To arrive at this favorable vote, the European Parliament had to face heavy opposition from lobbyists, in particular certain AI companies, which, until now, could benefit from a very large panel of training data, without worrying about Copyright. Some governments, like French, have also tried to block it the act. In the case of the French State, they feared that regulations could slow down the development of French Tech.</p>
<p style="text-align: justify;">On December 9, 2023, the Parliament and the Council agreed on a text, after three days of “marathon talks” and months of negotiations. An almost record number of 771 amendments were integrated into the text of the law, this is more than required for the passing of GDPR, which displays the difficulties encountered in the adoption of the AI Act.</p>
<p style="text-align: justify;">The regulation on artificial intelligence (AI Act) was approved on March 13, 2024 by the European Parliament, then on May 21, 2024 by the European Council. This is the final step in the decision-making process, paving the way for the implementation of the act. As it is a regulation, it is directly applicable to all EU member countries. The next deadlines are given in Figure 6, at the end of this article.</p>
<p style="text-align: justify;"><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-23380" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN.png" alt="" width="3659" height="1954" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN.png 3659w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN-358x191.png 358w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN-71x39.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN-768x410.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN-1536x820.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-1-EN-2048x1094.png 2048w" sizes="(max-width: 3659px) 100vw, 3659px" /></p>
<p style="text-align: center;"><em>Figure 1: Timeline of adoption of the AI ​​Act</em></p>
<p style="text-align: justify;"><em> </em></p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>Who are the stakeholders and supervisory authorities?</strong></span></h2>
<p style="text-align: justify;">The AI ​​Act essentially concerns five main types of actors: suppliers, integrators, importers, distributors, and organizations using AINaturally, suppliers, distributors, and user organizations are the most targeted by regulation.</p>
<p style="text-align: justify;">Each EU state is responsible for “the application and implementation of the regulation” and must designate a national supervisory authority. In France, the CNIL could be a good candidate<a href="#_ftn1" name="_ftnref1">[1]</a> which created, in January 2023, an “Artificial Intelligence Service”.</p>
<h2 style="text-align: justify;"> </h2>
<h2><span style="color: #50067a;">A new hierarchy of risks that brings cybersecurity requirements.</span></h2>
<p style="text-align: justify;">The AI ​​Act defines an AIS as an automated system that is designed to operate at different levels of autonomy and that, based on input data, infers recommendations or decisions that can influence physical or virtual environments.</p>
<p style="text-align: justify;">AISs are classified into four levels according to the risk they represent: unacceptable risks, high risks, limited risks, and low risks.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-23383" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN.png" alt="" width="3882" height="948" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN.png 3882w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN-437x107.png 437w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN-71x17.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN-768x188.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN-1536x375.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-2-EN-2048x500.png 2048w" sizes="(max-width: 3882px) 100vw, 3882px" /></p>
<p style="text-align: center;"><em>Figure 2: Risk classification, requirements and sanctions</em></p>
<p style="text-align: justify;"> </p>
<ol style="text-align: justify;">
<li><span style="color: #53548a;"><strong>AISs at unacceptable risk</strong></span> are those generating risks that contravene EU values ​​and undermine fundamental rights. These AISs are quite simply prohibited; they cannot be marketed within the EU or exported. The various risks deemed unacceptable and therefore leading to an AIS being prohibited are cited in the figure below. Marketing this type of AIS is punishable by a fine of 7% of the company&#8217;s annual turnover or €35 million.</li>
</ol>
<p><img decoding="async" class="aligncenter wp-image-23385" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN.png" alt="" width="500" height="329" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN.png 2121w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN-290x191.png 290w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN-59x39.png 59w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN-768x505.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN-1536x1011.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-3-EN-2048x1348.png 2048w" sizes="(max-width: 500px) 100vw, 500px" /></p>
<p style="text-align: center;"><em>Figure 3: Use cases of unacceptable risks</em>                 </p>
<ol style="text-align: justify;" start="2">
<li><span style="color: #53548a;"><strong>High risk AISs</strong></span> present a risk of negative impact on security or fundamental rights. These include, for example, biometric identification or workforce management systems. They are the target of almost all of the requirements mentioned in the text of the AI Act. For these AISs, a declaration of conformity and their registration in the EU database are required. In addition, they are subject to cybersecurity requirements which are presented in Figure 4. Failure to comply with the given criteria is sanctioned at a maximum of 3% of the company&#8217;s annual turnover or €15 million in fine.</li>
<li><span style="color: #53548a;"><strong>Limited risk AISs</strong></span> are AI systems interacting with natural persons and being neither at unacceptable risk nor at high risk. For example, we find deepfakes with artistic or educational purposes. In this case, users must be informed that the content was generated by AI. A lack of transparency can be penalized at €7.5M or 1% of turnover.</li>
<li><span style="color: #53548a;"><strong>Low risk AISs</strong></span> are those that do not fall into the categories cited above. These include, for example, video game AI or spam filters. No sanctions are provided for these systems, they are subject to the voluntary application of codes of conduct and represent the majority of AIS currently used in the EU.</li>
</ol>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>Cybersecurity requirements addressed to high-risk AISs.</strong></span></h2>
<p style="text-align: justify;">Although the AI ​​Act Regulation is not solely focused on cybersecurity, it sets a number of requirements in this area:</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-23387" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN.png" alt="" width="1934" height="1895" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN.png 1934w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN-195x191.png 195w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN-40x39.png 40w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN-768x753.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-4-EN-1536x1505.png 1536w" sizes="auto, (max-width: 1934px) 100vw, 1934px" /></p>
<p style="text-align: center;"><em>Figure 4: The AI ​​Act’s cybersecurity requirements</em></p>
<p style="text-align: justify;">We have identified <span style="color: #53548a;"><strong>seven main categories</strong></span>:</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Risk Management</span>:</strong> The text imposes, for high-risk AISs, a risk management system which takes place throughout the life cycle of the AIS. It must provide, among other things, for the identification and analysis of current and future risks and the control of residual risks.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Security by Design</span>:</strong> The AI ​​Act requires high-risk AISs to take into account the level of risk. Risks must be reduced “as much as possible through appropriate design and development”. The regulation also mentions the control of feedback loops in the case of an AIS which continues its learning after being placed on the market.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Documentation</span>:</strong> Each AIS must be accompanied by technical documentation which proves that the requirements indicated in Annex 4 of the law are respected. In addition to this technical documentation addressed to national authorities, the AI ​​Act requires the drafting of instructions for use that can be understood by users. It contains, for example, the measures put in place for system maintenance and log collection.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Data Governance</span>:</strong> The AI ​​Act regulates the choice of training data<a href="#_ftn2" name="_ftnref2">[2]</a> on the one hand and the security of user data on the other. Training data must be reviewed so that it does not contain any bias<a href="#_ftn3" name="_ftnref3">[3]</a> or inadequacy that could lead to discrimination or affect the health and safety of individuals. This data must be representative of the environment in which the AIS will be used. For the protection of personal data, the resolution of problems linked to bias (presented earlier), to the extent that it cannot be handled otherwise, serves as the only exemption for access to sensitive data (origins, beliefs policies, biometric or health data, etc.). This access is subject to several confidentiality obligations and the deletion of this data once the bias is corrected.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Record Keeping</span>:</strong> Automatic logging is part of the cyber requirements of the AI ​​Act. The latter must, throughout their life cycle, identify the relevant elements for the identification of risk situations and to enable the facilitation of post-market surveillance.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Resilience</span>:</strong> The AI ​​Act requires high-risk AIS to be resistant to attempts by outsiders to alter their use or performance. The text emphasizes in particular the risk of “poisoning” of data<a href="#_ftn4" name="_ftnref4">[4]</a>. Additionally, redundant technical solutions, such as backup plans or post-failure safety measures, must be integrated into the program to ensure the robustness of high-risk AI systems.</p>
<p style="text-align: justify;"><strong><span style="color: #53548a;">Human Monitoring</span>: </strong>The AI ​​Act introduces an obligation for human monitoring of AIS. This begins with a design adapted to human surveillance and control. Then, it is required that the design of the model ensures that no action or decision is taken by the deployment manager without the approval of two competent individuals, with a few exceptions.</p>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>The new case for general-purpose AI: specific requirements.</strong></span></h2>
<p style="text-align: justify;">Since the April 2021 bill, negotiations have led to the appearance of a new term in the regulation: that of Gen AI or “general purpose AI model”. The latter is defined in the text as an AI model that exhibits significant generality and is capable of competently performing a wide range of distinct tasks. These models form a very distinct category of AIS and must meet specific requirements. The new chapter V of the regulation is dedicated to them. There are mainly bonds of transparency towards the EU, suppliers and users as well as respect for copyright. Finally, suppliers must designate an agent responsible for compliance with these requirements. But the new version of the AI ​​Act also introduced a new concept: that of Gen AI with “systemic risk”, which are the most regulated.</p>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>What is systemic risk Gen AI?</strong></span></h2>
<p style="text-align: justify;">The AI ​​Act defines “systemic risk” as “a high-impact risk of general-purpose AI models, having a significant impact on the European Union market due to their scope or negative effects on the public health, safety, public security, fundamental rights or society as a whole, which can be spread on a large scale.” Concretely, a Gen AI is considered to present a systemic risk if it has a high impact capacity according to the following criteria:</p>
<ol style="text-align: justify;">
<li>An amount of computation used for its training greater than 10^25 FLOPs<a href="#_ftn5" name="_ftnref5">[5]</a> (a rough estimate of this calculation is sketched after this list);</li>
<li>A decision by the Commission based on various criteria defined in Annex XIII such as the complexity of the model parameters or its reach among businesses and consumers.</li>
</ol>
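<p style="text-align: justify;">The threshold above refers to the cumulative amount of computation used for training. As a back-of-the-envelope illustration (our own sketch, using the common heuristic of roughly six floating-point operations per parameter and per training token, which is not a method defined in the AI Act), one could estimate where a hypothetical model stands relative to the 10^25 FLOPs threshold:</p>
<pre>
# Back-of-the-envelope sketch (assumption: ~6 FLOPs per parameter per training
# token) comparing a hypothetical model with the 10^25 FLOPs threshold.
SYSTEMIC_RISK_THRESHOLD = 10**25  # total training FLOPs

def estimated_training_flops(parameters, training_tokens):
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed to present systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD)
</pre>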
<p style="text-align: justify;"> </p>
<h2><span style="color: #50067a;"><strong>What measures should be implemented?</strong></span></h2>
<p style="text-align: justify;">If the AIS falls into these categories, it will have to comply with numerous requirements, particularly in terms of cybersecurity. For example, Section 55(1a) requires providers of these AISs to implement adversarial testing of models with a view to identifying and mitigating systemic risk. In addition, systemic risk Gen AIs must present, in the same way as high-risk AISs, an appropriate level of cybersecurity protection and protection of the physical infrastructure of the model. Finally, like the GDPR with personal data breaches, the AI ​​Act requires, in the event of a serious incident, to contact the AI ​​Office<a href="#_ftn6" name="_ftnref6">[6]</a> as well as the competent national authority. Corrective measures to resolve the incident must also be communicated.</p>
<p style="text-align: justify;">The following diagram summarizes the different requirements based on the general-purpose AI model:</p>
<p style="text-align: justify;"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-23389" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN.png" alt="" width="3314" height="2180" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN.png 3314w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN-290x191.png 290w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN-59x39.png 59w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN-768x505.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN-1536x1010.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-5-EN-2048x1347.png 2048w" sizes="auto, (max-width: 3314px) 100vw, 3314px" /></p>
<p style="text-align: center;"><em>Figure 5: The requirements of the different GenIA models</em></p>
<p style="text-align: justify;"><strong> </strong></p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>Is it possible to ease certain requirements?</strong></span></h2>
<p style="text-align: justify;">In the case of a general-purpose AI model that does not present systemic risk, it is possible to significantly reduce the obligations of the regulation by making it free to consult, modify and distribute (Open Source<a href="#_ftn7" name="_ftnref7">[7]</a>). In this case, the provider is obliged to respect the copyrights and to make available to the public a sufficiently detailed summary of the content used to train the AI ​​model.</p>
<p style="text-align: justify;">On the other hand, a Gen AI with systemic risk will necessarily have to respect the requirements set out above. However, it is possible to request a reassessment of your AI model by proving that it no longer presents a systemic risk in order to get rid of the additional requirements. This re-evaluation is possible twice a year and is validated by the European Commission on objective criteria (Annex XIII).</p>
<p style="text-align: justify;"> </p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>How to prepare for AI Act compliance?</strong></span></h2>
<p style="text-align: justify;">To prepare well, you should respect <span style="color: #53548a;"><strong>the risk-based approach which is imposed by the text</strong>.</span> The first step is to do the <span style="color: #53548a;"><strong>inventory of its use cases</strong></span>, in other words, identify all AISs that the organization develops or employs. Secondly, it is about <strong><span style="color: #53548a;">classifying your AISs by risk level</span> </strong>(for example through a heat map). The applicable measures will then be identified according to the risk level of the AIS. The AI ​​Act also requires the implementation of a <span style="color: #53548a;"><strong>security integration process in AI projects </strong></span>which allows, as with any project, to assess the risks of the project in relation to the organization and to develop a relevant plan to remediate these risks.</p>
<p style="text-align: justify;">To initiate compliance with applicable measures, it is appropriate to start by updating existing documentation and tools, in particular:</p>
<ul style="text-align: justify;">
<li><span style="color: #53548a;"><strong>Security Policies </strong></span>to define requirements specific to AI security;</li>
<li><span style="color: #53548a;"><strong>Evaluation questionnaire </strong></span>the sensitivity of projects targeting questions relevant to AI projects;</li>
<li>Library of risk scenarios with attacks specific to AI;</li>
<li>Library of security measures to be inserted into AI projects.</li>
</ul>
<p style="text-align: justify;"><strong> </strong></p>
<h2 style="text-align: justify;"><span style="color: #50067a;"><strong>What are the next steps?</strong></span></h2>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-23391" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN.png" alt="" width="2000" height="800" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN.png 2000w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN-437x175.png 437w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN-71x28.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN-768x307.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/06/AI-Act-Figure-6-EN-1536x614.png 1536w" sizes="auto, (max-width: 2000px) 100vw, 2000px" /></p>
<p style="text-align: center;"><em>Figure 6: Implementation timeline of the AI ​​Act</em></p>
<p style="text-align: justify;"><strong> </strong></p>
<p style="text-align: justify;"><em> &#8212;</em></p>
<p style="text-align: justify;"><a href="#_ftnref1" name="_ftn1">[1]</a> The CNIL and its European equivalents could use their experience to contribute to more harmonized governance (between Member States and between the texts themselves).</p>
<p style="text-align: justify;"><a href="#_ftnref2" name="_ftn2">[2]</a> Training data: Large set of example data used to teach AI to make predictions or decisions.</p>
<p style="text-align: justify;"><a href="#_ftnref3" name="_ftn3">[3]</a> Bias: Algorithmic bias means that the result of an algorithm is not neutral, fair or equitable, whether unconsciously or deliberately.</p>
<p style="text-align: justify;"><a href="#_ftnref4" name="_ftn4">[4]</a> Data poisoning: Poisoning attacks aim to modify the AI system&#8217;s behavior by introducing corrupted data during the training (or learning) phase.</p>
<p style="text-align: justify;"><a href="#_ftnref5" name="_ftn5">[5]</a> FLOPS: Unit of measurement of the power of a computer corresponding to the number of floating point operations it performs per second, for example, GPT-4 was trained with a computing power of the order of 10^ 28 FLOPs compared to 10^22 for GPT-1.</p>
<p style="text-align: justify;"><a href="#_ftnref6" name="_ftn6">[6]</a> AI Office: European organization responsible for implementing the regulation. As such, he is entrusted with numerous tasks such as the development of tools or methodologies or even cooperation with the various actors involved in this regulation.</p>
<p style="text-align: justify;"><a href="#_ftnref7" name="_ftn7">[7]</a> Open Source: AI models that allow their free consultation, modification and distribution are considered under a free and open license (Open Source). Their parameters and information on the use of the model must be made public.</p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2024/06/cybersecurity-at-the-heart-of-the-ai-act-key-elements-for-compliance/">Cybersecurity at the Heart of the AI Act: Key Elements for Compliance</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2024/06/cybersecurity-at-the-heart-of-the-ai-act-key-elements-for-compliance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The AI Act: The Keys to Understanding the World&#8217;s First Legislation on Artificial Intelligence.</title>
		<link>https://www.riskinsight-wavestone.com/en/2024/04/the-ai-act-the-keys-to-understanding-the-worlds-first-legislation-on-artificial-intelligence/</link>
					<comments>https://www.riskinsight-wavestone.com/en/2024/04/the-ai-act-the-keys-to-understanding-the-worlds-first-legislation-on-artificial-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[Chirine Gurgoz]]></dc:creator>
		<pubDate>Mon, 08 Apr 2024 15:12:25 +0000</pubDate>
				<category><![CDATA[Cloud & Next-Gen IT Security]]></category>
		<category><![CDATA[Focus]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ai act]]></category>
		<category><![CDATA[artificial intelligence act]]></category>
		<category><![CDATA[european union]]></category>
		<category><![CDATA[gpai]]></category>
		<category><![CDATA[sia]]></category>
		<guid isPermaLink="false">https://www.riskinsight-wavestone.com/?p=22938</guid>

					<description><![CDATA[<p>On March 13, 2024, the European Parliament adopted the final version of the European Artificial Intelligence Act, also known as the “AI Act”[1]. Nearly three years after the publication of the first version of the text, the twenty-seven countries of...</p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2024/04/the-ai-act-the-keys-to-understanding-the-worlds-first-legislation-on-artificial-intelligence/">The AI Act: The Keys to Understanding the World&#8217;s First Legislation on Artificial Intelligence.</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p style="text-align: justify;">On March 13, 2024, the European Parliament adopted the final version of the European Artificial Intelligence Act, also known as the “AI Act”<a href="#_ftn1" name="_ftnref1">[1]</a>. Nearly three years after the publication of the first version of the text, the twenty-seven countries of the European Union reached an historic agreement on the world&#8217;s first harmonized rules on artificial intelligence. The final version of the text is expected on April 22, 2024, prior to publication in the Official Journal of the European Union.</p>
<p style="text-align: justify;">The AI Act aims to ensure that artificial intelligence systems and models marketed within the European Union are used ethically, safely, and <span style="color: #53548a;"><strong>in compliance with EU fundamental rights</strong></span>. The Act has also been drafted to strengthen the competitiveness and innovation of AI companies. The AI Act will reduce the risk of abuses, reinforcing user confidence in its use and adoption.</p>
<p style="text-align: justify;">France Digitale, Europe&#8217;s largest startup association, Gide, an international French business law firm, and Wavestone, have joined forces to co-author a white paper to help you understand and apply the European AI Act: <a href="https://www.wavestone.com/en/insight/ai-act-keys-to-understanding-and-implementing-the-european-law-on-artificial-intelligence/">AI Act: Keys to Understanding and Implementing the European Law on Artificial Intelligence</a>.</p>
<p style="text-align: justify;">In this publication, France Digitale, Gide, and Wavestone share their vision of the AI Act, from the types of systems affected to the major stages of compliance.</p>
<p style="text-align: justify;"> </p>
<h3 style="text-align: justify;"><span style="color: #50067a;"><strong>A few definitions to get you started</strong></span></h3>
<p style="text-align: justify;">The AI Act makes a distinction between artificial intelligence systems and models, which it defines as follows:</p>
<ul style="text-align: justify;">
<li>An <span style="color: #53548a;"><strong>Artificial Intelligence System</strong></span> (AIS) is an automated system designed to operate at different levels of autonomy and which can generate predictions, recommendations, or decisions that influence physical or virtual environments.</li>
<li>A <span style="color: #53548a;"><strong>General-Purpose AI system</strong></span> (GPAI) is a versatile AI system capable of performing a wide range of distinct tasks. It can be integrated into a variety of systems or applications, demonstrating great flexibility and adaptability.</li>
</ul>
<p style="text-align: justify;"> </p>
<h3 style="text-align: justify;"><span style="color: #50067a;"><strong>Players concerned</strong></span></h3>
<p style="text-align: justify;">The AI Act concerns all <span style="color: #53548a;"><strong>suppliers, distributors, or deployers</strong></span> of AI systems and models, including <span style="color: #53548a;"><strong>legal entities</strong></span> (companies, foundations, associations, research laboratories, etc.), headquartered in the European Union or outside the European Union, who market their AI system or model within the European Union.</p>
<p style="text-align: justify;">The level of regulation and associated obligations depend on the<span style="color: #53548a;"><strong> level of risk presented by the AI system or model.</strong></span></p>
<p style="text-align: justify;"> </p>
<h3 style="text-align: justify;"><span style="color: #50067a;"><strong>Classification of AIS According to Risk Level</strong></span></h3>
<p style="text-align: justify;">The AI Act introduces a classification of artificial intelligence systems. AIS must be analysed and prioritized according to the risk they present to users:<span style="color: #53548a;"> <strong>minimal, low, high, </strong></span>and<span style="color: #53548a;"><strong> unacceptable</strong></span>. The different levels of risk imply more or less obligations.</p>
<p style="text-align: justify;"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-22933" src="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3.png" alt="" width="4201" height="2227" srcset="https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3.png 4201w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3-360x191.png 360w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3-71x39.png 71w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3-768x407.png 768w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3-1536x814.png 1536w, https://www.riskinsight-wavestone.com/wp-content/uploads/2024/04/IA-Act-EN-v3-2048x1086.png 2048w" sizes="auto, (max-width: 4201px) 100vw, 4201px" /></p>
<p style="text-align: justify;">Unacceptable-risk AIS are prohibited by the AI Act, while minimal-risk AIS are not subject to the Act. <span style="color: #53548a;"><strong>High-risk and low-risk AIS are therefore the focus of most of the measures set out in the regulations.</strong></span></p>
<p style="text-align: justify;">Specific obligations apply to generative AI and to the development of general-purpose AI models (e.g., Large Language Models or “LLMs”), depending on various factors: computing power, number of users, use of an open-source model, etc.</p>
<p style="text-align: justify;">In order to meet the new challenges posed by the emergence of generative artificial intelligence, the AI Act includes specific cybersecurity measures aimed at reducing the risks generated by the development of generative artificial intelligence.</p>
<p style="text-align: justify;"> </p>
<p style="text-align: justify;">In a future publication, we&#8217;ll be taking a closer look at the cybersecurity aspects of the AI Act. In the meantime, you can find our latest publications on AI and cybersecurity: “<a href="https://www.riskinsight-wavestone.com/en/2024/03/securing-ai-the-new-cybersecurity-challenges/">Securing AI: The New Cybersecurity Challenges</a>”, “<a href="https://www.riskinsight-wavestone.com/en/2023/10/the-industrialization-of-ai-by-cybercriminals-should-we-really-be-worried/">The industrialization of AI by cybercriminals: should we really be worried?</a>”, “<a href="https://www.riskinsight-wavestone.com/en/2023/10/language-as-a-sword-the-risk-of-prompt-injection-on-ai-generative/">Language as a sword: the risk of prompt injection on AI Generative</a>”.</p>
<p style="text-align: justify;"><a href="#_ftnref1" name="_ftn1">[1]</a> <a href="https://www.lemonde.fr/en/economy/article/2024/02/03/france-agrees-to-ratify-the-eu-artificial-intelligence-act-after-seven-months-of-opposition_6489701_19.html">France agrees to ratify the EU Artificial Intelligence Act after seven months of resistance (lemonde.fr).</a></p>
<p>This article <a href="https://www.riskinsight-wavestone.com/en/2024/04/the-ai-act-the-keys-to-understanding-the-worlds-first-legislation-on-artificial-intelligence/">The AI Act: The Keys to Understanding the World&#8217;s First Legislation on Artificial Intelligence.</a> appeared first on <a href="https://www.riskinsight-wavestone.com/en/">RiskInsight</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.riskinsight-wavestone.com/en/2024/04/the-ai-act-the-keys-to-understanding-the-worlds-first-legislation-on-artificial-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
