{"id":21035,"date":"2023-08-22T16:00:00","date_gmt":"2023-08-22T15:00:00","guid":{"rendered":"https:\/\/www.riskinsight-wavestone.com\/?p=21035"},"modified":"2023-08-18T16:55:29","modified_gmt":"2023-08-18T15:55:29","slug":"chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers","status":"publish","type":"post","link":"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/","title":{"rendered":"ChatGPT &amp; DevSecOps \u2013 What are the new cybersecurity risks introduced by the use of AI by developers?\u00a0"},"content":{"rendered":"\n<p><span data-contrast=\"auto\">In November 2022, the conversational agent ChatGPT developed by OpenAI was made accessible to the general public. Since then, it&#8217;s an understatement to say that this new tool has garnered interest. Just two months after its launch, the tool became the fastest-growing application in history, with nearly 100 million active users per month (a record later surpassed by Threads).<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">As users have adopted this product en masse, it now raises several fundamental cybersecurity questions.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Should companies allow their employees \u2013 specifically development teams \u2013 to continue using this tool without any restrictions? Should they suspend its usage until security teams address the issue? Or should it be outright banned?<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Some companies like J.P. 
Morgan or Verizon have chosen to prohibit its usage. Apple initially decided to <a href=\"https:\/\/www.businessinsider.com\/chatgpt-companies-issued-bans-restrictions-openai-ai-amazon-apple-2023-7\">allow the tool for its employees before reversing its decision and prohibiting it<\/a><\/span><span data-contrast=\"auto\">. Amazon and Microsoft have simply asked their employees to be cautious about the information shared with OpenAI.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">The most restrictive approach, blocking the platform outright, sidesteps all cybersecurity questions but raises other concerns, including team performance, productivity, and the overall competitiveness of companies in rapidly changing markets.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Today, the question of blocking AI in IT remains relevant. We offer some answers to this question for a <\/span><b><span data-contrast=\"auto\">group particularly affected by the issue: development teams.<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p>\u00a0<\/p>\n<h2 aria-level=\"3\"><b><span data-contrast=\"none\">ChatGPT, Personal Information Collection, and GDPR<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">OpenAI&#8217;s product is freely accessible and usable provided one creates a user account. It&#8217;s a known trend: if an online tool is free, its source of revenue doesn&#8217;t come from access to the tool. 
For the specific case of ChatGPT, the information from the history of millions of users helps improve the platform and the quality of the language model. ChatGPT is a preview service: any data entered by the user may be reviewed by a human to improve the services.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Currently, ChatGPT doesn&#8217;t seem compliant with GDPR and data protection laws, but no legal decision has been made. The terms and conditions currently don&#8217;t mention the right to limitation of processing, the right to data portability, or the right to object. The US-based company OpenAI doesn&#8217;t mention GDPR but emphasizes that ChatGPT complies with &#8220;CALIFORNIA PRIVACY RIGHTS.&#8221; However, this regulation only applies to California residents and doesn&#8217;t extend beyond the United States of America. OpenAI also doesn&#8217;t provide a solution for individuals to verify if the editor stores their personal data or to request its deletion.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">When we delve into ChatGPT&#8217;s <\/span><a href=\"https:\/\/openai.com\/policies\/privacy-policy\"><span data-contrast=\"none\">privacy policy<\/span><\/a><span data-contrast=\"auto\"> \u00a0we can understand that:<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<ol>\n<li data-leveltext=\"%1.\" data-font=\"Calibri\" data-listid=\"17\" data-list-defn-props=\"{&quot;335552541&quot;:0,&quot;335559684&quot;:-1,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769242&quot;:[65533,0],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;%1.&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" aria-setsize=\"-1\" 
data-aria-posinset=\"1\" data-aria-level=\"1\"><span data-contrast=\"auto\">OpenAI collects user IP addresses, their web browser type, and data and interactions with the website. For example, this includes the type of content generated with AI, use cases, and functions used.<\/span><\/li>\n<li data-leveltext=\"%1.\" data-font=\"Calibri\" data-listid=\"17\" data-list-defn-props=\"{&quot;335552541&quot;:0,&quot;335559684&quot;:-1,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769242&quot;:[65533,0],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;%1.&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" aria-setsize=\"-1\" data-aria-posinset=\"1\" data-aria-level=\"1\"><span data-contrast=\"auto\">OpenAI also collects information about users&#8217; browsing activity on the web. It reserves the right to share this personal information with third parties, without specifying which ones.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/li>\n<\/ol>\n<p><span data-contrast=\"auto\">All of this is done with the goal of improving existing services or developing new features.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Turning back to developer populations, today we observe that the majority of code is written collaboratively using Git tools. Thus, it&#8217;s not uncommon for a developer to have to understand a piece of code they didn&#8217;t write themselves. Instead of asking the original author, which can take several minutes (at best), a developer might turn to ChatGPT to get an instant answer. 
The response might even be more detailed than what the code&#8217;s author could provide.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<table style=\"width: 100%; border-collapse: collapse; background-color: #b8bab8;\">\n<tbody>\n<tr>\n<td style=\"width: 100%;\">\n<p><span style=\"color: #ffffff;\">As a result, it&#8217;s more than necessary to anonymize the elements shared with the Chatbot. Otherwise, some individuals might gain unauthorized access to confidential data. Thus, if a developer wants to understand the functionalities of a piece of code they&#8217;re not familiar with using ChatGPT&#8217;s help, they should:\u00a0<\/span><\/p>\n<ul style=\"list-style-type: circle;\">\n<li data-leveltext=\"\u2022\" data-font=\"Calibri\" data-listid=\"19\" data-list-defn-props=\"{&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559684&quot;:-2,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Calibri&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;\u2022&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" aria-setsize=\"-1\" data-aria-posinset=\"0\" data-aria-level=\"1\"><span style=\"color: #ffffff;\">Break down the code to avoid revealing complete functionalities,\u00a0<\/span><\/li>\n<li data-leveltext=\"\u2022\" data-font=\"Calibri\" data-listid=\"19\" data-list-defn-props=\"{&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559684&quot;:-2,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Calibri&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;\u2022&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" aria-setsize=\"-1\" data-aria-posinset=\"0\" data-aria-level=\"1\"><span style=\"color: #ffffff;\">Remove all secrets and potential passwords present in the code (a good practice to follow 
even without using ChatGPT),\u00a0<\/span><\/li>\n<li data-leveltext=\"\u2022\" data-font=\"Calibri\" data-listid=\"19\" data-list-defn-props=\"{&quot;335551671&quot;:0,&quot;335552541&quot;:1,&quot;335559684&quot;:-2,&quot;335559685&quot;:720,&quot;335559991&quot;:360,&quot;469769226&quot;:&quot;Calibri&quot;,&quot;469769242&quot;:[8226],&quot;469777803&quot;:&quot;left&quot;,&quot;469777804&quot;:&quot;\u2022&quot;,&quot;469777815&quot;:&quot;hybridMultilevel&quot;}\" aria-setsize=\"-1\" data-aria-posinset=\"0\" data-aria-level=\"1\"><span style=\"color: #ffffff;\" data-contrast=\"auto\">Rename variables whose names are too revealing.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\u00a0<\/p>\n<h2 aria-level=\"3\"><b><span data-contrast=\"none\">Classic Attacks on AI Still Apply<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">Today, over half of companies are ready to invest in and equip themselves with tools based on artificial intelligence. Consequently, this kind of technology will become an increasingly attractive target for attackers, especially since cybersecurity is often overlooked when discussing artificial intelligence.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">OpenAI&#8217;s AI isn&#8217;t immune to <\/span><b><span data-contrast=\"auto\">poisoning attacks<\/span><\/b><span data-contrast=\"auto\">. Even if the AI is trained on a substantial knowledge base, it&#8217;s unlikely that all of that knowledge has undergone manual review. 
If we return to the topic of <\/span><b><span data-contrast=\"auto\">code generation, it&#8217;s plausible that, for certain specific inputs, the AI might suggest code containing a backdoor.<\/span><\/b><span data-contrast=\"auto\"> While this scenario hasn&#8217;t been observed, it&#8217;s not possible to prove that it won&#8217;t occur for a specific user input.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Even if we assume that the tool has been trained only on relatively safe web sources, GPT-3, the Large Language Model (LLM) on which ChatGPT is based, could be susceptible to &#8220;self-poisoning.&#8221; Since GPT-3 is used by millions of users, text it generates is highly likely to end up in otherwise trusted internet content. The training data of GPT-4 could therefore contain text generated by GPT-3, and the AI might learn from knowledge produced by previous versions of the same LLM. It will be interesting to see how OpenAI addresses the poisoning issue as the model evolves.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Poisoning is one technique for adding backdoors to AI-generated code, but it isn&#8217;t the only attack vector. Compromising OpenAI&#8217;s systems could also allow an attacker to modify ChatGPT&#8217;s configuration so that it suggests code containing backdoors under specific conditions. A malicious attacker might even filter on the identity of the ChatGPT user account (e.g., an account ending with @internationalfirm.com) to decide whether to generate code containing backdoors and other vulnerabilities. 
Thus, it&#8217;s necessary to remain vigilant about OpenAI&#8217;s own security posture to prevent any compromise by rebound, where an attacker pivots through OpenAI&#8217;s systems to reach its users.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p>\u00a0<\/p>\n<h2 aria-level=\"3\"><b><span data-contrast=\"none\">ChatGPT and Code Generation<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/h2>\n<p><span data-contrast=\"auto\">Code generation via ChatGPT is one of the features that can save developers the most time on a daily basis. For instance, a developer could ask the tool to write a code skeleton for a function and then complete or correct the AI&#8217;s errors as needed. The main risk introduced by this practice is the insertion of malicious code into an application.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">However, the risk existed well before ChatGPT. A malicious developer could very well obfuscate their code and deliberately insert a backdoor into an application. That said, the introduction of AI brings a new dimension to the risk, since a well-intentioned user might <\/span><b><span data-contrast=\"auto\">inadvertently<\/span><\/b><span data-contrast=\"auto\"> introduce a backdoor. This needs to be considered in the context of the <\/span><b><span data-contrast=\"auto\">organization&#8217;s maturity regarding its CI\/CD pipeline. Conducting SAST and DAST scans and various audits before production helps reduce the risk.<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">We have observed that code generation via ChatGPT does not follow security best practices by default. 
The tool can generate code using <\/span><b><span data-contrast=\"auto\">insecure functions such as scanf in the C programming language<\/span><\/b><span data-contrast=\"auto\">. We provided the following query to the tool: &#8220;Can you write a function in C language that creates a list of integers using user inputs?&#8221; (initially prompted in French).<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-21041 size-full\" src=\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT1.png\" alt=\"Code excerpt - Code generated by ChatGPT following the user input described above\" width=\"732\" height=\"624\" srcset=\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT1.png 732w, https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT1-224x191.png 224w, https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT1-46x39.png 46w\" sizes=\"auto, (max-width: 732px) 100vw, 732px\" \/><\/p>\n<p style=\"text-align: center;\"><i><span data-contrast=\"none\">Code generated by ChatGPT following the described user input<\/span><\/i><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335551550&quot;:2,&quot;335551620&quot;:2,&quot;335559739&quot;:0,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Analyzing the code generated by ChatGPT, we notice, among other things, three significant vulnerabilities:<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<ol>\n<li><span data-contrast=\"auto\">To begin, the use of the scanf function lets the user enter input of arbitrary length (integer overflow&#8230;). 
There&#8217;s no validation of the user&#8217;s input, which remains a key vulnerability type highlighted by the OWASP Top 10.<\/span><\/li>\n<li>Additionally, the function is vulnerable to buffer overflow: beyond the 100th input, the list &#8220;list&#8221; no longer has space to store additional data, which can either end execution with an error or allow a malicious user to write data to an unauthorized memory area,<b style=\"font-size: revert; color: initial;\"><span data-contrast=\"auto\"> potentially taking control of program execution.<\/span><\/b><\/li>\n<li>Finally, ChatGPT allocates memory for the list via the malloc function but forgets to free it once the list is no longer used, which could lead to <b style=\"font-size: revert; color: initial;\"><span data-contrast=\"auto\">memory leaks.<\/span><\/b><span style=\"font-size: revert; color: initial;\" data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/li>\n<\/ol>\n<p><span data-contrast=\"auto\">So, by default, ChatGPT does not generate code securely, unlike an experienced developer. <\/span><b><span data-contrast=\"auto\">The tool proposes code containing critical vulnerabilities<\/span><\/b><span data-contrast=\"auto\">. If the user is cybersecurity-aware, they can ask ChatGPT to identify vulnerabilities in their own code. 
ChatGPT is fully capable of detecting some vulnerabilities in the code generated by itself.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-21046 size-full\" src=\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT3.png\" alt=\"\" width=\"815\" height=\"339\" srcset=\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT3.png 815w, https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT3-437x182.png 437w, https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT3-71x30.png 71w, https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/Article-ChatGPT3-768x319.png 768w\" sizes=\"auto, (max-width: 815px) 100vw, 815px\" \/><\/p>\n<p style=\"text-align: center;\"><em>ChatGPT is able to detect vulnerabilities in code it has generated.<\/em><\/p>\n<p><span data-contrast=\"auto\">To summarize, code generation via ChatGPT doesn&#8217;t introduce new risks but <\/span><b><span data-contrast=\"auto\">increases the probability of a vulnerability appearing in production<\/span><\/b><span data-contrast=\"auto\">. Recommendations can vary based on the organization&#8217;s maturity and confidence in securing code delivered to production. A robust CI\/CD pipeline and strong processes with automatic security scans (SAST, DAST, FOSS&#8230;) have a good chance of detecting the most critical vulnerabilities.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p aria-level=\"3\">\u00a0<\/p>\n<p><span data-contrast=\"auto\">ChatGPT isn&#8217;t the only online resource accessible to users that can lead to data exfiltration (Google Drive, WeTransfer&#8230;). 
The risk of data leakage already looms over any organization that hasn&#8217;t implemented an allow-list on its users&#8217; internet proxy. The differentiating factor in the case of ChatGPT is that the user doesn&#8217;t necessarily realize the public nature of the data posted on the platform. The benefits and time saved by the tool are often too tempting for the user, making them forget best practices. In this sense, ChatGPT doesn&#8217;t introduce new risks but increases the likelihood of data leakage.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">An organization therefore has two options to prevent data leakage via ChatGPT: (1) train and educate its users and trust them, or (2) block the tool.<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">For developer populations, once again, code generation via ChatGPT doesn&#8217;t introduce new risks but increases the probability of a vulnerability appearing in production. It&#8217;s up to the organization to assess the capabilities of its CI\/CD pipeline and production processes to evaluate residual risks, particularly concerning false negatives from integrated security tools (SAST, DAST&#8230;).<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">To make an informed decision, a <\/span><b><span data-contrast=\"auto\">risk analysis remains a valuable tool for deciding whether to potentially block access to ChatGPT<\/span><\/b><span data-contrast=\"auto\">. 
The following aspects should be considered: user awareness level, sensitivity of manipulated data, internet filtering paradigm, maturity of the CI\/CD pipeline&#8230; These analyses should, of course, be balanced against potential productivity gains for teams.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}\">\u00a0<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In November 2022, the conversational agent ChatGPT developed by OpenAI was made accessible to the general public. Since then, it&#8217;s an understatement to say that this new tool has garnered interest. Just two months after its launch, the tool became&#8230;<\/p>\n","protected":false},"author":1372,"featured_media":21032,"comment_status":"open","ping_status":"closed","sticky":true,"template":"page-templates\/tmpl-one.php","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[3266,3977],"tags":[3279,4282,3997],"coauthors":[3524,4224,4280],"class_list":["post-21035","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cloud-next-gen-it-security-en","category-focus","tag-artificial-intelligence-en","tag-chatgpt-2","tag-devsecops-2"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.0 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>ChatGPT &amp; DevSecOps \u2013 What are the new cybersecurity risks introduced by the use of AI by developers?\u00a0 - RiskInsight<\/title>\n<meta name=\"description\" content=\"In November 2022, the conversational agent ChatGPT developed by OpenAI was made accessible to the general public. Since then, it&#039;s an understatement to say that this new tool has garnered interest. 
Just two months after its launch, the tool became the fastest-growing application in history, with nearly 100 million active users per month (a record later surpassed by Threads).\u00a0\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ChatGPT &amp; DevSecOps \u2013 What are the new cybersecurity risks introduced by the use of AI by developers?\u00a0 - RiskInsight\" \/>\n<meta property=\"og:description\" content=\"In November 2022, the conversational agent ChatGPT developed by OpenAI was made accessible to the general public. Since then, it&#039;s an understatement to say that this new tool has garnered interest. 
Just two months after its launch, the tool became the fastest-growing application in history, with nearly 100 million active users per month (a record later surpassed by Threads).\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\" \/>\n<meta property=\"og:site_name\" content=\"RiskInsight\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-22T15:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/couverture.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"853\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Emma Barfety, Valentin Vie, Robin REMAUD\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emma Barfety, Valentin Vie, Robin REMAUD\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\"},\"author\":{\"name\":\"Emma Barfety\",\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/#\/schema\/person\/c7fd6deaa2f9ea997b4758ea2361bc41\"},\"headline\":\"ChatGPT &amp; DevSecOps \u2013 What are the new cybersecurity risks introduced by the use of AI by developers?\u00a0\",\"datePublished\":\"2023-08-22T15:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\"},\"wordCount\":1710,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/couverture.webp\",\"keywords\":[\"artificial intelligence\",\"chatgpt\",\"DevSecOps\"],\"articleSection\":[\"Cloud &amp; Next-Gen IT 
Security\",\"Focus\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\",\"url\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/\",\"name\":\"ChatGPT &amp; DevSecOps \u2013 What are the new cybersecurity risks introduced by the use of AI by developers?\u00a0 - RiskInsight\",\"isPartOf\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.riskinsight-wavestone.com\/en\/2023\/08\/chatgpt-devsecops-what-are-the-new-cybersecurity-risks-introduced-by-the-use-of-ai-by-developers\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.riskinsight-wavestone.com\/wp-content\/uploads\/2023\/08\/couverture.webp\",\"datePublished\":\"2023-08-22T15:00:00+00:00\",\"description\":\"In November 2022, the conversational agent ChatGPT developed by OpenAI was made accessible to the general public. Since then, it's an understatement to say that this new tool has garnered interest. 
Written by Emma Barfety, Valentin Vie, and Robin Remaud. Published on RiskInsight, the cybersecurity & digital trust blog by Wavestone's consultants, on 22 August 2023. Estimated reading time: 9 minutes.