Are GenAI Systems Leaving Your Business Vulnerable to Attacks?

Businesses integrating generative AI (GenAI) systems into their operations face an unsettling reality: these advanced technologies may be creating significant security risks. The Cobalt State of Pentesting Report highlights a pronounced gap between identifying and resolving vulnerabilities in GenAI systems, with only 21% of identified flaws being remedied. This resolution rate is notably lower than that for other types of vulnerabilities, such as those in APIs and cloud services, which have correction rates of over 75% and 68%, respectively. This disparity underscores a critical issue in the rapid deployment of GenAI features by organizations aiming to gain a competitive edge, often without fully addressing security concerns.

Business Velocity vs Security Preparedness

One of the main factors contributing to the security gap in GenAI systems is the pace at which businesses are adopting these technologies. Companies are rushing to leverage the business advantages offered by GenAI, potentially at the expense of robust security measures. According to Cobalt CTO Gunter Ollmann, this problem stems partly from the late involvement of security teams in the development process and a considerable lack of security knowledge among AI development teams. Additionally, heavy reliance on third-party and open-source models complicates the situation further by making timely patching dependent on external providers.

Despite the low resolution rates, the GenAI vulnerabilities that do get fixed are typically fixed faster than other flaws: often within a week to a month, compared with a median of 67 days for other vulnerability types. An encouraging sign is the drop in the median time to address severe vulnerabilities, from 112 days in the previous report to 37 days. This improvement can be partially attributed to stronger engagement from business leadership and the integration of security measures earlier in the development stages.

High-Risk Issues in GenAI Systems

The report also reveals that a staggering 98% of businesses surveyed are incorporating GenAI into their products and services. However, only 66% of these companies are performing regular security assessments, highlighting another significant gap in security practices. GenAI systems are particularly prone to high-risk issues, with 32% of pentest findings in large language models being categorized as high risk, in contrast to an overall average of just 13%. This discrepancy further emphasizes the potential threats associated with integrating these advanced AI systems.

Structured penetration testing (pentesting) emerges as a vital approach to bridge the security gaps. Ollmann advocates for a structured and regular pentesting framework to effectively validate defensive controls. Organizations can significantly mitigate risks by integrating security early in the development process, planning appropriately, and investing in training. This proactive approach can address common vulnerabilities such as legacy issues and inadequate input/output validation before GenAI systems are deployed to users. These strategies form the foundation for a comprehensive security posture, protecting businesses from the high-risk nature of GenAI vulnerabilities.
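One way to act on that advice is to gate both what reaches the model and what leaves it. The sketch below illustrates the kind of input/output validation the report calls for, applied to an LLM-backed feature. It is a minimal, hypothetical example: the injection-marker list, length limit, and secret-key pattern are illustrative assumptions, not a production ruleset.

```python
import re

# Hypothetical markers of prompt-injection attempts; a real deployment
# would rely on a maintained, regularly updated ruleset.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

# Illustrative pattern for strings that look like leaked API keys.
SECRET_PATTERN = re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b")

def validate_prompt(prompt: str, max_len: int = 2000) -> bool:
    """Reject oversized prompts or prompts containing known injection markers."""
    if len(prompt) > max_len:
        return False
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_MARKERS)

def sanitize_output(text: str) -> str:
    """Redact anything resembling an API key before the response leaves the system."""
    return SECRET_PATTERN.sub("[REDACTED]", text)
```

Checks like these sit in front of and behind the model call, so they catch problems regardless of which third-party or open-source model is in use; the same pair of hooks is also a natural target for the regular pentesting the report recommends.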

Addressing the Challenges and Moving Forward

It is clear that GenAI systems offer immense potential and business advantages, but they come with unique security challenges that need urgent attention. By adopting proactive and structured security strategies, organizations can improve their vulnerability resolution rates and overall security posture significantly. The narrative from the Cobalt report underscores the necessity for balanced development and security efforts, allowing businesses to maintain their competitive edge without compromising on security.

To ensure the ongoing integrity of GenAI systems, companies must commit to regular security assessments and engage security teams early in the development process. Investing in training and resources to enhance security knowledge among AI development teams is crucial. Reliance on third-party and open-source models should be managed with careful oversight to ensure timely updates and patching. By addressing these issues, organizations can better protect themselves against potential attacks and close the gap between GenAI deployment and security preparedness.

The Path to Secure GenAI Integration

The findings are unambiguous: with only 21% of identified GenAI flaws remedied, organizations rushing to deploy GenAI features for competitive advantage are frequently neglecting thorough security measures. That hurried integration can leave businesses exposed to attack, underscoring the importance of balancing innovation with sound security protocols. Closing these gaps is essential to the sustainable, secure adoption of GenAI technologies in the business sphere.
