Everything old is new again: AI-driven development, open source

By Fred Bals

Remember how quickly open source software went from niche to normal? The new “Global State of DevSecOps” report from Black Duck argues that there are clear parallels between the current surge in AI-assisted development and the historic embrace of open source software by developers.

As the report notes, both movements have helped to revolutionize software development, but both have introduced unique security challenges. The report, based on a survey of over 1,000 software security stakeholders, highlights that while AI adoption by development teams is nearly universal, securing AI-generated code lags, mirroring the early days of unmanaged — and unsecured — open source use.

AI coding adoption, security concerns

Just as open source challenged traditional software development models, AI-assisted coding is transforming how code is written and used. Both movements have disrupted established software development practices, promising increased efficiency and development speed. The open source revolution democratized software development by providing freely available code and collaborative platforms. Similarly, AI coding assistants are democratizing programming knowledge, making it easier for developers of all skill levels to tackle complex coding tasks.

However, the report underscores the fact that using AI coding assistants introduces risks when not properly managed, much like the early days of open source adoption. As with open source use, bringing AI-assisted coding tools into software development presents unique intellectual property (IP), licensing and security challenges that, without careful management by development teams, can leave unprepared organizations exposed to legal and security risk.

For example, both unmanaged open source and AI-generated code can create ambiguity about IP ownership and licensing, especially when the AI model was trained on datasets that may include open source or other third-party code without attribution. If an AI coding assistant suggests a code snippet without noting its license obligations, it can become a legal minefield for anyone using that code. Although it might only be a snippet, users of the software must still comply with any license associated with it.

AI-assisted coding tools can also introduce security vulnerabilities into codebases. A study by researchers at Stanford University found that developers who used AI coding assistants were more likely to introduce security vulnerabilities into their code. This mirrors concerns long associated with open source software, where the “many eyes” approach to security doesn’t always prevent vulnerabilities from slipping through. One researcher cited in the report flatly concludes that “auto-generated code cannot be blindly trusted, and still requires a security review to avoid introducing software vulnerabilities.”
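
To make that concrete, here is a hypothetical illustration, not drawn from the report or the Stanford study: the kind of database lookup an AI assistant might plausibly suggest, alongside the parameterized version a security review would insist on. The function names and table schema are invented for the example.

    import sqlite3

    # Hypothetical AI-suggested lookup: it builds the query with string
    # formatting, so a crafted username can inject arbitrary SQL.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchone()

    # What a security review should turn it into: a parameterized query
    # that lets the database driver handle escaping.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchone()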

According to the report, over 90 percent of organizations are now using AI tools in some capacity for software development. Yet 21 percent of respondents admit that their teams bypass corporate policies to use unsanctioned AI tools, making oversight difficult (if not impossible). This echoes the early days of open source use, when few executives were aware that their development teams were incorporating open source libraries into proprietary code, let alone the extent of that use.

Amplifying the noise

The Black Duck report also highlights a significant challenge in application security testing: tool proliferation. Eighty-two percent of respondents stated that their organizations use between six and 20 different security testing tools. While the goal is comprehensive security coverage, each additional tool introduced into the development workflow makes that workflow more complex.

One major issue caused by tool proliferation is an increase in “noise” — irrelevant or duplicative results that bog down development teams. The report reveals that 60 percent of respondents consider over 20 percent of their security test results to be noise. The result is a significant drain on efficiency, as security teams struggle to sift through irrelevant findings and distinguish genuine threats.
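
One common way teams cut through that noise is to normalize and deduplicate findings before they reach a developer's queue. The sketch below is a simplified illustration rather than a feature of any particular product: it keys each finding on rule, file and line so the same issue reported by several tools is counted only once.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        tool: str       # which scanner reported the issue
        rule_id: str    # e.g., a CWE or rule identifier
        file: str
        line: int
        severity: str

    def deduplicate(findings: list[Finding]) -> list[Finding]:
        """Collapse findings that point at the same rule, file and line,
        keeping the first occurrence regardless of which tool reported it."""
        seen: set[tuple[str, str, int]] = set()
        unique = []
        for f in findings:
            key = (f.rule_id, f.file, f.line)
            if key not in seen:
                seen.add(key)
                unique.append(f)
        return unique

    # Example: two tools flag the same hard-coded secret on the same line.
    reports = [
        Finding("sast-tool-a", "CWE-798", "app/config.py", 12, "high"),
        Finding("sast-tool-b", "CWE-798", "app/config.py", 12, "high"),
        Finding("sast-tool-a", "CWE-89", "app/db.py", 40, "critical"),
    ]
    print(len(deduplicate(reports)))  # 2 distinct findings instead of 3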

A balancing act

The report acknowledges the persistent tension between robust security testing and maintaining development speed. Eighty-six percent of respondents reported that security testing slows down their development process to some degree. This finding underscores the challenge organizations face in integrating security practices into increasingly fast-paced development cycles, especially with the added complexities of AI-generated code.

The report highlights that even though automation in security testing is increasing, manual processes in managing security testing queues directly correlate with perceptions of security testing slowing down development. Organizations relying entirely on manual processes for their testing queues were significantly more likely to perceive a severe impact on development speed compared to those using automated solutions. The finding suggests that while security testing is often seen as a bottleneck, optimizing processes through automation can significantly alleviate the friction between security and development speed.
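
What that automation can look like in practice is straightforward. The following sketch, which assumes a simple JSON result format and file names invented for the example, compares the latest scan against a baseline and escalates only findings that are both new and above a severity threshold, rather than having someone route every result by hand.

    import json

    SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

    def load_findings(path: str) -> dict:
        """Load findings keyed by a stable fingerprint (rule + location)."""
        with open(path) as fh:
            findings = json.load(fh)
        return {f"{f['rule_id']}:{f['file']}:{f['line']}": f for f in findings}

    def triage(baseline_path: str, current_path: str, min_severity: str = "high"):
        baseline = load_findings(baseline_path)
        current = load_findings(current_path)
        threshold = SEVERITY_RANK[min_severity]
        # Only findings that are new since the baseline and meet the
        # severity bar are pushed to the development team's queue.
        return [
            finding for key, finding in current.items()
            if key not in baseline
            and SEVERITY_RANK[finding["severity"]] >= threshold
        ]

    if __name__ == "__main__":
        for finding in triage("baseline.json", "latest_scan.json"):
            print(f"{finding['severity'].upper()}: {finding['rule_id']} "
                  f"in {finding['file']}:{finding['line']}")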

Future of DevSecOps

The 2024 Global State of DevSecOps report urges its readers to view the outlined challenges not as insurmountable obstacles, but as opportunities for positive change. To effectively navigate the evolving landscape of DevSecOps, the report recommends several key strategies:

– Tool consolidation and integration. Reducing reliance on a multitude of disparate security tools can significantly mitigate the issue of noise and improve efficiency. Organizations should prioritize integrating their security tools to streamline processes and centralize results for better analysis.

– Embracing automation. Automating security testing processes, particularly the management of testing queues and the parsing and cleansing of results, can significantly reduce the burden on security teams and minimize the impact on development speed.

– Establishing AI governance. With the widespread adoption of AI tools, organizations must establish clear policies and procedures for their use in development. This includes investing in tools specifically designed to vet and secure AI-generated code, addressing concerns about vulnerabilities and potential licensing conflicts.

As AI becomes increasingly intertwined with software development, the need for robust and adaptable security practices becomes paramount. The report’s findings serve as a timely reminder that while AI holds immense potential for innovation, it also presents unique security challenges. By embracing automation, streamlining toolsets, and establishing clear AI governance policies, organizations can pave the way for a future where security and development speed coexist rather than collide.

Fred Bals is the senior security researcher at Black Duck, an all-in-one application security platform optimized for development, security and operations (DevSecOps).
