With the increasing reliance on collaborative, cloud-based systems, attack surfaces and code vulnerabilities have grown dramatically, and automation is key to fielding and defending software systems at scale. Researchers in symbolic AI have had considerable success in finding flaws in human-written code, and run-time testing methods such as fuzzing uncover numerous bugs. Both approaches, however, share a major deficiency: they detect errors but cannot fix them, and they scale poorly and resist full automation. Static analysis methods additionally suffer from a false-positive problem: an overwhelming share of reported flaws are not real bugs. This raises an interesting conundrum: symbolic approaches can actually have a detrimental impact on programmer productivity, and therefore do not necessarily contribute to improved code quality. What is needed is a combination of automated code generation using large language models (LLMs) with scalable defect-elimination methods from symbolic AI, creating an environment for the automated generation of defect-free code.
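To make the detect-but-not-fix limitation concrete, the sketch below shows a minimal random fuzzer. The `checksum` target and its planted bug are hypothetical, invented purely for illustration: the fuzzer surfaces crashing inputs, but repairing the underlying defect is left entirely to the developer.

```python
import random

def checksum(data: str) -> int:
    # Hypothetical target with a latent bug: it divides by the number of
    # digit characters, so any input containing no digits crashes it.
    digits = [c for c in data if c.isdigit()]
    return sum(ord(c) for c in data) // len(digits)

def fuzz(target, trials: int = 1000, seed: int = 0):
    """Minimal random fuzzer: feed random printable strings to the target
    and record every input that raises an exception."""
    random.seed(seed)
    crashes = []
    for _ in range(trials):
        data = "".join(chr(random.randrange(32, 127))
                       for _ in range(random.randrange(0, 8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

failing_inputs = fuzz(checksum)
# The fuzzer only reports crashing inputs; it proposes no patch.
```

Even this toy loop finds many failure-inducing inputs, yet produces no fix, which is precisely the gap the combination of LLM-based generation and symbolic checking aims to close.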