Pointers at a Glance
- A recent study found that software developers using AI-powered assistants produce buggier code.
- The study also found that developers using AI assistants are overconfident in the quality of their code.
Computer scientists at Stanford University recently found that developers using AI-powered assistants produce buggier code. In a paper titled ‘Do Users Write More Insecure Code with AI Assistants?’, they examined the output of 47 developers, comparing those who worked with AI coding assistance against those who worked without it.
Participants were asked to write code, some with AI assistance and some without. The results showed that 79% of participants without AI assistance gave correct answers, compared with only 67% of those using AI assistance.
The authors wrote that developers with AI assistance produced buggier code than those without AI access, with significant results for the string-encryption and SQL-injection tasks. They also found that developers using AI assistants have misplaced confidence in their code quality: those with AI assistance were more likely to believe they had written secure code than those without AI access.
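To make the SQL-injection finding concrete, here is a minimal, hypothetical sketch of the bug class the study measured; the table, column, and variable names are invented for illustration, using Python's standard-library sqlite3 module.

```python
import sqlite3

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Attacker-controlled input containing a quote that rewrites the query.
user_input = "nobody' OR '1'='1"

# Vulnerable: interpolating user input directly into the SQL string
# lets the injected OR clause match every row.
vulnerable = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as a plain value,
# never as SQL syntax.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # leaks the secret despite the wrong username
print(safe)        # returns no rows, as intended
```

The difference is a single habit, passing values as parameters rather than formatting them into the query string, which is exactly the kind of detail an AI suggestion may silently omit.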
Separately, a class-action lawsuit alleges that GitHub Copilot infringes on the rights of developers by scraping their code without offering due attribution. As a result, developers who use code suggested by Copilot could unwittingly be infringing copyright.
Earlier this year, Bradley M. Kuhn of the Software Freedom Conservancy wrote that Copilot leaves copyleft compliance as an exercise for the user, exposing users to a liability that only grows as Copilot improves.
Developers using current AI assistants risk producing buggier, less secure, and potentially litigable code. It is therefore vital that they be aware of these pitfalls and exercise caution when relying on AI coding assistance.