AI Hallucinated Packages Fool Unsuspecting Developers

Summary:
A recent report from Lasso Security has raised concerns that software developers relying on chatbots to build applications may be using nonexistent, hallucinated software packages. The report, based on continued research by Bar Lanyado of Lasso, builds on previous findings demonstrating that large language models can inadvertently recommend packages that do not actually exist.

Last year, Lanyado highlighted the potential risks associated with these hallucinated package names. Threat actors could exploit such names by registering malicious packages under identical or similar titles, which users might unknowingly download on the strength of an AI chatbot's recommendation. In the latest research, Lanyado expanded the scope by testing four different AI models: GPT-3.5-Turbo, GPT-4, Gemini Pro, and Cohere's Coral. Using the LangChain framework to interact with the models, the team generated responses to more than 40,000 "how-to" questions.
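
To make the setup concrete, here is a minimal sketch of what one such query could look like through LangChain. The model wrapper, the example question, and the pip-install parsing step are illustrative assumptions; the researchers' actual question set and extraction pipeline are not reproduced here.

    # Hedged sketch: ask a chat model a how-to question via LangChain and
    # scan its answer for suggested `pip install <name>` commands. The
    # question and regex are illustrative, not the study's real pipeline.
    import re

    from langchain_openai import ChatOpenAI  # assumes langchain-openai is installed

    llm = ChatOpenAI(model="gpt-3.5-turbo")

    question = "How do I parse YAML front matter in Python?"  # example how-to question
    answer = llm.invoke(question).content

    # Every package name surfaced this way must be verified before use:
    # some of the suggested names may not exist on any index at all.
    suggested = re.findall(r"pip install ([A-Za-z0-9_.\-]+)", answer)
    print(suggested)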

The results were concerning: all four chatbots exhibited a high rate of hallucinations, with Gemini showing the highest at 64.5%. Not every hallucination was a one-off, either; on average, about 15% of them recurred across repeated queries, with Cohere peaking at 24.2%. That repeatability is what makes the problem exploitable, since a fake package name that a model suggests consistently gives an attacker a predictable target to register. Overall, the findings indicate a significant potential for AI models to recommend nonexistent software packages to developers.

Security Officer Comments:
One interesting finding was that an empty package, intentionally created by the researcher under one of the hallucinated names, was downloaded more than 30,000 times based solely on AI recommendations. The implications of this research are significant: it underscores the importance of thorough cross-verification when dealing with uncertain or AI-generated responses, especially where software packages are concerned.

Suggested Corrections:

  • First, exercise caution when relying on Large Language Models (LLMs). If an LLM presents you with an answer you are not entirely certain is accurate, particularly one concerning software packages, cross-verify it against an authoritative source before acting on it (see the sketch after this list).
  • Second, adopt a cautious approach to using Open Source Software (OSS). If you encounter a package you're unfamiliar with, visit its repository and gather essential information about it: the size of its community, its maintenance record, any known vulnerabilities, and the overall engagement it receives in stars and commits. Also check the date it was published and stay alert for anything that looks suspicious. Before integrating the package into a production environment, perform a comprehensive security scan.
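
As a concrete starting point for that cross-verification, the sketch below queries PyPI's public JSON endpoint (https://pypi.org/pypi/<package>/json) for a recommended package name. A 404 response means the name does not exist on the index, the telltale sign of a hallucinated package; a hit returns metadata such as the summary, homepage, and upload date that is worth reviewing by hand. This is a minimal illustration, not a substitute for a full security scan.

    # Hedged sketch: check whether a recommended package actually exists on
    # PyPI and surface metadata worth reviewing by hand. Minimal illustration
    # only; it does not replace a proper security review of the package.
    import sys

    import requests  # assumes the requests library is available

    def inspect_package(name: str) -> None:
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            print(f"'{name}' is NOT on PyPI -- possibly a hallucinated name.")
            return
        resp.raise_for_status()
        data = resp.json()
        info = data["info"]
        files = data.get("urls", [])  # files belonging to the latest release
        uploaded = files[0]["upload_time_iso_8601"] if files else "unknown"
        print(f"name:     {info['name']}")
        print(f"summary:  {info['summary']}")
        print(f"homepage: {info.get('home_page') or '(none listed)'}")
        print(f"uploaded: {uploaded}")

    if __name__ == "__main__":
        inspect_package(sys.argv[1] if len(sys.argv) > 1 else "requests")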

Link(s):
https://www.securityweek.com/ai-hallucinated-packages-fool-unsuspecting-developers/
https://lasso-security.webflow.io/blog/ai-package-hallucinations