Open-source security platform JFrog has released the second instalment of its latest research series, detailing unique client-side and safe-mode software vulnerabilities found in 22 ML-related projects.
These vulnerabilities allow attackers to hijack ML clients within an organisation, such as data scientists’ tools and MLOps pipelines, by triggering code execution when an untrusted piece of data is loaded. Coupled with post-exploitation techniques, even a single infected client can give bad actors extensive lateral movement inside an organisation.
This second blog in the series analyses vulnerabilities discovered and disclosed by the JFrog security research team, such as:
- MLflow Recipe XSS to code execution;
- H2O Code Execution via Malicious Model Deserialisation; and
- PyTorch “weights_only” Path Traversal Arbitrary File Overwrite.
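To illustrate the class of risk behind findings like these (this is a generic sketch, not a reproduction of any specific CVE above): Python's pickle protocol, which underpins many ML model formats, lets a serialised object specify an arbitrary callable to invoke at load time via `__reduce__`. The `MaliciousModel` class below is hypothetical, and `print` stands in for what would, in a real attack, be `os.system` or similar.

```python
import pickle

class MaliciousModel:
    """A stand-in for a booby-trapped 'model' file."""
    def __reduce__(self):
        # On unpickling, the loader calls print(...) with this argument.
        # An attacker would substitute os.system or another dangerous call.
        return (print, ("arbitrary code ran during model load",))

# The attacker ships this payload as a "model" artefact.
payload = pickle.dumps(MaliciousModel())

# The victim only has to *load* the file -- no method calls needed.
pickle.loads(payload)  # the embedded call executes immediately
```

This is why the advice below stresses never loading untrusted models: with pickle-based formats, deserialisation itself is the code-execution trigger.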
“AI & machine learning tools hold immense potential for innovation, but can also open the door for attackers to cause widespread damage to any organisation,” said JFrog’s VP of Security Research Shachar Menashe. “To safeguard against these threats, it’s important to know which models you’re using and never load untrusted machine learning models even from a safe machine learning repository. Doing so can lead to remote code execution in some scenarios, causing extensive harm to your organisation.”