Protect AI raises $35M to build a suite of AI-defending tools
Kyle Wiggers@kyle_l_wiggers / 9:00 AM EDTâąJuly 26, 2023 Comment
Protect AI, a startup building tools to harden the security around AI systems, today announced that it raised $35 million in a Series A round led by Evolution Equity Partners with participation from Salesforce Ventures, Acrew Capital, boldstart ventures, Knollwood Capital and Pelion Ventures.
The tranche is more than double the size of Protect AIâs seed round, which closed last December, and it brings the startupâs total raised to $48.5 million. Co-founder and CEO Ian Swanson says the proceeds will be put toward enhanced capabilities in Protect AIâs platform, an expanded research effort and launching new open source projects.
âNow, we have plenty of capital to weather the storm for years to come,â Swanson told TechCrunch in an email interview, adding that Protect plans to grow its workforce from 25 people to 40 by the end of the year.
Swanson co-launched Protect AI with Daryan Dehghanpisheh in 2022. Before Protect, Swanson and Dehghanpisheh did stints at AWS and Oracle and helped to launch DataScience.com, a AI development platform that was later acquired by Oracle, their old employer, for an undisclosed sum.
âWe founded Protect AI 18 months ago based on our experience being involved in some of the biggest machine learning and AI deployments in the world,â Swanson said. âWe saw the value that AI can deliver, but also the risks that are inherent in these systems. Our mission is to help customers build a safer AI-powered world.â
As I noted in my previous coverage of Protect AI, thereâs no evidence to suggest that AI models â and the apps powering them, for that matter â are being attacked on a mass scale. (Perhaps the one exception is OpenAIâs GPT-4, which has become a target for pirates selling exposed API keys.) But Swanson makes the case that, as AI becomes more broadly adopted in sensitive industries, such as finance and healthcare, itâs only a matter of time before that changes.
Regardless of whether that prediction comes true, Protect provides a range of services designed to address what Swanson describes as AI security âweak points.â Its flagship tool, AI Radar, delivers visibility into the various components used to build an AI model â including the data used for training, testing datasets and code â and then generates a âmachine learning bill of materials,â or MLBOM for short.
âWeâre creating a new category of machine learning security that focuses on practical threats â threats in the AI and machine learning supply chain, and in how these models are being built,â Swanson said. âWhat AI Radar is able to do is take a look at that supply chain and find practical threats and risks that we can provide visibility and remediation for with our customers ⊠It can scan all the MLBOMs ⊠for every machine learning model within an enterprise and find which pipelines are using [vulnerable software].â
To Swansonâs point, a number of popular AI open source projects have been found to contain exploitable code. A recent survey from Endor Labs identified vulnerabilities in 52% of the top 100 AI open source projects. And, broadly speaking, the volume of supply chain cyberattacks is increasing. Sonatype reported late last year that attacks involving malicious third-party software increased by 633% from 2021 to 2022.
Privacy and security concerns have also deterred some companies, including Samsung, Apple and Verizon, from allowing their companies to leverage generative AI tools like ChatGPT in the course of their work. The fear is that confidential information entered into those tools could somehow leak into the public domain, intentionally or not.
In addition to AI Radar, Protect offers tools to mitigate certain types of AI attacks, such as prompt injection attacks. Prompt injection is when an AI that works from text-based instructions â prompts â to accomplish tasks is tricked by malicious, adversarial prompts to perform tasks that werenât a part of its original objective.
Protect can also scan documents from Jupyter Notebook, one of the more popular platforms used to create AI models and run data science experiments, for common issues. (Jupyter ânotebooks,â as theyâre called, contain all the code necessary to run AI development tasks like model training and fine-tuning.) Improperly secured Jupyter Notebook files can become vulnerable to Python-based ransomware and cryptocurrency mining attacks, research firms have found.
Among other potentially problematic lines in code, Protect evaluates Jupyter notebooks for personally identifiable information (e.g. names and phone numbers), internal-use authentication tokens and credentials and open source code with a ânonpermissiveâ license that might prohibit it from being used in commercial systems.
âWe have to transition from machine learning operations, or MLOps, which is a tried-and-true practice at this point that companies have been doing for over a decade, and inject security,â Swanson said. âWe need to get to the point where we are truly performing ML security operations â âMLSecOpsâ â at scale within large enterprises.â
Protect has a few competitors in the nascent space for AI-defending security tools. Thereâs Resistant AI, which is developing systems to protect algorithms from automated attacks. And thereâs HiddenLayer, which claims that its technology can defend models from attacks without the need to access any raw data or a vendorâs models.
Robust Intelligence, CalypsoAI and Troj.ai could be counted among Protectâs rivals, as well. But Protect claims to have high-profile private- and public sector-customers in the financial services, healthcare, life sciences and energy industries, signaling that itâs managed to carve out something of a niche for itself.
âThe general slowdown in tech is not happening in AI, or security,â Swanson said. (Not to aggressively fact-check Swanson, but itâs worth noting that thereâs been a downturn in cybersecurity funding, actually, with Q1 2023 marking the lowest venture capital financing for security in a decade. How might that impact Protect? Tough to say at present.) âProtect AI is at that intersection. The moment is now for AI in terms of deployment â the value itâs delivering. We help to answer questions like âhow do we de-risk AI?’â
More from TechCrunch