Generative AI makes stuff up. It can be biased. Sometimes it spits out toxic text. So can it be “safe”? Rick Caccia, the CEO of WitnessAI, believes it can. “Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use,” Caccia, formerly SVP of marketing at Palo Alto Networks, told TechCrunch in an interview.

“I think of it like a sports car having a more powerful engine — i.e., model — doesn’t buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine.”

There’s certainly demand for such controls among enterprises, which — while cautiously optimistic about generative AI’s productivity-boosting potential — have concerns about the tech’s limitations. Fifty-one percent of CEOs are hiring for generative AI-related roles that didn’t exist until this year, an IBM study finds. Yet only 9% of companies say that they’re prepared to manage threats — including threats pertaining to privacy and intellectual property — arising from their use of generative AI, per a Riskonnect report.

WitnessAI’s platform intercepts activity between employees and the custom generative AI models that their employer is using — not models gated behind an API like OpenAI’s GPT-4, but more along the lines of Meta’s Llama 3 — and applies risk-mitigating policies and safeguards.

“One of the promises of enterprise AI is that it unlocks and democratizes enterprise data to the employees so that they can do their jobs better. But unlocking all that sensitive data too well — or having it leak or get stolen — is a problem,” Caccia said.

WitnessAI sells access to several modules, each focused on tackling a different form of generative AI risk. One lets organizations implement rules that prevent staffers on particular teams from using generative AI-powered tools in ways they’re not supposed to (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive info from the prompts sent to models and implements techniques to shield models against attacks that might force them to go off-script.
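To make the idea concrete, here is a minimal sketch of what a policy-and-redaction gateway of this kind might look like. This is purely illustrative — the function names, blocked topics, and regex patterns are assumptions for the example, not WitnessAI’s actual implementation.

```python
import re

# Hypothetical patterns a redaction module might scrub from prompts
# before forwarding them to a model (illustrative only).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical per-team policy: topics each team is barred from asking about.
BLOCKED_TOPICS = {
    "engineering": ["earnings report"],
    "finance": ["source code"],
}

def apply_policy(team: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, transformed_prompt) for a prompt from the given team.

    Blocked prompts are rejected outright; allowed prompts have
    sensitive substrings replaced with placeholder labels.
    """
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS.get(team, []):
        if topic in lowered:
            return False, ""  # block entirely; nothing reaches the model
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[{label}]", redacted)
    return True, redacted

allowed, out = apply_policy("engineering", "Summarize the pre-release earnings report")
print(allowed)  # False — blocked by the engineering team's policy

allowed, out = apply_policy("finance", "Draft a reply to alice@example.com")
print(out)  # Draft a reply to [EMAIL]
```

A production system would sit inline as a proxy between the employee’s client and the model endpoint, applying checks like these to every request and logging the outcome for audit.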

“We think the best way to help enterprises is to define the problem in a way that makes sense — for example, safe adoption of AI — and then sell a solution that addresses the problem,” Caccia said. “The CISO wants to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to ensure that existing — and incoming — regulations are being followed, and we give them visibility and a way to report on activity and risk.”

But there’s one tricky thing about WitnessAI from a privacy perspective. All data passes through its platform before reaching a model. The company is transparent about this, even offering tools to monitor which models employees access, the questions they ask the models and the responses they get. But it could create its own privacy risks.

In response to questions about WitnessAI’s privacy policy, Caccia said that the platform is “isolated” and encrypted to prevent customer secrets from spilling out into the open. “We’ve built a millisecond-latency platform with regulatory separation built right in — a unique, isolated design to protect enterprise AI activity in a way that is fundamentally different from the usual multi-tenant software-as-a-service offerings,” he said. “We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them — we can’t see it.”

Perhaps that will allay customers’ fears. As for workers concerned about the surveillance potential of WitnessAI’s platform, it’s a tougher call. Surveys show that people don’t generally appreciate having their workplace activity monitored, regardless of the reasons behind it. Balancing generative AI’s benefits with the need for privacy and security is an ongoing challenge, one that platforms like WitnessAI are trying to meet.
