AI Security Shield: How Can Enterprises Guard Against Generative AI Risks?

Generative AI makes stuff up. It can be biased. Sometimes it spits out toxic text. So can it be "safe"? Rick Caccia, CEO of WitnessAI, believes it can. In an interview with TechCrunch, Caccia, formerly SVP of marketing at Palo Alto Networks, said that securing the AI models themselves is a real problem, but it is distinct from securing how those models are used. He compared generative AI to a sports car: a powerful engine gets you nowhere without good brakes and steering, and the controls matter as much as the horsepower. Enterprises have shown clear demand for such controls. According to an IBM study, 51% of CEOs are hiring for generative AI-related roles that did not exist until this year, yet only 9% of companies say they are prepared to manage threats related to generative AI, including privacy and intellectual property concerns.

WitnessAI's platform sits between employees and the custom generative AI models their employers use: not models gated behind an API, such as OpenAI's GPT-4, but self-hosted models along the lines of Meta's Llama 3. There, it applies risk-mitigating policies and safeguards to the traffic it intercepts. The product is organized into modules, each targeting a different form of generative AI risk. One module lets organizations implement rules that prevent specific teams from using generative AI-powered tools in unintended ways (for example, asking about unreleased earnings or pasting in internal source code). Another redacts proprietary and sensitive information from prompts before they are sent to models, and applies techniques to shield models from prompt-based attacks that try to force them off script.
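To make the redaction idea concrete, here is a minimal Python sketch of a prompt-filtering middleware. The patterns, labels, and `redact_prompt` function are illustrative assumptions; WitnessAI has not published how its module actually detects or rewrites sensitive content.

```python
import re

# Hypothetical patterns a redaction module might match; the real
# product's detection rules are not public.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace each sensitive-data match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

# The redacted prompt is what would be forwarded to the model.
print(redact_prompt("Summarize the contract for jane@acme.com, key sk-abc123def456ghi789."))
# -> "Summarize the contract for [REDACTED:EMAIL], key [REDACTED:API_KEY]."
```

In a real deployment this kind of filter would likely use trained classifiers rather than regexes, but the intercept-rewrite-forward flow is the same.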

WitnessAI's pitch is to define the problem of safe AI adoption and offer the solutions in the same package. For chief information security officers (CISOs), the platform protects the business by enforcing data protection, preventing prompt injection, and applying identity-based policies. For chief privacy officers, it provides visibility and reporting on activity and risk, supporting compliance with existing and incoming regulations.
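As a rough illustration of what identity-based policy enforcement means in practice, the sketch below gates each request on a team-to-tool allowlist. The `POLICIES` table, `Request` shape, and `enforce` function are hypothetical; WitnessAI's actual policy model is not public.

```python
from dataclasses import dataclass

# Hypothetical policy table keyed by team.
POLICIES = {
    "customer-support": {"allowed_tools": {"internal-chatbot"}},
    "engineering": {"allowed_tools": {"internal-chatbot", "code-assistant"}},
}

@dataclass
class Request:
    user: str
    team: str
    tool: str
    prompt: str

def enforce(request: Request) -> bool:
    """Allow the request only if the user's team is cleared for the tool."""
    policy = POLICIES.get(request.team)
    return policy is not None and request.tool in policy["allowed_tools"]

req = Request(user="alice", team="customer-support",
              tool="code-assistant", prompt="...")
print(enforce(req))  # False: support staff are not cleared for the code assistant
```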

There is, however, a privacy concern baked into WitnessAI itself: all data passes through its platform before reaching the model. The company is transparent about this, and it even offers tools for monitoring employee activity, but the arrangement could create privacy risks of its own. Caccia's reassurance is that the platform is isolated and encrypted by design, so customer secrets cannot be exposed: each customer gets a separate instance of the platform with its own encryption keys, keeping that customer's AI activity data isolated.
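Here is a minimal sketch of that per-tenant isolation, assuming a simple symmetric scheme built on the `cryptography` library's Fernet API. The key store, function names, and record format are invented for illustration; in practice keys would live in a KMS or HSM, and WitnessAI's actual key management is not public.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical per-tenant key store: each customer gets its own key.
tenant_keys = {
    "acme-corp": Fernet.generate_key(),
    "globex": Fernet.generate_key(),
}

def store_activity(tenant: str, record: str) -> bytes:
    """Encrypt an AI-activity record with the tenant's own key before persisting."""
    return Fernet(tenant_keys[tenant]).encrypt(record.encode())

def read_activity(tenant: str, blob: bytes) -> str:
    """Only the owning tenant's key can decrypt its records."""
    return Fernet(tenant_keys[tenant]).decrypt(blob).decode()

blob = store_activity("acme-corp", "user=alice prompt='Q3 forecast...'")
print(read_activity("acme-corp", blob))  # succeeds
# read_activity("globex", blob) would raise InvalidToken: keys are isolated per tenant.
```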

While this may alleviate customers' fears about data exposure, it does not resolve a separate concern: the platform's potential as a workplace surveillance tool. Surveys show that people generally dislike having their workplace activity monitored, regardless of the intentions behind it.

In conclusion, WitnessAI aims to address the risks of generative AI through controls and risk-mitigating policies: protecting data, preventing unauthorized use, and supporting regulatory compliance. Privacy concerns about routing all traffic through its platform are answered with an isolated, per-customer encrypted design, but the platform's surveillance potential remains a live concern for employees.
