Risks and Security of Enterprise Generative AI: A Look at How WitnessAI Protects Data

Generative AI is a powerful tool with the potential to reshape entire industries. It also carries real risks, including biased output and the generation of toxic text. To address these concerns, Rick Caccia, CEO of WitnessAI, argues that AI models need security controls before they can be used safely and responsibly.

Caccia compares securing AI models to driving a sports car: a powerful engine matters, but good brakes and steering are what actually let you control the car. In the same way, security controls are essential to the responsible use of AI models.

Enterprises are increasingly interested in generative AI and its potential to boost productivity. According to an IBM study, 51% of CEOs are hiring for generative AI-related roles that didn’t exist until this year. However, only 9% of companies feel prepared to manage the threats that may arise from the use of generative AI, including privacy and intellectual property concerns.

WitnessAI offers a platform that intercepts activity between employees and the custom generative AI models their employers run, applying risk-mitigating policies and safeguards along the way. Its focus is on models that companies host themselves, such as Meta's Llama 3, rather than models gated behind an API, such as OpenAI's GPT-4.
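WitnessAI has not published its implementation, but the interception pattern it describes is a familiar one: a gateway sits between the employee and the model and applies a policy check before forwarding a prompt. Below is a minimal conceptual sketch of that pattern; all names, rules, and functions in it are hypothetical.

```python
# Conceptual sketch of a policy-enforcing gateway between employees and an
# internally hosted model. Hypothetical illustration only, not WitnessAI's code.
from dataclasses import dataclass

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str = ""

def check_policy(user: str, prompt: str) -> PolicyDecision:
    """Hypothetical policy check: block prompts containing forbidden terms."""
    blocked_terms = ["project_codename"]  # placeholder rule
    for term in blocked_terms:
        if term in prompt.lower():
            return PolicyDecision(False, f"prompt contains blocked term: {term}")
    return PolicyDecision(True)

def gateway(user: str, prompt: str, model_fn) -> str:
    """Forward the prompt to the model only if the policy check passes."""
    decision = check_policy(user, prompt)
    if not decision.allowed:
        return f"Request blocked: {decision.reason}"
    return model_fn(prompt)

if __name__ == "__main__":
    # Stand-in for a real model endpoint.
    echo_model = lambda p: f"[model reply to: {p}]"
    print(gateway("alice", "Summarize this memo", echo_model))
```

Because every request flows through one choke point, the same gateway can log activity, enforce rules, and rewrite prompts, which is what makes the per-module design described below possible.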

The platform offers several modules, each targeting a different form of generative AI risk. For instance, organizations can implement rules that prevent specific teams from using generative AI-powered tools in unauthorized ways. WitnessAI also redacts proprietary and sensitive information from prompts before they reach a model and applies safeguards to defend models against prompt-based attacks.
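To make the redaction step concrete, here is a minimal sketch of replacing sensitive substrings with typed placeholders before a prompt leaves the corporate boundary. Production systems typically use trained detectors rather than regular expressions; this simplified version is an assumption for illustration, not WitnessAI's actual method.

```python
# Simplified prompt redaction using regular expressions.
# Illustrative only; real redaction engines are far more sophisticated.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected sensitive substrings with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com about SSN 123-45-6789"))
# -> Email [EMAIL REDACTED] about SSN [SSN REDACTED]
```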

WitnessAI aims to define the problem of safe AI adoption and provide a comprehensive solution. It helps Chief Information Security Officers (CISOs) protect their businesses by ensuring data protection, preventing prompt injection, and enforcing identity-based policies. It also assists Chief Privacy Officers in ensuring compliance with regulations by providing visibility and reporting capabilities.
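"Identity-based policies" in this context means tying what a user may do with generative AI to who they are, typically their team or role. A minimal sketch of that idea follows; the teams, tools, and rules are invented for illustration and do not reflect WitnessAI's actual policy model.

```python
# Sketch of identity-based policy enforcement: access to gen-AI tools
# is decided by the requester's team. All rules here are hypothetical.
ALLOWED_TOOLS_BY_TEAM = {
    "engineering": {"code-assistant", "internal-llm"},
    "customer-support": {"internal-llm"},
    "finance": set(),  # e.g., finance is barred from gen-AI tools entirely
}

def is_allowed(team: str, tool: str) -> bool:
    """Return True if the given team may use the given gen-AI tool."""
    return tool in ALLOWED_TOOLS_BY_TEAM.get(team, set())

assert is_allowed("engineering", "code-assistant")
assert not is_allowed("finance", "internal-llm")
```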

However, WitnessAI's design raises privacy questions of its own: every prompt and response passes through the platform before reaching the model. The company mitigates this by creating a separate, isolated instance of the platform for each customer, encrypted with keys specific to that customer, though routing all data through a third party always carries some residual risk.
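The isolation model described there, where each tenant's data is encrypted under that tenant's own key so that no other tenant's key can decrypt it, can be illustrated with a short sketch. This assumes symmetric Fernet encryption from the widely used `cryptography` package; it shows the isolation idea, not WitnessAI's actual infrastructure, where keys would live in a key-management service rather than in memory.

```python
# Sketch of per-tenant data isolation via tenant-specific encryption keys.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Each tenant gets its own symmetric key (in practice, managed in a KMS).
tenant_keys = {"acme": Fernet.generate_key(), "globex": Fernet.generate_key()}

def encrypt_for_tenant(tenant: str, plaintext: bytes) -> bytes:
    """Encrypt data under the named tenant's key only."""
    return Fernet(tenant_keys[tenant]).encrypt(plaintext)

def decrypt_for_tenant(tenant: str, ciphertext: bytes) -> bytes:
    """Decrypt data; fails with InvalidToken under any other tenant's key."""
    return Fernet(tenant_keys[tenant]).decrypt(ciphertext)

token = encrypt_for_tenant("acme", b"prompt log entry")
assert decrypt_for_tenant("acme", token) == b"prompt log entry"
```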

While WitnessAI’s platform offers transparency and tools to monitor AI activity, concerns about surveillance potential remain. Surveys show that people generally dislike having their workplace activity monitored, regardless of the reason.

In conclusion, securing AI models and addressing the risks associated with generative AI is crucial for its safe and responsible use. WitnessAI offers a platform that helps enterprises mitigate these risks by implementing rules, protecting against attacks, and ensuring compliance with privacy regulations. While privacy concerns exist, WitnessAI strives to provide a secure and isolated environment for AI activity.
