WitnessAI is building guardrails for generative AI models

Generative AI makes things up. It can be biased. Sometimes it spits out toxic text. So can it be "safe"?

Rick Caccia, the CEO of WitnessAI, believes it can.

"Securing AI models is a real problem, and it's one that's especially shiny for AI researchers, but it's different from securing use," Caccia, previously SVP of marketing at Palo Alto Networks, told everydayai in an interview. "I think of it like a sports car: having a more powerful engine (i.e., the model) doesn't buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine."


There's certainly demand for such controls among enterprises, which, while cautiously optimistic about generative AI's productivity-boosting potential, have concerns about the tech's limitations.

Fifty-one percent of CEOs are hiring for generative AI-related roles that didn't exist until this year, an IBM poll finds. Yet only 9% of companies say they're prepared to manage threats, including threats to privacy and intellectual property, arising from their use of generative AI, per a Riskonnect survey.

WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer is using (not models gated behind an API like OpenAI's GPT-4, but more along the lines of Meta's Llama 3) and applies risk-mitigating policies and safeguards.


"One of the promises of enterprise AI is that it unlocks and democratizes enterprise data to employees so that they can do their jobs better. But unlocking all that sensitive data too well, or having it leak or get stolen, is a problem."

WitnessAI sells access to several modules, each focused on tackling a different kind of generative AI risk. One lets organizations enforce rules to prevent staffers on particular teams from using generative AI-powered tools in ways they're not supposed to (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive information from the prompts sent to models, and implements techniques to protect models against attacks that might force them to go off-script.
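The intercept-screen-forward flow those modules describe can be sketched roughly as follows. This is an illustrative assumption, not WitnessAI's actual implementation; the pattern names, policy table and function here are hypothetical.

```python
import re

# Hypothetical regexes for sensitive strings to redact before a prompt
# reaches a model (SSN-like numbers, API-key-like tokens).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

# Hypothetical per-team policy: topics a given team may not ask about.
BLOCKED_TOPICS = {"finance-team": ["pre-release earnings"]}

def screen_prompt(prompt: str, team: str) -> tuple[str, bool]:
    """Return (possibly redacted prompt, allowed flag).

    Blocked prompts are returned unmodified with allowed=False so the
    caller can refuse to forward them to the model.
    """
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS.get(team, []):
        if topic in lowered:
            return prompt, False  # policy violation: do not forward
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
    return redacted, True
```

In a real deployment this screening would sit in a low-latency proxy between the employee's client and the model endpoint, with the policy table driven by identity rather than a hard-coded dict.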

"We think the best way to help enterprises is to define the problem in a way that makes sense, for example safe adoption of AI, and then sell a solution that addresses the problem," Caccia said. "The CISO wants to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to make sure that existing and incoming regulations are being followed, and we give them visibility and a way to report on activity and risk."

But there's one tricky thing about WitnessAI from a privacy perspective: all data passes through its platform before reaching a model. The company is transparent about this, even offering tools to monitor which models employees access, the questions they ask the models and the answers they get. But it could create its own privacy risks.


In response to questions about WitnessAI's privacy policy, Caccia said that the platform is "isolated" and encrypted to prevent customer secrets from spilling out into the open.

"We've built a millisecond-latency platform with regulatory separation built right in: a unique, isolated design to protect enterprise AI activity in a way that's fundamentally different from the usual multi-tenant software-as-a-service offerings," he said. "We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them; we can't see it."

Perhaps that will allay customers' fears. As for workers worried about the surveillance potential of WitnessAI's platform, it's a tougher call.


Surveys show that people don't generally appreciate having their workplace activity monitored, whatever the reason, and believe it negatively impacts company morale. Nearly a third of respondents to a Forbes survey said they might consider leaving their jobs if their employer monitored their online activity and communications.

But Caccia asserts that interest in WitnessAI's platform has been and remains strong, with a pipeline of 25 early corporate users in its proof-of-concept phase. (It won't become generally available until Q3.) And, in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google's corporate venture arm.

The plan is to put the tranche of funding toward growing WitnessAI's 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI's rivals in the nascent space for model compliance and governance solutions, not only tech giants like AWS, Google and Salesforce but also startups such as CalypsoAI.


"We've built our plan to get well into 2026 even if we had no sales at all, but we already have almost 20 times the pipeline needed to hit our sales targets this year," Caccia said. "This is our initial funding round and public launch, but secure AI enablement and use is a new area, and all of our features are evolving with this new market."
