Trust Definition: Trust means different things within an enterprise; one definition centers on security, while the employee perspective is often overlooked.
Security First: Francis deSouza emphasizes security architecture as foundational for AI trust in organizations.
Transparency Needs: Honesty in vendor-client relationships is vital, but it doesn't equate to employee equity or voice.
The word "trust" appeared dozens of times across two sessions on enterprise AI at HumanX last week. What emerged from those sessions was evidence that trust does not mean the same thing everywhere in an organization.
For Francis deSouza, COO of Google Cloud, trust begins with security architecture.
"There is no such thing as an AI conversation without a security conversation," he said, outlining how enterprises must contend with new attack surfaces, agentic threats, and the need to rebuild data strategies from the ground up before deploying AI at scale.
When he described employees downloading tools outside IT's purview, he framed it as a risk-management problem. The solution, in his view, is a sanctioned platform, governed and auditable.
In a separate session on designing AI systems that earn user trust, Wendy Gonzalez of Sama offered a different entry point. She described a conversation she'd had with a colleague just before taking the stage. The concern wasn't job displacement. It wasn't a security question, and no threat model addresses it.
"It's not even about, is this going to take away my job? It was more like, if people see my prompts, will they think I'm not as smart?" — Wendy Gonzalez, Sama
The Gap Is in Plain Sight
This is the problem. Enterprise AI deployments are growing increasingly sophisticated at the technical definition of trust — security, reliability, auditability, model validation. They are substantially less developed at the human one: whether employees believe the systems shaping their work are fair, honest about their limitations, and built with any consideration of the people inside them.
Organizations are conflating the two. Build a secure, compliant, well-governed AI platform and the trust problem feels solved. What's been solved is the vendor definition that appears in security audits and compliance frameworks.
The employee definition remains largely unaddressed, and in most enterprise AI conversations, largely unasked.
Christina Cacioppo of Vanta offered what she called the clearest articulation of enterprise trust: "doing what it says on the tin" accountability.
In other words, if you said you'd do X and you're doing Y, say so. It's a reasonable standard, but it's an enterprise-to-enterprise one: how a vendor earns confidence from a customer. What employees are owed gets less airtime.
The session's most candid moment came when Cacioppo described Vanta's approach to internal transparency: telling new hires upfront that in any conflict, customers come first.
The logic is that disclosure softens the later shock of feeling deprioritized. It's honest. But disclosure is not the same as equity, and knowing your place in the stack rank is not the same as having any say in it.
Gonzalez came closest to the employee trust model when she described letting groups of workers test different tools, share what worked and what didn't, and celebrate both.
She framed it as a way to build genuine confidence rather than impose a platform from above. That's meaningfully different from mandating adoption and measuring the numbers. But it still stops at adoption.
Trust in the sense CHROs and COOs should care about goes further: do employees believe that AI is being deployed in ways that account for their interests, that the decisions it influences are fair, and that there's somewhere to go when they're not?
When the System Becomes the Decision-maker
The agentic layer makes this more urgent. Gonzalez drew a distinction that cuts to the center of it.
"It's one thing to trust an output, it's another thing to trust an entire system." — Wendy Gonzalez, Sama
When AI answers a question, a worker can evaluate the answer. When AI is routing work, flagging performance, or influencing who gets what opportunity, the worker often can't see the system at all. The transparency standards that make sense at the model level don't automatically extend to the people living inside the model's outputs.
DeSouza described the long-term goal as building a workforce that is "bilingual" — fluent in both their function and AI. He pointed to Google's internal hackathons, where the HR team won twice running, as evidence that broad adoption can produce genuine fluency across functions.
It's a useful signal. But fluency in a tool is not the same as trust in the system that tool belongs to. Employees can be highly capable AI users and still have no visibility into how AI is being used to evaluate, rank, or redeploy them.
"The successful workforce of the future will be bilingual in every function. Everybody will know their function, and AI." — Francis deSouza, Google Cloud
Both sessions were substantive. The speakers were thoughtful. What's notable is not what either got wrong, but what the enterprise AI conversation, taken as a whole, has yet to make room for.
Security and compliance are real, urgent, and genuinely hard problems. But they describe the infrastructure of trust, not the experience of it. For the employees being asked to build their working lives around these systems, that distinction is the whole question.
