AI vendors might promise their customers that they will handle data responsibly and deliver positive outcomes. But do they live up to their promises?
The concept of trust in the AI space isn’t new, and yet the trust between vendors and customers is becoming less a leap of faith and more a defined, ongoing conversation. Adoption of frameworks for responsible and trustworthy AI is becoming a basic expectation. CIOs and CISOs ask tougher questions about data security and outcomes. And they might walk away if they don’t get the answers they want.
What considerations do CIOs and CISOs make about AI trust in 2026, and what does that mean for their vendor relationships?
Defining trust in AI
Trust is a big part of activating an AI system or tool that could be seen as intrusive. By its very nature, AI must ingest data. Depending on the use case, that data could be personally identifiable information or intellectual property. CIOs and CISOs want to trust that the data in question is secure, will be used only for explicitly stated purposes and is actually necessary to deliver on promised outcomes.
That can be a tall order for vendors as they compete for customers and race to incorporate the latest and greatest AI capabilities into their products.
“AI is evolving so fast that full trust is very difficult,” Doug Kersten, CISO at Appfire, an enterprise collaboration software company, told InformationWeek.
Even if organizations establish a set of rules around AI, they can quickly become irrelevant as the technology and use cases change.
“There’s this desire to find a checklist, to go back to static measures and fixed controls. They’re looking for, ‘OK, well, how do we know this system is trustworthy? Can we establish trustworthiness and then not think about it anymore?’” said Martin Stanley, principal researcher for AI and cybersecurity at NIST.
The obvious answer is no. AI permeates an organization and changes too quickly to be evaluated only once, at its initial introduction.
Various frameworks and resources are available to help businesses grapple with questions around AI trust, risk, security and responsibility. NIST has the AI Risk Management Framework. There are also ISO 27001 certification and SOC 2 Type 2 attestation.
For some companies, these types of certifications and attestations serve as a baseline for evaluating their AI vendors.
“We actually divide up our vendors by criticality,” Kersten said. “Our mission-critical vendors: we require a SOC 2 or an ISO audit to be done. If they don’t have it, we don’t do business with them.”
It is early days for these frameworks and resources. Customers may see the value in working with a vendor that isn’t yet certified, but the expectation for external validation will likely become commonplace as time goes on.
“Am I expecting every single vendor to be ISO 27001-compliant right this second, today? Probably not,” said Bryan Wise, CIO at 6sense, an account-based marketing platform. “I would imagine by next year, if you ask me that same question, I think it will be table stakes.”
Scrutiny of AI vendors
The evaluation of AI vendors to determine if they meet an organization’s standards for trustworthiness and value is a team effort.
“As a CISO, I just can’t make that decision by myself because there are too many implications to what AI can do, and this capability impacts too much of the business,” Kersten said.
Governance committees offer a natural approach to tackling AI vendor evaluation. Leaders from security, IT, legal, procurement and any other relevant team gather to determine the right approach for their organization.
Appfire took this approach to determine its expectations around AI. Among the questions the group asks: “What would we allow? What wouldn’t we allow? How do we define AI? What’s an AI vendor? What steps do we take? How do we allow engineers to use AI products without causing a potential compromise in security or privacy or data loss?” Kersten said.
It isn’t enough to build policies around the answers to those questions and leave them in place. These committees need to continuously refine and update their vendor-evaluation approaches as technology and its use within an enterprise evolve.
Aimee Cardwell, CIO and CISO in residence at data governance solutions company Transcend, said she has seen some companies create an “AI czar” role. The point person evaluates AI use cases across a company and considers which questions to ask to determine business value and risk.
No matter how a company decides to approach this task, frameworks can provide a good starting place. But CIOs, CISOs and anyone else involved in vendor evaluation will need to go further.
“The NIST AI Risk Management Framework is something that we ask about and follow through on, but that risk management framework is a voluntary framework,” Kersten said. Though vendors may say they embrace the framework, the extent of their compliance can vary. “How deeply do you align to it? Those are difficult questions to answer sometimes.”
Asking vendors tough questions can help businesses weed out poor fits for the organization. Companies want answers around the following issues:
- Data use and protection. CIOs and CISOs want to know what data is being used, where it is going and how it is being secured. Before handing over customer data, they may consider whether that is even necessary. “I am not convinced that most companies need to use actual customer data in their AI work, but that they could be using either tokenized or de-identified data to get similar results,” Cardwell said. (A minimal sketch of that de-identification idea follows this list.)
Once a vendor has that data, will they use it to train their model? Where is that data going? How is it protected?
“I’d love to see an architectural diagram that shows exactly how data flows through a vendor system,” Cardwell said.
- Metadata. Some AI vendors use customers’ metadata for predictive analytics, according to Kersten. “‘We protect your data, but we use your metadata to do analysis,’ and then that raises a red flag,” he said.
- Right to be forgotten. CIOs and CISOs can learn a lot about a vendor when they ask how it handles the right to be forgotten. “Can you help me figure out how I would do that? Then that’s going to really get at the meat of how much control they have over the data that you’re feeding in,” Cardwell said.
- Acquisition vs. built in-house. Kim Huffman, CIO at Workiva, a financial reporting, sustainability, governance, risk, and compliance software company, asks vendors to articulate how they have approached AI development. Was it done in-house? Did the bolt-on capabilities arrive via acquisition? Do these bolted-on capabilities share a data model with the core system? “You can sniff out pretty quick if they’ve thought about AI being embedded in the process versus a bolt-on, and where their capabilities are as far as data security, certifications, things like that,” she said.
- Leadership. The AI market is relatively young; there are plenty of startups clamoring for customers. Wise takes a close look at the people involved. “Who are the investors? Are they backed well? Who are the founders of the company? Have they been in the space before? Do they have domain expertise? Can they speak with confidence about their roadmap and their view?” he said.
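On the de-identification point Cardwell raises above, here is a minimal sketch of what that could look like in practice. It is an illustration only, not any vendor’s actual API or a prescribed approach: the field names, the record shape and the notion of sending only tokenized records to a vendor are assumptions made for the example, and in a real deployment the secret key would live in a secrets store rather than in code.

```python
# Illustrative sketch only (assumed field names, no real vendor API):
# de-identify a customer record before it ever leaves the company.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-keep-in-a-secrets-vault"  # assumption: managed in a vault in practice

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def de_identify(record: dict) -> dict:
    """Keep only the fields the use case needs; tokenize the one identifier retained."""
    return {
        "customer_token": pseudonymize(record["email"]),  # stable join key, no raw email
        "segment": record["segment"],                      # business attribute, not PII
        "monthly_spend": record["monthly_spend"],
        # name, email and phone are deliberately dropped before anything is shared
    }

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "phone": "555-0100",
    "segment": "enterprise",
    "monthly_spend": 1200,
}
print(de_identify(record))  # only this de-identified record would go to a hypothetical vendor
```

The point is the pattern rather than the particulars: direct identifiers never leave the company, while the vendor still receives a stable key it can join on and the attributes it needs for analysis.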
CIOs and CISOs who do their due diligence will get the answers to these kinds of questions before signing a contract.
“The best thing we can do is protect ourselves contractually so that at least should something terrible happen with your data, you can demonstrate as a company we did everything that we thought we could,” Cardwell said.