
Keep AI Accountable With Ongoing Risk Management

Addressing AI risk across an enterprise often involves an ongoing process of inventorying models, identifying risks, testing, and remediating. Organizations are increasingly using AI in their decision-making processes and business operations, producing significant strategic benefits.


However, adopting AI can also expose organizations to new risks that require management, from operational risks stemming from unexpected model behavior to the potential for bias and misuse. To manage these risks while capitalizing on the opportunities AI presents, organizations can consider issues such as fairness, reliability, and accountability to foster trust and reduce regulatory and legal exposure.

AI adoption is occurring rapidly across nearly all sectors and industries, and the risks related to its use are growing just as quickly. Regulators and legislators are increasingly taking notice.

AI can give rise to a range of undesirable consequences, from bias to operational risk to tampering. Biases can be baked in and deployed at scale in sensitive applications, such as loan applications, school admissions, policing, hiring, and other operations. Organizations also face risk when models intended to accurately capture ongoing, real-world situations, such as supply chain management or fraud detection, fail to do so, leading to unintended outcomes. The potential for tampering is another risk to consider: parameters or inputs may be manipulated nefariously to alter model predictions or system performance for personal gain.

As the risk landscape evolves, regulatory activity is increasing as well. The Federal Trade Commission (FTC) issued guidance to organizations regarding how they can manage consumer protection risks associated with AI and algorithms. More recently, the FTC offered additional insight on how organizations can promote truth, fairness, and equity in their use of AI. In the European Union, lawmakers are considering legislation to regulate uses of AI.

Without oversight and review, the data and processes that produce AI models may unintentionally bias the resulting logic. Bias can be introduced at the outset of the modeling process as a result of incomplete or imbalanced sampling. It can also arise within the model itself through complex interactions between variables, or at the point of inference.

Managing Risks Related to AI

Identifying and mitigating many AI-related risks involves four risk management activities. These actions can form a feedback loop to enable continual monitoring for established and emerging risks.

Inventory AI uses. Organizations can build an understanding of where AI models are in use throughout the enterprise and how they are being used. This exercise may help teams understand the business issues each model is meant to address, including how the model outputs are used by the organization.

The inventory can be reviewed to identify potential regulatory, legal, or reputational risks that each model produces. This review may require input from people in operations who own or use the models as well as model developers.
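
As a simple illustration, an inventory entry could be captured in a lightweight record like the following Python sketch. The fields and values shown are hypothetical assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One row in a hypothetical enterprise AI model inventory."""
    model_id: str                 # unique identifier for the model
    business_purpose: str         # business issue the model is meant to address
    owner: str                    # operations owner accountable for the model
    developers: list[str]         # teams or individuals who built the model
    output_usage: str             # how the model's outputs are used
    risk_flags: list[str] = field(default_factory=list)  # regulatory, legal, or reputational concerns

# Example entry for review by model owners and developers (values are illustrative)
entry = ModelInventoryEntry(
    model_id="credit-scoring-v3",
    business_purpose="Prioritize loan applications for manual review",
    owner="consumer-lending-ops",
    developers=["risk-analytics"],
    output_usage="Score determines underwriter queue ordering",
    risk_flags=["fair-lending exposure"],
)
print(entry)
```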

Identify risks. For each model leveraging AI, organizations can identify risks to assess and prioritize based on the likelihood of issues arising and their potential impact from a financial, legal, or regulatory perspective. To help identify the most critical issues, business stakeholders and development teams can jointly monitor for changes in model performance as well as complaints and other feedback the organization has received related to functions that rely on AI models.

Features used by the model can be analyzed to search for sources of bias or inappropriate assumptions that feed the model’s logic. Attributes that define protected classes may amplify bias, while data points that do not yet exist when the model makes its assessment, such as future outcomes, can cause errors or decrease accuracy. Model outputs can then be evaluated for consistency with business logic and with feedback from subject matter experts.
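
As an illustration of this kind of feature analysis, the Python sketch below flags numeric features that correlate strongly with a protected attribute and may therefore act as proxies for it. The column names, data, and review threshold are hypothetical.

```python
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.3) -> list[str]:
    """Flag numeric features whose correlation with a protected attribute
    exceeds a review threshold; such features may act as proxies for the
    protected class and warrant closer inspection."""
    flagged = []
    for col in df.select_dtypes("number").columns:
        if col == protected:
            continue
        corr = df[col].corr(df[protected])
        if abs(corr) >= threshold:
            flagged.append(col)
    return flagged

# Hypothetical data: 'age_group' stands in for a protected attribute
df = pd.DataFrame({
    "age_group": [0, 0, 1, 1, 1, 0],
    "zip_income": [42, 40, 71, 68, 75, 39],   # may proxy for age_group
    "num_accounts": [2, 5, 3, 4, 3, 4],
})
print(flag_proxy_features(df, protected="age_group"))  # ['zip_income']
```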

Test. With risks identified, organizations can plan and perform tests to detect the presence of threats. For models where operational risk is a concern, tests can evaluate whether the model was trained on data that reflects the current environment and whether the model is flexible enough to adapt as data changes over time. Models can also be tested for accuracy over time to detect declining performance.
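
One common way to test whether training data still reflects the current environment is a two-sample distribution test. The sketch below applies the Kolmogorov-Smirnov test from SciPy to a single feature; the data and the drift cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values from training time vs. the current environment
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)
current_values = rng.normal(loc=0.4, scale=1.2, size=5000)  # distribution has shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the live
# data no longer matches what the model was trained on.
stat, p_value = ks_2samp(training_values, current_values)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); consider retraining.")
else:
    print("No significant drift detected.")
```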

Another area to evaluate is a model’s capacity to extrapolate, which involves testing how the model deals with edge cases and unexpected inputs. This approach can be useful in identifying rare but impactful miscalculations and can help uncover vulnerabilities that could be exploited inappropriately. Statistical methods can be used to test for a variety of potential biases.
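
One widely used statistical check for bias is the disparate impact ratio, often paired with the "four-fifths" rule of thumb. The sketch below computes it for two groups; the decisions, group labels, and 0.8 cutoff are illustrative.

```python
import numpy as np

def disparate_impact_ratio(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates between two groups (labeled 0 and 1).
    Values below roughly 0.8 are often treated as a signal to investigate
    further (the 'four-fifths' rule of thumb)."""
    rate_g0 = predictions[group == 0].mean()
    rate_g1 = predictions[group == 1].mean()
    return min(rate_g0, rate_g1) / max(rate_g0, rate_g1)

# Hypothetical binary approval decisions and group membership
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
ratio = disparate_impact_ratio(preds, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 here, well below 0.8
```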

Remediate. Organizations can select and implement appropriate measures to mitigate identified risks, collaborating with business units to deploy remediation strategies. Various tactics can be employed to address several types of risks. Common remediation approaches include reweighting and adaptive learning to address operational risks stemming from model inaccuracies, algorithmic techniques to address bias risks, and governance and digital controls to remediate AI tampering.

Reweighting can address shifting patterns in underlying data and safeguard against stale model predictions by assigning different relative importance to data points over time. This enables model developers to weight newer data points more heavily than older ones, which may be important in settings where newer samples are more reflective of current trends. Adaptive learning can help address the same problem by training a base model on the data available at a given time and iteratively updating the model as new samples arrive. This can allow models to make predictions based on new patterns captured in the data.
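
A minimal sketch of reweighting, assuming an exponential time decay and a scikit-learn model; the half-life and the synthetic data are hypothetical choices, not recommended settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical samples collected over time; age is measured in days
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
age_days = rng.integers(0, 365, size=1000)

# Exponential time decay: newer samples get weight near 1, older near 0.
half_life = 90  # assumed half-life in days
weights = 0.5 ** (age_days / half_life)

# Newer data dominates the fit, guarding against stale predictions
model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
```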

Other risks, such as bias, can be remediated throughout the model life cycle. Pre-processing approaches can help excise bias in the input dataset while preserving important information needed to make predictions. In-processing approaches can adjust AI models by guiding their interpretation and use of data. Post-processing techniques can allow model developers to draw thresholds on inference scores to improve parity among disparate groups.
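
As an example of the post-processing approach, the sketch below picks a per-group score threshold so that each group is approved at roughly the same rate, a demographic-parity style adjustment. The score distributions and target rate are assumptions.

```python
import numpy as np

def group_thresholds(scores: np.ndarray, group: np.ndarray, target_rate: float) -> dict:
    """Choose a score threshold per group so each group's positive rate
    approximates the same target rate."""
    thresholds = {}
    for g in np.unique(group):
        g_scores = scores[group == g]
        # The (1 - target_rate) quantile approves roughly target_rate of the group
        thresholds[g] = np.quantile(g_scores, 1 - target_rate)
    return thresholds

# Hypothetical inference scores for two groups with shifted distributions
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])
group = np.array([0] * 500 + [1] * 500)

for g, t in group_thresholds(scores, group, target_rate=0.3).items():
    print(f"group {g}: approve scores above {t:.3f}")
```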

To address AI tampering, organizations can establish end-to-end code governance processes. This may include defining roles and responsibilities and instilling safeguards against unauthorized changes. Digital controls can help remediate unauthorized or unintended changes to code and can monitor code repositories for anomalous or improper code updates that do not conform to predefined governance patterns.
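
As one illustration of such a digital control, the following Python sketch scans recent Git history and flags commits that touch model code but do not come from an approved author list. The protected path, author list, and 20-commit window are hypothetical governance choices.

```python
import subprocess

# Hypothetical governance rule: commits touching model code must come
# from an approved set of authors.
APPROVED_AUTHORS = {"alice@example.com", "bob@example.com"}
PROTECTED_PATH = "models/"

def recent_commits():
    """Yield (commit_hash, author_email) for the last 20 commits."""
    out = subprocess.run(
        ["git", "log", "-20", "--format=%H %ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        commit, author = line.split(maxsplit=1)
        yield commit, author

def touches_protected(commit: str) -> bool:
    """Check whether a commit modified files under the protected path."""
    out = subprocess.run(
        ["git", "show", "--name-only", "--format=", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(f.startswith(PROTECTED_PATH) for f in out.splitlines() if f)

for commit, author in recent_commits():
    if touches_protected(commit) and author not in APPROVED_AUTHORS:
        print(f"ALERT: unapproved change to model code in {commit[:8]} by {author}")
```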


An Action Plan

Many organizations and industries are at different stages of adopting AI and managing related risks. As such, a path to greater maturity in managing AI risk may not be the same across enterprises. An approach that addresses strategy, people, process, technology, and governance often begins with inventorying AI models to catalog their purpose and outputs.

With an understanding of risks, organizations can develop an enterprise plan for managing technical and operational risks across the AI portfolio. The plan can lead to establishment of a common process for development, evaluation, and deployment of AI models.

Continual efforts to reduce AI risk and bias through evaluation and remediation can help organizations decrease exposure to brand, reputational, legal, and other risks by building technical safeguards into AI model development. In addition to automated testing plans, an independent AI review team can proactively evaluate models and investigate potential bias or tampering, promoting an accountable approach to the use of AI. The review team can also help raise overall quality.

Many regulators and stakeholders are calling for increased levels of responsibility from AI users. Documentation, communication, and transparency can provide stakeholders and regulators with information that supports accountability. AI programs can incorporate proactive governance and accountability measures that align them with organizational objectives and provide important benefits to people.

Trust and accountability may be intangible goals, but their importance is growing dramatically as decisions and processes are increasingly performed by AI. Organizations that can manage AI with the levels of trust and accountability that stakeholders and regulators increasingly expect may position themselves for success in the digital era.

Original post by Michael Weil, managing director and global leader of Digital Forensics, Deloitte Financial Advisory Services LLP; and Derek Snaidauf, principal, Don Williams, senior manager, and Daniel Yoo, manager, all with Deloitte Transactions and Business Analytics LLP.