Singapore’s Model AI Governance Framework Sets Out To Help Organizations Deploy AI Responsibly

This is the second part of my blog and webinar series on AI governance and risk management. My previous blog post and webinar discussed how AI is driving the need for data and technology governance to evolve and expand its scope to include ethics, accountability, and risk management.


Making Singapore A Trusted, AI-Enabled Digital Economy

If data is the new oil, then what are its Exxon Valdez and Deepwater Horizon moments? As with environmental disasters, any major incident of unethical data or AI use will put the brands involved under intense pressure from consumers and governments. While Singapore has so far escaped major data and AI disasters, the proliferation of AI means that it is only a matter of time before one occurs. In 2018, an AI and ethics council initiated by the Singapore government set out to address three major risk categories for the AI-enabled digital economy envisioned for Singapore:

  • Technology risk: countering data misuse and rogue AI.
  • Social risk: building trust between agencies, companies, employees, and customers.
  • Economic and political risk: securing Singapore’s future in a digital economy.

Ethics And Social Responsibility As Core Principles Of Singapore’s AI Governance Framework

The Model Framework follows two guiding principles. The first is to ensure that AI decision-making is explainable, transparent, and fair. Explainability, transparency, and fairness, which the framework calls "generally accepted AI principles," form the foundation of ethical AI use. Absent from the framework, however, is the notion of accountability. The second principle is that AI solutions should be human-centric and operate for the benefit of human beings. This ties AI ethics to the larger dimension of corporate values, corporate social responsibility, and the corporate risk management framework.


A Risk Management Approach For Tackling The Risks Associated With Deploying AI At Scale

In alignment with other global frameworks, the Singapore Model Framework recommends a risk management approach to address the technology risk associated with AI. Ideally, this would be a dimension added to corporate risk management frameworks, elevating AI risk beyond IT and individual business units to the corporate level, following in the footsteps of cybersecurity risk.

In particular, the Model Framework recommends that organizations:

  • Set up AI governance structures and measures and link them to corporate structures.
  • Determine the level of human involvement with a severity probability matrix.
  • Use data and model governance for responsible AI operations.
  • Set up clear, aligned communication channels and interaction policies.
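The second recommendation above can be sketched in code. The Model Framework's severity-probability matrix maps the likely harm of an AI decision to one of three degrees of human oversight (human-in-the-loop, human-over-the-loop, human-out-of-the-loop). The two-level scale and the threshold choices below are illustrative assumptions for the sketch, not values prescribed by the framework.

```python
# Illustrative sketch of a severity-probability matrix for human oversight.
# The three oversight levels come from the Model Framework; the simple
# two-level severity/probability scale and cutoffs are assumptions.

def oversight_level(severity: str, probability: str) -> str:
    """Return a recommended human-oversight level for an AI use case.

    severity, probability: "low" or "high" (illustrative scale).
    """
    if severity == "high" and probability == "high":
        return "human-in-the-loop"      # a human makes the final decision
    if severity == "low" and probability == "low":
        return "human-out-of-the-loop"  # the AI decides autonomously
    return "human-over-the-loop"        # a human monitors and can intervene

# Example: a loan-approval model where errors are costly but rare.
print(oversight_level("high", "low"))  # human-over-the-loop
```

In practice, organizations would replace this lookup with their own harm taxonomy and calibrate the thresholds against their corporate risk appetite.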

Organizations Must Start To Look Into Risk Management And Establish Accountability Chains For AI

The key task for organizations is to start early and build awareness internally about AI risk. Deploying AI-enabled decision processes at scale must be accompanied by investments in governance and risk management. Guidelines such as the Model Framework set nonbinding recommendations, but organizations must develop their capabilities internally. As it evolves, the Model Framework has added use case libraries as well as assessment tools, although adopting them might still challenge all but the largest organizations.

Forrester recommends that organizations start on the following activities:

  • Turn customer trust into a competitive advantage through fair, ethical, and accountable use of data and AI.
  • Align AI ethics with your corporate values and risk management frameworks.
  • Define your organization’s AI accountability chain, including external partners and providers.
  • Leverage the expertise of AI consultancies with strong capabilities in AI ethics and governance.

Further Reading

The second edition of the Singapore Model AI Governance Framework can be accessed here, and the Implementation and Self-Assessment Guide for Organizations (ISAGO) is available here. In addition, the Use Case Library Vol. 1 and Use Case Compendium Vol. 2 are available.

Forrester clients can access my report “Case Study: Singapore’s Journey To Deploying Responsible AI.”

Please connect with me on LinkedIn!

If you’d like to discuss how this affects you and your organization, please don’t hesitate to schedule an inquiry call with me.
