AI Verify Testing Framework For Traditional and Generative AI

IMDA (the Infocomm Media Development Authority, Singapore) developed AI Verify, an AI governance testing framework and software toolkit. The framework outlines 11 governance principles and aligns with international AI frameworks from the EU, the US, and the OECD. AI Verify helps organisations validate AI performance through standardised tests across principles such as transparency, explainability, reproducibility, safety, security, robustness, fairness, data governance, accountability, human agency, inclusive growth, and societal and environmental well-being.

AI Verify was developed in consultation with companies of different sizes and sectors, including AWS, DBS, Google, Meta, Microsoft, Singapore Airlines, NCS/LTA, Standard Chartered, UCARE.AI, and X0PA. It was released for an international pilot in May 2022, in which more than 50 companies, including Dell, Hitachi, and IBM, participated, and was open-sourced in 2023.

With the rise of Generative AI (GenAI), the AI Verify testing framework has been enhanced to address its unique risks. It now supports testing for both Traditional and GenAI use cases.

The AI Verify testing framework aims to help companies assess their AI systems against 11 internationally recognised AI governance principles:

  1. Transparency
  2. Explainability
  3. Repeatability / Reproducibility
  4. Safety
  5. Security
  6. Robustness
  7. Fairness
  8. Data Governance
  9. Accountability
  10. Human Agency and Oversight
  11. Inclusive Growth, Societal and Environmental Well-being

WHO SHOULD USE THE FRAMEWORK?

AI System Owners / Developers looking to demonstrate their implementation of responsible AI governance practices.

Internal Compliance Teams looking to ensure responsible AI practices have been implemented.

External Auditors looking to validate their clients’ implementation of responsible AI practices.

How to use it

Each item in the checklist consists of:

OUTCOME

The outcome you want to achieve for each principle.

PROCESS

The steps you need to take to achieve the desired outcome.

EVIDENCE

Documentary evidence and quantitative or qualitative parameters that validate the process.

For each process, indicate whether you have completed the process checks and, where necessary, provide a detailed elaboration.
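The outcome–process–evidence structure of a checklist item can be sketched as a simple data record. This is an illustrative sketch only; the class and field names below are hypothetical and do not reflect the AI Verify toolkit's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one process-check item. Field names are
# illustrative assumptions, not AI Verify's real data model.
@dataclass
class ProcessCheck:
    principle: str                # one of the 11 governance principles
    outcome: str                  # the outcome to achieve for the principle
    process: str                  # steps taken to achieve the outcome
    evidence: list = field(default_factory=list)  # documents/parameters validating the process
    completed: bool = False       # whether the process check is done
    elaboration: str = ""         # optional detailed elaboration

# Example: recording a completed check for the Transparency principle.
check = ProcessCheck(
    principle="Transparency",
    outcome="Users are informed when they interact with an AI system",
    process="Publish notices at AI touchpoints",
)
check.evidence.append("screenshot_of_user_notice.png")
check.completed = True
```

Structuring checks this way makes it straightforward for compliance teams or auditors to filter items by principle and flag those still missing evidence or elaboration.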

Download (PDF): AI governance testing framework

Source: Infocomm Media Development Authority, Singapore
