About
AI Verify is an open-source AI governance testing framework and software toolkit developed by Singapore's Infocomm Media Development Authority (IMDA) and the AI Verify Foundation. It enables organizations to systematically test their AI systems against internationally recognized ethical and responsible AI principles, generating standardized governance reports that serve as evidence for audits, internal assurance, and public transparency. The toolkit runs a battery of tests covering key AI governance dimensions, including fairness, explainability, robustness, transparency, and safety, and is aligned with major international frameworks such as the EU AI Act, OECD AI Principles, and Singapore's Model AI Governance Framework.
AI Verify supports integration with widely used machine learning libraries, including scikit-learn, TensorFlow, PyTorch, XGBoost, and LightGBM, and features a plugin architecture that allows third-party test algorithms to be incorporated. Primary use cases include pre-deployment AI model auditing, regulatory compliance demonstration, responsible AI due diligence for enterprises and government agencies, and documenting and validating model behavior during development. The toolkit is deployed via Docker on Linux environments and outputs structured reports that can be shared with stakeholders, regulators, or the public.
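To illustrate the kind of inputs such a governance testing run works with, the sketch below trains a scikit-learn classifier on synthetic tabular data and serializes the model together with a hold-out dataset. The file names and the exact input format AI Verify expects are assumptions for illustration, not taken from its documentation.

```python
# Minimal sketch: preparing the kind of artifacts a governance testing run
# typically consumes -- a serialized scikit-learn model and a tabular test
# dataset. File names and the exact input format expected by AI Verify are
# assumptions, not taken from its documentation.
import joblib
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real dataset (e.g. credit scoring).
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a conventional tabular model of the sort the toolkit supports.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Serialize the model and hold-out data; these files (names are illustrative)
# would then be supplied to the testing environment.
joblib.dump(model, "model.joblib")
test_df = pd.DataFrame(
    X_test, columns=[f"feature_{i}" for i in range(X_test.shape[1])]
)
test_df["label"] = y_test
test_df.to_csv("test_data.csv", index=False)
```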
Key Features
- Comprehensive Governance Testing: Runs automated tests across key responsible AI dimensions including fairness, explainability, robustness, transparency, and safety, producing evidence-backed governance reports.
- International Framework Alignment: Tests are mapped to globally recognized standards such as the EU AI Act, OECD AI Principles, and Singapore's Model AI Governance Framework, simplifying compliance documentation.
- ML Framework Compatibility: Supports popular machine learning libraries including scikit-learn, TensorFlow, PyTorch, XGBoost, and LightGBM, enabling integration into existing AI development pipelines.
- Plugin Architecture: Extensible design allows third-party test algorithms and custom plugins to be added, enabling organizations to tailor the testing suite to their specific use cases (see the sketch after this list).
- Standardized Report Generation: Automatically generates structured governance reports upon test completion, suitable for sharing with auditors, regulators, or the public as transparency artifacts.
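To make the plugin idea concrete, here is a hypothetical sketch of a third-party fairness test. AI Verify defines its own plugin interface in its developer documentation; the class name, method signature, and metadata fields below are illustrative assumptions rather than the real API.

```python
# Hypothetical plugin sketch: the actual AI Verify plugin interface is defined
# in its developer documentation. The class name, method signature, and
# metadata fields here are illustrative assumptions, not the real API.
from typing import Dict

import numpy as np


class DemographicParityTest:
    """Example third-party test: demographic parity gap between groups."""

    metadata: Dict[str, str] = {
        "name": "demographic_parity_difference",  # illustrative identifier
        "description": "Difference in positive prediction rates between groups",
    }

    def run(self, predictions: np.ndarray, groups: np.ndarray) -> Dict[str, float]:
        # Positive prediction rate per group, then the largest absolute gap.
        rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
        values = list(rates.values())
        return {"parity_difference": float(max(values) - min(values))}


if __name__ == "__main__":
    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(DemographicParityTest().run(preds, grps))  # {'parity_difference': 0.5}
```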
Pros
- Completely Free & Open Source: Released under the Apache 2.0 license with no usage fees, making professional-grade AI governance testing accessible to organizations of all sizes.
- Regulatory Readiness: Alignment with multiple international AI governance frameworks reduces the effort required to prepare for regulatory audits or compliance assessments.
- Credible Provenance: Developed and backed by a national government agency (Singapore's IMDA), lending institutional credibility to the testing methodology and the reports it generates.
- Extensible by Design: The plugin system enables the community and enterprises to extend testing capabilities beyond the default suite without forking the core project.
Cons
- Linux-Only Deployment: The toolkit is primarily supported on Linux via Docker, which may present barriers for teams working predominantly on macOS or Windows environments.
- Primarily Tabular Data Focus: Core test coverage is strongest for tabular datasets and traditional ML models; support for unstructured data types (images, text, LLMs) is still maturing.
- Technical Setup Required: Requires familiarity with Docker and machine learning frameworks to deploy and run effectively, limiting accessibility for non-technical governance or compliance teams.