Product Assurance of AI-enabled Systems (PRAISE) is a project that explores, identifies and develops methods and technologies for product-oriented assurance of AI and AI-enabled systems.
Artificial intelligence (AI) is seen as enabling a greater degree of digitalization across a wide range of products and applications. However, unlocking the true potential of AI requires overcoming obstacles to its acceptance and satisfying concerns over responsibility, accountability, governance and safety. These challenges include evaluating the data coverage of an AI model, performing AI integrity evaluation, tackling AI adversarial threats (including data poisoning, model stealing, evasion and inference attacks) and conducting continuous assurance of AI-enabled systems. There are also AI-specific risks, vulnerabilities and threats to tackle across the entire lifecycle of any AI model, from data preparation through model training, model verification and deployment to monitoring.
Testing tools and technologies
The PRAISE technology team is developing and releasing a set of tools to build the assurance capability to test AI products. These testing tools can evaluate AI products from data coverage to AI-model integration. One of these tools performs Data Representativeness Testing (DRT), which helps identify whether the chosen dataset is representative of critical scenarios even before the first model is developed.
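To make the idea concrete, here is a minimal, hypothetical sketch of such a representativeness check in Python. The function, the scenario tags and the `min_share` threshold are illustrative assumptions for this article, not part of the PRAISE tooling:

```python
from collections import Counter

def representativeness_report(labels, critical_scenarios, min_share=0.05):
    """Flag critical scenarios that are absent or under-represented.

    labels: iterable of scenario tags, one per sample (e.g. "night", "rain").
    critical_scenarios: scenarios the dataset is required to cover.
    min_share: minimum fraction of the dataset each scenario should hold.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for scenario in critical_scenarios:
        share = counts.get(scenario, 0) / total if total else 0.0
        report[scenario] = {"share": share, "ok": share >= min_share}
    return report

# Example: a driving dataset that under-covers night-time scenes.
labels = ["day"] * 90 + ["night"] * 3 + ["rain"] * 7
print(representativeness_report(labels, ["day", "night", "rain"]))
```

A report like this can gate dataset acceptance before any training cost is incurred, which is the point of running DRT ahead of model development.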
Another is Image Perturbation Testing (IPT), which evaluates AI robustness against multiple common image degradations at multiple perturbation levels. This is important because even a small deviation from a standard image can cause an AI system to interpret it as something very different from what the image actually represents.
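As an illustration of this kind of test, the sketch below sweeps two common degradations (Gaussian noise and a brightness shift) over several severity levels and records the resulting accuracy. The perturbation set, severity grids and stub model are assumptions for demonstration, not PRAISE's actual implementation:

```python
import numpy as np

def gaussian_noise(x, sigma):
    # Additive Gaussian noise, clipped back to the valid [0, 1] range.
    return np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)

def brightness(x, delta):
    # Uniform brightness shift, clipped to [0, 1].
    return np.clip(x + delta, 0.0, 1.0)

PERTURBATIONS = {
    "noise": (gaussian_noise, [0.02, 0.05, 0.1, 0.2]),
    "brightness": (brightness, [0.1, 0.2, 0.3, 0.4]),
}

def robustness_sweep(model, images, labels):
    """Accuracy per perturbation type and severity level.

    model: any callable mapping a batch of float images in [0, 1]
    to predicted labels.
    """
    results = {}
    for name, (fn, levels) in PERTURBATIONS.items():
        results[name] = [
            float(np.mean(model(fn(images, lvl)) == labels)) for lvl in levels
        ]
    return results

# Usage with a stub "model" that thresholds mean brightness (illustration only).
rng = np.random.default_rng(0)
images = rng.random((32, 8, 8))          # 32 tiny grayscale images
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
stub_model = lambda x: (x.mean(axis=(1, 2)) > 0.5).astype(int)
print(robustness_sweep(stub_model, images, labels))
```

Plotting accuracy against severity for each degradation gives a robustness profile, making it easy to spot the perturbation level at which the model's behavior starts to break down.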
The technology team is also developing a tool called Model-based Active Testing (MAT), which uses a scoring strategy to prioritize test cases for which the model's uncertainty estimate is high, as illustrated in the sketch below. More testing tools and technologies are also planned to tackle AI assurance requirements, from security to continuous assurance.
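One common way to realize such a strategy is to score candidate inputs by predictive entropy and route the most uncertain ones to testing first. The sketch below assumes that acquisition function; MAT's actual scoring may differ:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # Entropy of each row of class probabilities; higher means more uncertain.
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_for_testing(probs, budget):
    # Pick the `budget` inputs the model is least sure about.
    scores = predictive_entropy(probs)
    return np.argsort(scores)[::-1][:budget]

# Example: three fairly confident predictions and one near-uniform one.
probs = np.array([
    [0.97, 0.02, 0.01],
    [0.34, 0.33, 0.33],   # highly uncertain -> should be selected first
    [0.90, 0.05, 0.05],
    [0.60, 0.30, 0.10],
])
print(select_for_testing(probs, budget=2))  # -> [1 3]
```

Concentrating a limited testing budget on high-uncertainty inputs tends to surface failures faster than uniform sampling, which is the core appeal of active testing.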