ZHAW Develops Certification Framework for AI Systems

A research team from the ZHAW School of Engineering, in collaboration with CertX AG, has developed a certification framework that assesses the trustworthiness of artificial intelligence (AI) technologies. The Innosuisse-funded project "CertAInty" addresses societal risks posed by AI and anticipates the requirements of the EU AI Act, which will soon become relevant for Swiss companies as well.

As AI is increasingly deployed in safety-critical domains, ensuring the trustworthiness of such systems is more important than ever. Poorly evaluated AI systems can pose serious societal risks, ranging from discriminatory algorithms and safety hazards to medical misdiagnoses.

Researchers at the ZHAW School of Engineering’s Centre for Artificial Intelligence (CAI) and the Institute of Applied Mathematics and Physics (IAMP) collaborated to create "CertAInty", a structured framework for assessing AI technologies.
"Certification of AI systems by an accredited body builds trust, accelerates adoption, and enables use in critical applications," says Ricardo Chavarriaga of the ZHAW School of Engineering. His project co-lead, Joanna Weng, adds: "CertAInty bridges the gap between the abstract regulatory requirements of the EU AI Act and concrete technical methods for evaluating AI systems."

Four Core Dimensions of Trustworthiness

The certification framework focuses on four key dimensions:

  • Reliability: Consistent system performance under varying conditions
  • Transparency: Traceability and interpretability of AI decision-making processes
  • Autonomy & Control: Specification of the required level of human oversight
  • Safety: Prevention of harmful outcomes in critical application domains such as healthcare or autonomous transportation

For the reliability dimension, for example, over 55 metrics and 95 methods were evaluated, resulting in an optimized and validated selection.
"Our certification framework provides a practical methodology and a pragmatic foundation for developers, companies, and regulators to support the responsible use of AI technologies," explains Joanna Weng.

Real-World Validation

The framework’s applicability was demonstrated through several real-world case studies—for example, AI-based detection of construction vehicles using computer vision. The reliability of the AI system was systematically evaluated against factors such as weather conditions and image distortions.
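
As an illustration of this kind of reliability test, the Python sketch below measures how an image classifier's accuracy degrades under synthetic distortions such as blur, low light, and sensor noise. The model, data loader, and specific transforms are hypothetical placeholders chosen for the example, not the metrics or methods actually selected in CertAInty.

```python
# Illustrative sketch: reliability of an image classifier under synthetic
# distortions (blur, low light, noise), loosely analogous to the weather and
# image-distortion tests described above. `model` and `test_loader` are
# hypothetical placeholders, not part of the CertAInty framework itself.
import torch
import torchvision.transforms as T

distortions = {
    "clean": T.Compose([]),                             # unmodified images
    "blur":  T.GaussianBlur(kernel_size=9, sigma=3.0),  # out-of-focus or rain on the lens
    "dark":  T.ColorJitter(brightness=(0.3, 0.3)),      # low-light conditions
    "noise": T.Lambda(lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1)),  # sensor noise
}

@torch.no_grad()
def accuracy_under(transform, model, loader, device="cpu"):
    """Top-1 accuracy of `model` on `loader` after applying `transform`."""
    model.eval()
    correct = total = 0
    for images, labels in loader:  # assumes float image tensors in [0, 1]
        images = transform(images).to(device)
        preds = model(images).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total

# Example usage (assuming `model` and `test_loader` are defined elsewhere):
# baseline = accuracy_under(distortions["clean"], model, test_loader)
# for name, tf in distortions.items():
#     acc = accuracy_under(tf, model, test_loader)
#     print(f"{name:>5}: accuracy={acc:.3f}  drop={baseline - acc:+.3f}")
```

A system whose accuracy falls sharply under such perturbations would be flagged as insufficiently reliable for safety-critical deployment.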

The importance of the project is heightened by the EU AI Act, which came into force on 1 August 2024 and will be fully applicable from 2 August 2026. The regulation introduces mandatory certification for high-risk AI systems—focusing on exactly the same dimensions addressed by CertAInty. High-risk AI systems embedded in regulated products will have an extended transition period until 2 August 2027. A similar regulatory framework is expected to be introduced in Switzerland. Additionally, Swiss companies marketing products in the EU will have to comply with the EU AI Act.
"This project anticipates the emerging regulatory landscape and offers a methodological bridge between regulatory requirements and practical implementation," says Chavarriaga.

The company CertX is now using the certification framework as the basis for its services and offers systematic and independent evaluations of AI solutions in Switzerland. The project's results have also been presented at various conferences, including the Swiss Conference on Data Science 2024, where the ZHAW School of Engineering team won the Best Paper Award.

For industry and academic professionals interested in AI assessment, the ZHAW will offer the multi-day IEEE CertifAIEd™ Assessor Training course for the first time in May, in collaboration with the IEEE Standards Association.