Model Governance and Explainable AI as tools for legal compliance and risk management
On their journey towards machine learning (ML) in production, organizations often focus solely on MLOps, building the technical infrastructure and processes for model training, serving, and monitoring. However, as ML-based systems are increasingly employed in business-critical applications, ensuring their trustworthiness and legal compliance becomes paramount. Here, highly complex “black box” AI systems pose a particular challenge.
Using the example of ML-based recruiting tools, we show how even seemingly innocuous applications can carry significant risks. We then demonstrate how organizations can use Model Governance and Explainable AI to manage these risks by enabling stakeholders such as management, non-technical employees, and auditors to assess the performance of AI systems and their compliance with both business and regulatory requirements.
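To give a flavor of what such explainability tooling looks like in practice, here is a minimal, hypothetical sketch using the SHAP library on an invented recruiting classifier. The feature names, data, and model below are illustrative assumptions, not material from the talk; SHAP is just one of several possible explanation libraries.

```python
# Hypothetical sketch: per-feature explanations for an ML-based recruiting model,
# so that non-technical stakeholders and auditors can inspect which signals
# drive individual decisions. All data and feature names are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic applicant features and a synthetic "invited to interview" label.
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 500),
    "relevant_certifications": rng.integers(0, 5, 500),
    "commute_distance_km": rng.integers(1, 100, 500),
})
y = (X["years_experience"] + 2 * X["relevant_certifications"] > 10).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values: each applicant's prediction is broken
# down into additive per-feature contributions that can be reviewed to check
# whether the model relies on legitimate, job-related criteria.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])
print(shap_values)
```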
- Date: 2022-06-15
- Time: 10:45 - 11:15
- Conference / Event: WeAreDevelopers World Congress 2022
- Venue: CityCube Berlin, Messedamm 26, Berlin