
Ruth Yakubu for Microsoft Azure


How to debug AI image models to identify societal risks and harms

Did you know AI-generated images can carry unintended biases, stereotypes, and harms? If you prompt an AI image generator for images of tech developers, you are likely to get male-dominated results, with few women or people of diverse ethnicities represented. Scenarios like this reveal that image models do not always reflect every aspect of reality. At the same time, multimodal AI systems such as DALL-E, which can generate images from text and work across other data types, are paving the way for exciting applications in productivity, health care, creativity, and automation. That's why identifying and mitigating responsible AI risks is essential.

✨ Join the #MarchResponsibly challenge by learning about the responsible AI tools and services available to you.

In this article, Besmira Nushi, an AI researcher at Microsoft, explores techniques and challenges for ensuring multimodal models are responsibly developed, evaluated, and deployed. She discusses how to unmask hidden societal biases across modalities, and shares strategies for evaluating, mitigating, and improving AI image models on issues such as spurious correlations, where a model incorrectly predicts an object based on other objects commonly associated with it, and cases where text-to-image models misinterpret surrounding cues in an image. A quick illustration of the spurious-correlation idea is sketched below.
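
To make the spurious-correlation idea concrete, here is a minimal Python sketch (not taken from Besmira's article; the data and function names are illustrative assumptions). It compares a classifier's accuracy on images where a frequently co-occurring "context" object is present versus absent; a large gap between the two groups hints that the model is relying on the context object rather than the target itself.

```python
# Minimal sketch: probe for spurious correlations by comparing accuracy on
# images where a frequently co-occurring object is present vs. absent.
from collections import defaultdict

def accuracy_by_context(predictions, labels, context_present):
    """Group accuracy by whether the co-occurring 'context' object appears.

    predictions, labels: predicted and true class names per image
    context_present: bools, True if the associated object is also in the image
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, has_context in zip(predictions, labels, context_present):
        key = "with context" if has_context else "without context"
        total[key] += 1
        correct[key] += int(pred == label)
    return {k: correct[k] / total[k] for k in total}

# Hypothetical example: a model that only finds the "fork" when a plate is nearby
preds  = ["fork", "fork", "spoon", "fork"]
labels = ["fork", "fork", "fork",  "fork"]
plates = [True,   True,   False,   False]   # is a plate also in the image?
print(accuracy_by_context(preds, labels, plates))
# A big accuracy gap between the two groups suggests the model leans on the
# plate, not the fork itself -- a spurious correlation worth investigating.
```
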


👉🏽 Check out Besmira Nushi's article: https://aka.ms/march-rai/evaluate-multimodal-models

🎉 Happy Learning :)
