Michael Tetelman - AI Research at Volkswagen Group - spoke at the ML Fairness Summit in June 2021. Don't miss out on upcoming RE•WORK Summits! See what we have lined up here: https://www.re-work.co/events

As AI-based technology started the next industrial revolution, it exposed many social inequalities that were mostly hidden and suddenly became obvious to everyone. Unintentional bias in data and AI solutions, along with the ability to scale it, makes the use of the new technology unequal and unfair to people and social groups. Is AI technology responsible for that? No. The technology itself is neither fair nor unfair; the way we use it is what makes it so. We have to evolve AI technologies to comply with our social norms, and we have everything we need to achieve that.

Key Takeaways:
- The new AI technology is a great amplifier: it scales up both its achievements and its deficiencies. Biased data and solutions make us pose the question: is the new technology bad for us?
- The answer is no. The technology itself does not know the social norms we want. We have to use it in a way that complies with those norms.
- I will show how the problems of data bias and AI unfairness can be solved by using AI technology itself, so there is a scalable way to automatically correct data biases and find fair solutions that we can accept and use.

Video Chapters:
00:00 Algorithm, outcome and bias
03:02 Accuracy, image and dog
06:21 Probability, datum and dataset
08:56 Selection, distribution and solution
09:45 Pixel, segmentation and validation