I just read and enjoyed this:
Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns – a system for spotting skin cancer might be paying more attention to whether the photo was taken in a doctor’s office. ML doesn’t ‘understand’ anything – it just looks for patterns in numbers, and if the sample data isn’t representative, the output won’t be either. Meanwhile, the mechanics of ML might make this hard to spot.
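The skin-cancer example can be sketched in a few lines. This is a hypothetical toy dataset (the feature names and numbers are invented for illustration): a spurious cue, "was the photo taken in a clinic", happens to track the diagnosis in the training sample far more tightly than the weak real signal does, and a pure pattern-finder has no way to know which association is the meaningful one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical labels: 1 = malignant, 0 = benign.
label = rng.integers(0, 2, n)

# Spurious cue: in this biased sample, malignant lesions were almost
# always photographed in a clinic, so the cue agrees with the label 95%
# of the time for reasons that have nothing to do with the lesion itself.
taken_in_clinic = np.where(rng.random(n) < 0.95, label, 1 - label)

# Genuine but weak signal: lesion size shifts slightly with the label,
# buried in noise.
lesion_size = 0.3 * label + rng.normal(0.0, 1.0, n)

# A pattern-finder just measures association; it doesn't "understand"
# which feature is medically relevant.
corr_clinic = np.corrcoef(taken_in_clinic, label)[0, 1]
corr_size = np.corrcoef(lesion_size, label)[0, 1]

# The spurious cue correlates far more strongly than the real signal,
# so a model trained on this sample would lean on the wrong feature.
print(corr_clinic > corr_size)
```

A model fit to this sample would look accurate on held-out data drawn the same way, which is exactly why the mechanics of ML make the problem hard to spot: the failure only shows up on photos taken outside the clinic.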