Rule of thumb: if you're using AI for decisions a human could make, you will eventually get sued.
Storytime: Amazon's AI resume reader was really, really good at giving preferential treatment to white dudes and Asians.
To build an AI or deep learning model, you start with data and train the model on that data. In this case Amazon trained its AI on its own historical hiring data.
Amazon historically employed and promoted white dudes and Asians. Amazon's culture was great for these people.
So the data the AI was trained on just perpetuated the bias Amazon already had.
And when they tried to stop the AI from being biased, it found ways to infer someone's race from other data points, like area codes.
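That proxy effect is easy to reproduce. Here's a minimal sketch with made-up synthetic data: the protected attribute is dropped from the inputs, but a correlated area code column leaks it right back in. All names and numbers here are hypothetical, purely to illustrate the mechanism.

```python
import random

random.seed(0)

# Synthetic applicants: the model never sees `group` (the protected
# attribute), but area code is strongly correlated with it -- the proxy.
applicants = []
for _ in range(1000):
    group = random.random() < 0.5
    # Hypothetical correlation: 90% of group members have area code 212,
    # 90% of non-members have 718.
    if group:
        area_code = "212" if random.random() < 0.9 else "718"
    else:
        area_code = "718" if random.random() < 0.9 else "212"
    applicants.append({"group": group, "area_code": area_code})

# A "blind" model that only looks at area code still recovers the
# protected attribute about 90% of the time.
guess = lambda a: a["area_code"] == "212"
accuracy = sum(guess(a) == a["group"] for a in applicants) / len(applicants)
print(f"protected attribute recovered from area code alone: {accuracy:.0%}")
```

The point: simply deleting the sensitive column doesn't make a model fair; anything correlated with it can reconstruct it.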
The result: Amazon scrapped the tool and took a very public reputational hit when the story broke.
Lemonade uses AI to approve insurance claims.
I predict they will get sued in a major way when it comes out that their AI is biased on attributes such as age, gender, and ethnicity.
Takeaway: any decision that involves approving or selecting people, and that a human could make, should be made by a human.
Using AI instead is a really effective way to get sued in the future.