Interesting read on bias in AI: https://venturebeat.com/2020/12/09/columbia-researchers-find-white-men-are-the-worst-at-reducing-ai-bias/
The article was updated recently; the title was toned down considerably:
"Columbia researchers find white men are the worst at reducing AI bias" -> "Study finds diversity in data science teams is key in reducing algorithmic bias".
It’s important to recognise our inherent biases; awareness of this is driving more and more resources towards actively improving models.
Bigger companies like Google have created teams to help others tackle this problem.
One example is TensorFlow's Responsible AI toolkit: https://www.tensorflow.org/responsible_ai
As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for everyone.
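On the "measuring fairness" side of this, one common starting point is a group-fairness metric such as demographic parity difference. Here's a minimal sketch in plain Python, using hypothetical predictions and group labels (this is an illustration of the general idea, not code from the article or the TensorFlow toolkit):

```python
# Demographic parity difference: the gap in positive-prediction rates
# between demographic groups. The data below is made up for illustration.

def selection_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two groups."""
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]          # hypothetical model outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" is selected 3/4 of the time, group "b" only 1/4.
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A perfectly "parity-fair" model would score 0.0 here; real audits look at several such metrics at once, since they can conflict with each other.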
Good to see you in here, Attila!
Yeah, I didn’t notice that in the URL, but you’re right. I think the edit is probably a good one: the original title feels a bit judgemental and blamey (which further exacerbates the divides between ethnic groups) rather than constructive, and the updated title strikes a better tone.
Completely agree re: our inherent biases. There are implicit biases we all have (and tests that show us how deep these can be) that we need to identify and ultimately eliminate from our behaviour. The Sphere platform should ideally be designed to help with that.
Great to see fairness and inclusivity prioritised in the TensorFlow project!