AI bias: How can we prevent machines from being racist, sexist and offensive?
Artificial intelligences continue to exhibit the same biases and prejudices as humans because they are trained on what we create, but there are ways we can improve the situation
17 June 2022
Stories of artificial intelligences exhibiting racist and sexist bias are common, including face recognition algorithms that struggle to work for Black people and reoffending-risk tools that treat white convicted criminals more leniently. Despite years of effort to make AI fair, these issues don't seem to be going away, so what can be done about them?
Current AI research is focused heavily on machine learning, in which a model is …