Discriminating algorithms

Published in Dagens Industri, Nov 2017

Whenever I go to choose a movie to watch, I’m always struck by the fact that the highest-ranking ones don’t appeal to me. I had been wondering whether something was wrong with my taste, but then Wired magazine recently revealed that IMDb’s top movie list is produced by algorithms processing the site’s voting data – and over 70 percent of that data comes from men. The film site Rotten Tomatoes bases its analysis on aggregated reviews, but those, too, are written by a similarly large share of men. In other words, the algorithms that define the world’s best films do so according to men’s preferences. Movies ranked highly by women stand little chance of displacing The Godfather or The Dark Knight.

In the current and long-awaited debate on gender equality, it is interesting to reflect on what happens to gender roles when artificial intelligence interprets our data and helps us make decisions. New research from Princeton, published in the scientific journal Science, shows that algorithms to a large extent associate words such as “leadership” and “pay” with men, while words like “home” and “family” are more often connected to women.

With machine learning, where computers sift through large amounts of training data and learn by example, they pick up the long-ingrained stereotypes hidden in our everyday lives. Machines learning to understand human language start from the assumption that a word is best defined by its relationship with other words. Thus machines interpret the word “computer” as something related to men, and the word “handicraft” as something related to women. This statistical approach captures the cultural and social context of words in a way that reference books never have – and it brings our human prejudice into the equation.
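
The kind of association the Princeton researchers measured can be illustrated in a few lines of code. The sketch below is not taken from their study: the tiny four-dimensional vectors are invented for illustration, whereas real analyses use embeddings trained on billions of words. It simply measures how much closer a word sits to “he” than to “she” in vector space.

```python
# Minimal sketch (invented numbers) of how a gender association can be
# quantified in a word-embedding model.
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(word_vec, male_vecs, female_vecs):
    """Mean similarity to male words minus mean similarity to female words.
    A positive score means the word leans 'male' in the training data."""
    return (np.mean([cosine(word_vec, m) for m in male_vecs])
            - np.mean([cosine(word_vec, f) for f in female_vecs]))

# Hypothetical vectors standing in for a trained embedding model.
vectors = {
    "he":         np.array([0.9, 0.1, 0.0, 0.2]),
    "she":        np.array([0.1, 0.9, 0.0, 0.2]),
    "computer":   np.array([0.8, 0.2, 0.5, 0.1]),
    "handicraft": np.array([0.2, 0.8, 0.1, 0.5]),
}

for word in ("computer", "handicraft"):
    score = gender_association(vectors[word], [vectors["he"]], [vectors["she"]])
    # With these toy numbers, "computer" scores positive (male-leaning)
    # and "handicraft" scores negative (female-leaning).
    print(f"{word}: {score:+.2f}")
```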

We are further contaminating these algorithms through our use of digital services. Most people who are asked to design a shoe make a men’s boot, and the majority of photographs depicting important occupations show men. According to researchers at the University of Virginia, machines attach a gender association to one third of all objects, and the share is even higher for verbs. Google’s software translates gender-neutral pronouns from several languages to “he” when the text refers to doctors and “she” when it refers to nurses. It is statistically true that more doctors are men, but that is not something that should be presented as a given.

Microsoft has shown that machines trained on prejudiced source data do not merely reflect gender discrimination; they amplify it. Their algorithms connect men with the word “programming” to an even greater extent than the data itself shows.

When AI-based systems take on increasingly complicated tasks, the risks of automated decision-making grow. If a future kitchen robot serves a man a beer but helps a woman with the dishes, that is merely annoying. But when robots guide men and women toward different decisions about their education, jobs and pension investments, the consequences become considerably more serious.

Ethical questions about what kind of world we want to teach our computers, and how we can steer these systems in the desired direction, are extremely relevant. Educational material for children often shows an idealized world, with female role models in traditionally male roles and vice versa. In the same way, we need gender-conscious algorithms and a modern version of an equality ombudsman to ensure that different individuals are treated fairly. Otherwise there is a risk that the algorithm-driven world will have a far greater effect on us than a few bad film recommendations. We cannot allow AI systems to confine us to outdated gender roles. Equality in the future must be significantly better than it has been in the past.
