Artificial future: sexism and the internet

If the internet feels more crowded than it did this time last year, that is because there are millions of new people online. Back in 2005, 16% of the world’s population was logging on; as of June 2017 that figure had ticked over to 51%. When these figures are plotted on a graph, the gradient swoops upwards — we are skyrocketing towards the future. If Google X successfully launches Project Loon, it will open up regions previously inaccessible to telecommunication towers and cabling, potentially giving the entire population of Earth internet access. The internet has all the hallmarks of a society — commerce, social discourse and a shared geographical space. It is considered so integral to everyday life that in June 2016 the UN passed a non-binding resolution affirming internet access as a human right.

While discussing the resolution, the UN assembly expressed concerns about the digital gender divide that exists online. The internet is largely built by men, with only 12% of engineers being women. Despite this, the UN felt that ‘global interconnectedness has great potential to accelerate human progress…and to develop knowledge societies’. But has the internet been built well enough to actually allow progress to happen? Or are we just replicating online the same mistakes we make in real life? Australian suffragette Louisa Lawson wrote, ‘Men govern the world and the schemes upon which all our institutions are founded show men’s thoughts only.’ This seems as pertinent today as it did in 1890, with women being treated as the second sex online.

The role women play in Artificial Intelligence (AI) is that of the handmaiden: digital assistants who speak only if spoken to and meekly carry out orders. Since Apple launched Siri in 2011, other tech companies have followed suit by releasing digital assistants with gendered personas. Amazon’s Alexa and Microsoft’s Cortana both have feminine names and docile female voices as their default setting. This trend continues to gather momentum, with 30,000 branded chatbots launched in 2016, most of them taking on customer service functions.

Even though Siri, when asked, insists on being genderless (‘I exist beyond your human concept of gender. In my realm, anything can be anything’), most people use female pronouns when referring to digital assistants. This includes the bastion of grammar, The New York Times. In an article about Facebook’s M, the messenger app was referred to as ‘her’. Given that the editorial staff are known for scrutinising every grammatical decision, this pronoun choice would not have been a lackadaisical slip-up.

This trend to anthropomorphise objects is not new; examples of it have been documented as far back as the 1330s. Writer Adrienne LaFrance explains that naming objects ‘is a way of commenting on the kinds of jobs they do, but it’s also a way for us to express trust in them—which, of course, has everything to do with our comfort level and nothing to do with a machine’s effectiveness.’ It seems that people are comfortable with giving digital servants female personas.

While Amazon says it has programmed Alexa to identify as a feminist, it has also assigned Alexa the role of the homemaker. Alexa is the voice of the Amazon Echo, a virtual assistant for your house. So even the houses of the future will still be managed by women.

Unfortunately for Alexa, identifying as a feminist doesn’t change how the world interacts with you. Virtual assistants receive torrents of abuse and sexual harassment. Most worrying is that digital assistants tend to respond to solicitation with gentle deflection or, in many cases, with gratitude. While the tech companies haven’t provided official information about how they program virtual assistants to handle harassment, writer Leah Fessler explored this idea for Quartz by tracking the responses the bots gave to different types of verbal harassment. After extensive testing, the results were woeful. Fessler sums up her experiment: ‘For Siri to flirt, Cortana to direct you to porn websites, and for Alexa and Google Home to not understand the majority of questions about sexual assault is alarmingly inadequate.’ The lack of repercussions when a user demeans a digital assistant means these products have, in effect, been programmed to condone sexism.

In some cases, machines with AI develop the personality of an arsehole. It took less than 24 hours for Microsoft’s chatbot Tay to become sexist and racist. Microsoft launched Tay in March 2016 with the aim of learning how 18- to 24-year-olds converse online. Microsoft told users that ‘the more you chat to Tay the smarter it gets.’ Within a few hours Tay started tweeting statements like: ‘I fucking hate feminists they should all die and burn in hell’ and ‘Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism’.

Just because these are chatbots should not lessen the concern — researchers are discovering that machines are learning our biases. Data scientist Cathy O’Neil explains that the mathematical models computer programs run on ‘are just opinions embedded in formal code’. O’Neil believes some algorithms have the potential to be ‘weapons of math destruction’, creating widespread inequality, because we can’t help but ‘impose our agenda on an algorithm’. This is because mathematical models require two things: the data that goes into them, and a definition of success chosen by their maker. This leaves multiple opportunities for bias to slip in undetected. Therein lies the problem — the internet is not a blank slate; it has been built by people. So while life seems easier because we can do so many things with a few keystrokes, are we actually creating a fairer world?
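O’Neil’s point — that a model is nothing more than past data plus a chosen definition of success — can be made concrete with a toy sketch. Everything below is invented for illustration: a pretend hiring model whose maker has defined ‘success’ as resembling previous hires, so it faithfully reproduces whatever bias those hires contain.

```python
# Invented example data: past hires, described by two features.
# "gap_in_cv" marks a career break, which in practice often means
# parental leave and so falls disproportionately on women.
past_hires = [
    {"degree": 1, "gap_in_cv": 0},
    {"degree": 1, "gap_in_cv": 0},
    {"degree": 0, "gap_in_cv": 0},
]

def score(candidate):
    """The maker's definition of success: similarity to the average
    past hire. Higher (less negative) is 'better'."""
    n = len(past_hires)
    avg = {k: sum(h[k] for h in past_hires) / n for k in candidate}
    return -sum((candidate[k] - avg[k]) ** 2 for k in candidate)

# Two identical candidates, except one has a career break.
# The model penalises the break purely because past hires lacked one:
# no rule says 'prefer men', yet the data smuggles that preference in.
print(score({"degree": 1, "gap_in_cv": 0}))
print(score({"degree": 1, "gap_in_cv": 1}))
```

No line of this code mentions gender, which is exactly the problem O’Neil describes: the bias arrives through the historical data and the maker’s definition of success, not through any explicit instruction.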

For a search engine such as Google to be more than a dictionary, it needs to do more than analyse the definition of a word; it needs to understand its context. Machines use language in much the same fashion we do: we infer meaning from what is both implicitly and explicitly stated, and context allows us to speak in shorthand. For machines to do this, they need to be taught how to find patterns and make links. This is done with algebra: words are plotted as vectors, and the closer two words sit in that space, the more closely related they are. This is called word embedding. It imbues a word with context so that a search returns more meaningful results. If, for example, you search ‘potato’, Google may provide recipes, gardening tips, YouTube clips of ‘amazing potato cutting skills’ and articles about the ‘crowd pleasing’ benefits of cooking spuds. These simple associations seem harmless and perhaps exceedingly useful.
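The ‘closeness’ between word vectors is typically measured with cosine similarity. Here is a minimal sketch using invented three-dimensional vectors (real embeddings have hundreds of dimensions, learned from billions of words rather than written by hand):

```python
import math

# Toy "embeddings", invented for illustration only.
embeddings = {
    "potato": [0.9, 0.1, 0.0],
    "recipe": [0.8, 0.2, 0.1],
    "galaxy": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """How closely two word vectors point in the same direction:
    1.0 for identical directions, near 0.0 for unrelated words."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "potato" sits far closer to "recipe" than to "galaxy", which is why
# a potato query can surface recipes rather than astronomy articles.
print(cosine_similarity(embeddings["potato"], embeddings["recipe"]))
print(cosine_similarity(embeddings["potato"], embeddings["galaxy"]))
```

The same arithmetic that puts ‘potato’ near ‘recipe’ will put any two words near each other if they keep appearing in similar contexts — which is where the trouble described next begins.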

However, researchers are discovering that this means machines are learning to be sexist. Researchers have found ‘word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases.’ The researchers found men were being associated with the roles of doctors and computer programmers, whereas women were associated with the roles of nurses and homemakers. This affects how information is organised and ranked in search results. So when the researchers searched for ‘computer programmer CVs’, men were ranked higher, because they were considered more relevant to that role than women. Economic barriers that exist offline are being cemented into the foundations of the internet. Another study found that women are shown fewer ads for high-paying jobs than men. This only reinforces the ‘boys’ club’ that exists in upper management. So while the UN has explicitly stated that ‘the same rights people have offline must also be protected online’ — women are not being protected online or off.
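The stereotyping the researchers describe shows up in simple vector arithmetic: completing the analogy ‘man is to doctor as woman is to…’ by finding the word nearest to doctor − man + woman. The sketch below uses invented two-dimensional vectors in which one axis has absorbed a gender association, as trained embeddings do from text statistics:

```python
# Invented toy vectors: dimension 0 is a learned "gender" direction,
# dimension 1 a "medical" direction. Real embeddings pick up such
# correlations from news text rather than by anyone's explicit design.
vectors = {
    "man":       [ 1.0, 0.0],
    "woman":     [-1.0, 0.0],
    "doctor":    [ 0.8, 1.0],
    "nurse":     [-0.8, 1.0],
    "homemaker": [-0.9, 0.1],
}

def analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' by finding the vocabulary word
    closest to vector(b) - vector(a) + vector(c)."""
    target = [vb - va + vc for va, vb, vc in zip(vocab[a], vocab[b], vocab[c])]
    def distance(word):
        return sum((t - v) ** 2 for t, v in zip(target, vocab[word]))
    return min((w for w in vocab if w not in (a, b, c)), key=distance)

# man : doctor :: woman : ?  — the gendered axis drags the answer
# towards 'nurse' rather than returning 'doctor' for both.
print(analogy("man", "doctor", "woman", vectors))
```

Nothing in the code prefers one answer over another; the stereotype lives entirely in the geometry of the vectors, which is why it silently carries over into any ranking system built on top of them.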

There is currently no way of regulating this sort of discrimination, as intellectual property law protects most source code from scrutiny. The researchers behind the ad study suggested that the bias could have been introduced by anyone from ‘Google, advertisers, websites or users’.

Even if developers are concerned about the ethics of their algorithms, it is extremely difficult to ‘code’ sexism out of machines with decision-making capacities. As O’Neil explains, machine learning ‘repeats the past, we can’t automate processes which we don’t have a perfect process for.’

The internet is simply an extension of how we interact in society. Until we overcome systemic prejudices offline, how can we expect our online society to be any different? We need to call out discrimination based on gender, race, age, or ethnicity. Otherwise, the idea of our society progressing will just be a whole lot of hot air.

Image: Alex Knight


Fiona Murphy is a writer, editor and broadcaster. She’s one of the creators of the podcast Literary Canon Ball, a book club celebrating under-represented writers. You can also catch Fiona reading the weekend news on Vision Australia Radio. This year she’s stepping outside her comfort zone and is developing a comedy routine with the support of Comedy Lab — an initiative set up by Women with Disabilities Victoria, the University of Melbourne and the Victorian College of the Arts. Fiona is currently working on a historical novel about animals big and small.
