
10 March 2021

Content algorithms are becoming attuned to the deep-seated, long-lasting social biases of their users, making the platforms themselves instruments of prejudice, Fan Yang writes for International Women’s Day.

Contrary to common belief, social media platforms are not neutral social environments. Behind their interfaces are algorithms heavily influenced by corporate and governmental forces and tailored to the end-user.

This means that systemic and individual biases surface in the way platforms are coded, in their protocols and interface design, in which content is presented to which user, and in their regulatory frameworks.

These technologically reinforced biases are manifested by the algorithm, and to a great extent imposed on users who are socially, culturally, and politically disadvantaged – the ‘other’.

One way this happens is when marginalised groups are represented in erroneous, stereotypical, or even pornographic ways on social platforms, or when sexist or racist content is promoted because there is demand for it. Society's biases inevitably come to the surface of a user-centric algorithm.

For instance, both WeChat in China and Google in much of the rest of the world are biased in their algorithmic design and community guidelines.


Just one example is illuminated by a simple search. A user trying to locate the famous Shibuya Crossing in Tokyo via Google might type 'busy Japanese district gif' – but what appears?

Inclined to autocorrect the input to 'busty Japanese district gif', Google serves the unsuspecting user a page of sexualised images of Asian women. While the images may not cater to that individual's demand, Google's algorithmic suggestions are sexist and racist precisely because they accurately meet collective demand – in this case, for the otherisation of Asian women.

This shows the power of algorithms, but also that they are not making things better for marginalised groups.
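To make the mechanism concrete, consider a deliberately simplified sketch in Python of a demand-driven query suggester. The query log, its frequencies, and the ranking rule are all invented for illustration – this is not Google's actual system, only a toy model of what happens when corrections are ranked purely by collective search volume:

```python
from collections import Counter

# Hypothetical aggregate search log: frequencies stand in for 'collective demand'.
query_log = Counter({
    "busy japanese district gif": 120,
    "busty japanese district gif": 4800,
})

def one_insertion_variants(query: str) -> set[str]:
    """Every string reachable from `query` by inserting one lowercase letter."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return {query[:i] + ch + query[i:]
            for i in range(len(query) + 1) for ch in letters}

def suggest(query: str) -> str:
    """Rank the query and its close variants purely by past search volume."""
    candidates = one_insertion_variants(query) | {query}
    return max(candidates, key=lambda q: query_log.get(q, 0))

print(suggest("busy japanese district gif"))
# Prints 'busty japanese district gif': popularity alone steers the 'correction',
# so whatever the crowd has searched most becomes what the next user is shown.
```

Even this toy model reproduces the problem: no one coded sexism into it, but because it optimises for what most users have wanted before, it faithfully hands the crowd's biases to every new user.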

Safiya Noble coined the term 'algorithmic oppression' to describe the way that deep-seated, long-lasting social biases, including but certainly not limited to racism and sexism, have become consolidated and normalised via the search engine – technology we rely on every day to seek answers. Everyday racist and sexist commentary from individuals on the web is abhorrent, and has been well documented by academics and commentators, but it is still very different from this.


This is a corporate platform, via an algorithmically crafted web search, offering up racism and sexism as its first results in a system designed to please its users. The process reflects a corporate logic in search companies of either wilful neglect or a profit imperative willing to make money from social bias.

Algorithmic oppression is by no means an issue exclusively occurring on Google. Ruha Benjamin in the book Race After Technology and Charlton D McIlwain in Black Software have shown that historical social injustice is part of the architecture and the language of technology.

This then becomes operationalised in the structure of technology via coding and interface decisions that can be very hard or impossible to reverse.

Another example of gendered and racialised bias in the social media landscape is on the Chinese platform WeChat. In WeChat's Community Guidelines, sexual content is banned, but the scope of censorship is narrowly defined, covering prostitution, hook-ups, sex partners, sex-related issues, and information with sexual connotations or against 'Chinese traditional societal decency' – a strategically broad-brush phrase used in public policy to describe contextually defined social norms dominated by the Chinese Communist Party.

Misogyny and verbal or written sexual harassment, meanwhile, seem to be either excluded from the definition of forbidden content or ignored by WeChat altogether.

How did this become part of technology? An organisational analysis of Silicon Valley's workforce diversity conducted by Safiya Noble and Sarah T Roberts in 2019 reveals that structural racism and sexism are upheld by the white male domination of modern digital technology companies.

Long story short, the industry's elites disproportionately consolidate resources at the expense of people of colour, women, and other marginalised groups.

Racism, sexism, and elitism are inscribed in corporate culture and carried further into recruitment practices: rather than looking for the best person for the role, tech corporations are keen to hunt for people who better fit their institutional agenda. This bias bleeds through into the structure of their products.

Ultimately, only wide-scale social change in the technology industry can solve this crucial issue, and governments and these companies must do everything they can to enact it.
