
The Prejudice of Datasets

Bias in Algorithms

In 2015, Google had to issue an apology when its photo app labelled images of a Black user as gorillas, revealing that algorithms are not free of bias despite their apparent impartiality. Already in 2009, Asian users reported that a digital camera sold by the Japanese firm Nikon kept asking whether someone had blinked, even when they took photographs with their eyes open. And then there is the sad case of Tay, Microsoft's teen-slang Twitter bot, which was taken down after producing racist posts.

Amazon had to discard its AI recruiting tool, which was designed to screen candidates by rating their resumes on a scale of one to five stars, having been trained on the applications the company had received over the previous ten years. Amazon later realized that the tool did not rate candidates on their software development skills, previous work experience, or merit, but on their gender. In effect, Amazon's system taught itself that male candidates were preferable.

It penalized resumes that contained the word 'women'. Men have made up the majority of Amazon's workforce for decades, so most of the resumes in the training data came from men. US tech companies have yet to close the gender gap in technical roles, from machine learning engineer to software developer, where men still far outnumber women, and that unbalanced workforce latently plays itself out in the algorithms trained on its records. Such systems are eventually penalised, but why let them reach that state when the bias can be prevented at the initial stage?
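To see how such a pattern can creep in, here is a minimal, hypothetical sketch in Python (not Amazon's actual system): a simple text classifier trained on skewed historical hiring decisions will readily learn a gendered word as a negative signal, even though the word says nothing about ability.

```python
# Hypothetical illustration: a resume screener trained on skewed historical
# outcomes learns a gendered word as a negative signal. Not Amazon's real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "historical" data: 1 = hired, 0 = rejected.
resumes = [
    "captain of the chess club, java developer",
    "women's chess club captain, java developer",
    "built distributed systems, led hackathon team",
    "women's coding society lead, built distributed systems",
]
hired = [1, 0, 1, 1 - 1]  # the past decisions were biased, not the skills

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative,
# because it only ever appeared on rejected resumes in the training data.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

The point of the sketch is that nothing in the code mentions gender explicitly; the bias arrives entirely through the historical labels the model is asked to imitate.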

Let's take the example of research conducted on Facebook Ads and the biased way it showed particular ads to specific groups of people or races while withholding them from others. Postings for secretaries, assistants, and teachers were targeted at a higher proportion of women, while ads for janitors, kitchen staff, and taxi drivers were aimed at minority communities.

The same bias is seen in online dating apps such as Tinder, Grindr, and Hinge. Users can end up seeing the same profiles again and again despite having swiped left, because the same kind of people keep appearing: the popular choice, the characteristics and features favoured by the majority and thereby defined as 'beauty'. A similar issue is seen with Netflix, which recommends the most famous and most-watched shows rather than the more off-beat ones you might actually have preferred.

Recommendation systems are driven by what the majority of the population selects, and that eventually becomes the default choice, or rather the lack of choice, for everyone else. Monster Match, a game funded by Mozilla, was developed primarily to study and bring to light how dating app algorithms reinforce bias and serve the company more than the user.
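To illustrate the feedback loop described above, here is a minimal, hypothetical sketch (not any real dating app's code): a recommender that ranks profiles purely by how often they have already been chosen keeps surfacing the early favourites, and everything else disappears from view.

```python
# Hypothetical sketch of a popularity-driven recommender feedback loop.
# Items that start with slightly more clicks get recommended more often,
# collect even more clicks, and crowd out everything else.
from collections import Counter
import random

random.seed(0)
clicks = Counter({"profile_a": 6, "profile_b": 5, "profile_c": 4})  # initial popularity

def recommend(top_k: int = 2) -> list[str]:
    # Rank purely by accumulated clicks: the "majority choice" always wins.
    return [item for item, _ in clicks.most_common(top_k)]

for _ in range(1000):
    shown = recommend()
    chosen = random.choice(shown)  # users can only pick from what they are shown
    clicks[chosen] += 1

print(clicks)  # profile_c never catches up, however appealing it might be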

Research that appeared in the journal Science disclosed how a widely used triage algorithm in the United States underestimated the medical needs of Black patients relative to white patients. Without using patients' previous records or even their ethnicity, the algorithm assigned them risk scores based on their past health care costs; because less money has historically been spent on the care of Black patients, equally sick Black patients received lower risk scores. The non-profit research and advocacy organization AlgorithmWatch is committed to evaluating and shedding light on algorithmic decision-making processes that have social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically. It is no surprise, then, that recent data from the United States revealed that African Americans were struggling with a staggering death toll from Covid-19.
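A toy worked example of the cost-as-proxy problem described above (the scoring function below is purely illustrative, not the algorithm from the Science study):

```python
# Hypothetical sketch of the cost-as-proxy problem.
# The scorer below is illustrative only, not the actual triage algorithm.

def risk_score(past_yearly_cost_usd: float) -> float:
    """Toy proxy: higher past spending is read as higher medical 'need'."""
    return past_yearly_cost_usd / 10_000

# Two patients with the same underlying illness burden,
# but historically unequal access to (and spending on) care.
patient_a = {"chronic_conditions": 4, "past_cost": 12_000}
patient_b = {"chronic_conditions": 4, "past_cost": 6_000}  # less was spent on their care

print(risk_score(patient_a["past_cost"]))  # 1.2 -> flagged for extra care
print(risk_score(patient_b["past_cost"]))  # 0.6 -> overlooked, despite equal need
```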

How to avoid bias?

There are many ways in which companies and organizations could eliminate this bias early on. Skewed data, full of outliers and noise, is usually unreliable, producing results and predictions that are not entirely true and making it difficult to use for further research and insight. The data sets used to study trends should have diversity as their driving element: the more diverse the data, and the more people from various backgrounds it includes, the more perspectives the algorithms can take into account. Machine learning finds patterns in massive amounts of data and applies them to make decisions, so it can only be as balanced as the data it is given. According to experts, teams developing artificial intelligence must strive for greater social and professional diversity; this will be key to building a fairer future for the field.
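As a purely illustrative sketch (the field names and threshold are assumptions, not any standard), a simple pre-training check could flag groups that are badly under-represented in a data set before an algorithm is trained on it:

```python
# Hypothetical sketch: a simple pre-training check that flags when a
# demographic group is badly under-represented in the training data.
from collections import Counter

training_rows = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "B", "label": 0},  # only one row for group B
]

counts = Counter(row["group"] for row in training_rows)
total = sum(counts.values())

for group, n in counts.items():
    share = n / total
    if share < 0.2:  # the 20% threshold is an arbitrary illustration
        print(f"Warning: group {group} is only {share:.0%} of the data")
```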

Take, for example, algorithms that evaluate loan applications. Census data shows that Black and Hispanic Americans are more likely to be underbanked or deprived of general banking services than white or Asian Americans, and racial gaps in mortgage lending show that Black and Hispanic borrowers are more likely to have their applications denied than white applicants.

Taryn Southern, director of the neurotechnology film I Am Human, told the online portal Big Think that brain-machine interfaces designed to make us “smarter, better, faster” reflect the “Western bias to favor productivity and efficiency.” Why assume everyone shares those values? “Perhaps in other Eastern cultures they would orient the use of an interface to induce greater states of calm or create more empathy,” Southern suggests.

Another critical issue arises when the technologists developing these algorithms understand only the technical side, with little to no knowledge of the social and cultural context of what they are building. Technologists must therefore work with social scientists to ensure that all the underlying influences have been studied and kept in mind while coding the AI.

Data scientists also play a vital role in this process, as they are the ones studying and analyzing the data. To get the most out of data, diversity in its interpretation is just as vital.

Hope for Bias-less Data

Algorithmic bias is human-made, which means it can also be solved by humans. Because AI can help expose hidden flaws in faulty data sets, it helps the people developing these technologies to understand the errors better, if not faster. Spotting ethically questionable patterns in data can help us correct our processes and shift towards a more balanced way of developing algorithms.

Author

  • Akansha Krishnani

    She studied Contemporary Art and English Literature, then worked for museums and galleries such as the Museo del Prado, Guild Art Gallery, and Studio X, among other contemporary art spaces in Madrid and Mumbai, and represented her galleries at the Delhi Art Fair, Dubai Art Fair, Singapore Art Fair and Dhaka Art Fair. While working in Rome she gradually took up sound performance and was selected for the prestigious Music and Sound Art Residency at Fabrica, Benetton's research lab in Treviso, Italy, where she worked on original soundtracks for projects and brands including Bvlgari, the United Nations, and the UK climate change committee. She then received another scholarship to study Design and Sound in Rome, where she spent three years, before founding an interior hub making home design accessible to everyone, operating in the Canaries and soon in Madrid. Her first fiction book, 'Desire', is being published this year and will be followed by a TV adaptation.
