Decoding Algorithmic Bias: The Hidden Impact on Gender Equality in Hiring

Introduction

In today’s rapidly evolving digital landscape, algorithms hold a significant role in our daily lives, from controlling the content we see as we scroll through social media, to the visibility of our job applications, to even credit approvals. Given their ubiquity in today’s digital world, it is not difficult to see their promise of greater efficiency, accuracy and objectivity. But lurking behind these claims of efficiency and objectivity lies a rising concern that perpetuates and reinforces existing discriminatory gender inequalities: algorithmic gender discrimination.

While advancing technology is touted as a tool that can bridge societal divides, its potential to deepen existing biases demands critical scrutiny, precisely because it often goes unnoticed (Castaneda et al., 2022). ‘Bias’ can be broadly defined as outcomes that are systematically less favourable to individuals within a particular group, in cases where there are no relevant differences between the groups that would justify the disparity (Lee et al., 2019). Gender-based inequalities remain an everyday reality from a variety of perspectives, such as position availability, pay, or employment prospects. This raises a crucial question: do algorithms bear the risk of fortifying and legitimating gender biases? My focus is directed towards the spaces that make up the workforce, and I take the position that algorithmic bias is an issue that calls for increasing attention and understanding. With this in mind, this blog post uncovers what is behind the algorithms used in the hiring process by exploring the following questions. What mechanisms lead to the production of gender biases by algorithms?

How can these biases be overcome? Is the use of algorithms, through the replication of human biases, an obstacle to achieving gender equality in today’s workforce? And what considerations need to accompany the rising use of AI technology in the workplace?

1.     The rising challenge of algorithmic gender bias in hiring

What exactly is algorithmic bias? It refers to the systematic and repeatable errors in mathematical or computer systems that lead to unfair outputs privileging one or more groups over others. Simply put, it is a term for any way in which these tools can perpetuate as well as amplify bias (Lawton, 2022). The focus of this blog post is on a more nuanced type of algorithmic bias, namely gender-specific algorithmic bias, where people are treated unequally or disadvantageously on the basis of their gender or gender presentation. Recently, attention has shifted to gender-based algorithmic discrimination because women are alarmingly underrepresented in the AI field, specifically within the production and design sectors (Castaneda et al., 2022). This has raised concerns with regard to what this means for the use of AI technology.

Biases exist throughout the hiring pipeline, from sourcing to screening and later even in the selection and offer stages. To uncover this, we will walk through the major areas in which gender bias gets introduced into the algorithms used for recruitment.

1.1  Biased training data sets and machine learning algorithms

A good starting point for seeing where algorithms inherit bias is understanding that many algorithms are trained on historical data. Historical data refers to data collected over time that includes information about past events, trends or behaviours. Because this data records past behaviours, and therefore past human biases, it keys us into where bias can be introduced into algorithms. Hiring decisions that rely on algorithms are thus influenced by societal markers of gendered stereotypes and prejudices. So, in terms of hiring, what types of algorithms are being used?

Machine learning algorithms are predominantly used across the various stages of recruitment (Bakkaloglu, 2022). They work by taking historical data as input in order to predict new output values (Burns, 2021). This is precisely one of the causes of gender bias in algorithms used for personnel selection and recruitment. These machine learning algorithms study and comprehend past practices and therefore produce outputs that replicate, or ‘learn’ from, past predispositions, amplifying and reiterating these biases at a much larger scale.

To give an example of historical data’s role in generating biases: if a disproportionate number of male candidates were chosen for certain positions historically, the algorithm can take this data and infer that these positions better suit the employment of men over women. In this case, the gender of the applicant itself becomes a relevant variable to the algorithm, although in reality it only reflects, and perpetuates, past discrimination. A case that encapsulates this is Amazon’s hiring experimentation, which started in 2014. The company attempted to use an AI-based recruitment tool that allocated prospective employees scores ranging from 1 to 5 stars, and used the tool to pick the top final candidates (Dastin, 2018). Yet in reality, the rating of candidates was not gender neutral; instead the algorithm systematically favoured men over women. This case prominently uncovered the capacity of algorithms to carry gendered forms of bias into the workplace, and it is indicative of a larger problem posed by machine learning algorithms and their perpetuation of gender inequality.
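To make this mechanism concrete, here is a minimal sketch, using entirely made-up data, of how a simple classifier trained on skewed historical hiring decisions ends up penalising a gender proxy. This is not Amazon’s actual system; the features, records and model are hypothetical, chosen only to illustrate how ‘learning from the past’ reproduces the past.

```python
# Minimal sketch (hypothetical data, not Amazon's system): a classifier trained on
# skewed historical hiring decisions reproduces the gender skew in those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, attended_womens_college]. The second feature acts as
# a proxy for gender, similar to the signals reported in coverage of the Amazon case.
X_train = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # historical male-coded applicants
    [5, 1], [6, 1], [4, 1], [7, 1],   # historical female-coded applicants
])
# Past human decisions: equally experienced candidates, but the female-coded ones
# were rejected far more often.
y_train = np.array([1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical experience, differing only on the proxy feature.
candidates = np.array([[6, 0], [6, 1]])
print(model.predict_proba(candidates)[:, 1])
# The proxy-flagged candidate receives a lower predicted "hire" score, even though
# nothing job-relevant distinguishes the two.
```

The point of the sketch is that the model never needs an explicit ‘gender’ column: any feature that correlates with gender in the historical decisions is enough for the bias to carry over.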

1.2  Language bias and algorithms in hiring

Another way gendered bias is introduced into the hiring process is via the language skills that are taught to AI programs (Knight, 2016). Gender biases are often an implicit part of our society and culture, and as such are frequently encoded within language, the same language that is used in the development of AI systems. Through language, biases become contextual factors that directly influence the output of AI technologies (O’Connor & Liu, 2023). Training AI systems involves feeding huge quantities of written or spoken language into them, enabling the system to draw connections between words and phrases; the resulting representations are known as word embeddings. Through word embeddings the algorithms create semantic fields of meanings associated with words. As a result, word embeddings often acquire the stereotypical gender biases that exist in the texts they are trained on (Brunet et al., 2019). When data predominantly assigns certain traits and roles to a particular gender, algorithms inadvertently replicate these associations. A good example of this is described in O’Connor and Liu’s article, where they show how everyday lexical choices are given stereotypical gender associations, with traits such as “caring and generous” being semantically grouped as descriptors of women, whilst terms such as “efficient and assertive” are placed in the same semantic field as men. The algorithm picks up on these associations and reproduces them.
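To illustrate what these semantic fields look like in practice, here is a toy sketch of the cosine-similarity comparisons typically used with word embeddings. The vectors below are invented purely for illustration; real embeddings such as word2vec or GloVe are learned from large text corpora, but analyses such as Brunet et al. (2019) report comparable gendered associations in them.

```python
# Toy illustration of word-embedding bias. The vectors are invented for the example;
# real embeddings are learned from large corpora but show similar gendered patterns.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "he":        np.array([ 0.9, 0.1, 0.3]),
    "she":       np.array([-0.9, 0.1, 0.3]),
    "assertive": np.array([ 0.7, 0.5, 0.2]),   # drifts toward "he" in this toy space
    "caring":    np.array([-0.7, 0.5, 0.2]),   # drifts toward "she"
}

for trait in ("assertive", "caring"):
    print(trait,
          "he:",  round(cosine(embeddings[trait], embeddings["he"]), 2),
          "she:", round(cosine(embeddings[trait], embeddings["she"]), 2))
# A resume screener built on vectors like these would score "assertive" language as
# semantically closer to male-coded words, and "caring" language closer to female-coded ones.
```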

So what does this mean for hiring? Job descriptions that contain leading words, or words with stereotypical gendered semantics attached to them, have the potential to attract or discourage candidates of a particular gender, furthering the disparity in gender representation across working sectors. Understanding the weight of language is therefore vital to uncovering where bias gets introduced in algorithms, especially in hiring, given how consistently language is used throughout its procedures. However, language in AI perpetuating gender bias is not limited to the recruiter’s side; it extends to the applicant’s side too. Existing research indicates a tendency for women to minimise their skills when writing resumes, whilst men usually tend to amplify their abilities. This approach can often lead men’s resumes to stand out and be picked up by the algorithm. The difference can also be credited to job applicants inadvertently employing gendered language: men frequently use more assertive terms such as “leader”, “competitive” and “dominant”, whilst women may lean towards words such as “support” and “understand”. This difference in language choice feeds the gendered manner in which the algorithm scans resumes, and may result in the algorithm reading men as more qualified simply on the basis of the more active language they use when crafting their resumes (Peyush, 2022). Both the recruiter and the job seeker can therefore play into the bias that exists in AI due to the language skills taught to the algorithm.
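As a rough illustration of how such wording could be surfaced, the sketch below scans a job posting against small lists of masculine- and feminine-coded words. The word lists and the example advert are hypothetical rather than an authoritative lexicon; tools in this spirit exist, but this is only a simplified demonstration of the idea.

```python
# A rough sketch of screening a job posting for gender-coded wording.
# The word lists below are illustrative, not an authoritative lexicon.
MASCULINE_CODED = {"leader", "competitive", "dominant", "assertive", "driven"}
FEMININE_CODED  = {"support", "understand", "caring", "collaborative", "nurture"}

def gender_coded_terms(text: str) -> dict:
    # Normalise the posting into lowercase words with surrounding punctuation removed.
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine":  sorted(words & FEMININE_CODED),
    }

ad = "We want a dominant, competitive leader who can support a driven team."
print(gender_coded_terms(ad))
# {'masculine': ['competitive', 'dominant', 'driven', 'leader'], 'feminine': ['support']}
```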

2.    What can be done to mitigate bias?

Improving and bridging the issues that arise with the use of algorithms in hiring comes down primarily to a change in what data is being used and how (Bornstein, 2018). Mitigating bias is possible and can take many forms.

2.1  Increase in Human Oversight

Even with active measures in place to mitigate bias, algorithms, due to their lack of predictability, retain the potential to make biased decisions. This calls for a growing requirement of human moderators, and for a diverse pool of them, so as not to exacerbate other forms of bias that arise from human involvement (Lee et al., 2019). Targeting bias at the input level can take the form of diversifying those involved in the algorithm design process, in order to challenge biases at the root and contribute to fairer decision-making mechanisms.

2.2  Better Algorithmic Literacy

The knowledge gap between the subjects of automated decisions, namely the people and public impacted by them, and those producing and developing the algorithms proves to be a major obstacle in mitigating bias. Algorithms are generally invisible and have often been described as ‘black box’ constructs, meaning systems that do not provide any information about their internal workings (Kenton, 2023). In order to mitigate gender bias in algorithms, there has to be widespread algorithmic literacy.

Improving algorithmic literacy will allow those affected by these automated decisions (e.g., candidates screened by recruitment algorithms) to provide feedback and thus be better able to anticipate areas where bias may manifest (Lee et al., 2019). A rise in knowledge and awareness can lead to more equitable decision making through the use of algorithms (Bonezzi & Ostinelli, 2021). This need for increased knowledge was explained aptly by David Lankes, who states that “Unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms” (Rainie & Anderson, 2022).

2.3  Data Review

Regular examination of the training sets used to develop algorithms is a fundamental way in which gender bias can be mitigated in algorithm development and use. Considering that data lays the foundation upon which algorithms are built, it becomes crucial to ensure that said foundation is representative and unbiased in its design. To achieve this, there needs to be a move towards inclusive data collection, and this could even mean moving away from the historical data used within training sets. Algorithms themselves can be trained, through specific data input, to flag and detect patterns with discriminatory trends, which allows beating the system with the system. This means that the practical fix needs to come from the level of data. Yet whilst recognising the direction that needs to be taken towards more practical fixes, I also recognise the roadblocks that stand in the way of data examination and the production of better data. These can be credited to obstacles that stem from society itself, notably the lack of knowledge and understanding surrounding AI technology, which again calls for the need to improve algorithmic literacy and to move AI away from being a black box construct. Until changes rooted in social systems and institutions take place, there will be considerable difficulty in attempting to mitigate bias at the data level.
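As a concrete example of what such a regular data review could look like, the sketch below compares selection rates by gender in a hypothetical historical hiring dataset before it is used for training. Both the records and the 0.8 threshold, which follows the widely cited ‘four-fifths’ rule of thumb for adverse impact, are assumptions made for illustration.

```python
# Sketch of a routine training-data audit: compare selection rates across gender
# groups before the data is fed to a model. The records are hypothetical; the 0.8
# threshold follows the commonly cited "four-fifths" rule of thumb for adverse impact.
from collections import defaultdict

records = [
    {"gender": "female", "hired": 0}, {"gender": "female", "hired": 1},
    {"gender": "female", "hired": 0}, {"gender": "female", "hired": 0},
    {"gender": "male",   "hired": 1}, {"gender": "male",   "hired": 1},
    {"gender": "male",   "hired": 0}, {"gender": "male",   "hired": 1},
]

counts = defaultdict(lambda: {"hired": 0, "total": 0})
for r in records:
    counts[r["gender"]]["total"] += 1
    counts[r["gender"]]["hired"] += r["hired"]

rates = {g: c["hired"] / c["total"] for g, c in counts.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, "impact ratio:", round(ratio, 2))
if ratio < 0.8:
    print("Warning: selection rates in the training data are heavily skewed;"
          " review before using this data to train a hiring model.")
```

A check like this does not remove bias by itself, but it makes the skew visible before it is baked into a model, which is exactly the kind of routine examination argued for above.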

Conclusion

Getting closer to the perks of algorithms, with their ease and efficiency, whilst guaranteeing a more equitable use of these technological wonders can only come through closer collaboration between the research fields of science and technology, public policy and gender studies (O’Connor & Liu, 2023). This ensures that alongside the development of newer technologies, their real-world implications and the intricate challenges they bring are made aware of and understood. It also reinforces the imperative that discussions about, and developments of, new AI technology have to be accompanied and centred by conversations around accountability and ethical responsibility. These conversations are not only necessary in the domain of recruitment and hiring, or with regard to gender biases alone, but are applicable to other fields utilising these technologies, whether within the workforce, banking or social media, and to other types of biases, such as racial and religious biases. Knocking down the barriers to equitable and ethical use of algorithms and other automated tools is crucial, and the first step towards it is eliminating the asymmetry in knowledge about today’s ever-changing technology. Another core element in mitigating bias can be narrowed down to working towards considerable change and regulation in what data are used in creating algorithms and AI technologies, and how these data are employed at a larger scale. Alongside this effort is the need to acknowledge that for change to be seen within AI tools it also has to be reflected in tandem within society, as AI and algorithms are controlled and directed by humans, and are therefore an extension of them (O’Connor & Liu, 2023).

Bibliography

Bakkaloglu, B. (2022, January 15). AI algorithms in hiring processes. Medium. https://medium.com/@burak.bakkaloglu/ai-algorithms-in-hiring-processes-f54c7f899d93#:~:text=Most%20of%20the%20hiring%20AI,an%20outcome%20for%20new%20cases

Bonezzi, A., & Ostinelli, M. (2021). Can algorithms legitimize discrimination? Journal of Experimental Psychology: Applied, 27(2), 447–459. https://doi.org/10.1037/xap0000294

Bornstein, S. (2018). Antidiscriminatory algorithms. Alabama Law Review, 70(2), 519–572.

Brunet, M. E., Houlihan, C. A., Anderson, A., & Zemel, R. (2019). Understanding the origins of bias in word embeddings. Proceedings of Machine Learning Research, 97, 1–9. http://proceedings.mlr.press/v97/brunet19a/brunet19a.pdf

Burns, E. (2021, March 30). What is machine learning and why is it important? Enterprise AI. https://www.techtarget.com/searchenterpriseai/definition/machine-learning-ML#:~:text=Machine%20learning%20algorithms%20use%20historical,(BPA)%20and%20Predictive%20maintenance

Castaneda, J., Jover, A., Calvet, L., Yanes, S., Juan, A. A., & Sainz, M. (2022). Dealing with gender bias issues in data-algorithmic processes: A social-statistical perspective. Algorithms, 15(9), 303. https://doi.org/10.3390/a15090303

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

Kenton, W. (2023, June 29). What is a black box model? Definition, uses, and examples. Investopedia. https://www.investopedia.com/terms/b/blackbox.asp

Knight, W. (2016, November 23). How to fix Silicon Valley’s sexist algorithms. MIT Technology Review. https://www.technologyreview.com/2016/11/23/155858/how-to-fix-silicon-valleys-sexist-algorithms/

Lawton, G. (2022, August 29). AI hiring bias: Everything you need to know. TechTarget HR Software. https://www.techtarget.com/searchhrsoftware/tip/AI-hiring-bias-Everything-you-need-to-know

Lee, N. T., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias: New research on best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/events/algorithmic-bias-new-research-on-best-practices/

O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: Challenges and opportunities. AI & Society, 1–13. https://doi.org/10.1007/s00146-023-01675-4

Peyush, A. (2022, September 27). AI discrimination in hiring, and what we can do about it. New America. https://www.newamerica.org/oti/blog/ai-discrimination-in-hiring-and-what-we-can-do-about-it/

Rainie, L., & Anderson, J. (2022, September 15). Theme 7: The need grows for algorithmic literacy, transparency and oversight. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/02/08/theme-7-the-need-grows-for-algorithmic-literacy-transparency-and-oversight/
