“You are what you eat” in this context means that the kind of data we use to build algorithms and train AI systems has a direct, and sometimes dire, impact on the end results.
How much can AI’s decisions impact your life?
In the United States, a married couple separately applied for credit cards. Despite having a slightly better credit score, the woman was granted a credit limit only half as high as her husband’s. For unknown reasons, the AI deemed the woman less creditworthy1. Another area where gender bias is common is healthcare. For example, even though cardiovascular diseases are equally common in men and women, men experiencing chest pain are 2.5 times more likely to be referred to a cardiologist than women2, a difference that can prove fatal in some cases.
It is estimated that around 40% of hiring decisions in international companies3 are made or aided by AI, and these systems often assign higher scores to candidates with certain characteristics or attributes, resulting in bias that favors one gender over the other. In other cases, race is the deciding factor, and some people are discriminated against or receive worse service or healthcare because of it. Facial recognition systems have struggled to detect faces with darker skin, and misrecognition rates were even higher for women, reaching up to a 34.7% error rate4. Dark-skinned people also have a lower success rate of AI-aided skin cancer detection, due to datasets with insufficient information about ethnicity and a lack of documentation of dark-skinned cancer patients5.
These are all examples of biased or discriminatory decisions made or aided by AI, which can have very negative, sometimes even fatal consequences for certain people and groups. Bias and discrimination in AI can be attributed to a variety of factors: the data we use to train AI and develop algorithms, or the way we design the algorithms themselves. Developers’ individual biases may also be transferred into the system and amplified as these faulty algorithms influence more and more decisions.
What can we do to remedy biased and discriminatory AI decisions and channel new tech developments to create a more just and equal world?
The widespread use of AI presents a great opportunity to ensure better access to healthcare, fight climate change, and improve decision-making processes and governance. Limiting the use of AI is thus not an option. Instead, we should recognize the existence of bias and discrimination within these systems and take active steps to remedy them.
1. Acknowledge and recognize bias
As Dr. Muneera Bano said in a podcast on AI and gender bias: “If there is a behavior that the algorithm is going to exhibit, we as humans will see that behavior more clearly than we would evaluate ourselves for that exact behavior. So to me, that’s one of the greatest things that the AI has done: show us a mirror of what we are as a society.” This is the first step towards making a positive change.
2. Close the data and digital gap
Many of the above-mentioned examples of bias in AI can be attributed to the data we use to create and train algorithms. There are two dimensions to this problem: the first is who produces the data we collect, and the second is the replication of our society’s biases and stereotypes in AI systems. According to the 2022 gender digital divide statistics, 63% of women globally have access to the internet compared to 69% of men, with the difference being especially pronounced in Africa and the Arab States6. This means that, in general, we possess less data produced by girls and women, so they are underrepresented in our datasets.
Furthermore, a lot of the data we possess reflects historical biases – such as women having been excluded from financial services or from clinical trials in medicine in the past – which results in large data gaps about women even today. We must work towards closing the digital gap by ensuring access to digital technologies for all communities and subgroups of society and by equipping them with the digital skills to use these technologies in a beneficial way.
3. Encourage more women to join STEM, especially data science and machine learning, and increase diversity
Currently, only 17% of ICT roles in Europe are filled by women, and globally women make up only 12% of AI researchers, according to the World Economic Forum’s 2020 Global Gender Gap Report7. Encouraging diversity in terms of gender, ethnicity, age, and religion, and bringing more women into STEM fields – especially data science and machine learning – increases the chance that possible bias will be noticed and detected more readily, and that AI will be trained to be fair and just rather than to perpetuate harmful stereotypes and biases.
4. Actively implement measures to ensure the responsible and fair development and use of AI
You can educate your employees on the existence of bias in AI, how to detect it, and what steps to take to counter biased results, decisions, or recommendations. You can also create, sign, and implement a set of rules to follow in the creation and use of AI tools and systems, ensuring they do not perpetuate bias or discriminate and that representative datasets are used to train algorithms. Finally, you can actively hire diverse teams, which have a higher chance of creating more inclusive algorithms, and cooperate with organizations promoting unbiased and fair AI and data science.
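For teams that want a concrete starting point, a simple spot-check of a model’s decisions can make such training tangible. The sketch below (in Python, using hypothetical data and column names) compares the rate at which an AI screening tool shortlists candidates from different groups; a large gap is a prompt to investigate the model and its training data, not proof of bias on its own. Open-source toolkits such as Fairlearn or IBM’s AI Fairness 360 offer more thorough metrics and mitigation techniques.

```python
# Minimal sketch (hypothetical data and column names): a "demographic parity"
# spot-check of an AI screening tool's decisions, not a full fairness audit.
import pandas as pd

# Hypothetical log of the tool's decisions: 1 = shortlisted, 0 = rejected.
decisions = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "shortlisted": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Selection rate per group.
rates = decisions.groupby("gender")["shortlisted"].mean()
print(rates)

# Gap between the highest and lowest selection rate; a large gap is a signal
# to look more closely at the model and the data it was trained on.
gap = rates.max() - rates.min()
print(f"Selection-rate gap: {gap:.2f}")
```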
***
1 ssir.org
2 www.thelancet.com
3 www.pwc.nl
4 proceedings.mlr.press
5 www.curemelanoma.org
6 www.itu.int
7 www3.weforum.org
Kristína Gotthardová, Policy Officer, AmCham Slovakia