
AI Applications • June 2, 2022

1020:1000 and Still Biased?

Once upon a time, a man and his wife applied for a credit card. Their credit history showed that both of them had roughly the same income, similar spending patterns, and similar debt. Yet the credit limit offered to the woman was almost half of the amount offered to her husband. This wasn't an error; it was algorithmic bias.

Technology has transformed our world at an incredible pace, fuelled by better education and humankind's powerful cognitive abilities. Inventions like the motor vehicle, radio, and telegraph, once hailed as revolutionary for speeding up interaction and the communication of information, seem slow by today's standards of email, instant messaging, video conferencing, and the latest cryptocurrencies and metaverse platforms. But have we shown similar progress when it comes to eliminating gender bias in technology?

With growing opportunities for learning and advancement, women today are rapidly making their mark across roles and domains. There is a rising number of powerful female icons, actively collaborating and competing with their male peers, pioneering reform and steering society towards a better, more vibrant future. But much still needs to be done to improve representation in all verticals of industry and to eliminate gender bias from all walks of life, especially in the technology workforce. Women remain particularly underrepresented in AI and data science; available statistics put their share at around 20 to 26 percent.

AI and machine learning (ML) models are dependent on the data fed to them during the training phase. These systems mimic the human thought process to spot patterns, make correlations, and extrapolate results that can be applied to other, similar data sets. The issue the industry faces is that the data fed to these advanced systems may contain elements of the bias that has plagued our society for millennia. Bias, gender-based or otherwise, conscious or unconscious, has seeped into the data we collect. These biases skew the analytics and the inferences drawn by AI/ML systems, delivering output that is similarly compromised.
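To make the mechanism concrete, here is a minimal sketch in Python, echoing the credit-limit example above. The data is purely synthetic and the feature names, coefficients, and noise levels are assumptions chosen for illustration, not a real lending model:

```python
# Minimal sketch (synthetic data): a model trained on historically
# biased decisions reproduces that bias at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(60, 15, n)       # same income distribution for everyone
is_woman = rng.integers(0, 2, n)     # 0 = man, 1 = woman

# Historical labels: approvals depended on income, but women were
# penalized -- this is the bias "baked into" the training data.
approved = (income - 10 * is_woman + rng.normal(0, 5, n)) > 55

X = np.column_stack([income, is_woman])
model = LogisticRegression().fit(X, approved)

# Score two applicants who are identical except for gender.
applicants = np.array([[60, 0], [60, 1]])
p_man, p_woman = model.predict_proba(applicants)[:, 1]
print(f"P(approve | man)   = {p_man:.2f}")
print(f"P(approve | woman) = {p_woman:.2f}")   # lower, despite equal income
```

The model is never explicitly instructed to discriminate; it simply learns and reproduces the pattern present in the historical labels.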

Watch this session by Dr. Kamakshi Anantharaman, Global Delivery Head, Google Cloud Platform Alliance at Quantiphi, in which she takes on the issue of gender bias in AI. She uses the example of algorithmic bias cited above to highlight the risk of bias pervading even the technology of tomorrow. She points out that the representation of women in India's AI ecosystem is under 30 percent, a statistic that re-emphasizes the effects of bias. If left unchecked, technologies like AI risk causing unintended harm to particular classes of individuals, something that would be detrimental to the society of tomorrow.

The concept of "Responsible AI" deals with mitigating such risks through careful scrutiny of how we collect our data, ensuring its suitability, and developing our AI models responsibly, thereby increasing the interpretability of these models.
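One practical element of such scrutiny is auditing model outputs for disparities before deployment. Below is an illustrative sketch of a simple demographic-parity check; the metric, the synthetic approval rates, and the flagging threshold are all assumptions for illustration, not a prescribed Responsible AI methodology:

```python
# Illustrative fairness audit on synthetic predictions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)                  # 0 = men, 1 = women
# Hypothetical model outputs that approve women less often:
preds = rng.random(1000) < np.where(group == 0, 0.70, 0.55)

gap = demographic_parity_gap(preds, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.05:                                    # illustrative threshold
    print("Audit flag: approval rates differ materially by gender.")
```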

Dr. Anantharaman also reiterates the urgent need for on-the-ground interventions to increase representation and foster the growth of the female workforce. Female role-model interventions at the grassroots level are imperative, and research has shown their impact on the overall increase of women in STEM fields.

Women in tech need to encourage, empower, and endorse other women in their micro-ecosystems. Having strong female role models to benchmark themselves against allows young women to aspire to similarly ambitious career paths, helping them thrive in their respective professions. It is imperative, then, that the movers and shakers of the technology industry take notice and actively take steps to ensure a progressive, bias-free future, and thereby a robust Responsible AI ecosystem at scale.

Written by

Aishika Bhattacharya

