By: Astor Sonnen 

Lending money to men is less risky than lending it to women. That was the evaluation of Goldman Sachs’ machine learning (ML) algorithm. The system, which supports Apple’s shiny (can’t touch leather or denim) credit card, was accused of being sexist after numerous reports of huge discrepancies between the credit limits offered to men and women, even in cases where the woman had the better credit rating.

It’s not the first time technology has been accused of discrimination. Users of voice assistants – such as Amazon’s Alexa, Google Assistant, Microsoft’s Cortana and Apple’s Siri – have complained that their regional accents leave smart devices confused. Research from the Georgia Institute of Technology found that some driverless car technology struggles to identify pedestrians with darker skin tones. And fears persist that banking algorithms will always favour more ‘profitable’ customers, leaving vulnerable individuals struggling to get access to loans.

A lack of diversity affects us all

Many believe that technology is the answer to human bias. It should be impartial, making decisions free from the social stereotypes that influence how people behave. But, more and more, we’re seeing examples of technology magnifying the problem – so why is it happening?

One reason is the lack of diversity that still exists within development teams and wider organisations. When a workforce includes employees with different backgrounds and experiences, that diversity of thinking permeates the technology being developed. If the vast majority of individuals share similar experiences, those experiences will be embedded deep in any solution. Any bias that exists, both conscious and unconscious, will be reflected in how the technology operates, meaning it’s likely to favour a particular demographic.

Another issue is the historical data sets relied on during development and analysis. When a data set is skewed in a particular way – highly likely if collection has taken place over years, if not decades – the technology will use those slanted findings to identify the correlations and patterns that drive its decisions. The bias persists because the machine simply doesn’t have the data to think differently.
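To make that concrete, here’s a minimal, hypothetical sketch in Python (using scikit-learn and entirely synthetic data – none of the names, numbers or thresholds reflect any real lender’s system). A model trained naively on historically skewed approval decisions learns to reproduce the skew:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Entirely synthetic applicants: one legitimate feature (a credit
# score) and one protected attribute (0 = group A, 1 = group B).
credit_score = rng.normal(650, 50, n)
group = rng.integers(0, 2, n)

# Hypothetical biased history: group B needed a roughly 40-point
# higher score to be approved, regardless of creditworthiness.
noise = rng.normal(0, 30, n)
approved = (credit_score + noise > 620 + 40 * group).astype(int)

# Train naively on those historical decisions.
X = np.column_stack([credit_score, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two applicants, identical except for group membership.
applicants = np.array([[640, 0], [640, 1]])
print(model.predict_proba(applicants)[:, 1])
# Roughly [0.75, 0.25]: same score, very different approval odds -
# the model has faithfully learned the historical bias.
```

The model isn’t malfunctioning; it’s doing exactly what it was asked to do – predicting the past. That’s the trap.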

Is there an answer?

As more decisions are at least partially determined by machines, there needs to be further discussion about how bias can be reduced and how technology can be used ethically.

The European Union recognised the need to shine a spotlight on artificial intelligence (AI) and ML development and published its ‘Ethics guidelines for trustworthy AI’ in April 2019. The document sets out seven key requirements that AI systems should meet, and avoiding unfair bias is one of them:

“Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life circle.”

For now, the guidelines are simply that. However, just last month (November 2019) the European Economic and Social Committee (EESC) proposed introducing an EU certification for ‘trusted AI’ products based on the framework.

The proposal fuels the debate around whether businesses should be regulated when developing technology. If organisations were bound by rules mandating greater diversity within teams and data sets, for example, would that help? Opponents would argue that it would simply stifle innovation and send firms to other countries where they won’t be restricted by red tape.

What about encouraging greater collaboration between the organisations and countries developing AI? They could explore shared challenges and increase diversity simply by sharing experiences. A mixture of cultures coming together to talk can only be a positive, surely? Or will corporations and developers be unwilling to share information?

Perhaps it’s as simple as always deferring to human judgement? Let the machines do the time-consuming heavy lifting and number crunching, and leave human experts to make the final call. It seems sensible, but just ask football fans what they think of the Video Assistant Referee (VAR) – former UEFA President Michel Platini isn’t a supporter – human decisions made off the back of technology aren’t always a crowd pleaser.
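In software terms, that ‘machine assists, human decides’ approach often takes the form of confidence-based routing. The sketch below is purely illustrative – the thresholds and function name are hypothetical, not any real lender’s logic – and assumes a classifier like the one in the earlier example:

```python
def route_application(model, features, lower=0.3, upper=0.7):
    """Auto-decide only when the model is confident; escalate otherwise.

    The thresholds are illustrative - in practice they would be
    tuned, documented and audited for each use case.
    """
    p_approve = model.predict_proba([features])[0, 1]
    if p_approve >= upper:
        return "auto-approve"
    if p_approve <= lower:
        return "auto-decline"
    # Borderline cases go to a person, who sees the model's score
    # but makes the final call.
    return "refer to human underwriter"
```

Of course, as the VAR example suggests, a human rubber-stamping a machine’s recommendation isn’t the same as independent judgement – the review step has to be meaningful.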

There is a debate to be had, and opinions on how best to tackle the issue will always differ. However, as AI and ML play a greater role in our lives, there needs to be an awareness that the situation isn’t perfect. Organisations must understand the potential flaws in their systems and have processes in place to respond and rectify them.

If you want to get involved in the conversation, or to hear how Aspectus can help you increase your exposure, contact tech@aspectusgroup.com. With a portfolio of technology clients across a range of industries, we are the perfect partner to ensure engagement with your audience.
