Bias, racism and lies: facing up to the unwanted consequences of AI
The phrase "artificial intelligence" can conjure up images of machines that are able to think and act just like humans, independent of any oversight from actual, flesh-and-blood people. Movie versions of AI tend to feature super-intelligent machines attempting to overthrow humanity and conquer the world.
The reality is more prosaic, and tends to describe software that can solve problems, find patterns and, to a certain extent, "learn". This is particularly useful when huge amounts of data need to be sorted and understood, and AI is already being used in a host of scenarios, particularly in the private sector.
Examples include chatbots able to conduct online correspondence; online shopping sites which learn how to predict what you might want to buy; and AI journalists writing sports and business articles (this story was, I can assure you, written by a human).
And, whilst a recent news story from Iran has revived fears about the use of killer robots (Iranian authorities have claimed that a "machine gun with AI" was used to assassinate the country’s most senior nuclear scientist), most negative stories connected with AI concern its misuse, or old-fashioned human error: exam grades incorrectly downgraded in the UK, an innocent man sent to jail in the USA, and personal data stolen worldwide.
Ahead of the launch of a UN guide to understanding the ethics of AI, here are five things you should know about the use of AI, its consequences, and how it can be improved.
1) The consequences of misuse can be devastating
In January, an African American man in the US state of Michigan was arrested for a shoplifting crime he knew nothing about. He was taken into custody after being handcuffed outside his house in front of his family.
This is believed to be the first wrongful arrest of its kind: the police officers involved had trusted facial recognition AI to catch their man, but the tool had not learned to distinguish between black faces, because the images used to train it had mostly been of white faces.
Luckily, it quickly became clear that he looked nothing like the suspect seen in a still taken from store security cameras, and he was released, although he spent several hours in jail.
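The mechanism behind this kind of failure can be demonstrated in a few lines of code. Below is a minimal, hypothetical sketch in Python: all the data and numbers are invented for illustration (real facial recognition systems are far more complex), but it shows how a classifier trained on data dominated by one group performs markedly worse on an under-represented group.

```python
# Illustrative only: synthetic data standing in for "face features",
# showing how training-set imbalance skews per-group accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Synthetic 5-dimensional 'features' for one demographic group,
    with two classes to tell apart (e.g. match vs. no match)."""
    X0 = rng.normal(loc=shift, scale=1.0, size=(n_per_class, 5))
    X1 = rng.normal(loc=shift + 1.5, scale=1.0, size=(n_per_class, 5))
    return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

# Training set: 1,000 examples from group A, but only 20 from group B,
# whose features are distributed differently.
Xa, ya = make_group(500, shift=0.0)
Xb, yb = make_group(10, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on equally sized fresh samples from each group: accuracy on the
# under-represented group collapses, because group A dominated the
# training signal and set the decision boundary.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

The under-represented group is not inherently harder to classify; the model simply never saw enough of its examples to learn the relevant patterns.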
And, in July, there was uproar in the UK when the dreams of many students hoping to go to the university of their choice were dashed: a computer program had been used to assess their grades (traditional exams were cancelled because of the COVID-19 pandemic).
To work out what the students would have got if they had sat exams, the program took their existing grades, and also took into account the track record of their school over time. This ended up penalising bright students from minority and low-income neighbourhoods, who are more likely to go to schools that have, on the whole, lower average grades than schools attended by wealthier students.
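A toy calculation makes the unfairness concrete. The sketch below is not the actual algorithm used in the UK; it is a hypothetical illustration, assuming a simple weighted blend between a student's own predicted grade and their school's historical average.

```python
# Hypothetical moderation rule: blend the student's own predicted grade
# with the school's historical average (grades on a 0-100 scale).
def moderated_grade(student_grade, school_history, school_weight=0.6):
    school_mean = sum(school_history) / len(school_history)
    return (1 - school_weight) * student_grade + school_weight * school_mean

# The same top student (predicted 90) gets very different results
# depending on which school they happen to attend.
print(moderated_grade(90, school_history=[45, 48, 42]))  # 63.0  -> downgraded
print(moderated_grade(90, school_history=[85, 88, 90]))  # ~88.6 -> barely changed
```

The heavier the weight given to the school's history, the more an individual's achievement is overridden by the past performance of their school.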
These examples show that, for AI tools to work properly, well-trained data scientists need to work with high-quality data. Unfortunately, much of the data used to teach AI is currently taken from consumers around the world, often without their explicit consent; poorer countries often lack the ability to ensure that personal data are protected, or to shield their societies from the damaging cyber-attacks and misinformation that have grown during the COVID-19 pandemic.
2) Hate, division and lies are good for business
Many social media companies have come under fire from informed critics for using AI-powered algorithms to micro-target users and send them tailored content that reinforces their prejudices. The more inflammatory the content, the more chance that it will be consumed and shared.
The reason these companies are happy to "push" socially divisive, polarizing content to their users is that it increases the likelihood that users will stay longer on the platform, which keeps advertisers happy and boosts profits.
This has boosted the popularity of extremist, hate-filled postings, spread by groups that would otherwise be little-known fringe outfits. During the COVID-19 pandemic, it has also led to the dissemination of dangerous misinformation about the virus, potentially leading to more people becoming infected, many experts say.
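In code terms, the core incentive is startlingly simple to express. The sketch below is purely illustrative (real ranking systems are vastly more complex and proprietary, and the engagement scores here are invented), but it captures the logic described above: sort by predicted engagement, and whatever provokes the strongest reaction is served first.

```python
# Illustrative feed ranking: posts are ordered purely by a model's
# predicted engagement score (e.g. expected clicks and shares).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical output of an engagement model

posts = [
    Post("Measured, factual public health update", 0.02),
    Post("Inflammatory claim designed to provoke outrage", 0.11),
    Post("Cute animal photo", 0.07),
]

# Nothing in the ranking asks whether a post is true or harmful;
# engagement is the only signal, so the most provocative item comes first.
for post in sorted(posts, key=lambda p: p.predicted_engagement, reverse=True):
    print(post.text)
```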
3) Global inequality is mirrored online
There is strong evidence to suggest that AI is playing a role in making the world more unequal, and is benefiting a small proportion of people. For example, more than three-quarters of all new digital innovation and patents are produced by just 200 firms. Out of the 15 biggest digital platforms we use, 11 are from the US, whilst the rest are Chinese.
This means that AI tools are mainly designed by developers in the West. In fact, these developers are overwhelmingly white men, who also account for the vast majority of authors on AI topics. The case of the wrongful arrest in Michigan is just one example of the dangers posed by a lack of diversity in this highly important field.
It also means that, by 2030, North America and China are expected to get the lion’s share of the economic gains, expected to be worth trillions of dollars, that AI is predicted to generate.
4) The potential benefits are enormous
"Meet Florence, @WHO's AI-driven digital health worker, who will tirelessly provide accurate information, help make a quitting plan, and recommend help-lines & support apps, to help people quit #tobacco" — UN News (@UN_News_Centre), December 9, 2020
This is not to say that AI should be used less: innovations using the technology are immensely useful to society, as we have seen during the pandemic.
Governments all around the world have turned to digital solutions for new problems, from contact-tracing apps to tele-medicine and drugs delivered by drones. AI has also been employed to track the worldwide spread of COVID-19 by trawling through vast stores of data derived from our interactions on social media and online.
The benefits go far beyond the pandemic, though: AI can help in the fight against the climate crisis, powering models that could help restore ecosystems and habitats, and slow biodiversity loss; and save lives by helping humanitarian organizations to better direct their resources where they are most needed.
The problem is that AI tools are being developed so rapidly that neither designers, corporate shareholders nor governments have had time to consider the potential pitfalls of these dazzling new technologies.
5) We need to agree on international AI regulation
For these reasons, the UN education, science and culture agency, UNESCO, is consulting a wide range of groups, including representatives from civil society, the private sector, and the general public, in order to set international AI standards, and ensure that the technology has a strong ethical base, which encompasses the rule of law, and the promotion of human rights.
Important areas that need to be considered include the importance of bringing more diversity to the field of data science, to reduce bias and racial and gender stereotyping; the appropriate use of AI in judicial systems, to make them fairer as well as more efficient; and finding ways to ensure that the benefits of the technology are spread amongst as many people as possible.
Writing the rules of AI
- UNESCO’s consultation on AI began in July 2020.
- A draft global legal document on the ethics of AI was drawn up by UNESCO experts, taking into account the wide-ranging impacts of AI, including on the environment and on the needs of the global South.
- Drafting international rules governing the use of AI is an important step that will allow us to decide which values need to be enshrined and, crucially, what rules need to be enforced.