Artificial Intelligence and the Problem of Bias
When we think of artificial intelligence, we think of it as objective, impartial and unrelentingly logical. But we need to remember that AI is taught by humans. To learn, machines are fed data sets, and those data sets can be full of historical or human bias.
Machine learning is something we need to consider as a wider community, because as forms of AI increasingly thread into our day-to-day lives, if you’re not male, or white, there could be some problems.
A very easy example first. Take Pokémon Go. When Pokémon Go was released, users in New York found the gyms and PokéStops appearing more often in predominantly white neighbourhoods.
Turns out Pokémon Go was using a crowdsourced dataset from a previous augmented reality game. The people who wrote the algorithms weren’t a diverse group, and so their bias ended up in the game.
If diverse groups are required to help create unbiased products, then it’s worrying when you consider how unwelcoming the tech industry is to women and people of colour.
Another example is LinkedIn. Women on LinkedIn (a business and employment-oriented service) found they weren’t being shown high-paying jobs as frequently as men. That’s because LinkedIn’s recommendation algorithm was selecting men to see those jobs.
Anu Tewary, chief data officer for Mint at Intuit, explained the problem to TechRepublic: “… it was biases that came in from the way the algorithms were written. The initial users of the product features were predominantly male for these high-paying jobs, and so it just ended up reinforcing some of the biases.”
Joy Buolamwini carried out a study of various AI-powered facial recognition systems and found that they performed best on white, especially male, faces. For women of colour, error rates rose to around 34%. Buolamwini found that for the darkest-skinned women, the systems could get their gender wrong close to half the time. These errors happened because, when building the software, the engineers had fed their algorithms primarily images of white males.
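The gap Buolamwini measured only becomes visible when results are broken down by group; a single aggregate accuracy figure can look perfectly healthy. Here is a minimal sketch of that kind of disaggregated evaluation, using invented records rather than figures from her study:

```python
# Minimal sketch of disaggregated evaluation. The records are invented for
# illustration; the point is that aggregate accuracy hides group-level gaps.
records = [
    *[("lighter-skinned men", True)] * 99, ("lighter-skinned men", False),
    *[("darker-skinned women", True)] * 66, *[("darker-skinned women", False)] * 34,
]

overall = sum(ok for _, ok in records) / len(records)
print(f"overall accuracy: {overall:.0%}")  # looks respectable in aggregate

for group in sorted({g for g, _ in records}):
    results = [ok for g, ok in records if g == group]
    error_rate = 1 - sum(results) / len(results)
    print(f"{group}: {error_rate:.0%} error rate")  # the gap only shows up here
```

In this toy data the overall accuracy is 82%, which sounds fine, while one group faces a 34% error rate and the other just 1%.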
Buolamwini highlights that when you train your software with a biased data set, you end up with a biased result.
In one well-known example, AI trained on research image collections learned to associate “woman” with “kitchen”.
Machine learning doesn’t just reinforce bias; it magnifies it. If a photoset generally associates women with kitchens, software trained to study those images and the labels assigned to them will end up creating an even stronger association.
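To make that magnifying effect concrete, here is a toy sketch in Python (the 66/34 split is invented): a model that simply predicts the most likely label for each context turns a two-thirds correlation in the training photos into an association it applies every single time.

```python
from collections import Counter

# Invented training labels: photos of kitchens, each annotated with the
# gender of the person shown. The collection is skewed 2:1, not absolute.
training_labels = ["woman"] * 66 + ["man"] * 34

skew = training_labels.count("woman") / len(training_labels)
print(f"training data: {skew:.0%} of kitchen photos show a woman")

# A classifier that maximises accuracy on this data learns to always
# predict the majority label for the "kitchen" context...
most_likely = Counter(training_labels).most_common(1)[0][0]

# ...so at prediction time, a 66% correlation becomes a 100% association.
predictions = [most_likely for _ in range(100)]
print(f"predictions: {predictions.count('woman')}% labelled woman")
```

Real systems are subtler than a majority-label guesser, but the mechanism, optimising against skewed labels, is the same one the research describes.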
Just look at how long it took Tay, Microsoft’s upbeat Twitter AI bot, to turn into a foul-mouthed racist telling feminists to burn in hell. (Note: less than a day.)
This was the result of jokey troll tweets, but it still makes you question: how will we ever teach AI using public data without supplying our public racism and inequality along with it?
What we feed AI matters. Who programs AI matters.
These are low-stakes examples because AI is still in its early stages. But as AI-based systems take on more complex tasks, and as we embed more AI into our daily lives, we risk embedding sexism, racism and all our prejudices.
What if we were to use AI for diagnosis in healthcare? Machines can parse loads of information very quickly. But if we use current data about symptoms and treatment, we could end up with incorrect or dangerous analyses.
For example: women can experience heart attack symptoms differently to men, but may be misdiagnosed because the male symptoms are treated as the “typical” ones.
Women with endometriosis can take years to get a correct diagnosis because there’s not enough information about the condition. Not to mention that pregnant or menstruating women are often left out of medical trials.
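As a toy illustration of how that could play out (every number below is invented, though the underlying difference in heart attack presentation is well documented), consider a naive diagnostic rule learned from trial data dominated by male presentations:

```python
from collections import Counter

# Toy "diagnostic" rule learned from a trial population that is mostly male.
# All counts are invented. In this toy data, men present with chest pain;
# women more often present with nausea and fatigue instead.
training = (
    [({"chest_pain": True}, "heart attack")] * 90        # male-typical presentations
    + [({"nausea_fatigue": True}, "heart attack")] * 10  # under-represented
)

# A naive learner keeps only the symptoms seen in a majority of cases.
symptom_counts = Counter(s for symptoms, _ in training for s in symptoms)
learned_rule = {s for s, n in symptom_counts.items() if n / len(training) > 0.5}
print("learned indicator symptoms:", learned_rule)  # {'chest_pain'}

def diagnose(symptoms):
    return "heart attack" if learned_rule & symptoms.keys() else "not urgent"

print(diagnose({"chest_pain": True}))      # "heart attack"
print(diagnose({"nausea_fatigue": True}))  # "not urgent" -- a dangerous miss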
This is evidence of biased data. Could feeding this skewed data to a learning machine just exacerbate the inequalities women experience when it comes to healthcare? I would say yes, it could. We need to consider how we might accidentally contaminate new systems as we expand our thinking to utilise the benefits AI will provide.
We like to imagine using artificial intelligence in our machines will help us be more logical and less prejudiced. But are we really freeing ourselves from bias? Or are we embedding it for future generations?
By: Tee Linden