Business Today

Twitter and Zoom face backlash over racial algorithmic bias

Zoom and Twitter's AI highlight photos of people or characters with lighter skin tones, recent tweets show.

Yasmin Ahmed | September 22, 2020 | Updated 14:19 IST
(Source: Reuters)

Highlights

  • Recent tweets show that Twitter's image-cropping AI favoured lighter skin tones, and Twitter has admitted its shortcomings.
  • Users ran informal experiments with fictional animated characters and animals, with similar results.
  • Twitter said it will work on the feature and open-source its analysis so others can review and replicate it.

Twitter and Zoom faced backlash from users who complained of racial bias in their visual algorithms. A researcher, Colin Madland, reported that Zoom cropped out the head of a faculty member with darker skin during a Zoom call when a virtual background was applied. When Madland took to Twitter to report the issue, he found that Twitter's image-cropping AI behaved the same way, cropping the darker-skinned face out of the photo preview.

"A faculty member has been asking how to stop Zoom from removing his head when he uses a virtual background. We suggested the usual plain background, good lighting etc, but it didn't work," Madland tweeted.


Other users reached similar conclusions from their own informal experiments. Entrepreneur and cryptographic engineer Tony Arcieri posted photos of former US President Barack Obama and Senate Majority Leader Mitch McConnell and found that Twitter highlighted McConnell's photo over Obama's.

Several replies in the thread adjusted the contrast or otherwise tweaked the photos, with varying results. The algorithm highlighted Obama's image only when the image's colours were inverted or its contrast was increased. "The algorithm seems to prefer a lighter Obama over a darker one," Arcieri wrote in the tweet.

Users further tested the algorithm's behaviour on animated fictional characters from The Simpsons, dogs and stock photos, and found similar results.

Twitter dropped face detection from the feature because most images do not contain faces. It instead uses several algorithmic tools to focus on the most important parts of a picture, trying to ensure that faces and text remain in the cropped portion of an image. Twitter automatically crops images to prevent them from taking up too much space on the main feed and to allow multiple pictures to be shown in the same tweet, The Guardian noted.
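Twitter has not published the cropping model itself, but the general idea of saliency-based cropping described above can be sketched in a few lines. The following is a minimal illustration, not Twitter's actual algorithm: the function name is hypothetical, and the "saliency" measure is a toy stand-in (absolute deviation from the image's mean brightness) for the learned saliency models real systems use.

```python
import numpy as np

def crop_to_salient_region(image, crop_h, crop_w):
    """Crop a 2-D grayscale array to a crop_h x crop_w window centred
    on the most 'salient' pixel. Saliency here is a toy proxy:
    absolute deviation from the image's mean intensity."""
    saliency = np.abs(image - image.mean())
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so the crop stays inside the image bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# A dark image with one bright patch: the crop centres on the patch.
img = np.zeros((100, 100))
img[70:75, 20:25] = 1.0
crop = crop_to_salient_region(img, 40, 40)
print(crop.shape)  # (40, 40)
print(crop.max())  # 1.0 -- the bright patch survived the crop
```

The bias controversy arises exactly here: whatever stands in for the saliency map decides which faces survive the crop, so any skew in that map toward lighter regions is reproduced in every preview.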

However, the recent posts show that the feature is flawed, and Twitter has admitted its shortcomings.


"Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it's clear from these examples that we've got more analysis to do. We'll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate," a Twitter spokesperson said in a statement.

Twitter CTO Parag Agrawal responded, saying, "This is a very important question. To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test — and eager to learn from this."
