Business Today

Google wants its scientists to portray AI more positively, say reports

The technology giant asked researchers to refrain from casting its technology in a negative light in at least three separate cases.

Akarsh Verma | December 24, 2020 | Updated 20:32 IST

Highlights

  • Google is said to have asked researchers to strike a 'positive tone' when it comes to AI.
  • In at least three cases Google requested authors to refrain from casting its technology in a negative light.
  • Google is yet to make an official statement.

Google has added an extra layer of checks for all the research produced by its experts, who now have to consult with legal, policy, and public relations teams before pursuing sensitive topics, such as facial recognition. On a few occasions, experts were also advised to "take great care to strike a positive tone".

Google's parent company Alphabet Inc. moved to tighten control over its scientists' research papers by launching a new "sensitive topics" review, and in at least three cases asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work. It is not clear exactly when Google began enforcing the new policy, but people familiar with the matter say it started sometime in June.

"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," one of the internal web pages on the policy says, according to Reuters.

The people behind the new policy have said that it does not mean researchers should "hide from the real challenges" of the use of AI. But, discussing the matter with Reuters, senior scientist Margaret Mitchell warned about the dangers of this policy. "If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we're getting into a serious problem of censorship," she said.

In early December, Timnit Gebru, a widely renowned AI ethics researcher, said she had been fired by Google after she pushed back against an order not to publish her research, which warned that AI capable of mimicking human speech could put marginalised populations at a disadvantage. Four staff researchers who spoke with Reuters supported Gebru's claims, saying they also believe Google is beginning to interfere with critical studies of its technology's potential harms.

Google is yet to make an official statement about its new 'sensitive topics' review policy or the alleged censorship of research papers relating to AI.
