Business Today

Why countries need proper regulations for automated decision making

A report on machine bias highlighted how many of the programmes used in the US to decide on bail and sentencing, and to identify future criminals, were biased against African American citizens.

Prosenjit Datta   New Delhi     Last Updated: April 5, 2018  | 18:45 IST

By now, it has been discussed at length how tech and tech-savvy private companies - ranging from Google, Microsoft, Amazon and Facebook to start-ups in India and China - are using algorithms, big data and artificial intelligence for decision making and precise targeting.

Less well discussed is the fact that governments are also taking tentative steps towards using algorithms and big data sets in a range of policy-making areas. The US has been first off the blocks in this regard. Public agencies there have started using data-analysis-based scoring systems and other software programmes to take decisions on granting bail, the length of sentence for someone convicted of an offence, and the enforcement of various services. A few days ago, the US used algorithms to pick the 1,300 products on which high import tariffs were to be levied, "to minimize the impact on US consumers while maximizing the impact on China," according to US trade representative Robert Lighthizer. (It is not clear whether the US also used algorithms to calculate the impact of the tariffs on US manufacturers, or whether China used algorithms when it retaliated with its own tariff barriers on US products.)

Meanwhile, the Indian government is taking baby steps towards using algorithms and data in some areas. There are reports that it is using geospatial analytics for the first time to better understand urbanisation. The tax department has also reportedly started using algorithms to track down black money, especially to check the big cash deposits made after the demonetisation of specified banknotes in November 2016.

Over time, governments around the world can be expected to make increasing use of data analytics, algorithm-based choices and artificial intelligence. That is why AI Now, a research institute associated with New York University that works on the social implications of artificial intelligence, has flagged the dangers of using artificial intelligence tools without proper safeguards.

The latest AI Now report focuses on four specific areas - Labour and Automation, Bias and Inclusion, Rights and Liberties, and finally, Ethics and Governance. The report points out that the big danger is that a public agency or a government may use "black box" systems opaque to outside scrutiny. AI Now says that citizens should have the right to know how the systems taking the decisions operate, and whether they have been tested and validated.

The big problem that could crop up is that an algorithm or AI software programme might have a bias built in. A ProPublica report on machine bias highlighted how many of the programmes used by different states in the US to decide on bail and sentencing, and to identify future criminals, were biased against African American citizens.

As countries around the world grapple with rules and regulations for data protection, they also need to work out proper regulations and guidelines for automated decision making - ensuring that people are protected against inbuilt machine bias and, more importantly, have the right to challenge such decisions with proper information.
