One of the recurring themes at the recently concluded India Economic Summit in Delhi was Artificial Intelligence (AI) and ethics. AI-enabled systems learn on their own, and if they learn from bad data, they can wreak havoc. AI-enabled bots, for instance, can turn racist. AI-enabled engines can deny someone a loan simply because of their demographic profile. This has governments and companies worried.
Kay Firth-Butterfield is with the World Economic Forum's Centre for the Fourth Industrial Revolution. She works with governments, businesses, civil society and academia to create governance around AI. The major ethical concerns that companies and governments are now coming to understand are privacy, bias, transparency, accountability, and safety, she says.
"I started working in this field in 2014. Not many companies were thinking about AI ethics at that time. AI was all about producing the algorithms and making money. However, now, companies are beginning to understand that ethical AI should be a component of the work they are doing. Without it, there is a potential danger to their brand value," she stresses.
Many consultancies now have a 'Responsible AI' division. Some big companies have AI Ethics panels. "You see this across the world. Governments are also creating strategies that have ethics. The Beijing Government has an AI strategy with an ethical component. India's government has an AI strategy with an ethical component. The work that Europe has done around GDPR (General Data Protection Regulation) has also brought ethics to the fore," she adds.
Firth-Butterfield notes that it is important for companies to think about the whole lifecycle of an AI product, from ethical considerations at the design stage to testing the algorithms throughout the product's life.
A few software companies that enable businesses to use AI for decision-making are offering a choice: businesses can decide their own boundaries and tolerances when it comes to ethics. Customer engagement software company Pegasystems is one of them.
One of the problems with AI algorithms is that they are opaque. It is difficult to understand how they arrive at a decision. This could become a regulatory nightmare for companies, says Suman Reddy Eadunuri, Managing Director and Country Head at Pegasystems India.
"If there is an AI-generated credit card offer in the US based on your zip code, or ethnic background, it could be a regulatory issue. AI being opaque is not a good thing," he says. Pega's platform allows one to toggle between transparency and opacity. "We have provided a switch in our software. When you turn on the transparency button, you can audit how a decision was made. You can see what data was used to make a decision. We don't mandate; we just recommend what the company should do," Eadunuri explains.
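The idea of a transparency switch can be illustrated with a small sketch. This is not Pega's actual API; the engine, rule, and field names below are hypothetical, invented only to show how a decision system might record the data behind each decision when transparency is switched on:

```python
from dataclasses import dataclass, field

@dataclass
class CreditDecision:
    approved: bool
    # Audit trail: the inputs the rule used, recorded only
    # when the transparency switch is on.
    inputs_used: dict = field(default_factory=dict)

class DecisionEngine:
    """Toy decision engine with a transparency switch (hypothetical,
    not Pega's implementation). When transparent, every decision
    carries the data that produced it, so it can be audited later."""

    def __init__(self, transparent: bool = False):
        self.transparent = transparent

    def decide(self, applicant: dict) -> CreditDecision:
        # Illustrative rule only: approve when income comfortably
        # covers the requested amount.
        approved = applicant["income"] >= 3 * applicant["requested_amount"]
        audit = dict(applicant) if self.transparent else {}
        return CreditDecision(approved=approved, inputs_used=audit)

engine = DecisionEngine(transparent=True)
decision = engine.decide({"income": 90_000, "requested_amount": 20_000})
print(decision.approved)     # the outcome
print(decision.inputs_used)  # the data behind it, available for audit
```

With the switch off, `inputs_used` stays empty and the decision is opaque; a regulator asking how the outcome was reached would have nothing to inspect.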
Just like Ethical AI, the company has introduced a feature around Empathetic AI in its software. AI is now persuasive enough to drive many purchases: a loan offer that pops up on a mobile phone screen can appear so well timed and seductive that people fall for it.
"Any company would like to maximise sales. They can push loans down your throat. But if you think about the lifetime value you can get out of a customer, thinking about short-term revenues may not be a good thing. Your economic background might change and you might default on loans," Eadunuri says.
Pega's software has a 'warm versus cold' setting. "If it shows that a decision is extremely cold, you are not empathetic. This is evident to the developer or the business guy at a company. They can toggle and play with this system and see how offers can get generated. The system allows you to calculate the customer's lifetime value and as a company, you can pick the right spot," he adds.
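The trade-off behind such a dial can be sketched in a few lines. Again, this is not Pega's implementation; the function, parameters, and numbers are assumptions made up for illustration, showing how a single warmth setting could weigh short-term revenue against the risk to a customer's lifetime value:

```python
def should_make_offer(warmth: float, short_term_gain: float,
                      lifetime_value_risk: float) -> bool:
    """Hypothetical 'warm versus cold' dial, with warmth in [0, 1].
    A cold setting (warmth near 0) chases short-term revenue;
    a warm setting weighs lifetime value more, suppressing offers
    that risk it (e.g. loans the customer may default on)."""
    score = (1 - warmth) * short_term_gain - warmth * lifetime_value_risk
    return score > 0

# A risky loan offer: small immediate gain, large lifetime-value risk.
print(should_make_offer(warmth=0.1, short_term_gain=100, lifetime_value_risk=500))
# cold setting pushes the loan anyway
print(should_make_offer(warmth=0.8, short_term_gain=100, lifetime_value_risk=500))
# warm setting holds it back
```

Sweeping the warmth value and watching which offers get generated is the "toggle and play" Eadunuri describes: the business picks the spot on the dial that matches its tolerance.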