Business Today

Seed of Doubt

Why the look-before-you-leap practice is critical for machines too.
Team BT   New Delhi     Print Edition: February 25, 2018
Humans experience self-doubt more often than they like to admit. It is thought of as a negative trait and is rarely encouraged, because it suggests indecisiveness. Yet doubt serves an important function: it makes you question and review an action or decision before you take it and, hopefully, reduces the chances of a blunder.

If human beings are to work in tandem with intelligent machines, doubt has to be built into the machines as well. Humans need assurance not only that a machine is reviewing possibilities and consequences, but also that it can communicate its degree of uncertainty. "If a self-driving car doesn't know its level of uncertainty, it can make a fatal error that can be catastrophic," says Google researcher Dustin Tran in the MIT Technology Review. That is why Google and Uber's AI Lab are working on 'probabilistic programming', including a whole new language called Pyro. Instead of just responding to data with a yes or no, this style of programming builds in knowledge expressed as probabilities.
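The core idea, that a program holds a degree of belief rather than a flat yes or no, can be sketched in a few lines of plain Python. This is not actual Pyro code; the pedestrian scenario and every number in it are invented for illustration.

```python
# Toy probabilistic reasoning: instead of answering yes/no, the program
# maintains a probability and updates it with Bayes' rule as evidence
# arrives - e.g. "is the object ahead a pedestrian?" for a self-driving
# car. (Illustrative sketch only; not Pyro syntax.)

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) given a prior and the probability
    of seeing this evidence when the hypothesis is true vs. false."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Prior belief: 10% chance the object is a pedestrian (made-up number).
belief = 0.10
# Camera evidence: the detector fires 90% of the time for pedestrians,
# 20% of the time for other objects (also made-up).
belief = bayes_update(belief, 0.90, 0.20)
# A second, independent sensor agrees.
belief = bayes_update(belief, 0.80, 0.30)

print(f"P(pedestrian) = {belief:.2f}")  # a degree of belief, not a yes/no
```

The output is a probability the rest of the system can reason about, which is exactly the kind of quantity a deterministic yes/no program throws away.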

Researchers in this field believe that a machine that is aware of its own level of uncertainty, and can communicate it, will be safer. Machines will also be set up to make mistakes and learn from them before they are out in the wild and in everyday use. These issues were discussed at a recent conference on AI in California.
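One practical consequence of a system knowing its own uncertainty is that it can act only when confident and defer otherwise. A minimal sketch, in which the threshold, action names and probabilities are all hypothetical:

```python
def decide(action_probs, threshold=0.9):
    """Pick the most probable action, or defer to a human / safe fallback
    when the model's own confidence is too low to act."""
    action, confidence = max(action_probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # Communicate uncertainty instead of silently guessing.
        return ("defer", confidence)
    return (action, confidence)

print(decide({"proceed": 0.55, "brake": 0.45}))  # too uncertain: defers
print(decide({"proceed": 0.97, "brake": 0.03}))  # confident: proceeds
```

The choice of threshold is a safety policy, not a modelling question: lowering it makes the system more autonomous but more likely to act on a guess.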

Another framework being used to combine deep learning and probabilistic scenarios is Edward, being developed at Columbia University. Probabilistic programming isn't new, though; Microsoft has been researching it for many years, on the understanding that technology and AI predictions cannot afford to be deterministic and must have grey areas built in. A researcher at the company offers a simple demonstration: an intelligent application recommending movies might display, say, 200 titles. If all it knows is which previously seen films the person liked, it will recommend similar ones. But as you feed it more information - which movies are disliked, which actors are liked, which genres, inputs from friends, and so on - the bank of knowledge grows and the recommendations can improve, though uncertainty increases.
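The movie example can be sketched as a simple probabilistic scorer in which each new signal (liked films, disliked genres, friends' input) nudges a probability of enjoyment up or down. The titles, weights and log-odds trick below are our own illustration, not Microsoft's actual system.

```python
import math

def enjoyment_score(prior, signals):
    """Combine independent like/dislike signals into one probability
    using log-odds - a common naive-Bayes-style scoring trick."""
    log_odds = math.log(prior / (1 - prior))
    for strength in signals:  # > 0 is evidence for, < 0 evidence against
        log_odds += strength
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical candidate films with invented evidence weights.
catalogue = {
    "Space Saga": [1.2, 0.5],    # similar to liked films, liked actor
    "Slow Drama": [-0.8, 0.4],   # disliked genre, but friend recommended
    "Action Reboot": [0.3],      # weakly similar to past likes
}
ranked = sorted(
    ((title, enjoyment_score(0.5, sig)) for title, sig in catalogue.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
for title, p in ranked:
    print(f"{title}: {p:.2f}")
```

Each extra signal refines the ranking, but it also widens what the system must reason about, which is the trade-off the article describes.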

With AI becoming a part of many facets of life, the complexity and subtlety of the scenarios a system must consider grow exponentially, involving pattern recognition, probabilistic reasoning and computational learning. It's a nascent science, but an important one.

Mixed Reality


Brain-Control It All

It is impressive enough that you can control something with eye movement inside a virtual reality (VR) headset; now you can brain-control things as well. Neurable, a company based in Boston, US, has demonstrated mind control using a VR game it calls 'Awakening'. First, a headband fitted with electrodes taps brain activity and connects to an HTC Vive VR headset. Then there's software to match the two. A calibration phase is needed, in which the wearer focusses on objects while the accompanying brain signals are recorded.

In the game, when the player thinks of the objects already seen, the software is able to pick up what he wants to do - for example, pick up a toy and move it. The process is still being fine-tuned, but it has progressed far enough for people to try it out.
