Machines and applications with artificial intelligence (AI) capabilities will soon rely less on bottom-up big data and more on top-down reasoning that more closely resembles the way humans approach problems and tasks, Accenture said Monday.
“We will have top-down systems that don’t require as much data and are faster, more flexible, and, like humans, more innately intelligent,” Accenture said in a Harvard Business Review blog post titled “The Future of AI Will Be About Less Data, Not More.” To craft a vision of where AI is heading in the next several years, and then plan investments and tests accordingly, companies should look for developments in four areas, wrote blog authors Paul Daugherty, Accenture’s chief technology and innovation officer, and H. James Wilson, managing director of information technology and business research at Accenture Research.
- More efficient robot reasoning – When robots have a conceptual understanding of the world, as humans do, it is easier to teach them things, using far less data. Consider CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), which are “easy for humans to solve and hard for computers.” Vicarious, a U.S.-based startup, is working to develop artificial general intelligence for robots, enabling them to generalize from few examples. Their model can break through CAPTCHAs at a far higher rate than deep neural networks and with 300-fold more data efficiency. To parse CAPTCHAs with almost 67% accuracy, the Vicarious model required only five training examples per character, while a state-of-the-art deep neural network required a 50,000-fold larger training set of actual CAPTCHA strings.
- Ready expertise – Industrial manufacturing company Siemens is using top-down AI to control the highly complex combustion process in gas turbines, where air and gas flow into a chamber, ignite and burn at temperatures as high as 1,600 degrees Celsius. Using bottom-up machine learning methods, the gas turbine would have to run for a century before producing enough data to begin training. Instead, Siemens researchers used top-down methods that required little data for the machines’ learning phase. “The monitoring system that resulted makes fine adjustments that optimize how the turbines run in terms of emissions and wear, continuously seeking the best solution in real time, much like an expert knowledgeably twirling multiple knobs in concert.”
- Common sense – What comes naturally to humans, without explicit training or data, is fiendishly difficult for machines. Says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2): “No AI system currently deployed can reliably answer a broad range of simple questions, such as, ‘If I put my socks in a drawer, will they still be there tomorrow?’ or ‘How can you tell if a milk carton is full?’” To help define what it means for machines to have common sense, AI2 is developing a portfolio of tasks against which progress can be measured. And researchers at Microsoft and McGill University in Montreal have jointly developed a system that has shown great promise for untangling ambiguities in natural language.
- Making better bets – Humans can routinely, and often effortlessly, sort through probabilities and act on the likeliest, even with relatively little prior experience. Machines are now being taught to mimic such reasoning through the application of Gaussian processes – probabilistic models that can deal with extensive uncertainty, act on sparse data, and learn from experience. Google’s parent company, Alphabet, designed Project Loon, which provides internet service to under-served regions of the world through a system of giant balloons hovering in the stratosphere. Their navigational systems employ Gaussian processes to predict where in the stratified and highly variable winds aloft the balloons need to go. The balloons can not only make reasonably accurate predictions by analyzing past flight data, but also analyze data during a flight and adjust their predictions accordingly. “Such Gaussian processes hold great promise,” the blog authors wrote. “They don’t require massive amounts of data to recognize patterns; the computations required for inference and learning are relatively easy, and if something goes wrong, it can be traced, unlike the black boxes of neural networks.”
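To make the idea concrete, here is a minimal sketch of Gaussian-process regression in plain NumPy. It is not Project Loon's actual navigation code; it simply illustrates the properties the authors describe – a handful of observations (sparse data) yields both a prediction and a traceable measure of uncertainty, and new observations can be folded in by re-conditioning. The toy "wind-like" signal and all names here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D points."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and variance of a GP conditioned on sparse observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky factorization keeps the linear algebra cheap and numerically stable.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha                    # posterior mean (the prediction)
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)         # posterior variance (the uncertainty)
    return mu, var

# Only five noisy observations of a hypothetical wind-like signal: sparse data.
x_train = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y_train = np.sin(x_train)
x_test = np.linspace(0.0, 5.0, 6)
mu, var = gp_posterior(x_train, y_train, x_test)
# Near an observation the variance is small; far from all observations it grows,
# so the model "knows what it doesn't know" -- unlike a typical neural network.
```

The design point worth noting is that every prediction comes with a variance computed in closed form, which is what makes the model's reasoning traceable: one can inspect exactly which observations drove any given estimate.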
The coming five years will see applications and machines becoming less artificial and more intelligent, the authors conclude. “This general reasoning ability will enable AI to be more broadly applied than ever, creating opportunities for early adopters even in businesses and activities to which it previously seemed unsuited.”