Page 30 - GCN, Feb/Mar 2018

Status Check: Machine Learning
Machine learning: The good, the bad and the ugly
The Intelligence Advanced Research Projects Activity has been on the cutting edge of this ‘new’ technology for over a decade
Although machine learning might be the enabling technology of the future, at the Intelligence Advanced Research Projects Activity, it's old school.
“Machine learning has been a priority research area since we were created,” IARPA Director Jason Matheny said. “In fact, most of our first programs were in machine learning.”
IARPA was founded in 2006 to drive research and innovation in the federal government’s intelligence agencies. Some of its early efforts include the Biometrics Exploitation Science and Technology program, which developed tools for facial recognition that have since been widely adopted.
Aladdin Video, for example, was an effort to identify the activities in streaming video. It could “tell whether this is a video of a birthday party or a video of someone break dancing or a video of somebody describing how to build an explosive device,” Matheny said.
Those projects laid the foundation for the use of machine learning in more complex applications, such as predicting cyberattacks based on chatter in hacker forums and the market price of malware, forecasting military mobilization and terrorism, and developing accurate 3-D models of buildings or entire cities from satellite imagery.
IARPA is also researching ways to improve the fundamental architecture on which machine learning is built. The neural network is “a very rough approximation of how we thought the brain worked in the 1950s,” Matheny said. “Our machine learning approaches, in general, haven’t caught up with neuroscience.”
One effort to close the gap is the ongoing Machine Intelligence from Cortical Networks program, which seeks to reverse engineer the brain’s algorithms. In its first year, it has developed the largest dataset of wiring diagrams of the circuits responsible for learning in animal brains.
“Not only are [animal brains] able to learn from a much smaller number of examples than typical machine learning systems, but they do so with much less energy — about one one-millionth of the amount of energy in typical computers,” Matheny said.
The agency is also in the early stages of exploring what quantum computing will mean for machine learning. Gate-based quantum computing is years away from being a viable technology, he said, but quantum annealing could prove useful in solving optimization problems that could improve machine learning — such as finding the shortest path between two points.
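Shortest-path problems of the kind mentioned above are solved classically today. As a point of reference, here is a minimal sketch using Dijkstra's algorithm on a made-up weighted graph (the graph and its weights are purely illustrative, not drawn from IARPA's work):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> [(neighbor, weight), ...]."""
    dist = {start: 0}
    queue = [(0, start)]            # min-heap ordered by distance so far
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d                # first time we pop the goal, d is optimal
        if d > dist.get(node, float("inf")):
            continue                # stale queue entry; a shorter path was found
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")             # goal unreachable

# Illustrative graph: edge weights are arbitrary
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(shortest_path(graph, "A", "D"))  # 4, via A -> B -> C -> D
```

Quantum annealing targets the same class of combinatorial optimization problems, with the hope of handling instances that overwhelm classical solvers.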
All the projects rely on having quality data to train the programs. Such training could become even more important in combating spoofing, or adversarial machine learning, Matheny said. Spoofing involves changing a select few pixels in an image. Humans might not notice the change, but image recognition systems can be tricked into misclassifying a school bus as an ostrich.
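The pixel-level spoofing described above can be sketched against a toy linear classifier. This is a fast-gradient-sign-style perturbation on invented data — the weights and the "image" are randomly generated stand-ins, not a real image system:

```python
import numpy as np

# Toy linear "image classifier": positive score -> "bus", else "ostrich".
# Weights and the 16x16 "image" are random stand-ins for the demo.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))
image = rng.normal(loc=0.5, scale=0.2, size=(16, 16))

def classify(x):
    return "bus" if np.sum(w * x) > 0 else "ostrich"

# Make sure the clean image starts out classified as "bus".
if classify(image) != "bus":
    w = -w

# Fast-gradient-sign-style attack on a linear model: push every pixel a
# tiny amount against the weights -- just enough to cross the boundary.
score = np.sum(w * image)
epsilon = 1.01 * score / np.sum(np.abs(w))  # small per-pixel change
adversarial = image - epsilon * np.sign(w)

print(classify(image))        # bus
print(classify(adversarial))  # ostrich
```

Each pixel moves by only `epsilon` (a few hundredths here), yet the classification flips — the toy analogue of the bus-to-ostrich trick.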
Having systems “arm wrestle” against adversarial examples while training could teach them how to spot such trickery, he added. Another defense is model averaging, where “you might have a neural net and a support vector machine that’s not vulnerable to the same kinds of attacks.”
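The model-averaging defense can be sketched with two hypothetical toy models that score different features of the same input, so a perturbation crafted against one does not fool the averaged score (the models and inputs are invented for illustration):

```python
import numpy as np

# Two toy models scoring the same input on different features:
# model_a is fooled by small changes to the sum; model_b looks at the
# median, which the same small perturbation barely moves.
def model_a(x):
    return float(np.tanh(np.sum(x)))        # neural-net-like smooth score

def model_b(x):
    return float(np.sign(np.median(x)))     # margin-like robust score

def ensemble(x):
    return (model_a(x) + model_b(x)) / 2.0  # model averaging

clean = np.array([1.0, 1.0, -1.9])    # sum = 0.1 -> both models score positive
spoofed = np.array([1.0, 1.0, -2.3])  # small change flips the sum negative

print(model_a(spoofed) > 0)   # False: model_a alone is fooled
print(ensemble(spoofed) > 0)  # True: the averaged score still holds
```

Because the two models fail on different inputs, an attack tuned to one tends to be diluted in the average — the intuition behind the quote above.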
Matheny described spoofing as “a deep problem in machine learning” and said that “before machine learning is assigned to doing high-consequence analysis, some of these vulnerabilities need to be fixed.” •
