
[Briefing]
Machine learning with limited data
BY MATT LEONARD
Machine learning is showing its potential for an ever-broader range of government missions. But the process of teaching a neural network typically requires a great deal of data, so what if there isn’t much relevant data available?
For James Sethian, director of the Center for Advanced Mathematics for Energy Research Applications at Lawrence Berkeley National Laboratory, that’s a recurring challenge. “Because we’re working in a laboratory and because there are a lot of scientific problems here, we often don’t have the vast amounts of curated or labeled data that are necessary to train such a network,” he said.
So Sethian and his colleague Daniël Pelt developed a neural network that requires fewer parameters and training images. The result is the Mixed-Scale Dense Convolutional Neural Network.
A typical neural network is made up of layers, and traditionally each layer performs a specific analysis. One layer informs the next, so relevant information must be copied and passed along.
“What that means is if there is some relevant information found at a certain layer but it’s required deeper in the network, [then] it has to be copied throughout the network,” Pelt said.
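To make that contrast concrete, here is a minimal sketch in PyTorch (not the authors’ code) of a conventional sequential network: each layer sees only the layer directly before it, so any feature a later layer needs must survive every intermediate step.

# Hypothetical sketch of a plain sequential CNN: information flows strictly
# layer to layer, so features computed early must be re-encoded (copied)
# by every later layer that still needs them.
import torch.nn as nn

def sequential_cnn(channels=16, depth=4):
    layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU()]
    for _ in range(depth - 1):
        # each layer's only input is the previous layer's output
        layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                   nn.ReLU()]
    layers.append(nn.Conv2d(channels, 1, kernel_size=1))  # output map
    return nn.Sequential(*layers)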
Standard practice involves looking at fine-scale information in the early layers (e.g., by asking, “Is this an edge?”) and large-scale information in the later layers (“Where is X in relation to Y in the image?”).
“The main difference with our approach compared to these traditional ones is that now we mix different scales within each layer,” Pelt said.
That means large-scale information is analyzed earlier along with fine-scale information, allowing the algorithm to focus on only the relevant fine-grained details. The ability to have multiple scales on the same layer was one important change, Pelt said. The second is what gives the network the “dense” part of its name.
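One hedged way to illustrate that mixing is with dilated convolutions, the mechanism the researchers describe: the same small 3x3 kernel is applied at different dilations, so one branch of a layer probes a wide neighborhood while another stays at pixel scale. The dilation values below are illustrative, not the published configuration.

# Sketch of mixing scales inside a single layer with dilated convolutions:
# coarse and fine structure are computed side by side rather than being
# reserved for late or early layers. (Dilation choices are illustrative.)
import torch
import torch.nn as nn

class MixedScaleLayer(nn.Module):
    def __init__(self, in_channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        # one single-channel branch per scale; dilation d gives a
        # receptive field of (2d + 1) x (2d + 1) pixels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, 1, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # outputs of all scales sit side by side as channels of one layer
        return torch.relu(torch.cat([b(x) for b in self.branches], dim=1))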
Pelt and Sethian wanted each layer to be able to talk with every other layer. In other words, they wanted the layers to be densely connected. As a result, information doesn’t have to be copied repeatedly throughout the network, and earlier layers can communicate relevant information directly to layers later in the series.
Because of those operational changes, the neural network needs far fewer parameters and training images to correctly identify what is being observed.
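A minimal, self-contained sketch of how the two changes might look together, again in PyTorch with illustrative sizes rather than the authors’ released code: every layer receives the input image plus the output of every earlier layer, and each layer is only one channel wide, which is why the parameter count stays small.

# Hypothetical mixed-scale, densely connected network: each one-channel layer
# sees the concatenation of the image and ALL previous layers' outputs, so
# nothing has to be copied forward, and narrow layers keep the weight count low.
import torch
import torch.nn as nn

class MixedScaleDenseNet(nn.Module):
    def __init__(self, depth=8, in_channels=1, out_channels=1):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(depth):
            dilation = (i % 4) + 1  # cycle through a few scales (illustrative)
            # input channels = image channels + one channel per earlier layer
            self.convs.append(nn.Conv2d(in_channels + i, 1, kernel_size=3,
                                        padding=dilation, dilation=dilation))
        # the final 1x1 convolution also sees the image and every layer
        self.head = nn.Conv2d(in_channels + depth, out_channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(torch.relu(conv(torch.cat(features, dim=1))))
        return self.head(torch.cat(features, dim=1))

# An 8-layer network of this shape has only a few hundred weights:
print(sum(p.numel() for p in MixedScaleDenseNet().parameters()))  # 342

For comparison, the plain sequential sketch shown earlier carries several thousand weights at a similar depth, which is the kind of gap that translates into needing far less training data.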
Pelt and Sethian have been using the network to extract biological structure from cell images, an arduous process that takes weeks when done by hand. The new network was trained on data from seven cells to accurately identify the biological structure of an eighth.
Although the Mixed-Scale Dense Convolutional Neural Network is powerful, it does not require supercomputer-scale resources. In fact, the capability is available on a web portal for anyone to use for image-labeling applications. •












































































