Deep learning has proven to be a powerful approach to artificial intelligence over the last few years, leading to growing interest across industries. So how does it actually work?
Tim Oates, chief scientist of contact intelligence company CircleBack, gave an overview of how deep neural networks work at Big Data TechCon in Boston in April.
A deep neural network is an artificial neural network with many hidden layers through which data passes between the input layer and the output layer. ‘Normal’ neural nets are not deep in that they usually have only one hidden layer.
Deep neural nets take inspiration from the way the human brain transmits data, through signals between neurons and their connections with other neurons, and try to mimic that in a computer program.
“A very raw sketch of the way a neuron works is that neurons fire, they send signals out, some of those signals come into another neuron and then that neuron says, ‘If I get enough incoming signals then I’m going to fire as well.’
“This then repeats. So one fires and sends its signal off to many other neurons and then they fire,” Oates said.
In a deep neural net, raw data values are first fed into nodes, which act like neurons. Each input to a node carries a weight, which is like a ranking of its importance.
“So say I’m a neuron. I do a weighted sum of my inputs, I ask with your input what is the value, what’s the weight? And then another input, what’s the value and the weight? I do a weighted sum of both.
"Then I can have a threshold that says if the weighted sum is above some number I’ll fire,” said Oates.
Data is squeezed through increasingly thin ‘bottlenecks’ as it passes through the layers of the deep net. Given an image, for example, the network learns to reconstruct it bit by bit: the first layers pick out the edges of an object, the next layers form parts of the object, and so on until all the details of the object are filled in, producing a representation that can then be used for classification.
This process can be considered unsupervised learning: the network doesn’t need to be told what to look for in advance; it learns how to extract features by itself.
“It turns out that by pushing it through that information bottleneck, it makes the system learn to extract high level features. At every level of that hierarchy I’m learning more and more abstract features.”
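The narrowing bottleneck can be illustrated with a toy network that squeezes a wide input vector through progressively smaller layers. This is only a structural sketch with untrained random weights and arbitrary layer sizes (784 inputs, shrinking to 16), not a working feature learner:

```python
import random

random.seed(0)

def layer(inputs, out_size):
    """One fully connected layer with random weights and a simple
    threshold activation, shrinking the representation to out_size values."""
    outputs = []
    for _ in range(out_size):
        weights = [random.uniform(-1, 1) for _ in inputs]
        weighted_sum = sum(v * w for v, w in zip(inputs, weights))
        outputs.append(1.0 if weighted_sum > 0 else 0.0)
    return outputs

# A 784-value 'image' squeezed through increasingly thin bottlenecks.
x = [random.random() for _ in range(784)]
for size in (256, 64, 16):
    x = layer(x, size)
    print(len(x))  # prints 256, then 64, then 16
```

Because each layer has fewer outputs than inputs, the network cannot simply copy the data forward; in a trained system this pressure is what forces each level to keep only increasingly abstract, informative features.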
With ‘normal’ neural networks, features are extracted with the help of a domain expert who can point out which ones are most relevant to the problem. Deep neural networks automate this process.
“If I’m trying to classify patients as to whether they need a blood transfusion or not, I’ll talk to a doctor and he or she will say, ‘If the blood pressure falls below a certain level, or if I have the heart rate too high or too low.’ Then we’ll say, ‘Hey neural network, I’m going to tell you about this thing’.
“What deep learning methods do is they say we are going to take the human expert out of the equation.
“There’s been a lot of work on this at Google, for example, which has access to an enormous image collection. What it would like to be able to do is train a system to automatically recognise a variety of things.”
Though deep neural nets are powerful, the downside is that they require a lot of data and computing power. “They tend to be computationally expensive,” Oates pointed out.
They also tend to be opaque, he said.
“The issue is that you might have a system that works really well, performs well, but you might not have any idea why. It would say these two images look somewhat like those two, but you can’t ask it why.”
“Although people are starting to look at that a little bit,” Oates added.
Follow Rebecca Merrett on Twitter: @Rebecca_Merrett