While Deep Learning has made practical machine learning possible, it requires vast, properly normalized data sets along with vast computing resources to train successfully. Compare this with your dog, who can learn where you hid his treat by seeing you hide it only once.
This is the chasm we have yet to bridge in machine intelligence. Back propagation algorithms have got us off the starting blocks, and the volume of data sets and raw compute resources have allowed us to brute-force training, but this is nowhere close to the learning performance of natural brains.
Learning through experience or observation is not unique to humans. The octopus, whose brain evolved independently from ours, is able to observe the behaviours of other octopuses and emulate them. After seeing another octopus remove a cap to get at a crab, they can replicate the same actions to obtain their own reward. This evolutionary convergence of human and octopus brains gives us a pathway to understanding which aspects of natural learning neural networks are the important ones.
Neither humans nor octopuses have huge data centers at their disposal to run back propagation algorithms to determine inter-neuron weightings. The training of weights in natural brains occurs through feedback paths. Natural brains operate on discrete voltage spikes which trigger neurotransmitters across synapses. The weightings are modified based on the timing of input spikes, which in turn modifies the synapse chemistry. Artificial neural networks, by contrast, use back propagation to model the probability of a spike and modify the weightings of the intermediate neuron connections.
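The timing-based weight update described here is often modelled as spike-timing-dependent plasticity (STDP). A minimal pair-based sketch follows; the constants and function names are illustrative assumptions, not biological values:

```python
import math

# Pair-based STDP sketch: if the presynaptic spike arrives shortly before
# the postsynaptic spike, the synapse is strengthened; if it arrives
# after, it is weakened. All constants here are arbitrary assumptions.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression learning rates
TAU = 20.0                      # decay time constant (ms)

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (causal)
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:    # post fired before pre -> weaken (anti-causal)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_delta(t_pre=10.0, t_post=15.0)  # causal pairing: weight increases
w += stdp_delta(t_pre=30.0, t_post=25.0)  # anti-causal: weight decreases
```

Note that the update depends only on the two neurons the synapse connects: it is entirely local, with no remote error signal.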
In natural brains the feedback connections for training are part of the network. Artificial Neural Networks, on the other hand, only have feed-forward connections; adding feedback connections would create loops that would make the back propagation algorithms intractable. Current Deep Learning specifies a set of privileged output neurons. The network is trained in reference to these output neurons, with the back propagation algorithm modifying intermediate connection weights so that the input neurons will trigger the right output neuron.
With natural brains there are no privileged neurons. Weightings are determined only through interactions via direct connections, rather than being modified according to some remote privileged neuron.
Despite the disadvantage natural brains have compared to machine brains, operating at only hundreds of operations per second against machines' billions, natural brains still vastly outperform machine brains in terms of learning performance.
While neural networks are evidently the way forward for machine intelligence we need to radically change how they are trained if we are to approach the learning efficiency of natural brains.
The rapid progress in machine learning we have seen since 2012 seemed to confirm my original estimation of how close we are to true general machine intelligence. With the resources being committed to machine intelligence and the focus major tech companies have placed on it, I believed the pace of development would only increase, leading to general machine intelligence in the short term.
But this apparent progress was deceptive, based on the success of a machine learning algorithm which worked, but only very inefficiently. There is now almost an orthodoxy that machine learning requires vast volumes of structured data and huge computing resources. How many are stopping to ask why comparably feeble natural brains are far more capable learners?
To achieve substantial improvement in machine learning systems I expect we will need to follow nature's lead and use feedback and local neural connections to train them, rather than the algorithmic approach currently in favour. It is possible that the success of the back propagation approach is leading much of the machine learning community on a wild goose chase. We might even see another AI winter if the vast resources being poured into the back propagation approach to training neural networks do not bear fruit.
On the other hand there are already many successful applications which are being exploited to make fortunes off the back of Deep Learning. There is so much at stake now in AI that it is hard to see how tech companies could step away.
Progress continues to be made in researching approaches closer to natural brains. Neuroscientists are trying to image the working brain at higher resolution to better understand its structure. Computer scientists are experimenting with Spiking Neural Networks, which are closer to how natural brains work.
My bullish estimation of the probability of achieving general intelligence in machines has been moderated by a new understanding of the limitations of the current crop of Deep Learning based systems. There will be new applications that may cause significant social impact. But I believe we will need to radically improve learning performance before we achieve general intelligence, and I'm no longer as certain that this will be achieved in the short time frame I originally believed.
Top comments (4)
Hi! Thank you for a great article.
Unfortunately I can't find the link now, but I recall reading an article which stated that while heuristics requiring an understanding of the problem bring a greater sense of scientific satisfaction, it's brute-force methods like DL which have proved effective in the long run. The reason is as simple as the big rise in computing power over the decades. And when we are sceptical about DL, it's again because we underestimate the future rise of computing power.
And tbh this is quite credible to me.
The question is whether such huge computing resources are required to achieve artificial general intelligence. It appears from simple observation of natural systems that a neural net can be trained without huge data centers and megawatts of power. There is no doubt that back propagation gets to the same end point, a trained neural network, but it is not a fast learner.
When I talk about learning speed I mean how many observations or experiences it takes to learn something. For example, the unit of experience might be a single game of Go. For a human, learning Go might take perhaps 500 games. For Alpha Zero it took hundreds of millions of games. Because machines operate faster they can play vastly more games and thus accumulate more experience, but their ability to learn from each game is minuscule.
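To put rough numbers on that gap (the Alpha Zero figure is an order-of-magnitude estimate from the sentence above, not an exact count):

```python
# Rough sample-efficiency comparison using the figures above.
# machine_games is an order-of-magnitude assumption ("hundreds of millions").
human_games = 500
machine_games = 100_000_000

ratio = machine_games / human_games
print(f"machine needs ~{ratio:,.0f}x more games of experience")
# -> machine needs ~200,000x more games of experience
```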
My observation is simply that natural systems point toward a better learning solution which, once bootstrapped, is able to learn from a single experience versus the hundreds or thousands required by current ANNs.
Deep Learning works, but it is like the Model T Ford of cars, or the first aircraft at Kitty Hawk. My article was pointing out that many have taken to heart the idea that machine learning requires big data and huge data centers, because the only implementations we have to date have these features.
There is a weakness in my appeal to nature, in that engineered solutions can often surpass nature, just as modern aircraft vastly surpass birds. But until we achieve comparable learning performance there is something to be gained by taking inspiration from nature.
You seem to have some fundamental misunderstanding about DL and how it relates to natural neurons.
Neurons were just the inspiration for ANNs; we still don't really know how they work, but we do know it's not like an artificial "neuron". The human brain is also humongous. A quote that stuck with me from a uni professor: "there are more neurons in the brain than there are stars in the universe". So ANNs differ from brains both in a qualitative and a quantitative way. I'd also argue that ANNs are discrete whereas natural neural networks are continuous (up to quanta).
Essentially, neural networks are just a way to fit (complex) formulas to training data using gradient descent. Linear regression could be considered the simplest one, albeit lacking an activation function: it draws a straight line through datapoints. I don't think anybody in the field is expecting strong AI from ANNs at this point in time. Maybe they can be combined into something bigger in the future. A better contemporary comparison would be between ANNs and just the visual cortex, though my partner, who works in neuroscience, would probably get argumentative at that comparison as well.
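The "fit a formula to data with gradient descent" view fits in a few lines. Here is a toy sketch of linear regression trained by gradient descent (data and learning rate are made up for illustration):

```python
# Fitting y = w*x + b to data by gradient descent on mean squared error --
# the simplest instance of fitting a formula to training data.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```

A deep network does the same thing with a vastly more complicated formula and back propagation to compute the gradients.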
Finally, I think you are underestimating the learning effort of natural brains. Not only have they been subjected to millions of years of evolution (something we've tried to replicate by evolving neural networks!), we actually do get a fuckton of data in our infancy. Every "frame" of input can be seen as a training sample. Presuming the brain "works at 90 fps", that's over 200 million samples in the first month of life.
You do raise an interesting point about mirroring behaviour, the use of "memes" (by their original definition), in essence. In ANNs, AFAIK the main way to pass knowledge between networks is with pre-training.
My belief is that the companies and governments who invest in deep learning and have access to these huge datasets will end up producing generalised models that can be trained on specific tasks with small amounts of data. DL/ML/AI will become another readily accessible tool.
I still think of DL as being in the very early mobile phone era. It works, everyone can see the benefit, and anyone with money is scrambling to make it useful and be first to market. Once it becomes a product that is affordable and accessible to the average company, without having to have millions, that is when we will see it truly become useful.