DeepMind presents the new Elastic Weight Consolidation (EWC) algorithm to improve machine learning. Credit: Digital Trends.

Researchers at Google's DeepMind division have developed a new deep learning technique that resembles how human memory works. Using the Elastic Weight Consolidation (EWC) algorithm, neural networks can learn new tasks without erasing what they learned before.

This represents a significant step for the field of artificial intelligence, given that, up until now, machines could generally learn a new task only by first overwriting the one they had learned before.

Now, much like humans do, they can retain old skills while acquiring new ones, resulting in a learning process closer to our own. The approach makes sequential learning richer and more efficient, and it could prove important for more general AI in the coming years.

How does continual learning work in artificial intelligence?

Neural networks typically learn by being presented with a problem and refining their solution over many attempts. This trial-and-error approach is highly effective, but it has inherent limitations when it comes to learning more than one task.

Under these conditions, machines cannot tackle several problems in sequence. Their weights become configured to solve one particular task, and after long exposure they are optimized for that goal, but nothing more.

This phenomenon is known as catastrophic forgetting. The name is apt: when a network is retrained on a new task, the knowledge encoded for the old one is overwritten. Soon, though, there may be no need to worry about AIs wiping their memories to take on a new task.
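Catastrophic forgetting can be demonstrated in a few lines of code. The sketch below is an illustrative toy example, not DeepMind's setup: a tiny linear model is trained by gradient descent on a hypothetical "task A," then naively fine-tuned on a "task B." Its error on task A, which was near zero after the first phase, grows large again after the second.

```python
# Toy demonstration of catastrophic forgetting with a linear model.
# All task definitions here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Generate a regression task whose ideal weights are true_w."""
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

task_a = make_task(np.array([1.0, -2.0]))
task_b = make_task(np.array([-3.0, 0.5]))

w = train(np.zeros(2), *task_a)
loss_a_before = mse(w, *task_a)   # near zero: task A is learned

w = train(w, *task_b)             # naive fine-tuning on task B...
loss_a_after = mse(w, *task_a)    # ...and task A is forgotten

print(loss_a_before, loss_a_after)
```

Because nothing constrains the weights that mattered for task A, training on task B simply drags them to wherever task B wants them, and the first skill is lost.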

Google researchers have developed a deep learning model that works more like human memory does. When we learn a skill, we can go on to learn others without losing the first.

DeepMind’s solution

To more closely resemble the human thought process, the scientists came up with the Elastic Weight Consolidation (EWC) algorithm. The algorithm estimates how important each connection (weight) in the network is to a previously learned task and protects the most important ones from large changes while the network learns something new. As a result, the machine can retain old skills and acquire new ones at the same time.
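The core idea can be sketched in code. The snippet below is a simplified illustration of the EWC principle, not DeepMind's exact implementation: after learning task A, a quadratic penalty discourages later updates to each weight in proportion to its estimated importance for task A (approximated here by a diagonal Fisher information term). The task names and the `lam` strength are illustrative choices.

```python
# Hedged sketch of the EWC penalty on a toy linear model.
import numpy as np

rng = np.random.default_rng(1)

def make_task(true_w):
    X = rng.normal(size=(200, 2))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, lr=0.1, steps=300, anchor=None, fisher=None, lam=0.0):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        if anchor is not None:
            # EWC term: pull important weights back toward their
            # post-task-A values, weighted by per-weight importance.
            grad = grad + lam * fisher * (w - anchor)
        w = w - lr * grad
    return w

task_a = make_task(np.array([1.0, -2.0]))
task_b = make_task(np.array([-3.0, 0.5]))

w = train(np.zeros(2), *task_a)
anchor = w.copy()
# Diagonal Fisher approximation for a squared-error model.
fisher = np.mean(task_a[0] ** 2, axis=0)

plain = train(w.copy(), *task_b)                       # forgets task A
ewc = train(w.copy(), *task_b, anchor=anchor,
            fisher=fisher, lam=5.0)                    # retains task A better

print(mse(plain, *task_a), mse(ewc, *task_a))
```

The penalized model ends up at a compromise between the two tasks, so its error on task A stays far lower than that of the naively fine-tuned model. In the full algorithm this trade-off is what lets a single network accumulate skills instead of replacing them.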

To put the technique to the test, DeepMind scientists had a computer learn several Atari games one after another using EWC, while a second machine went through the same sequence using a standard neural network.

Results showed that the EWC machine was not only able to make progress across several games but also retained its skill at the earlier ones. Learning in this manner makes it easier for a neural network to connect to previous knowledge and consolidate new knowledge more meaningfully.

The other machine, in turn, could only become proficient at one game at a time. To take on the next one, it had to forget virtually everything it had learned about the previous game, making the process slower and quite unlike human learning.

Google machines are getting more and more sophisticated, as scientists apply tenets from neuroscientific theories to their work in hopes of making the technology transcend some of its limitations.

Source: DeepMind