Researchers say they have successfully addressed what they call a ‘major, long-standing obstacle to increasing AI capabilities’ by drawing inspiration from a human brain memory mechanism known as ‘replay’.
Artificial intelligence (AI) experts at the University of Massachusetts (UMass) Amherst and the Baylor College of Medicine said they have developed a new method that protects deep neural networks, “surprisingly efficiently”, from “catastrophic forgetting”: upon learning new lessons, the networks forget what they had learned before.
Post-doctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, together with Hava Siegelmann at UMass Amherst, point out that deep neural networks are the main drivers behind recent AI advances, but that progress is held back by this forgetting.
“One solution would be to store previously encountered examples and revisit them when learning something new. Although such ‘replay’ or ‘rehearsal’ solves catastrophic forgetting,” they wrote, “constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly.”
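The storage cost the researchers describe is easy to see in a toy rehearsal buffer. The sketch below is purely illustrative (all names are invented, not the team's code): past examples are kept in a fixed-size buffer and a few are mixed into each new training step, but the buffer either grows unmanageably or must overwrite old data.

```python
import random

class ReplayBuffer:
    """Toy rehearsal buffer: stores past (input, label) examples."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, example):
        # Keep at most `capacity` examples; once full, overwrite a
        # random slot so earlier tasks stay represented.
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            self.items[random.randrange(self.capacity)] = example

    def sample(self, k):
        # Mix a handful of stored examples into each new training step.
        return random.sample(self.items, min(k, len(self.items)))

buffer = ReplayBuffer(capacity=100)
for i in range(500):                  # 500 examples from earlier tasks
    buffer.add((f"x{i}", i % 2))
batch = buffer.sample(8)
print(len(buffer.items), len(batch))  # → 100 8
```

The buffer caps memory use, but only by discarding most of what was seen, which is exactly the trade-off the quoted passage objects to.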
According to the researchers, unlike AI neural networks, humans are able to accumulate information continuously throughout their lives, building on earlier lessons. An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories.
Siegelmann says the team’s major insight is in “recognising that replay in the brain does not store data”. Rather, “the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories”.
Inspired by this, Siegelmann and her colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.
According to the team, the “abstract generative brain replay” proved extremely efficient, and they showed that replaying just a few generated representations is sufficient to remember older memories while learning new ones.
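As a rough caricature of the idea (not the team's actual model), a toy continual learner can keep only an abstract statistic per class, here a running mean of a one-dimensional feature, and "replay" pseudo-examples generated from it while learning new classes. All class names and feature values below are invented for the example.

```python
import random

class GenerativeReplayLearner:
    """Toy learner: keeps one abstract statistic per class (a running
    mean), never the raw examples themselves."""
    def __init__(self):
        self.means = {}   # label -> (running mean, sample count)

    def learn(self, batch):
        for x, y in batch:
            mean, n = self.means.get(y, (0.0, 0))
            self.means[y] = ((mean * n + x) / (n + 1), n + 1)

    def generate(self, k):
        # "Generative replay": sample pseudo-examples around each class's
        # abstract representation instead of retrieving stored data.
        labels = list(self.means)
        return [(self.means[y][0] + random.uniform(-0.1, 0.1), y)
                for y in (random.choice(labels) for _ in range(k))]

    def classify(self, x):
        # Nearest class mean.
        return min(self.means, key=lambda y: abs(self.means[y][0] - x))

learner = GenerativeReplayLearner()
# Task 1: cats vs dogs (one-dimensional toy features).
learner.learn([(0.0, "cat"), (0.2, "cat"), (1.0, "dog"), (1.2, "dog")])

# Task 2: interleave a few generated replay samples with the new
# classes' data, so task 1 is rehearsed without any stored examples.
learner.learn(learner.generate(4) + [(3.0, "bear"), (3.2, "bear"), (4.0, "fox")])

print(learner.classify(0.1))   # still "cat": task 1 is not forgotten
```

The point of the sketch is the memory profile: the learner retains a compact abstract summary per class rather than a growing archive of raw data, mirroring the efficiency claim above.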
The researchers added that generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, but also allows the system to generalise what it learns from one situation to another.
They offered an example. “If our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also tell cats from foxes without specifically being trained to do so. And notably, the more the system learns, the better it becomes at learning new tasks,” said van de Ven.
In their report, the researchers proposed a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. “Our method achieves state-of-the-art performance on challenging continual learning benchmarks without storing data, and it provides a novel model for abstract level replay in the brain,” they wrote.
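The hidden-representation variant can likewise be caricatured in Python. In this hypothetical sketch, `encode` stands in for a network's frozen lower layers, and replay produces hidden-level vectors directly, so no detailed inputs are ever reconstructed. It is a loose analogy under invented names, not the published method.

```python
import random

def encode(x):
    # Stand-in for a network's frozen lower layers (two fixed features).
    return (x, abs(x - 2.0))

class Head:
    """Small classifier head retrained using hidden-level replay."""
    def __init__(self):
        self.prototypes = {}   # label -> (mean hidden vector, count)

    def learn(self, hidden_pairs):
        for h, y in hidden_pairs:
            old, n = self.prototypes.get(y, ((0.0, 0.0), 0))
            self.prototypes[y] = (
                tuple((o * n + v) / (n + 1) for o, v in zip(old, h)), n + 1)

    def generate_hidden(self, k):
        # Replay at the hidden level: emit jittered prototypes for
        # earlier classes, never their original inputs.
        labels = list(self.prototypes)
        return [(tuple(v + random.uniform(-0.05, 0.05)
                       for v in self.prototypes[y][0]), y)
                for y in (random.choice(labels) for _ in range(k))]

    def classify(self, h):
        # Nearest prototype by squared distance in hidden space.
        return min(self.prototypes,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(self.prototypes[y][0], h)))

head = Head()
# Task 1: learn from the hidden features of cats and dogs.
head.learn([(encode(x), "cat") for x in (0.0, 0.2)] +
           [(encode(x), "dog") for x in (1.0, 1.2)])

# Task 2: mix generated hidden-level replay with the new class's features.
head.learn(head.generate_hidden(4) + [(encode(x), "bear") for x in (3.0, 3.2)])

print(head.classify(encode(0.1)))   # an earlier class is still recognised
```

Replaying at the hidden level means the generator's job is far easier than reproducing raw inputs, which is one plausible reading of why the abstract variant is efficient; the real model's context-modulated feedback connections are, of course, much richer than this toy.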
Van de Ven said: “Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions.”