General
Introducing Nested Learning: A new ML paradigm for continual learning by Google
What’s the idea?
Current machine-learning models struggle with continual learning: once they learn new tasks, they often forget older ones. This problem is called catastrophic forgetting. Google Research introduces a new perspective and approach called Nested Learning. Nested Learning treats a model not as one big learning process, but as many smaller, interlinked learning problems (at different levels and with different update speeds) all happening inside a single system. In essence, it is Google's attempt to mimic human-style learning with improved memory: fast learning for new tasks, slow learning for stable knowledge. The result is better long-term memory and far less forgetting. If it works as claimed, it addresses a major obstacle to heavy AI use, and from the outside it looks like one of the field's big open problems being solved.
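To make the "different update speeds" idea concrete, here is a minimal toy sketch of multi-timescale learning. This is not Google's implementation; the parameter names (`fast_w`, `slow_w`), the learning rates, and the update period are illustrative assumptions, showing only the intuition of an inner fast learner nested inside an outer slow one.

```python
# Toy sketch of the multi-timescale intuition: a fast parameter updated
# every step and a slow parameter updated only every `slow_period` steps.
# NOTE: purely illustrative; not the Nested Learning algorithm itself.

def train(steps, slow_period=10, fast_lr=0.5, slow_lr=0.05):
    fast_w, slow_w = 0.0, 0.0              # two "levels" of the model
    for t in range(1, steps + 1):
        target = 1.0                       # stand-in for a task signal
        grad = (fast_w + slow_w) - target  # gradient of squared error
        fast_w -= fast_lr * grad           # inner level: adapts quickly
        if t % slow_period == 0:           # outer level: updates rarely,
            slow_w -= slow_lr * grad       # consolidating stable knowledge
    return fast_w, slow_w

fast_w, slow_w = train(100)
print(fast_w + slow_w)  # combined prediction approaches the target
```

The two timescales mean the slow level barely moves once the fast level has fit the task, which is the rough intuition behind keeping stable knowledge insulated from rapid task-specific updates.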
I'm looking forward to testing it!
Sources:
https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/