Representation Stability in a Minimal Continual Learning Agent
Abstract
A minimal continual learning agent exhibits a transition from plasticity to representational stability, measured via cosine similarity between persistent state vectors updated across sequential textual inputs.
Continual learning systems are increasingly deployed in environments where retraining or reset is infeasible, yet many approaches emphasize task performance rather than the evolution of internal representations over time. In this work, we study a minimal continual learning agent designed to isolate representational dynamics from architectural complexity and optimization objectives. The agent maintains a persistent state vector across executions and incrementally updates it as new textual data is introduced. We quantify representational change using the cosine similarity between successive normalized state vectors and define a stability metric over time intervals. Longitudinal experiments across eight executions reveal a transition from an initial plastic regime to a stable representational regime under consistent input. A deliberately introduced semantic perturbation produces a bounded decrease in similarity, followed by recovery and restabilization under subsequent coherent input. These results demonstrate that meaningful stability-plasticity trade-offs can emerge in a minimal, stateful learning system without explicit regularization, replay, or architectural complexity. The work establishes a transparent empirical baseline for studying representational accumulation and adaptation in continual learning systems.