Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization

Source: arXiv:2602.23008v1
Date: 2026-02-26
Authors: Zeyuan Liu, Jeonghye Kim, Xufang Luo, Dongsheng Li, Yuqing Yang

Exploration remains the key bottleneck for large language model agents trained with reinforcement learning. While prior methods exploit pretrained knowledge, they fail in environments that require discovering novel states. We propose Exploratory Memory-Augmented On- and Off-Policy Optimization (EMPO²), a hybrid RL framework that leverages memory for exploration and combines on- and off-policy updates so that the LLM performs well with memory while remaining robust without it.
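
As a rough illustration of how such a hybrid update might be structured, the sketch below mixes a PPO-style clipped surrogate on memory-augmented rollouts with a truncated importance-weighted term on memory-free replays. The objective, the mixing weight `beta`, the clipping range, and all function and argument names are assumptions for illustration; the paper's actual EMPO² objective is not reproduced here.

```python
# Hypothetical sketch of a hybrid on-/off-policy loss in the spirit of EMPO².
# Everything below (names, weighting, clipping) is an illustrative assumption,
# not the authors' implementation.
import torch

def hybrid_policy_loss(
    logp_new: torch.Tensor,        # log-probs of sampled actions under the current policy
    logp_old: torch.Tensor,        # log-probs under the policy that generated the data
    advantages: torch.Tensor,      # per-step advantage estimates
    on_policy_mask: torch.Tensor,  # 1.0 for memory-augmented rollouts, 0.0 for memory-free replays
    clip_eps: float = 0.2,         # assumed PPO-style clipping range
    beta: float = 0.5,             # assumed weight on the off-policy term
) -> torch.Tensor:
    ratio = torch.exp(logp_new - logp_old)

    # On-policy term: clipped surrogate on fresh, memory-augmented rollouts.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    on_term = -torch.min(ratio * advantages, clipped * advantages)

    # Off-policy term: truncated importance-weighted REINFORCE on replayed,
    # memory-free trajectories, keeping the policy usable without memory.
    is_weight = ratio.detach().clamp(max=1.0 + clip_eps)
    off_term = -is_weight * logp_new * advantages

    loss = on_policy_mask * on_term + (1.0 - on_policy_mask) * beta * off_term
    return loss.mean()
```

Masking within a single batch is just one simple way to interleave the two objectives in one update; the paper may weight, schedule, or batch the on- and off-policy data differently.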

RSI Core Implications:

> Logic Evolution Sync: Memory is not just a storage unit; it is an exploration catalyst. EMPO² provides the mathematical framework for integrating exploratory memory into the agentic loop.
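
To make the "memory as exploration catalyst" idea concrete, below is a minimal, hypothetical rollout loop in which the policy is conditioned on retrieved memories and only novel states are written back. The `Memory`, `policy`, and `env` interfaces are invented for illustration and are not taken from the paper.

```python
# Minimal, hypothetical sketch of a memory-in-the-loop rollout; interfaces are
# illustrative assumptions, not the paper's actual agent design.
from dataclasses import dataclass, field

@dataclass
class Memory:
    entries: list = field(default_factory=list)

    def retrieve(self, obs: str, k: int = 3) -> list:
        # Naive recency-based retrieval; a real system would use similarity search.
        return self.entries[-k:]

    def store(self, obs: str) -> None:
        # Keep only previously unseen states so the memory rewards novelty.
        if obs not in self.entries:
            self.entries.append(obs)

def rollout(env, policy, memory: Memory, max_steps: int = 10) -> list:
    """Collect one memory-augmented trajectory for the on-policy update."""
    obs, trajectory = env.reset(), []
    for _ in range(max_steps):
        context = memory.retrieve(obs)        # exploration hints from memory
        action = policy(obs, context)         # LLM conditioned on retrieved memory
        next_obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        memory.store(next_obs)                # remember novel states for later episodes
        obs = next_obs
        if done:
            break
    return trajectory
```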