Efficient experience replay architecture for offline reinforcement learning

Longfei Zhang (College of System Engineering, National University of Defense Technology, Changsha, China)
Yanghe Feng (College of System Engineering, National University of Defense Technology, Changsha, China)
Rongxiao Wang (College of System Engineering, National University of Defense Technology, Changsha, China)
Yue Xu (The No. 31102 Troop of PLA, Nanjing, China)
Naifu Xu (College of System Engineering, National University of Defense Technology, Changsha, China)
Zeyi Liu (College of System Engineering, National University of Defense Technology, Changsha, China)
Hang Du (College of System Engineering, National University of Defense Technology, Changsha, China)

Robotic Intelligence and Automation

ISSN: 2754-6969

Article publication date: 21 March 2023

Issue publication date: 28 March 2023

Abstract

Purpose

Offline reinforcement learning (RL) learns effective policies from previously collected, large-scale data. In some scenarios, however, collecting data is hard because it is time-consuming, expensive or dangerous (e.g. health care and autonomous driving), which calls for more sample-efficient offline RL methods. The purpose of this study is to introduce an algorithm that samples high-value transitions from a prioritized buffer and samples uniformly from a normal experience buffer, improving the sample efficiency of offline RL and alleviating the “extrapolation error” that commonly arises in offline RL.

Design/methodology/approach

The authors propose a new experience replay architecture consisting of two experience replays, a prioritized experience replay and a normal experience replay, which supply samples for policy updates in different training phases. In the first training stage, the authors sample from the prioritized experience replay according to the calculated priority of each transition. In the second training stage, the authors sample uniformly from the normal experience replay. Both experience replays are initialized from the same offline data set.
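
A minimal sketch of this two-stage sampling scheme is given below. It is illustrative only: the class and method names, the |TD error|-based priority heuristic and the fixed stage-switch step are assumptions for exposition, not the authors' implementation.

import numpy as np

class DualReplayBuffer:
    """Sketch of a two-stage replay architecture: a prioritized buffer for the
    early training phase and uniform sampling for the later phase, both filled
    from the same offline data set."""

    def __init__(self, dataset, stage_switch_step, alpha=0.6):
        # dataset: list of transitions (s, a, r, s', done) from the offline set
        self.transitions = list(dataset)
        self.priorities = np.ones(len(self.transitions))  # placeholder initial priorities
        self.stage_switch_step = stage_switch_step        # assumed stage boundary
        self.alpha = alpha                                 # prioritization exponent

    def update_priorities(self, indices, td_errors, eps=1e-6):
        # Common prioritized-replay heuristic: priority proportional to |TD error|.
        self.priorities[indices] = np.abs(td_errors) + eps

    def sample(self, batch_size, step):
        n = len(self.transitions)
        if step < self.stage_switch_step:
            # Stage 1: prioritized sampling, so high-value transitions are drawn more often.
            probs = self.priorities ** self.alpha
            probs /= probs.sum()
            idx = np.random.choice(n, batch_size, p=probs)
        else:
            # Stage 2: uniform sampling from the same offline data.
            idx = np.random.choice(n, batch_size)
        batch = [self.transitions[i] for i in idx]
        return idx, batch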

Findings

The proposed method alleviates the out-of-distribution problem in the offline RL regime and accelerates training by leveraging a new, efficient experience replay. The authors evaluate their method on the D4RL benchmark, and the results show that the algorithm achieves superior performance over a state-of-the-art offline RL algorithm. An ablation study confirms that the proposed experience replay architecture plays an important role in improving final performance, data efficiency and training stability.

Research limitations/implications

Because of the additional prioritized experience replay, the proposed method increases the computational burden and carries the risk of shifting the data distribution due to the combined sampling strategy. Researchers are therefore encouraged to investigate how to use the experience replay block more effectively and efficiently.

Practical implications

Offline RL is sensitive to the quality and coverage of the pre-collected data, which may not be easy to obtain from a specific environment, requiring practitioners to handcraft a behavior policy that interacts with the environment to gather data.

Originality/value

The proposed approach focuses on the experience replay architecture for offline RL and empirically demonstrates the superiority of the algorithm in data efficiency and final performance over conservative Q-learning across diverse D4RL tasks. In particular, the authors compare different variants of their experience replay block, and the experiments show that the training stage at which samples are drawn from the priority buffer plays an important role in the algorithm. The algorithm is easy to implement and can be combined with any Q-value approximation-based offline RL method with minor adjustments, as sketched below.
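
As an illustration of how such a replay block could plug into a Q-value-based offline learner, the loop below is a hedged sketch that reuses the hypothetical DualReplayBuffer above; the agent interface (compute_td_errors, update) and the hyper-parameters are assumptions for exposition, not the authors' code.

# Hypothetical training loop wiring the dual replay buffer into a
# Q-value approximation-based offline RL agent (agent API is assumed).
def train_offline(agent, buffer, total_steps, batch_size=256):
    for step in range(total_steps):
        idx, batch = buffer.sample(batch_size, step)
        td_errors = agent.compute_td_errors(batch)   # assumed agent method
        agent.update(batch)                          # assumed Q-learning update
        # Keep priorities fresh while the prioritized stage is active.
        if step < buffer.stage_switch_step:
            buffer.update_priorities(idx, td_errors)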

Citation

Zhang, L., Feng, Y., Wang, R., Xu, Y., Xu, N., Liu, Z. and Du, H. (2023), "Efficient experience replay architecture for offline reinforcement learning", Robotic Intelligence and Automation, Vol. 43 No. 1, pp. 35-43. https://doi.org/10.1108/RIA-10-2022-0248

Publisher: Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited
