Reinforcement Learning with Auxiliary Memory
April 30, 2021
Tuesday, May 25, 2021, at 1:00 PM
Advisor: David Wingate
MS thesis defense for Sterling Suggs
Abstract:
Deep reinforcement learning algorithms typically require vast amounts of data to train to a useful level of performance. Each time new data is encountered, the network must update all of its parameters, which is inefficient. Auxiliary memory units can help deep neural networks train more efficiently by separating computation from storage and by providing a means to rapidly store and retrieve precise information. We present four deep reinforcement learning models augmented with external memory and benchmark their performance on ten tasks from the Arcade Learning Environment. Our discussion and insights should be helpful to future RL researchers developing their own memory agents.
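For readers unfamiliar with the idea, the sketch below illustrates one common way an agent can be paired with an external memory: a fixed-capacity key-value store that is written to during experience and queried by content similarity. This is an illustrative example only, not code from the thesis; the class name `ExternalMemory`, its interface, and the cosine-similarity read scheme are assumptions.

```python
# Minimal sketch (hypothetical, not from the thesis): a key-value external
# memory that an RL agent could query with its state embedding.
import torch
import torch.nn.functional as F


class ExternalMemory:
    """Fixed-capacity key-value store with content-based (cosine) reads."""

    def __init__(self, capacity: int, key_dim: int, value_dim: int):
        self.keys = torch.zeros(capacity, key_dim)
        self.values = torch.zeros(capacity, value_dim)
        self.ptr = 0    # next slot to overwrite (circular buffer)
        self.size = 0   # number of slots currently in use

    def write(self, key: torch.Tensor, value: torch.Tensor) -> None:
        """Store one (key, value) pair, overwriting the oldest slot when full."""
        self.keys[self.ptr] = key.detach()
        self.values[self.ptr] = value.detach()
        self.ptr = (self.ptr + 1) % self.keys.shape[0]
        self.size = min(self.size + 1, self.keys.shape[0])

    def read(self, query: torch.Tensor) -> torch.Tensor:
        """Return a similarity-weighted average of stored values."""
        if self.size == 0:
            return torch.zeros(self.values.shape[1])
        keys = self.keys[: self.size]
        sims = F.cosine_similarity(keys, query.unsqueeze(0), dim=1)
        weights = F.softmax(sims, dim=0)
        return weights @ self.values[: self.size]


# Example usage: the agent writes state embeddings paired with some stored
# quantity (e.g., an estimated return), then reads from memory to augment
# the input to its policy or value network.
memory = ExternalMemory(capacity=1024, key_dim=64, value_dim=8)
state_embedding = torch.randn(64)
memory.write(state_embedding, value=torch.randn(8))
retrieved = memory.read(state_embedding)  # shape: (8,)
```

The write path stores detached copies so that memory contents act as fast, non-parametric storage rather than trainable weights, which is the separation of computation from storage mentioned in the abstract.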