
Neural Network Based Reinforcement Learning Acceleration on FPGA Platforms


If you have a question about this talk, please contact Joshua M Levine.

This talk is a practice talk for HEART 2016. The topic is accelerating a neural-network-based reinforcement learning algorithm using FPGAs. Below is the abstract of the paper:

Deep Q-learning (DQN) is a recently proposed reinforcement learning algorithm in which a neural network serves as a non-linear approximator of the value function. The exploration-exploitation mechanism allows training and prediction of the network to run simultaneously in an agent while it interacts with its environment. Agents often act independently on battery power, so training and prediction must take place within the agent and on a limited power budget. In this work, we propose an FPGA acceleration system design for Neural Network Q-learning (NNQL). Our proposed system is highly flexible owing to its support for run-time network parameterization, which allows neuro-evolution algorithms to restructure the network dynamically to achieve better learning results. Additionally, a new processing element design makes the power consumption of our proposed system adapt to the network size. In our test cases, on networks with hidden layer sizes ranging from 32 to 16384, our proposed system achieves a 7x to 346x speedup over a GPU implementation and a 22x to 77x speedup over a hand-coded CPU counterpart.
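To make the setting concrete, below is a minimal Python sketch of neural-network Q-learning with epsilon-greedy exploration, where prediction (choosing an action) and training (updating the network on the TD error) interleave inside the agent loop, as the abstract describes. The toy chain environment, network shape, and hyperparameters are illustrative assumptions, not taken from the paper, whose contribution is the FPGA system rather than the learning algorithm itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy setting: a 4-state chain, 2 actions, one hidden layer.
    N_STATES, N_ACTIONS, HIDDEN = 4, 2, 32
    GAMMA, LR, EPSILON = 0.95, 0.05, 0.1

    # Single-hidden-layer network: one-hot state -> Q-value per action.
    W1 = rng.normal(0.0, 0.1, (N_STATES, HIDDEN))
    W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

    def forward(s):
        x = np.eye(N_STATES)[s]          # one-hot state encoding
        h = np.maximum(0.0, x @ W1)      # ReLU hidden layer
        return x, h, h @ W2              # Q-values for all actions

    def step(s, a):
        # Illustrative dynamics: action 1 moves right, action 0 moves left;
        # reward 1 for reaching the last state, which ends the episode.
        s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        done = (s2 == N_STATES - 1)
        return s2, float(done), done

    for episode in range(500):
        s, done = 0, False
        while not done:
            x, h, q = forward(s)
            # Epsilon-greedy: explore with probability EPSILON, else exploit.
            a = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(q.argmax())
            s2, r, done = step(s, a)
            # Bootstrapped Q-learning target from the next state's best Q-value.
            target = r if done else r + GAMMA * forward(s2)[2].max()
            # One gradient step on the squared TD error, backpropagated
            # through both layers (training interleaved with prediction).
            dq = np.zeros(N_ACTIONS)
            dq[a] = q[a] - target
            dh = (dq @ W2.T) * (h > 0.0)
            W2 -= LR * np.outer(h, dq)
            W1 -= LR * np.outer(x, dh)
            s = s2

The matrix-vector products in forward and the outer-product weight updates dominate this loop; they are the operations an FPGA processing element array would accelerate, and run-time network parameterization corresponds to changing HIDDEN without resynthesizing the design.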

This talk is part of the CAS Talks series.
