
An Exact MCMC Accelerator Under Custom Precision Regimes


If you have a question about this talk, please contact Grigorios Mingas.

Markov chain Monte Carlo (MCMC) is one of the most popular and important tools for generating random samples from the multivariate probability distributions that arise frequently in Bayesian inference. However, MCMC cannot be applied practically to models with large data sets because of the prohibitively costly likelihood evaluations required for each data point. Previous solutions propose computing the likelihood approximately, e.g. by sub-sampling the data or by using custom precision implementations. These methods trade bias in the output against sampling speed and therefore cannot guarantee unbiased sampling, which is critical in many applications. In this talk, we will introduce a novel mixed-precision MCMC accelerator for FPGAs which, in contrast to existing approximate MCMC samplers, simulates from the exact probability distribution. An auxiliary binary variable is appended to each data point to indicate whether the corresponding likelihood term is evaluated in full or reduced precision. The proposed method guarantees unbiased samples, while the large majority of likelihood computations are performed in reduced precision. Moreover, a tailored FPGA architecture for the algorithm is introduced, and its performance is evaluated on two Bayesian logistic regression case studies of varying complexity: a 2-dimensional synthetic problem and MNIST classification with 12-dimensional parameters. The achieved speedups over double-precision FPGA designs are 4.21x and 4.76x respectively.
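To make the computational setting concrete, the sketch below (Python/NumPy, with hypothetical data sizes, prior, and step size) shows a plain double-precision random-walk Metropolis sampler for a small synthetic Bayesian logistic regression problem, i.e. the per-data-point likelihood loop whose cost the accelerator targets. It is a minimal illustration only: it does not reproduce the talk's auxiliary-variable mixed-precision scheme or the FPGA architecture.

# Minimal sketch, assuming a standard random-walk Metropolis sampler for
# Bayesian logistic regression in double precision. The mixed-precision
# auxiliary-variable construction from the talk is NOT implemented here;
# all sizes and settings are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-dimensional logistic regression data (hypothetical sizes).
n, d = 1000, 2
X = rng.normal(size=(n, d))
theta_true = np.array([1.5, -2.0])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ theta_true))).astype(float)

def log_prior(theta):
    # Zero-mean, unit-variance Gaussian prior (assumed).
    return -0.5 * np.sum(theta ** 2)

def log_likelihood(theta):
    # Sum of per-data-point log-likelihood terms. In the talk's setting this
    # sum dominates the cost; the proposed accelerator evaluates most terms
    # in reduced precision, with auxiliary bits marking the terms that are
    # evaluated in full precision so that the chain remains exact.
    logits = X @ theta
    return np.sum(y * logits - np.log1p(np.exp(logits)))

def log_posterior(theta):
    return log_prior(theta) + log_likelihood(theta)

def random_walk_metropolis(n_iters=5000, step=0.05):
    theta = np.zeros(d)
    logp = log_posterior(theta)
    samples = np.empty((n_iters, d))
    for i in range(n_iters):
        proposal = theta + step * rng.normal(size=d)
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        samples[i] = theta
    return samples

if __name__ == "__main__":
    samples = random_walk_metropolis()
    print("posterior mean:", samples[2500:].mean(axis=0))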

This talk is part of the CAS Talks series.
