
What's inside Xilinx's 7nm Versal Adaptive Compute Acceleration Platforms


If you have a question about this talk, please contact George A Constantinides.

Xilinx has recently unveiled its 7nm Versal product family. This family introduces innovations including a hardened Network-on-Chip and an array of vector VLIW processors (the AI Engine) to accelerate dense compute workloads.

Together with other features introduced at the 28nm and 16nm process nodes (ARM processors, multi-GS/s DACs/ADCs, HBM memory), these chips open new markets in 5G wireless and AI inference. Most of these features will be accessible to software programmers without hardware experience. This talk will introduce the new device architecture, and the AI Engine in particular, and begin a discussion about what novel capabilities they unlock for academic research.

Dr Samuel Bayliss is a Principal Engineer in Xilinx Research Labs in San Jose, CA. He holds a PhD in Electronic Engineering from Imperial College London and has continuing research interests in formal compiler techniques for loop optimization and in the design of specialized processors and compilers targeting AI inference applications.

This talk is part of the CAS Talks series.
