Imperial College London > Talks@ee.imperial > Yiannis Demiris's list
Multimodal Learning for Robot Behavior Generation utilizing Neuro-dynamical Model
If you have a question about this talk, please contact Yiannis Demiris.

In this talk, I will present two topics of our research on neuro-dynamical models that enable a humanoid robot to recognise its environment and interact with human beings. The first is a multimodal integration model for a humanoid robot based on a deep neural network. The mechanism enables the humanoid to handle different objects without any dedicated sensory feature-extraction mechanism. By retrieving temporal sequences across the learnt modalities, the robot can generate object-manipulation behaviors from the corresponding image sequences, and vice versa.

The second is a recurrent neural model for linguistic communication with the robot, using the sequence-to-sequence method. After the network receives a verbal input, its internal state evolves through the first half of the attractor, whose branch structures correspond to the semantics of the input. The internal state then shifts to the second half of the attractor, generating the appropriate behavior. The model achieves immediate and repeatable responses to linguistic directions.

This talk is part of the Yiannis Demiris's list series.
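The two-phase dynamics described above can be sketched in miniature: a recurrent network first absorbs a verbal sequence into its internal state, and that state then evolves autonomously to emit a motor sequence. The sketch below is purely illustrative and assumes toy dimensions and random, untrained weights; the actual architecture, training, and dimensionalities from the talk are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not from the talk.
VOCAB = 8      # verbal-symbol vocabulary size
HID = 16       # recurrent (internal) state dimension
MOTOR = 4      # motor-command dimension

# Random weights stand in for a trained network.
W_in = rng.normal(0.0, 0.3, (HID, VOCAB))   # verbal input -> hidden
W_rec = rng.normal(0.0, 0.3, (HID, HID))    # hidden -> hidden recurrence
W_out = rng.normal(0.0, 0.3, (MOTOR, HID))  # hidden -> motor output


def encode(verbal_ids):
    """First phase: the verbal sequence drives the internal state."""
    h = np.zeros(HID)
    for t in verbal_ids:
        x = np.eye(VOCAB)[t]                # one-hot verbal symbol
        h = np.tanh(W_in @ x + W_rec @ h)
    return h


def generate(h, steps):
    """Second phase: the state evolves autonomously, emitting motor commands."""
    motors = []
    for _ in range(steps):
        h = np.tanh(W_rec @ h)              # autonomous internal dynamics
        motors.append(W_out @ h)
    return np.stack(motors)


h = encode([1, 4, 2])               # a hypothetical three-symbol direction
behavior = generate(h, steps=10)
print(behavior.shape)               # (10, 4): a 10-step motor sequence
```

With trained weights, different verbal inputs would steer the state into different attractor branches, so the same generation loop yields direction-appropriate behaviors.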