
Human-In-the-Loop Graphics and Video


If you have a question about this talk, please contact Patrick Kelly.

In the long term, we aim to have visual computing algorithms and systems that are responsive: they should help a user accomplish their tasks, and they should improve with continued use. Measuring progress toward this goal, and ultimately reaching it, is harder than it may seem. In this talk, I will present three of our recent systems that successfully wrap modest user interfaces around purpose-built computer vision/graphics systems.

I will show how statistical models of shape and appearance are adjusted through feedback from users. The user input, in turn, enables applications where we i) synthesize text in other people's handwriting, ii) rotoscope moving objects in special-effects footage, and iii) identify rare actions in videos. The third system, VideoTagger, is the most flexible of the three, designed to give non-programming scientists an experimental platform for studying, for example, fruit-fly videos spanning three months. Interested users are encouraged to try these systems for themselves, and fellow researchers are encouraged to view ease of adaptation as one criterion for the algorithms that we design.
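The abstract leaves the learning machinery unspecified, but the feedback pattern it describes resembles classic uncertainty-based active learning. The sketch below is a hypothetical illustration of that loop, not code from any of the three systems, and names such as query_user are invented: a model proposes labels, a simulated user answers the queries the model is least confident about, and the model is refit so that it improves with continued use.

    # Hypothetical human-in-the-loop sketch (illustration only, not the
    # speaker's code): uncertainty-based active learning with a simulated user.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                       # e.g. per-frame features
    y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hidden ground truth

    def query_user(i: int) -> int:
        """Stand-in for a real UI: the simulated 'user' answers correctly."""
        return int(y_true[i])

    # Seed with a few labels from each class so the first fit is well posed.
    seed = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
    answers = {int(i): query_user(int(i)) for i in seed}

    model = LogisticRegression()
    for _ in range(5):                                  # a few feedback rounds
        idx = sorted(answers)
        model.fit(X[idx], [answers[i] for i in idx])
        probs = model.predict_proba(X)[:, 1]
        pool = [i for i in range(len(X)) if i not in answers]
        # Ask the user about the 10 frames the model is least confident on.
        for i in sorted(pool, key=lambda j: abs(probs[j] - 0.5))[:10]:
            answers[i] = query_user(i)

    print(f"accuracy after feedback rounds: {model.score(X, y_true):.2f}")

In a deployed system such as the ones described in the talk, query_user would presumably present the frame or stroke in question through the interface and record the user's correction, rather than consulting ground truth.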

This is joint work with friends and colleagues at UCL, Bath, The Foundry, and DeepMind.

Dr Gabriel Brostow http://web4.cs.ucl.ac.uk/staff/g.brostow/

This talk is part of the Featured talks series.

