Human-In-the-Loop Graphics and Video
If you have a question about this talk, please contact Patrick Kelly.

In the long term, we aim to build visual computing algorithms and systems that are responsive: they should help users accomplish their tasks, and they should improve with continued use. Measuring progress toward this goal, and reaching it, is harder than it may seem. In this talk, I will present three of our recent systems that successfully wrap modest user interfaces around purpose-built computer vision/graphics systems. I will show how statistical models of shape and appearance are adjusted through feedback from users. The user input, in turn, enables applications where we i) synthesize text in other people's handwriting, ii) rotoscope moving objects in special-effects footage, and iii) identify rare actions in videos. The third system, VideoTagger, is the most flexible: it is designed to give non-programming scientists an experimental platform for studying, for example, three-month-long fruit-fly videos. Interested users are encouraged to try these systems for themselves, and fellow researchers are encouraged to view ease of adaptation as one criterion for the algorithms we design.

This is joint work with friends and colleagues at UCL, Bath, The Foundry, and DeepMind.

Dr Gabriel Brostow
http://web4.cs.ucl.ac.uk/staff/g.brostow/

This talk is part of the Featured talks series.
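The abstract does not describe the internals of the three systems, but the shared paradigm it names — a model that improves as users correct it — can be sketched. Below is a minimal, illustrative human-in-the-loop loop (all names, the nearest-centroid "model", and the margin-based uncertainty measure are my own assumptions, not anything from the talk): the system repeatedly asks the user about the item it is least confident on, folds the answer back into the model, and auto-labels the rest.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

class NearestCentroidLabeler:
    """Toy stand-in for a learned model: predicts the label of the
    nearest class centroid, with a confidence proxy."""
    def __init__(self):
        self.examples = {}  # label -> list of points

    def add(self, point, label):
        self.examples.setdefault(label, []).append(point)

    def predict(self, point):
        # Return (best_label, margin). The margin is the gap between the
        # two closest centroids: a small gap means the model is uncertain.
        dists = sorted((dist2(point, centroid(pts)), lab)
                       for lab, pts in self.examples.items())
        margin = (dists[1][0] - dists[0][0]) if len(dists) > 1 else float("inf")
        return dists[0][1], margin

def human_in_the_loop(model, unlabeled, oracle, budget):
    """Query the 'user' (oracle) on the most uncertain items, fold each
    answer back into the model, then auto-label whatever remains."""
    pool = list(unlabeled)
    for _ in range(budget):
        if not pool:
            break
        # Pick the item the model is least sure about.
        pool.sort(key=lambda p: model.predict(p)[1])
        query = pool.pop(0)
        model.add(query, oracle(query))  # user feedback updates the model
    return {p: model.predict(p)[0] for p in pool}
```

The design choice worth noting is that the interface only ever surfaces the ambiguous cases, so the user's effort is spent where it changes the model most — the spirit of "improving with continued use" that the abstract describes.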