
Out-of-Distribution Detection for Selective Classification: It's OK to Confuse ID with OOD if the Prediction Will Be Wrong Anyway


If you have a question about this talk, please contact George A Constantinides.

Detecting out-of-distribution (OOD) data is a well-explored task in deep learning. However, OOD detection methods are generally evaluated on the detection task in isolation, rather than in tandem with a potential downstream task. In this work we examine selective classification in the presence of OOD data. That is, the motivation for discarding potentially OOD samples is to reduce their impact on the proportion of correct predictions that are eventually served: we don't want invalid predictions on OOD data reducing model accuracy. We show that, under this task specification, post-hoc methods perform quite differently compared to when they are evaluated on OOD detection alone. This is because conflating misclassifications with OOD data is no longer a negative, whilst conflating correct in-distribution (ID) predictions with ID misclassifications becomes undesirable. Furthermore, the relative performance of methods varies significantly depending on the ratio of OOD to ID data, the operating threshold for selecting data, and the type of OOD data. We also propose a simple method that outperforms other post-hoc approaches in the new problem setting.
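To make the problem setting concrete, the sketch below illustrates selective classification over mixed ID and OOD data. It is a minimal illustration under stated assumptions, not the method from the talk: it assumes the maximum softmax probability (MSP) as the post-hoc confidence score and uses synthetic logits in place of a real network, then reports the accuracy of the served predictions and the coverage at a fixed threshold for several OOD-to-ID ratios.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def selective_accuracy(logits, labels, is_ood, threshold):
    """Accuracy over the predictions that are actually served.

    A sample is served only if its confidence exceeds `threshold`.
    MSP is used as the confidence score here purely as an assumption;
    any post-hoc score could be substituted.
    """
    probs = softmax(logits)
    confidence = probs.max(axis=1)              # MSP score (assumed)
    served = confidence > threshold
    if not served.any():
        return float("nan"), 0.0
    # An OOD sample can never be predicted correctly, so serving it always
    # hurts accuracy; rejecting a misclassified ID sample is just as
    # beneficial as rejecting an OOD sample.
    correct = (probs.argmax(axis=1) == labels) & ~is_ood
    return correct[served].mean(), served.mean()

# Toy demo with synthetic data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n_id, n_classes = 1000, 10
id_labels = rng.integers(0, n_classes, size=n_id)
id_logits = rng.normal(size=(n_id, n_classes))
id_logits[np.arange(n_id), id_labels] += 2.0    # imperfect toy "classifier"

for ratio in (0.5, 1.0, 2.0):                   # the OOD:ID ratio shifts results
    n_ood = int(ratio * n_id)
    ood_logits = rng.normal(size=(n_ood, n_classes))   # stand-in OOD outputs
    logits = np.vstack([id_logits, ood_logits])
    labels = np.concatenate([id_labels, np.full(n_ood, -1)])   # OOD never correct
    is_ood = np.concatenate([np.zeros(n_id, bool), np.ones(n_ood, bool)])
    acc, cov = selective_accuracy(logits, labels, is_ood, threshold=0.5)
    print(f"OOD:ID = {ratio}: served accuracy = {acc:.3f}, coverage = {cov:.3f}")

Sweeping the threshold and the OOD:ID ratio in this sketch shows why the evaluation matters: a score that separates ID from OOD well can still serve many ID misclassifications, and vice versa.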

This talk is part of the CAS Talks series.
