Links for fulltext (may require subscription):
- Publisher Website: https://doi.org/10.1007/s11263-024-02222-4
- Scopus: eid_2-s2.0-85205587774
Article: Dissecting Out-of-Distribution Detection and Open-Set Recognition: A Critical Analysis of Methods and Benchmarks
| Title | Dissecting Out-of-Distribution Detection and Open-Set Recognition: A Critical Analysis of Methods and Benchmarks |
|---|---|
| Authors | Wang, Hongjun; Vaze, Sagar; Han, Kai |
| Keywords | Open-set Recognition; Out-of-Distribution Detection |
| Issue Date | 4-Oct-2024 |
| Publisher | Springer |
| Citation | International Journal of Computer Vision, 2024, v. 133, p. 1326-1351 |
| Abstract | Detecting test-time distribution shift has emerged as a key capability for safely deployed machine learning models, with the question being tackled under various guises in recent years. In this paper, we aim to provide a consolidated view of the two largest sub-fields within the community: out-of-distribution (OOD) detection and open-set recognition (OSR). In particular, we aim to provide rigorous empirical analysis of different methods across settings and provide actionable takeaways for practitioners and researchers. Concretely, we make the following contributions: (i) We perform rigorous cross-evaluation between state-of-the-art methods in the OOD detection and OSR settings and identify a strong correlation between the performances of methods for them; (ii) We propose a new, large-scale benchmark setting which we suggest better disentangles the problem tackled by OOD detection and OSR, re-evaluating state-of-the-art OOD detection and OSR methods in this setting; (iii) We surprisingly find that the best performing method on standard benchmarks (Outlier Exposure) struggles when tested at scale, while scoring rules which are sensitive to the deep feature magnitude consistently show promise; and (iv) We conduct empirical analysis to explain these phenomena and highlight directions for future research. Code: https://github.com/Visual-AI/Dissect-OOD-OSR |
| Persistent Identifier | http://hdl.handle.net/10722/362563 |
| ISSN | 0920-5691 (print), 1573-1405 (electronic); 2023 Impact Factor: 11.6; 2023 SCImago Journal Rank: 6.668 |
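The abstract's key finding is that scoring rules sensitive to the deep feature magnitude, such as the Maximum Logit Score, consistently show promise at scale. The snippet below is a minimal illustrative sketch of that contrast, not the authors' implementation (their code is in the linked repository): the Maximum Softmax Probability normalizes away absolute logit magnitude and saturates near 1, while the Maximum Logit Score preserves it.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum Softmax Probability (MSP): bounded in (0, 1], so it
    saturates for confident predictions and discards the absolute
    magnitude of the logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def max_logit_score(logits: np.ndarray) -> np.ndarray:
    """Maximum Logit Score (MLS): keeps the raw logit magnitude, which
    tends to be smaller on out-of-distribution inputs."""
    return logits.max(axis=1)

# Toy illustration: scaling the logits (a proxy for a larger feature
# norm) pushes MSP toward saturation while MLS grows without bound.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
for scale in (1.0, 3.0):
    print(f"scale={scale}: MSP={msp_score(scale * logits).round(3)}, "
          f"MLS={max_logit_score(scale * logits).round(3)}")
```

Because two inputs whose logits differ only in overall scale can receive nearly identical MSP scores once confidence saturates, a magnitude-preserving rule like MLS retains a signal that MSP throws away; this is the property the abstract refers to.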
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wang, Hongjun | - |
| dc.contributor.author | Vaze, Sagar | - |
| dc.contributor.author | Han, Kai | - |
| dc.date.accessioned | 2025-09-26T00:36:09Z | - |
| dc.date.available | 2025-09-26T00:36:09Z | - |
| dc.date.issued | 2024-10-04 | - |
| dc.identifier.citation | International Journal of Computer Vision, 2024, v. 133, p. 1326-1351 | - |
| dc.identifier.issn | 0920-5691 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362563 | - |
| dc.description.abstract | Detecting test-time distribution shift has emerged as a key capability for safely deployed machine learning models, with the question being tackled under various guises in recent years. In this paper, we aim to provide a consolidated view of the two largest sub-fields within the community: out-of-distribution (OOD) detection and open-set recognition (OSR). In particular, we aim to provide rigorous empirical analysis of different methods across settings and provide actionable takeaways for practitioners and researchers. Concretely, we make the following contributions: (i) We perform rigorous cross-evaluation between state-of-the-art methods in the OOD detection and OSR settings and identify a strong correlation between the performances of methods for them; (ii) We propose a new, large-scale benchmark setting which we suggest better disentangles the problem tackled by OOD detection and OSR, re-evaluating state-of-the-art OOD detection and OSR methods in this setting; (iii) We surprisingly find that the best performing method on standard benchmarks (Outlier Exposure) struggles when tested at scale, while scoring rules which are sensitive to the deep feature magnitude consistently show promise; and (iv) We conduct empirical analysis to explain these phenomena and highlight directions for future research. Code: https://github.com/Visual-AI/Dissect-OOD-OSR | - |
| dc.language | eng | - |
| dc.publisher | Springer | - |
| dc.relation.ispartof | International Journal of Computer Vision | - |
| dc.subject | Open-set Recognition | - |
| dc.subject | Out-of-Distribution Detection | - |
| dc.title | Dissecting Out-of-Distribution Detection and Open-Set Recognition: A Critical Analysis of Methods and Benchmarks | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1007/s11263-024-02222-4 | - |
| dc.identifier.scopus | eid_2-s2.0-85205587774 | - |
| dc.identifier.volume | 133 | - |
| dc.identifier.spage | 1326 | - |
| dc.identifier.epage | 1351 | - |
| dc.identifier.eissn | 1573-1405 | - |
| dc.identifier.issnl | 0920-5691 | - |
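For context on how the cross-evaluation described in the abstract is typically scored, the sketch below computes AUROC, the standard threshold-free metric in both OOD detection and OSR, treating in-distribution samples as positives. The function name and the synthetic scores are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """AUROC for separating in-distribution (label 1) from OOD (label 0)
    samples, where higher scores should mean 'more in-distribution'."""
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)

# Hypothetical scores from any scoring rule (MSP, MLS, energy, ...):
rng = np.random.default_rng(0)
id_scores = rng.normal(loc=5.0, size=1000)   # in-distribution test set
ood_scores = rng.normal(loc=3.0, size=1000)  # out-of-distribution set
print(f"AUROC: {ood_auroc(id_scores, ood_scores):.3f}")
```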
