Abstract
Discriminative learning effectively predicts the true object class in image classification. However, it often produces false positives on outliers, a critical concern in applications such as autonomous driving and video surveillance. Previous attempts to address this challenge trained image classifiers through contrastive learning on real outlier data, or synthesized outliers for self-supervised learning.
Furthermore, unsupervised generative modeling of inliers in
pixel space has shown limited success for outlier detection. In
this work, we introduce a quantile-based maximum likelihood
objective for learning the inlier distribution to improve outlier separation during inference. Our approach fits a normalizing flow to pre-trained discriminative features and detects outliers according to the evaluated log-likelihood.
The experimental evaluation demonstrates the effectiveness of our method: it surpasses state-of-the-art unsupervised outlier detection methods and is competitive with a recent self-supervised approach. Our work reduces the dependency on well-sampled negative training data, which is especially important in domains such as medical diagnostics and remote sensing.
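To make the inference recipe concrete, the following is a minimal sketch (not the authors' released code) of the general idea: a small RealNVP-style coupling flow is fit by standard maximum likelihood to frozen, pre-trained feature vectors, and test samples are flagged as outliers when their log-likelihood falls below a low quantile of the inlier scores. The quantile-based training objective itself is not detailed in the abstract and is not reproduced here; the flow architecture, the 5% threshold, and the random placeholder features are all illustrative assumptions.

import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # RealNVP-style coupling layer: transforms one half of the dimensions
    # conditioned on the other half; log|det J| is the sum of the scales.
    def __init__(self, dim, hidden=256, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(dim - half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # bounded scales for stability
        y2 = x2 * torch.exp(s) + t
        y = torch.cat([y2, x1] if self.flip else [x1, y2], dim=-1)
        return y, s.sum(dim=-1)

class FeatureFlow(nn.Module):
    def __init__(self, dim, n_layers=8):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=i % 2 == 1) for i in range(n_layers))

    def log_prob(self, x):
        log_det = torch.zeros(x.shape[0], device=x.device)
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        # standard-normal base density plus accumulated log-determinants
        base = -0.5 * (x ** 2).sum(-1) - 0.5 * self.dim * math.log(2 * math.pi)
        return base + log_det

def fit(flow, feats, epochs=20, lr=1e-3, batch_size=256):
    # plain maximum-likelihood training, i.e. minimizing the NLL of inliers
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(feats, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for batch in loader:
            loss = -flow.log_prob(batch).mean()
            opt.zero_grad(); loss.backward(); opt.step()

# In practice, `inlier_feats` would be penultimate-layer activations of a
# frozen pre-trained classifier; random tensors stand in for them here.
D = 64
flow = FeatureFlow(D)
inlier_feats = torch.randn(10_000, D)           # placeholder inlier features
fit(flow, inlier_feats, epochs=5)

with torch.no_grad():
    tau = torch.quantile(flow.log_prob(inlier_feats), 0.05)  # inlier quantile
    test_feats = 3 * torch.randn(8, D)          # placeholder test features
    is_outlier = flow.log_prob(test_feats) < tau

Alternating the coupling halves across layers (the flip flag) lets every dimension eventually be transformed while keeping the log-determinant tractable, which is what makes exact likelihood evaluation on the features cheap at test time.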