A survey on unsupervised outlier detection in high‐dimensional numerical data
Abstract
High-dimensional data in Euclidean space pose special challenges to data mining algorithms. These challenges are often indiscriminately subsumed under the term 'curse of dimensionality'; more concrete aspects are the so-called 'distance concentration effect', the presence of irrelevant attributes concealing relevant information, or simply efficiency issues. In just the last few years, the task of unsupervised outlier detection has found new, specialized solutions for tackling high-dimensional data in Euclidean space. These approaches fall into two main categories: those that consider subspaces (subsets of attributes) for the definition of outliers, and those that do not. The former specifically address the presence of irrelevant attributes; the latter account for irrelevant attributes implicitly at best and are more concerned with general issues of efficiency and effectiveness. Nevertheless, both types of specialized outlier detection algorithms tackle challenges specific to high-dimensional data. In this survey article, we discuss some important aspects of the 'curse of dimensionality' in detail and survey specialized algorithms for outlier detection from both categories. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012
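The 'distance concentration effect' mentioned in the abstract can be illustrated with a small empirical sketch (not taken from the paper; the function name and setup are illustrative assumptions): as the dimensionality grows, the relative contrast between a query point's nearest and farthest neighbors shrinks, which undermines distance-based outlier scores.

```python
# Hedged sketch of the distance concentration effect: for points drawn
# uniformly from the unit hypercube, the relative contrast
# (Dmax - Dmin) / Dmin of Euclidean distances to a fixed query point
# (here: the origin) tends toward 0 as the dimensionality d grows.
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(d, n=1000):
    """Relative contrast of distances from the origin to n points
    drawn uniformly from [0, 1]^d."""
    points = rng.random((n, d))
    dists = np.linalg.norm(points, axis=1)
    return (dists.max() - dists.min()) / dists.min()

for d in (2, 10, 100, 1000):
    # Contrast is large in low dimensions and collapses in high ones.
    print(f"d={d:5d}  relative contrast={relative_contrast(d):.3f}")
```

Running this shows the contrast dropping by orders of magnitude between d=2 and d=1000, which is the concrete phenomenon behind the concentration aspect of the 'curse of dimensionality'.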
Related Papers
- → Efficient Outlier Detection for High-Dimensional Data(2017)90 cited
- → Finding High-Order Correlations in High-Dimensional Biological Data(2010)6 cited
- → Differentially Private Low-dimensional Synthetic Data from High-dimensional Datasets(2023)1 cited
- → Finding Well-Clusterable Subspaces for High Dimensional Data(2014)
- → Innovation Pursuit: A New Approach to Subspace Clustering (with and without Spectral Clustering)(2015)