Several other clustering methods have also been considered but have
not been used in our experimental comparison. Agglomerative methods
(single link, average link, complete link) [DH73] are
computationally expensive (at least $O(n^2)$ in the number of
samples) and often produce highly skewed trees, indicating
domination by one very large cluster.
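Both observations can be illustrated with a small sketch (not from the
paper; data and threshold are invented for illustration). A naive
single-link implementation already needs the full $O(n^2)$ distance
matrix, and on data with one dense group it tends to "chain", merging
points one at a time into a single dominant cluster, which yields a
skewed tree:

```python
import numpy as np

# Naive single-link agglomerative clustering (illustrative sketch).
# Computing the pairwise-distance matrix alone is O(n^2), the
# quadratic lower bound mentioned above.
def single_link(X):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    clusters = [{i} for i in range(n)]
    skews = []  # per merge: size of larger child / merged size
    while len(clusters) > 1:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single link: distance between closest members
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        sa, sb = len(clusters[a]), len(clusters[b])
        skews.append(max(sa, sb) / (sa + sb))
        clusters[a] |= clusters[b]
        del clusters[b]
    return skews

rng = np.random.default_rng(0)
# One dense blob plus scattered outliers: single link "chains",
# absorbing points one at a time into a dominant cluster.
X = np.vstack([rng.normal(0, 0.05, (30, 2)), rng.uniform(-3, 3, (8, 2))])
skew = np.mean(single_link(X))
# Values near 1.0 indicate a skewed (chained) dendrogram; a
# perfectly balanced binary tree would give exactly 0.5.
print(f"mean larger-child share per merge: {skew:.2f}")
```

A balanced dendrogram would score 0.5 on this statistic; chained trees
score close to 1.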
Mixture models [Fuk72] such as AUTOCLASS [CS96] are also
popular but suffer from parameter-estimation problems on
high-dimensional data. Clustering algorithms from the data mining
community (e.g., CLARANS, DBSCAN, BIRCH, CLIQUE, CURE, WAVECLUSTER
[RS99,HKT01]) have been omitted since they are mostly scalable
versions designed for low-dimensional data.
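The parameter-estimation problem noted above for mixture models can be
made concrete with a short count (the dimensionalities below are
illustrative, not taken from the paper). A $k$-component Gaussian
mixture with full covariances in $d$ dimensions has $(k-1)$ mixing
weights plus, per component, $d$ mean entries and $d(d+1)/2$
covariance entries, so the parameter count grows quadratically in $d$:

```python
# Parameter count of a k-component full-covariance Gaussian mixture
# in d dimensions: (k-1) mixing weights, plus d mean entries and
# d*(d+1)/2 covariance entries per component.
def gmm_param_count(k: int, d: int) -> int:
    return (k - 1) + k * (d + d * (d + 1) // 2)

# With even modest text-like dimensionality (illustrative numbers),
# the parameters quickly dwarf any realistic sample size:
print(gmm_param_count(10, 1000))  # over 5 million parameters
```

With far fewer samples than parameters, maximum-likelihood estimates
of the covariances are ill-conditioned, which is why such models are
hard to apply directly in high dimensions.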
Partitioning approaches based on principal directions are not shown
here since they perform comparably to hierarchical agglomerative
clustering [BGG$^+$99].
Other graph partitioning approaches, such as spectral bisection
[HL95], are not included since they are already represented by
the multi-level partitioner METIS.
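For reference, spectral bisection splits a graph by the sign pattern
of the Fiedler vector, the eigenvector of the graph Laplacian with
the second-smallest eigenvalue. A minimal sketch on a toy graph (two
cliques joined by a bridge edge; this illustrates the technique, not
the multi-level heuristics METIS layers on top of it):

```python
import numpy as np

# Toy graph: two 4-cliques (nodes 0-3 and 4-7) joined by one edge.
A = np.zeros((8, 8))
for grp in (range(4), range(4, 8)):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # bridge edge

# Unnormalized graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A
_, v = np.linalg.eigh(L)      # eigenvalues in ascending order
fiedler = v[:, 1]             # second-smallest eigenvector
part = fiedler > 0            # sign of each entry gives the bisection
print(part)
```

On this graph the sign split recovers the two cliques, cutting only
the bridge edge.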