Parallel Implementation
Another notion of scalability is with respect to the number of processors
(speedup, iso-efficiency, etc.). Our analysis [SG00b] shows
nearly linear speedup for our method, since both the similarity
computation and the graph partitioning can be parallelized with little
overhead. The all-pair similarity computation is straightforward to
parallelize on SIMD or distributed-memory machines: it can be performed
in a systolic or block-systolic manner with essentially no overhead,
and frameworks such as MPI provide native primitives for the required
communication patterns.
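To make the block-systolic pattern concrete, the sketch below shows one
way to compute all-pair similarities with MPI in C. It is a minimal
illustration, not the implementation evaluated in [SG00b]: the equal
block size, the random data, and the use of cosine similarity are all
assumptions made for brevity. Each process owns one block of rows;
blocks circulate around a ring via MPI_Sendrecv_replace until every
process has filled in its strip of the similarity matrix.

/* Block-systolic all-pair similarity sketch (illustrative assumptions:
 * equal-sized row blocks, random data, cosine similarity).
 * Compile with: mpicc -O2 systolic_sim.c -o systolic_sim */
#include <mpi.h>
#include <stdlib.h>
#include <math.h>

/* Cosine similarity between two d-dimensional vectors. */
static double cosine(const double *x, const double *y, int d) {
    double xy = 0.0, xx = 0.0, yy = 0.0;
    for (int k = 0; k < d; ++k) {
        xy += x[k] * y[k];
        xx += x[k] * x[k];
        yy += y[k] * y[k];
    }
    return xy / (sqrt(xx) * sqrt(yy) + 1e-12);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    const int d = 16, rows = 64;       /* rows per block (assumed equal) */
    double *mine = malloc(rows * d * sizeof *mine);  /* my row block      */
    double *buf  = malloc(rows * d * sizeof *buf);   /* circulating block */
    double *sim  = malloc(rows * (rows * p) * sizeof *sim); /* my strip   */

    srand(rank + 1);                   /* fake data for the sketch */
    for (int i = 0; i < rows * d; ++i) mine[i] = rand() / (double)RAND_MAX;

    /* Start with our own block in the circulating buffer. */
    for (int i = 0; i < rows * d; ++i) buf[i] = mine[i];

    int left = (rank + p - 1) % p, right = (rank + 1) % p;
    int owner = rank;                  /* whose block buf currently holds */
    for (int step = 0; step < p; ++step) {
        /* Similarities of my rows against the visiting block. */
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < rows; ++j)
                sim[i * (rows * p) + owner * rows + j] =
                    cosine(&mine[i * d], &buf[j * d], d);

        /* Shift the visiting block one hop around the ring. */
        MPI_Sendrecv_replace(buf, rows * d, MPI_DOUBLE,
                             right, 0, left, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        owner = (owner + p - 1) % p;   /* received block came from the left */
    }

    free(mine); free(buf); free(sim);
    MPI_Finalize();
    return 0;
}

Each process exchanges exactly one block per step and is never idle
waiting for more than one message, which is why this pattern incurs
essentially no overhead beyond the similarity computation itself.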
Parallelization of METIS is also very efficient: [SKK99] reports
partitioning graphs with over 7 million vertices into 128 clusters in
7 seconds on a 128-processor Cray T3E. For further details, see [SG00b].