Benchmarks

We benchmark openTSNE (v0.4.3) against three popular open-source t-SNE implementations: scikit-learn (v0.23.1), MulticoreTSNE (v0.1), and FIt-SNE (v1.1.0). The benchmarks were run on a consumer-grade Intel Core i7-7700HQ laptop processor and on a server-grade Intel Xeon E5-2650. To generate benchmark data sets of different sizes, we subsampled the 10X Genomics 1.3 million mouse brain data set five times at each size, resulting in five different data sets per size. In total, each implementation was run on 30 different data sets.

For each sample size, five separate data sets were generated by sampling from the 10X Genomics 1.3 Million Brain Cells data set. Following the standard single-cell pipeline, we reduced the data to 50 principal components. Each t-SNE implementation was then run on every data set with perplexity=30 and learning_rate=200, using 250 early exaggeration iterations with exaggeration factor 12 followed by 750 standard iterations with factor 1. All other parameters were left at their default values. This was repeated for six sample sizes (1,000, 100,000, 250,000, 500,000, 750,000, and 1,000,000 points), resulting in 30 runs per implementation.
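For reference, a single benchmark run with openTSNE might look like the following sketch. The stand-in data matrix and the random seed are illustrative assumptions; the parameter names (perplexity, learning_rate, early_exaggeration, early_exaggeration_iter, n_iter, n_jobs) follow openTSNE's TSNE API.

import numpy as np
from openTSNE import TSNE

# Stand-in for the PCA-reduced 10X data (n_samples x 50 components);
# in the actual benchmark this comes from the real data set.
data = np.random.default_rng(42).standard_normal((100_000, 50))

tsne = TSNE(
    perplexity=30,
    learning_rate=200,
    early_exaggeration=12,        # exaggeration factor 12
    early_exaggeration_iter=250,  # 250 early exaggeration iterations
    n_iter=750,                   # 750 standard iterations (factor 1)
    n_jobs=8,                     # example value; match your core count
)
embedding = tsne.fit(data)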

[Figure: benchmark results (benchmarks.png)]

Caveats when running benchmarks

When using Intel’s Math Kernel Library (MKL), care must be taken to properly limit the number of threads. This is done by setting the environment variable OMP_NUM_THREADS=X, where X is the desired number of threads. This matters when using a numpy distribution linked against the MKL; both openTSNE and scikit-learn make heavy use of numpy. By default, the MKL uses all available cores, ignoring the user-defined n_jobs parameter.
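When setting the variable from within Python, it should be set before numpy is first imported, since the MKL reads it when its thread pool is initialized. A minimal sketch, with the thread count chosen for illustration:

import os

# Must happen before numpy (or anything that imports numpy) is loaded,
# otherwise the MKL has already initialized its thread pool.
os.environ["OMP_NUM_THREADS"] = "8"  # example value; use your core count

import numpy as np
from openTSNE import TSNE  # safe to import now; MKL respects the limit

Equivalently, the variable can be set in the shell when launching the script, e.g. OMP_NUM_THREADS=8 python run_benchmark.py (script name hypothetical).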