Torch Dimensionality Reduction
TorchDR is an open-source dimensionality reduction (DR) library built on PyTorch. Its goal is to accelerate the development of new DR methods by providing a common, simplified framework.
DR aims to construct a low-dimensional representation (or embedding) of an input dataset that best preserves its geometry, encoded via a pairwise affinity matrix. To this end, DR methods optimize the embedding such that its associated pairwise affinity matrix matches the input affinity. TorchDR provides a general framework for solving problems of this form. Defining a DR algorithm solely requires choosing or implementing an Affinity object for both the input and the embedding, as well as an objective function.
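To make this template concrete, here is a minimal plain-PyTorch sketch of such an affinity-matching problem (illustrative only, not TorchDR's internals): a Gaussian affinity on the input, a Student affinity on the embedding, and a cross-entropy objective minimized by gradient descent on the embedding coordinates.

import torch

def gaussian_affinity(X, sigma=1.0):
    # Pairwise squared distances, Gaussian kernel, global normalization.
    D = torch.cdist(X, X) ** 2
    P = torch.exp(-D / (2 * sigma**2))
    P = P * (1 - torch.eye(X.shape[0]))  # remove self-affinities
    return P / P.sum()

def student_affinity(Z):
    # Heavy-tailed Student kernel in the embedding space (as in t-SNE).
    D = torch.cdist(Z, Z) ** 2
    Q = 1.0 / (1.0 + D)
    Q = Q * (1 - torch.eye(Z.shape[0]))  # remove self-affinities
    return Q / Q.sum()

X = torch.randn(500, 20)                     # toy input data
P = gaussian_affinity(X)                     # fixed input affinity
Z = torch.randn(500, 2, requires_grad=True)  # embedding to optimize

optimizer = torch.optim.Adam([Z], lr=1e-1)
for _ in range(500):
    Q = student_affinity(Z)
    loss = -(P * torch.log(Q + 1e-12)).sum()  # cross-entropy between affinities
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()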
Benefits of TorchDR include:
Modularity: All of it is written in Python in a highly modular way, making it easy to create or transform components.
Speed: Supports GPU acceleration, and leverages sparsity and batching strategies with contrastive learning techniques.
Memory efficiency: Relies on sparsity and/or symbolic (lazy) tensors to avoid memory overflows.
Compatibility: Implemented methods are fully compatible with the PyTorch ecosystem and the scikit-learn API.
Getting Started
TorchDR offers a user-friendly API similar to scikit-learn, where dimensionality reduction modules can be called with the fit_transform method. It seamlessly accepts both NumPy arrays and PyTorch tensors as input, ensuring that the output matches the type and backend of the input.
from sklearn.datasets import fetch_openml
from torchdr import PCA, TSNE

# Load MNIST as a (70000, 784) float32 array.
x = fetch_openml("mnist_784").data.astype("float32")

# Reduce to 50 dimensions with PCA, then embed in 2D with t-SNE.
x_ = PCA(n_components=50).fit_transform(x)
z = TSNE(perplexity=30).fit_transform(x_)
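As a small illustration of the type-preservation behavior mentioned above (reusing x from the example), passing a PyTorch tensor returns a PyTorch tensor:

import torch

x_torch = torch.as_tensor(x.to_numpy())  # same MNIST data as a torch.Tensor
z_torch = TSNE(perplexity=30).fit_transform(
    PCA(n_components=50).fit_transform(x_torch)
)  # z_torch is a torch.Tensor, matching the input type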
TorchDR enables GPU acceleration without memory limitations thanks to the KeOps library. This can be easily enabled as follows:
z_gpu = TSNE(perplexity=30, device="cuda", keops=True).fit_transform(x_)
MNIST example. Here is a comparison of various neighbor embedding methods on the MNIST digits dataset.
The code to generate this figure is available here.
Single cell example. Here is an example of single cell embeddings using TorchDR, where the embeddings are colored by cell type and the number of cells is indicated in each title.
The code for this figure is here.
Implemented Features (to date)
Affinities
TorchDR features a wide range of affinities, which can then be used as building blocks for DR algorithms. It includes:
Usual affinities such as the scalar product, Gaussian, and Student kernels.
Affinities based on k-NN normalizations, such as self-tuning affinities [Z04] and MAGIC [V18] (see the sketch after this list).
Doubly stochastic affinities with entropic [S67] [C13] [F19] [L21] and quadratic [Z23] projections.
Adaptive affinities with entropy control [H02] [V13] and their symmetric version [V23].
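As an illustration of the k-NN normalization idea, here is a minimal plain-PyTorch sketch of a self-tuning affinity in the spirit of [Z04] (illustrative only, not TorchDR's implementation), where each point's bandwidth is set to the distance to its k-th nearest neighbor.

import torch

def self_tuning_affinity(X, k=7):
    # Pairwise Euclidean distances.
    D = torch.cdist(X, X)
    # sigma_i = distance from point i to its k-th nearest neighbor
    # (k + 1 because each point is its own nearest neighbor at distance 0).
    sigma = D.kthvalue(k + 1, dim=1).values
    # K_ij = exp(-d_ij^2 / (sigma_i * sigma_j)), as in Zelnik-Manor & Perona.
    K = torch.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :]))
    K.fill_diagonal_(0.0)
    return K

K = self_tuning_affinity(torch.randn(1000, 10), k=7)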
Dimensionality Reduction Algorithms
Spectral. TorchDR provides spectral embeddings [H04] calculated via eigenvalue decomposition of the affinities or their Laplacian.
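As an illustration of the kernel view [H04], a spectral embedding can be sketched in plain PyTorch by taking the leading eigenvectors of a centered affinity matrix (a schematic version only, not TorchDR's implementation):

import torch

def spectral_embedding(K, n_components=2):
    # Double-center the (symmetric) affinity matrix, as in kernel PCA.
    n = K.shape[0]
    H = torch.eye(n) - torch.ones(n, n) / n
    K_c = H @ K @ H
    # Eigendecomposition; torch.linalg.eigh returns eigenvalues in ascending order.
    evals, evecs = torch.linalg.eigh(K_c)
    # Keep the leading components, scaled by the square root of their eigenvalues.
    scale = evals[-n_components:].clamp(min=0).sqrt()
    return evecs[:, -n_components:] * scale

Z = spectral_embedding(K)  # K: a symmetric affinity matrix, e.g. from the sketch above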
Neighbor Embedding. TorchDR includes various neighbor embedding methods such as SNE [H02], t-SNE [V08], t-SNEkhorn [V23], UMAP [M18] [D21], LargeVis [T16] and InfoTSNE [D23].
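All of these estimators follow the same fit_transform interface shown in Getting Started; for instance, something along the lines of the snippet below (the exact constructor parameters, such as n_neighbors, are assumptions to be checked against the API reference):

from torchdr import UMAP, LargeVis

z_umap = UMAP(n_neighbors=30).fit_transform(x_)
z_largevis = LargeVis(n_neighbors=30).fit_transform(x_)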
Evaluation Metric
TorchDR provides an efficient, GPU-compatible evaluation metric: the silhouette score [R87].
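For example, the MNIST embedding from Getting Started could be scored against the digit labels along these lines (the import path and signature below are assumptions; see the API reference for the exact interface):

# Hypothetical usage; the exact import path and signature are assumptions.
from torchdr.eval import silhouette_score

labels = fetch_openml("mnist_784").target
score = silhouette_score(z, labels)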
Installation
You can install the toolbox from PyPI with:
pip install torchdr
To get the latest version, you can install it from the source code as follows:
pip install git+https://github.com/torchdr/torchdr
Finding Help
If you have any questions or suggestions, feel free to open an issue on the issue tracker or contact Hugues Van Assel directly.
Citation
If you use TorchDR in your research, please cite the following reference:
Van Assel H., Courty N., Flamary R., Garivier A., Massias M., Vayer T., Vincent-Cuaz C. (2024). TorchDR. URL: https://torchdr.github.io/
or in BibTeX format:
@misc{vanassel2024torchdr,
author = {Van Assel, Hugues and Courty, Nicolas and Flamary, Rémi and Garivier, Aurélien and Massias, Mathurin and Vayer, Titouan and Vincent-Cuaz, Cédric},
title = {TorchDR},
url = {https://torchdr.github.io/},
year = {2024}
}
References
[H02] Geoffrey Hinton, Sam Roweis (2002). Stochastic Neighbor Embedding. Advances in Neural Information Processing Systems 15 (NeurIPS).
[V08] Laurens van der Maaten, Geoffrey Hinton (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(11) (JMLR).
[V23] Hugues Van Assel, Titouan Vayer, Rémi Flamary, Nicolas Courty (2023). SNEkhorn: Dimension Reduction with Symmetric Entropic Affinities. Advances in Neural Information Processing Systems 36 (NeurIPS).
[V13] Max Vladymyrov, Miguel A. Carreira-Perpinan (2013). Entropic Affinities: Properties and Efficient Numerical Computation. International Conference on Machine Learning (ICML).
[S67] Richard Sinkhorn, Paul Knopp (1967). Concerning Nonnegative Matrices and Doubly Stochastic Matrices. Pacific Journal of Mathematics, 21(2), 343-348.
[C13] Marco Cuturi (2013). Sinkhorn Distances: Lightspeed Computation of Optimal Transport. Advances in Neural Information Processing Systems 26 (NeurIPS).
[F19] Jean Feydy, Thibault Séjourné, François-Xavier Vialard, Shun-ichi Amari, Alain Trouvé, Gabriel Peyré (2019). Interpolating between Optimal Transport and MMD using Sinkhorn Divergences. International Conference on Artificial Intelligence and Statistics (AISTATS).
[M18] Leland McInnes, John Healy, James Melville (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv preprint arXiv:1802.03426.
[Z23] Stephen Zhang, Gilles Mordant, Tetsuya Matsumoto, Geoffrey Schiebinger (2023). Manifold Learning with Sparse Regularised Optimal Transport. arXiv preprint.
[H04] Ham, J., Lee, D. D., Mika, S., & Schölkopf, B. (2004). A Kernel View of the Dimensionality Reduction of Manifolds. Proceedings of the Twenty-First International Conference on Machine Learning (ICML).
[D21] Sebastian Damrich, Fred Hamprecht (2021). On UMAP's True Loss Function. Advances in Neural Information Processing Systems 34 (NeurIPS).
[T16] Tang, J., Liu, J., Zhang, M., & Mei, Q. (2016). Visualizing Large-Scale and High-Dimensional Data. Proceedings of the 25th International Conference on World Wide Web (WWW).
[D23] Sebastian Damrich, Jan Niklas Böhm, Fred Hamprecht, Dmitry Kobak (2023). From t-SNE to UMAP with Contrastive Learning. International Conference on Learning Representations (ICLR).
[L21] Landa, B., Coifman, R. R., & Kluger, Y. (2021). Doubly Stochastic Normalization of the Gaussian Kernel is Robust to Heteroskedastic Noise. SIAM Journal on Mathematics of Data Science, 3(1), 388-413.
Charlier, B., Feydy, J., Glaunès, J. A., Collin, F. D., & Durif, G. (2021). Kernel Operations on the GPU, with Autodiff, without Memory Overflows. Journal of Machine Learning Research, 22 (JMLR).
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., … & Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems 32 (NeurIPS).
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., … & Duchesnay, É. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12 (JMLR).
[Z04] Zelnik-Manor, L., & Perona, P. (2004). Self-Tuning Spectral Clustering. Advances in Neural Information Processing Systems 17 (NeurIPS).
[V18] Van Dijk, D., Sharma, R., Nainys, J., Yim, K., Kathail, P., Carr, A. J., … & Pe'er, D. (2018). Recovering Gene Interactions from Single-Cell Data Using Data Diffusion. Cell, 174(3).
[R87] Rousseeuw, P. J. (1987). Silhouettes: A Graphical Aid to the Interpretation and Validation of Cluster Analysis. Journal of Computational and Applied Mathematics, 20, 53-65.