UMAP#
- class torchdr.UMAP(n_neighbors: float = 30, n_components: int = 2, min_dist: float = 0.1, spread: float = 1.0, a: float | None = None, b: float | None = None, lr: float = 0.1, optimizer: str | Type[Optimizer] = 'SGD', optimizer_kwargs: Dict | str = 'auto', scheduler: str | Type[LRScheduler] | None = 'LinearLR', scheduler_kwargs: Dict | str | None = 'auto', init: str = 'pca', init_scaling: float = 0.0001, min_grad_norm: float = 1e-07, max_iter: int = 2000, device: str | None = None, backend: str | None = 'faiss', verbose: bool = False, random_state: float | None = None, tol_affinity: float = 0.001, max_iter_affinity: int = 100, metric_in: str = 'sqeuclidean', metric_out: str = 'sqeuclidean', n_negatives: int = 10, check_interval: int = 50, discard_NNs: bool = False, compile: bool = False, **kwargs)[source]#
Bases: SampledNeighborEmbedding
UMAP was introduced in [McInnes et al., 2018] and further studied in [Damrich and Hamprecht, 2021]. It uses a UMAPAffinity as the input affinity \(\mathbf{P}\). The loss function is defined as:
\[-\sum_{ij} P_{ij} \log Q_{ij} - \sum_{i,j \in \mathrm{Neg}(i)} \log (1 - Q_{ij})\]
where \(\mathrm{Neg}(i)\) is the set of negative samples for point \(i\). The first term attracts neighboring pairs while the second repels sampled negative pairs.
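As a minimal sketch (not the torchdr implementation), the objective can be written in plain PyTorch. Here `P`, `Q_pos`, and `Q_neg` are hypothetical precomputed affinity values for positive pairs and sampled negative pairs:

```python
import torch

def umap_sampled_loss(P, Q_pos, Q_neg, eps=1e-12):
    """Sketch of the negative-sampling UMAP objective (illustrative only).

    P     : input affinities P_ij for positive (neighbor) pairs
    Q_pos : output affinities Q_ij for the same positive pairs
    Q_neg : output affinities Q_ij for sampled negative pairs
    """
    # Attraction: positive pairs should have large output affinity.
    attraction = -(P * torch.log(Q_pos + eps)).sum()
    # Repulsion: sampled negative pairs should have small output affinity.
    repulsion = -torch.log(1.0 - Q_neg + eps).sum()
    return attraction + repulsion
```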
- Parameters:
n_neighbors (float, optional) – Number of nearest neighbors.
n_components (int, optional) – Dimension of the embedding space.
min_dist (float, optional) – Minimum distance between points in the embedding space.
spread (float, optional) – The effective scale of the embedded points. Used to configure the UMAPAffinityOut.
a (float, optional) – Parameter \(a\) of the Student-t-like output kernel. If None, it is determined from min_dist and spread.
b (float, optional) – Parameter \(b\) of the Student-t-like output kernel. If None, it is determined from min_dist and spread.
lr (float, optional) – Learning rate for the algorithm, by default 1e-1.
optimizer (str or torch.optim.Optimizer, optional) – Name of an optimizer from torch.optim or an optimizer class. Default is “SGD”.
optimizer_kwargs (dict or 'auto', optional) – Additional keyword arguments for the optimizer. Default is ‘auto’, which sets appropriate momentum values for SGD based on the early exaggeration phase.
scheduler (str or torch.optim.lr_scheduler.LRScheduler, optional) – Name of a scheduler from torch.optim.lr_scheduler or a scheduler class. Default is “LinearLR”.
scheduler_kwargs (dict, 'auto', or None, optional) – Additional keyword arguments for the scheduler. Default is ‘auto’, which corresponds to a linear decay from the learning rate to 0 for LinearLR.
init ({'normal', 'pca'} or torch.Tensor of shape (n_samples, output_dim), optional) – Initialization for the embedding Z, default ‘pca’.
init_scaling (float, optional) – Scaling factor for the initialization, by default 1e-4.
min_grad_norm (float, optional) – Precision threshold at which the algorithm stops, by default 1e-7.
max_iter (int, optional) – Maximum number of iterations for the descent algorithm, by default 2000.
device (str, optional) – Device to use, by default None (automatic selection).
backend ({"keops", "faiss", None}, optional) – Which backend to use for handling sparsity and memory efficiency. Default is “faiss”.
verbose (bool, optional) – Verbosity, by default False.
random_state (float, optional) – Random seed for reproducibility, by default None.
tol_affinity (float, optional) – Precision threshold for the input affinity computation.
max_iter_affinity (int, optional) – Number of maximum iterations for the input affinity computation.
metric_in ({'sqeuclidean', 'euclidean', 'manhattan'}, optional) – Metric to use for the input affinity, by default ‘sqeuclidean’.
metric_out ({'sqeuclidean', 'euclidean', 'manhattan'}, optional) – Metric to use for the output affinity, by default ‘sqeuclidean’.
n_negatives (int, optional) – Number of negative samples for the noise-contrastive loss, by default 10.
check_interval (int, optional) – Interval (in iterations) at which the convergence criterion is checked, by default 50.
discard_NNs (bool, optional) – Whether to discard the nearest neighbors from the negative sampling. Default is False.
compile (bool, optional) – Whether to compile the algorithm using torch.compile. Default is False.
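The roles of a, b, min_dist, and spread can be illustrated with the output kernel \(Q_{ij} = (1 + a d_{ij}^{2b})^{-1}\). A minimal sketch, assuming squared distances as input; the a, b values below are illustrative stand-ins (roughly what the original UMAP fits for min_dist=0.1, spread=1.0), not torchdr defaults:

```python
import torch

def output_affinity(dist_sq, a=1.577, b=0.895):
    """Student-t-like output kernel Q_ij = 1 / (1 + a * d^(2b)).

    dist_sq holds squared embedding distances d_ij^2, so d^(2b) = (d^2)^b.
    a and b default to illustrative values; torchdr fits them internally
    when both are left as None.
    """
    return 1.0 / (1.0 + a * dist_sq.pow(b))
```

With \(a = b = 1\) this kernel reduces to the Student t-distribution with one degree of freedom used by t-SNE.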
Examples using UMAP#

Neighbor Embedding on genomics & equivalent affinity matcher formulation