UMAP#
- class torchdr.UMAP(n_neighbors: float = 30, n_components: int = 2, min_dist: float = 0.1, spread: float = 1.0, a: float | None = None, b: float | None = None, lr: float = 0.1, optimizer: str = 'SGD', optimizer_kwargs: dict | None = None, scheduler: str = 'constant', scheduler_kwargs: dict | None = None, init: str = 'pca', init_scaling: float = 0.0001, min_grad_norm: float = 1e-07, max_iter: int = 2000, device: str | None = None, backend: str | None = None, verbose: bool = False, random_state: float | None = None, early_exaggeration_coeff: float = 1.0, early_exaggeration_iter: int = 0, tol_affinity: float = 0.001, max_iter_affinity: int = 100, metric_in: str = 'sqeuclidean', metric_out: str = 'sqeuclidean', n_negatives: int = 10, sparsity: bool = True)[source]#
Bases: SampledNeighborEmbedding
UMAP introduced in [McInnes et al., 2018] and further studied in [Damrich and Hamprecht, 2021].
It uses a UMAPAffinityIn as input affinity \(\mathbf{P}\) and a UMAPAffinityOut as output affinity \(\mathbf{Q}\).

The loss function is defined as:

\[-\sum_{ij} P_{ij} \log Q_{ij} - \sum_{i,j \in \mathrm{Neg}(i)} \log (1 - Q_{ij})\]

where \(\mathrm{Neg}(i)\) is the set of negative samples for point \(i\).
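As an illustrative sketch (not the library's internal implementation), the two terms of this objective can be evaluated with NumPy on toy affinities. The attraction term \(-\sum_{ij} P_{ij} \log Q_{ij}\) pulls neighbors together, while the repulsion term \(-\log(1 - Q_{ij})\) over sampled negatives grows as non-neighbor pairs come close. The sampling scheme and variable names below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Toy input affinities P (row-sparse in practice) and output affinities Q in (0, 1).
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
Q = rng.uniform(0.05, 0.95, size=(n, n))

# One set of sampled negatives per point i (hypothetical uniform sampling).
negatives = {
    i: rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    for i in range(n)
}

attraction = -np.sum(P * np.log(Q))            # -sum_ij P_ij log Q_ij
repulsion = -sum(                              # -sum_{j in Neg(i)} log(1 - Q_ij)
    np.log(1.0 - Q[i, j]) for i in range(n) for j in negatives[i]
)
loss = attraction + repulsion
print(loss > 0)  # both terms are positive for Q strictly inside (0, 1)
```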
- Parameters:
n_neighbors (int) – Number of nearest neighbors.
n_components (int, optional) – Dimension of the embedding space.
min_dist (float, optional) – Minimum distance between points in the embedding space.
spread (float, optional) – Initial spread of the embedding space.
a (float, optional) – Parameter for the Student t-distribution.
b (float, optional) – Parameter for the Student t-distribution.
lr (float, optional) – Learning rate for the algorithm, by default 1e-1.
optimizer ({'SGD', 'Adam', 'NAdam'}, optional) – Which pytorch optimizer to use, by default ‘SGD’.
optimizer_kwargs (dict, optional) – Arguments for the optimizer, by default None.
scheduler ({'constant', 'linear'}, optional) – Learning rate scheduler.
scheduler_kwargs (dict, optional) – Arguments for the scheduler, by default None.
init ({'normal', 'pca'} or torch.Tensor of shape (n_samples, output_dim), optional) – Initialization for the embedding Z, default ‘pca’.
init_scaling (float, optional) – Scaling factor for the initialization, by default 1e-4.
min_grad_norm (float, optional) – Precision threshold at which the algorithm stops, by default 1e-7.
max_iter (int, optional) – Maximum number of iterations for the descent algorithm, by default 2000.
device (str, optional) – Device to use, by default None (device selected automatically).
backend ({"keops", "faiss", None}, optional) – Which backend to use for handling sparsity and memory efficiency. Default is None.
verbose (bool, optional) – Verbosity, by default False.
random_state (float, optional) – Random seed for reproducibility, by default None.
early_exaggeration_coeff (float, optional) – Coefficient for the attraction term during the early exaggeration phase. By default 1.0.
early_exaggeration_iter (int, optional) – Number of iterations for early exaggeration, by default 0.
tol_affinity (float, optional) – Precision threshold for the input affinity computation.
max_iter_affinity (int, optional) – Maximum number of iterations for the input affinity computation.
metric_in ({'sqeuclidean', 'euclidean', 'manhattan'}, optional) – Metric to use for the input affinity, by default 'sqeuclidean'.
metric_out ({'sqeuclidean', 'euclidean', 'manhattan'}, optional) – Metric to use for the output affinity, by default 'sqeuclidean'.
n_negatives (int, optional) – Number of negative samples for the noise-contrastive loss, by default 10.
sparsity (bool, optional) – Whether to use sparsity mode for the input affinity. Default is True.
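When a and b are left as None, UMAP-style methods typically fit them so that the output kernel \((1 + a d^{2b})^{-1}\) approximates a curve determined by min_dist and spread: flat (value 1) up to min_dist, then exponentially decaying with rate 1/spread. The sketch below recovers such a pair with a coarse grid search; it is a self-contained illustration under these assumptions, not torchdr's actual fitting routine:

```python
import numpy as np

def fit_ab(min_dist=0.1, spread=1.0):
    """Coarse grid search for (a, b) so that 1 / (1 + a * d**(2b))
    approximates the target curve: 1 for d <= min_dist,
    exp(-(d - min_dist) / spread) otherwise."""
    d = np.linspace(0.0, 3.0 * spread, 300)
    target = np.where(d <= min_dist, 1.0, np.exp(-(d - min_dist) / spread))
    best = (None, None, np.inf)
    for a in np.linspace(0.1, 5.0, 100):
        for b in np.linspace(0.1, 2.0, 100):
            q = 1.0 / (1.0 + a * d ** (2.0 * b))
            err = np.mean((q - target) ** 2)
            if err < best[2]:
                best = (a, b, err)
    return best

a, b, err = fit_ab()  # with the defaults, (a, b) lands near UMAP's usual values
print(a > 0 and b > 0)
```

Smaller min_dist values produce tighter clusters in the embedding; larger spread flattens the decay of the kernel.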
Examples using UMAP#
Neighbor Embedding on genomics & equivalent affinity matcher formulation