SparseNeighborEmbedding

class torchdr.SparseNeighborEmbedding(affinity_in: Affinity, affinity_out: Affinity, kwargs_affinity_out: dict = {}, n_components: int = 2, lr: float | str = 1.0, optimizer: str = 'Adam', optimizer_kwargs: dict | str | None = None, scheduler: str = 'constant', scheduler_kwargs: dict | None = None, tol: float = 1e-07, max_iter: int = 2000, init: str = 'pca', init_scaling: float = 0.0001, tolog: bool = False, device: str = 'auto', keops: bool = False, verbose: bool = False, random_state: float = 0, early_exaggeration: float = 1.0, coeff_repulsion: float = 1.0, early_exaggeration_iter: int | None = None)[source]

Bases: NeighborEmbedding

Solves the neighbor embedding problem with a sparse input affinity matrix.

It amounts to solving:

\[\min_{\mathbf{Z}} \: - \lambda \sum_{ij} P_{ij} \log Q_{ij} + \gamma \mathcal{L}_{\mathrm{rep}}( \mathbf{Q})\]

where \(\mathbf{P}\) is the input affinity matrix, \(\mathbf{Q}\) is the output affinity matrix, \(\mathcal{L}_{\mathrm{rep}}\) is the repulsive term of the loss function, \(\lambda\) is the early_exaggeration parameter, and \(\gamma\) is the coeff_repulsion parameter.

Fast attraction. This class should be used when the input affinity matrix is a SparseLogAffinity and the output affinity matrix is an UnnormalizedAffinity. In such cases, the attractive term can be computed with linear complexity.
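To make the linear-complexity claim concrete, here is a minimal sketch (not the library's implementation) of how the attractive term \(-\lambda \sum_{ij} P_{ij} \log Q_{ij}\) can be evaluated only on the stored nearest-neighbor pairs of a sparse \(\mathbf{P}\). The function name and the (n, k) neighbor layout are illustrative assumptions, and an unnormalized Student (Cauchy) kernel stands in for affinity_out:

    import torch

    def sparse_attraction(Z, P_values, neighbor_idx, early_exaggeration=1.0):
        """Attractive part of the loss, evaluated only on the k stored neighbors.

        Z            : (n, d) embedding coordinates.
        P_values     : (n, k) input affinities P_ij for the k nearest neighbors of each i.
        neighbor_idx : (n, k) indices j of those neighbors.
        """
        # Squared distances between each point and its k neighbors only: O(n * k).
        diff = Z.unsqueeze(1) - Z[neighbor_idx]      # (n, k, d)
        sq_dist = (diff ** 2).sum(-1)                # (n, k)

        # Unnormalized Student kernel (illustrative choice of output affinity):
        # log Q_ij = -log(1 + ||z_i - z_j||^2).
        log_Q = -torch.log1p(sq_dist)

        # -lambda * sum_ij P_ij log Q_ij, restricted to the nonzero entries of P.
        return -early_exaggeration * (P_values * log_Q).sum()

Because only the k neighbors of each point are touched, the cost of the attractive term scales linearly in the number of samples, unlike the dense case which is quadratic.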

Parameters:
  • affinity_in (Affinity) – The affinity object for the input space.

  • affinity_out (Affinity) – The affinity object for the output embedding space.

  • kwargs_affinity_out (dict, optional) – Additional keyword arguments for the affinity_out method.

  • n_components (int, optional) – Number of dimensions for the embedding. Default is 2.

  • lr (float or 'auto', optional) – Learning rate for the optimizer. Default is 1.0.

  • optimizer (str or 'auto', optional) – Optimizer to use for the optimization. Default is “Adam”.

  • optimizer_kwargs (dict or 'auto', optional) – Additional keyword arguments for the optimizer.

  • scheduler (str, optional) – Learning rate scheduler. Default is “constant”.

  • scheduler_kwargs (dict, optional) – Additional keyword arguments for the scheduler.

  • tol (float, optional) – Tolerance for stopping criterion. Default is 1e-7.

  • max_iter (int, optional) – Maximum number of iterations. Default is 2000.

  • init (str, optional) – Initialization method for the embedding. Default is “pca”.

  • init_scaling (float, optional) – Scaling factor for the initial embedding. Default is 1e-4.

  • tolog (bool, optional) – If True, logs the optimization process. Default is False.

  • device (str, optional) – Device to use for computations. Default is “auto”.

  • keops (bool, optional) – Whether to use KeOps for computations. Default is False.

  • verbose (bool, optional) – Verbosity of the optimization process. Default is False.

  • random_state (float, optional) – Random seed for reproducibility. Default is 0.

  • early_exaggeration (float, optional) – Coefficient for the attraction term during the early exaggeration phase. Default is 1.0.

  • coeff_repulsion (float, optional) – Coefficient for the repulsion term. Default is 1.0.

  • early_exaggeration_iter (int, optional) – Number of iterations for early exaggeration. Default is None.
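Putting the pieces together, a minimal usage sketch is given below. The affinity classes EntropicAffinity (a SparseLogAffinity) and StudentAffinity (an UnnormalizedAffinity), as well as their constructor arguments, are assumptions for illustration; check the torchdr API of your installed version for the exact names and options.

    import torch
    from torchdr import SparseNeighborEmbedding
    # Assumed affinity classes; verify availability in your torchdr version.
    from torchdr import EntropicAffinity, StudentAffinity

    X = torch.randn(500, 50)  # toy data: 500 samples in 50 dimensions

    model = SparseNeighborEmbedding(
        affinity_in=EntropicAffinity(perplexity=30),  # sparse input affinity
        affinity_out=StudentAffinity(),               # unnormalized output affinity
        n_components=2,
        optimizer="Adam",
        lr=1.0,
        max_iter=500,
        verbose=True,
    )
    Z = model.fit_transform(X)  # (500, 2) embedding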

Examples using SparseNeighborEmbedding:

TSNE embedding of the swiss roll dataset

Neighbor Embedding on genomics & equivalent affinity matcher formulation