SparseNeighborEmbedding#

class torchdr.SparseNeighborEmbedding(affinity_in: Affinity, affinity_out: Affinity, kwargs_affinity_out: Dict | None = None, n_components: int = 2, lr: float | str = 1.0, optimizer: str | Type[Optimizer] = 'SGD', optimizer_kwargs: Dict | str = 'auto', scheduler: str | Type[LRScheduler] | None = None, scheduler_kwargs: Dict | None = None, min_grad_norm: float = 1e-07, max_iter: int = 2000, init: str | Tensor | ndarray = 'pca', init_scaling: float = 0.0001, device: str = 'auto', backend: str | None = None, verbose: bool = False, random_state: float | None = None, early_exaggeration_coeff: float = 1.0, early_exaggeration_iter: int | None = None)[source]#

Bases: NeighborEmbedding

Solves the neighbor embedding problem with a sparse input affinity matrix.

It amounts to solving:

\[\min_{\mathbf{Z}} \: - \lambda \sum_{ij} P_{ij} \log Q_{ij} + \mathcal{L}_{\mathrm{rep}}( \mathbf{Q})\]

where \(\mathbf{P}\) is the input affinity matrix, \(\mathbf{Q}\) is the output affinity matrix, \(\mathcal{L}_{\mathrm{rep}}\) is the repulsive term of the loss function, and \(\lambda\) is the early_exaggeration_coeff parameter.

Fast attraction. This class should be used when the input affinity matrix is a SparseLogAffinity and the output affinity matrix is an UnnormalizedAffinity. In such cases, the attractive term can be computed with linear complexity.
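The linear complexity comes from the fact that the attractive term \(- \sum_{ij} P_{ij} \log Q_{ij}\) only involves the nonzero entries of \(\mathbf{P}\). A minimal NumPy sketch of this idea (illustrative only, not torchdr code; the COO storage and Student-t kernel here are assumptions for the example):

```python
import numpy as np

# Toy sparse input affinity P stored in COO form (rows, cols, vals):
# only k stored neighbors per point, so the attractive term costs
# O(n * k) instead of O(n^2).
rng = np.random.default_rng(0)
n, k, d = 50, 5, 2
rows = np.repeat(np.arange(n), k)
cols = rng.integers(0, n, size=n * k)
vals = rng.random(n * k)
vals /= vals.sum()  # normalize P so its entries sum to 1

Z = rng.standard_normal((n, d))  # current embedding

# Unnormalized Student-t output affinity, evaluated only on stored pairs.
sq_dists = ((Z[rows] - Z[cols]) ** 2).sum(axis=1)
log_Q = -np.log1p(sq_dists)  # log(1 / (1 + ||z_i - z_j||^2))

attraction = -(vals * log_Q).sum()  # - sum_ij P_ij log Q_ij
```

Since \(\log Q_{ij} \le 0\) for this kernel and \(P_{ij} \ge 0\), the attraction term is nonnegative and decreases as neighboring points move closer in the embedding.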

Parameters:
  • affinity_in (Affinity) – The affinity object for the input space.

  • affinity_out (Affinity) – The affinity object for the output embedding space.

  • kwargs_affinity_out (dict, optional) – Additional keyword arguments for the affinity_out method.

  • n_components (int, optional) – Number of dimensions for the embedding. Default is 2.

  • lr (float or 'auto', optional) – Learning rate for the optimizer. Default is 1e0.

  • optimizer (str or torch.optim.Optimizer, optional) – Name of an optimizer from torch.optim or an optimizer class. Default is “SGD”. For best results, we recommend using “SGD” with ‘auto’ learning rate.

  • optimizer_kwargs (dict or 'auto', optional) – Additional keyword arguments for the optimizer. Default is ‘auto’, which sets appropriate momentum values for SGD based on early exaggeration phase.

  • scheduler (str or torch.optim.lr_scheduler.LRScheduler, optional) – Name of a scheduler from torch.optim.lr_scheduler or a scheduler class. Default is None (no scheduler).

  • scheduler_kwargs (dict, optional) – Additional keyword arguments for the scheduler.

  • min_grad_norm (float, optional) – Tolerance for stopping criterion. Default is 1e-7.

  • max_iter (int, optional) – Maximum number of iterations. Default is 2000.

  • init (str or torch.Tensor or np.ndarray, optional) – Initialization method for the embedding. Default is “pca”.

  • init_scaling (float, optional) – Scaling factor for the initial embedding. Default is 1e-4.

  • device (str, optional) – Device to use for computations. Default is “auto”.

  • backend ({"keops", "faiss", None}, optional) – Which backend to use for handling sparsity and memory efficiency. Default is None.

  • verbose (bool, optional) – Verbosity of the optimization process. Default is False.

  • random_state (float, optional) – Random seed for reproducibility. Default is None.

  • early_exaggeration_coeff (float, optional) – Coefficient for the attraction term during the early exaggeration phase. Default is 1.0.

  • early_exaggeration_iter (int, optional) – Number of iterations for early exaggeration. Default is None.

Examples using SparseNeighborEmbedding:#

Neighbor Embedding on genomics & equivalent affinity matcher formulation

TSNE embedding of the swiss roll dataset