TSNE
- class torchdr.TSNE(perplexity: float = 30, n_components: int = 2, lr: float | str = 'auto', optimizer: str = 'auto', optimizer_kwargs: dict | str = 'auto', scheduler: str = 'constant', scheduler_kwargs: dict | None = None, init: str = 'pca', init_scaling: float = 0.0001, tol: float = 1e-07, max_iter: int = 2000, tolog: bool = False, device: str | None = None, keops: bool = False, verbose: bool = False, random_state: float = 0, early_exaggeration: float = 12.0, coeff_repulsion: float = 1.0, early_exaggeration_iter: int = 250, tol_affinity: float = 0.001, max_iter_affinity: int = 100, metric_in: str = 'sqeuclidean', metric_out: str = 'sqeuclidean', **kwargs)[source]
Bases: SparseNeighborEmbedding
Implementation of t-Stochastic Neighbor Embedding (t-SNE) introduced in [V08].
It involves selecting an EntropicAffinity as the input affinity \(\mathbf{P}\) and a StudentAffinity as the output affinity \(\mathbf{Q}\). The loss function is defined as:
\[-\sum_{ij} P_{ij} \log Q_{ij} + \log \Big( \sum_{ij} Q_{ij} \Big) \:.\]
- Parameters:
perplexity (float) – Number of ‘effective’ nearest neighbors. Consider selecting a value between 2 and the number of samples. Different values can yield significantly different embeddings.
n_components (int, optional) – Dimension of the embedding space.
lr (float or 'auto', optional) – Learning rate for the algorithm, by default ‘auto’.
optimizer ({'SGD', 'Adam', 'NAdam', 'auto'}, optional) – Which PyTorch optimizer to use, by default ‘auto’.
optimizer_kwargs (dict or 'auto', optional) – Arguments for the optimizer, by default ‘auto’.
scheduler ({'constant', 'linear'}, optional) – Learning rate scheduler, by default ‘constant’.
scheduler_kwargs (dict, optional) – Arguments for the scheduler, by default None.
init ({'normal', 'pca'} or torch.Tensor of shape (n_samples, output_dim), optional) – Initialization for the embedding Z, default ‘pca’.
init_scaling (float, optional) – Scaling factor for the initialization, by default 1e-4.
tol (float, optional) – Precision threshold at which the algorithm stops, by default 1e-7.
max_iter (int, optional) – Maximum number of iterations for the descent algorithm, by default 2000.
tolog (bool, optional) – Whether to store intermediate results in a dictionary, by default False.
device (str, optional) – Device to use, by default None (the device is selected automatically).
keops (bool, optional) – Whether to use KeOps, by default False.
verbose (bool, optional) – Verbosity, by default False.
random_state (float, optional) – Random seed for reproducibility, by default 0.
early_exaggeration (float, optional) – Coefficient for the attraction term during the early exaggeration phase, by default 12.0.
coeff_repulsion (float, optional) – Coefficient for the repulsion term, by default 1.0.
early_exaggeration_iter (int, optional) – Number of iterations for early exaggeration, by default 250.
tol_affinity (float, optional) – Precision threshold for the entropic affinity root search, by default 1e-3.
max_iter_affinity (int, optional) – Maximum number of iterations for the entropic affinity root search, by default 100.
metric_in ({'sqeuclidean', 'manhattan'}, optional) – Metric to use for the input affinity, by default ‘sqeuclidean’.
metric_out ({'sqeuclidean', 'manhattan'}, optional) – Metric to use for the output affinity, by default ‘sqeuclidean’.
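A minimal usage sketch, assuming the scikit-learn style fit_transform interface exposed by torchdr estimators; the toy data and parameter values below are illustrative only:

>>> import torch
>>> from torchdr import TSNE
>>> X = torch.randn(500, 50)  # toy data: 500 samples in 50 dimensions
>>> tsne = TSNE(perplexity=30, n_components=2, verbose=True)
>>> Z = tsne.fit_transform(X)  # embedding of shape (500, 2)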
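For reference, the loss above can be evaluated on dense tensors as in the sketch below. This is purely didactic and not the class's internal implementation (which relies on sparse affinities and optional KeOps reductions); tsne_loss is a hypothetical helper, not part of the torchdr API.

import torch

def tsne_loss(P: torch.Tensor, Z: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Unnormalized Student-t output affinities Q_ij = (1 + ||z_i - z_j||^2)^(-1).
    Q = 1.0 / (1.0 + torch.cdist(Z, Z) ** 2)
    Q.fill_diagonal_(0.0)  # exclude self-pairs from both terms
    attraction = -(P * torch.log(Q + eps)).sum()  # -sum_ij P_ij log Q_ij
    repulsion = torch.log(Q.sum())                # log sum_ij Q_ij
    return attraction + repulsion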
References
[V08] Laurens van der Maaten and Geoffrey Hinton (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(11).
Examples using TSNE:
TSNE embedding of the swiss roll dataset
Neighbor Embedding on genomics & equivalent affinity matcher formulation