COSNE#
- class torchdr.COSNE(perplexity: float = 30, learning_rate_for_h_loss: float = 1, gamma: float = 2, n_components: int = 2, lr: float | str = 'auto', optimizer_kwargs: Dict | str | None = None, scheduler: str | Type[LRScheduler] | None = None, scheduler_kwargs: Dict | None = None, init: str = 'hyperbolic', init_scaling: float = 0.5, min_grad_norm: float = 1e-07, max_iter: int = 2000, device: str = 'auto', backend: str | FaissConfig | None = None, verbose: bool = False, random_state: float | None = None, max_iter_affinity: int = 100, metric: str = 'sqeuclidean', sparsity: bool = True, check_interval: int = 50, compile: bool = False, distributed: bool | str = 'auto', **kwargs)[source]#
Bases: NeighborEmbedding

Implementation of the CO-Stochastic Neighbor Embedding (CO-SNE) introduced in [Guo et al., 2022].
This algorithm is a variant of SNE that uses a hyperbolic space for the embedding. It uses an EntropicAffinity as input affinity \(\mathbf{P}\) and a Cauchy kernel in hyperbolic space as output affinity \(Q_{ij} = \gamma / (d_H(\mathbf{z}_i, \mathbf{z}_j) + \gamma^2)\), where \(d_H\) is the hyperbolic distance.

The loss function is defined as:
\[-\sum_{ij} P_{ij} \log Q_{ij} + \log \Big( \sum_{ij} Q_{ij} \Big) + \lambda_1 \cdot \frac{1}{n} \sum_i \Big( \| \mathbf{x}_i \|^2 - d_H(\mathbf{z}_i, \mathbf{0})^2 \Big)^2\]

where the first two terms form the KL divergence between \(\mathbf{P}\) and \(\mathbf{Q}\) (up to a constant), and the third term regularizes the embedding to preserve the norms of the input data in hyperbolic space.
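The objective above can be sketched in plain NumPy. This is a minimal illustration, not torchdr's implementation: the helper names are invented, and the Poincaré-ball distance is assumed for \(d_H\).

```python
import numpy as np

def poincare_dist(Z):
    """Pairwise hyperbolic distances between rows of Z (Poincare ball model, assumed here)."""
    sq = np.sum(Z**2, axis=1)
    diff = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2)
    denom = (1.0 - sq[:, None]) * (1.0 - sq[None, :])
    return np.arccosh(1.0 + 2.0 * diff / denom)

def cosne_loss(P, X, Z, gamma=2.0, lam=1.0):
    """Evaluate the CO-SNE objective above for an embedding Z of input X."""
    n = len(Z)
    d = poincare_dist(Z)
    Q = gamma / (d + gamma**2)      # Cauchy kernel in hyperbolic space
    off = ~np.eye(n, dtype=bool)    # sums run over pairs i != j
    kl_part = -np.sum(P[off] * np.log(Q[off])) + np.log(np.sum(Q[off]))
    # Third term: match squared input norms to squared hyperbolic distances to the origin.
    sq = np.sum(Z**2, axis=1)
    d0 = np.arccosh(1.0 + 2.0 * sq / (1.0 - sq))
    h_loss = np.mean((np.sum(X**2, axis=1) - d0**2) ** 2)
    return kl_part + lam * h_loss
```

Here `lam` plays the role of \(\lambda_1\), which the class exposes as `learning_rate_for_h_loss`.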
- Parameters:
perplexity (float) – Number of ‘effective’ nearest neighbors. Consider selecting a value between 2 and the number of samples. Different values can yield significantly different embeddings.
learning_rate_for_h_loss (float) – Coefficient \(\lambda_1\) of the distance preservation loss, which enforces that the hyperbolic distance to the origin of each embedded point matches the squared Euclidean norm of the corresponding input sample.
gamma (float) – Gamma parameter of the Cauchy distribution used for affinity, by default 2.
n_components (int, optional) – Dimension of the embedding space.
lr (float or 'auto', optional) – Learning rate for the algorithm, by default 'auto'.
optimizer_kwargs (dict, optional) – Arguments for the optimizer, by default None.
scheduler ({'constant', 'linear'}, optional) – Learning rate scheduler.
scheduler_kwargs (dict, optional) – Arguments for the scheduler, by default None.
init ({'hyperbolic'} or torch.Tensor of shape (n_samples, output_dim), optional) – Initialization for the embedding Z, default ‘hyperbolic’.
init_scaling (float, optional) – Scaling factor for the initialization, by default 0.5.
min_grad_norm (float, optional) – Precision threshold at which the algorithm stops, by default 1e-7.
max_iter (int, optional) – Number of maximum iterations for the descent algorithm, by default 2000.
device (str, optional) – Device to use, by default “auto”.
backend ({"keops", "faiss", None} or FaissConfig, optional) – Which backend to use for handling sparsity and memory efficiency. Can be:
- "keops": Use KeOps for memory-efficient symbolic computations
- "faiss": Use FAISS for fast k-NN computations with default settings
- None: Use standard PyTorch operations
- FaissConfig object: Use FAISS with custom configuration
Default is None.
verbose (bool, optional) – Verbosity, by default False.
random_state (float, optional) – Random seed for reproducibility, by default None.
max_iter_affinity (int, optional) – Number of maximum iterations for the entropic affinity root search.
metric ({'sqeuclidean', 'manhattan'}, optional) – Metric to use for the input affinity, by default ‘sqeuclidean’.
sparsity (bool, optional) – Whether to use sparsity mode for the input affinity. Default is True.
check_interval (int, optional) – Number of iterations between checks for convergence, by default 50.
compile (bool, optional) – Whether to compile the algorithm with torch.compile, by default False.
distributed (bool or 'auto', optional) – Whether to use distributed computation across multiple GPUs.
- "auto": Automatically detect if running with torchrun
- True: Force distributed mode (requires torchrun)
- False: Disable distributed mode
Default is "auto".