CondAEModel.generate

CondAEModel.generate(y: ndarray | torch.Tensor, z: ndarray | torch.Tensor | None = None, return_untransformed: bool = False, accelerator: str = 'cpu', enable_progress_bar: bool = False, lightning_logger_level: int = 30, disable_user_warnings: bool = True, **kwargs) → Dict[str, torch.Tensor] | Tuple[Dict[str, torch.Tensor], Dict[str, torch.Tensor]]

Generate samples from the model.

Parameters:
  • y (Union[np.ndarray, torch.Tensor]) – The conditional data used to condition the generated samples.

  • z (Union[np.ndarray, torch.Tensor, None], optional, default=None) – The latent representation to decode. If None, a latent representation is sampled from a normal distribution.

  • return_untransformed (bool, optional, default=False) – If True, the generated data is additionally returned in the original (untransformed) space, obtained by applying the inverse transformation.

  • accelerator (str, optional, default="cpu") – The accelerator to use (e.g. "cpu", "gpu", "mps").

  • enable_progress_bar (bool, optional, default=False) – If True, enable the progress bar.

  • lightning_logger_level (int, optional, default=30, i.e. logging.WARNING) – The logging level for the PyTorch Lightning logger.

  • disable_user_warnings (bool, optional, default=True) – If True, disable user warnings.

  • **kwargs – Additional keyword arguments that can be passed to the Trainer. Default is an empty dictionary.

Returns:

Union[Dict[str, torch.Tensor], Tuple[Dict[str, torch.Tensor], Dict[str, torch.Tensor]]] – A dictionary containing the generated data. If return_untransformed is True, a tuple with the generated data in the transformed space and in the original space is returned.
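
A minimal usage sketch is given below. It assumes a trained CondAEModel instance named model, illustrative input shapes, and a latent_dim attribute for constructing z; these names and shapes are assumptions for illustration, not part of the API reference above.

import numpy as np
import torch

# Assumptions: `model` is a trained CondAEModel, the conditional input has
# 4 features, and `model.latent_dim` gives the size of the latent space.
y = np.random.rand(8, 4).astype(np.float32)  # hypothetical conditional data

# Let the model sample latents internally (z=None) and also return the data
# in the original space via the inverse transformation.
generated, untransformed = model.generate(
    y,
    z=None,
    return_untransformed=True,
    accelerator="cpu",
)

# Decode a user-supplied latent representation instead of sampling one.
z = torch.randn(8, model.latent_dim)  # assumed attribute; adjust to your model
generated_only = model.generate(y, z=z)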