fast_lisa_subtraction.priors.analytical module
Some standard distributions for sampling the priors.
- class fast_lisa_subtraction.priors.analytical.Cauchy(loc, scale, minimum=None, maximum=None, name=None, device='cpu')[source]
Bases: Prior
Cauchy prior distribution.
- Parameters:
loc (float) – Location parameter of the Cauchy prior.
scale (float) – Scale parameter of the Cauchy prior.
minimum (float or None, optional) – Lower bound of the support.
maximum (float or None, optional) – Upper bound of the support.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
- sample(num_samples, standardize=False)[source]
Sample from the Cauchy prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
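The sampling scheme for a bounded Cauchy prior can be sketched independently of this module. The helper below (`sample_truncated_cauchy`, a hypothetical name) applies the Cauchy quantile function to uniform draws and rejects samples outside the support; this illustrates one plausible way a class like this could be implemented, not the actual implementation.

```python
import math
import random

def sample_truncated_cauchy(loc, scale, minimum, maximum, num_samples, seed=0):
    """Inverse-CDF Cauchy sampling with rejection outside [minimum, maximum]."""
    rng = random.Random(seed)
    out = []
    while len(out) < num_samples:
        u = rng.random()
        # Quantile function of the Cauchy distribution.
        x = loc + scale * math.tan(math.pi * (u - 0.5))
        if minimum <= x <= maximum:
            out.append(x)
    return out

samples = sample_truncated_cauchy(0.0, 1.0, -5.0, 5.0, 1000)
```

Rejection sampling is used here because truncating a Cauchy by clamping would pile probability mass at the bounds.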
- class fast_lisa_subtraction.priors.analytical.Gamma(alpha, beta, offset=0, minimum=None, maximum=None, name=None, device='cpu')[source]
Bases: Prior
Gamma prior distribution with an optional location shift.
- Parameters:
alpha (float) – Shape parameter of the Gamma prior.
beta (float) – Rate parameter of the Gamma prior (inverse scale).
offset (float, optional) – Additive offset applied to samples.
minimum (float or None, optional) – Lower bound of the support.
maximum (float or None, optional) – Upper bound of the support.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
Notes
The unshifted Gamma density is
\[p(\theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha - 1} e^{-\beta \theta}.\]
Samples are drawn from torch.distributions.Gamma(alpha, beta) and then shifted by a median-based centering m and the provided offset:
\[x = z - m + \mathrm{offset},\]
where z is Gamma-distributed and m is computed from the inverse incomplete gamma function.
- log_prob(x, standardize=False)[source]
Evaluate the log-probability of the shifted Gamma prior.
- Parameters:
x (array-like or torch.Tensor) – Points at which to evaluate log p(x).
standardize (bool, optional) – If True, interpret x as standardized and de-standardize before evaluating.
- Returns:
Log-probability values.
- Return type:
torch.Tensor
Notes
The evaluation uses the shifted variable
\[z = x - \mathrm{offset} + m,\]
and returns -inf where z <= 0.
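The shifted-variable evaluation above can be written out in plain Python. In this sketch the median-based centering m is passed in as a known constant rather than computed from the inverse incomplete gamma function; the function name is illustrative, not the module's API.

```python
import math

def shifted_gamma_log_prob(x, alpha, beta, offset, m):
    """Log-density of a shifted Gamma prior at x.

    z = x - offset + m recovers the unshifted Gamma variable; m is the
    median-based centering, treated here as a given constant.
    """
    z = x - offset + m
    if z <= 0:
        return float("-inf")  # outside the Gamma support
    return (alpha * math.log(beta) - math.lgamma(alpha)
            + (alpha - 1.0) * math.log(z) - beta * z)

# With alpha = beta = 1 the Gamma reduces to Exponential(1): log p(z) = -z.
lp = shifted_gamma_log_prob(0.5, 1.0, 1.0, 0.0, 0.5)  # z = 1.0, so lp = -1.0
```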
- sample(num_samples, standardize=False)[source]
Sample from the shifted Gamma prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
- class fast_lisa_subtraction.priors.analytical.Gaussian(mean, std_dev, name=None, device='cpu')[source]
Bases: Prior
Gaussian prior distribution.
- Parameters:
mean (float) – Mean of the Gaussian prior.
std_dev (float) – Standard deviation of the Gaussian prior.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
- sample(num_samples, standardize=False)[source]
Sample from the Gaussian prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
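For a Gaussian prior, the standardize option has a closed form: subtracting the mean and dividing by the standard deviation yields exactly zero mean and unit variance. The stdlib sketch below (hypothetical names, not the module's implementation) demonstrates this.

```python
import random
import statistics

def sample_gaussian(mean, std_dev, num_samples, standardize=False, seed=0):
    """Draw Gaussian samples, optionally standardized for training."""
    rng = random.Random(seed)
    xs = [rng.gauss(mean, std_dev) for _ in range(num_samples)]
    if standardize:
        # For a Gaussian, standardization is exactly (x - mean) / std_dev.
        xs = [(x - mean) / std_dev for x in xs]
    return xs

raw = sample_gaussian(3.0, 2.0, 5000)
std = sample_gaussian(3.0, 2.0, 5000, standardize=True)
```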
- class fast_lisa_subtraction.priors.analytical.LogNormal(mu, sigma, minimum=None, maximum=None, name=None, device='cpu')[source]
Bases: Prior
Shifted log-normal prior distribution.
- Parameters:
mu (float) – Location parameter used as an additive shift.
sigma (float) – Log-space standard deviation.
minimum (float or None, optional) – Lower bound of the support.
maximum (float or None, optional) – Upper bound of the support.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
Notes
Sampling draws y from a log-normal distribution and returns a shifted value:
\[x = y + \mu, \quad y \sim \mathrm{LogNormal}(0, \sigma).\]
- log_prob(x, standardize=False)[source]
Evaluate the log-probability of the shifted log-normal prior.
- Parameters:
x (array-like or torch.Tensor) – Points at which to evaluate log p(x).
standardize (bool, optional) – If True, interpret x as standardized and de-standardize before evaluating.
- Returns:
Log-probability values.
- Return type:
torch.Tensor
Notes
The evaluation uses the shifted variable
\[y = x - \mu,\]
and applies a log-normal density with parameters (mu, sigma) to y.
- sample(num_samples, standardize=False)[source]
Sample from the shifted log-normal prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
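The shifted sampling rule x = y + μ with y ~ LogNormal(0, σ) follows from y = exp(σ·z) for a standard normal z. A minimal stdlib sketch (the helper name is illustrative):

```python
import math
import random

def sample_shifted_lognormal(mu, sigma, num_samples, seed=0):
    """x = y + mu with y ~ LogNormal(0, sigma), i.e. y = exp(sigma * z), z ~ N(0, 1)."""
    rng = random.Random(seed)
    return [math.exp(sigma * rng.gauss(0.0, 1.0)) + mu
            for _ in range(num_samples)]

samples = sample_shifted_lognormal(2.0, 0.5, 1000)
```

Since y > 0 always, every sample lies strictly above the shift mu.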
- class fast_lisa_subtraction.priors.analytical.MultivariatePrior(priors)[source]
Bases: object
Container for a multivariate distribution built from 1D priors.
- Parameters:
priors (dict or list) – Collection of Prior instances. If a dict is provided, the values are used in the order of the keys.
- destandardize(samples)[source]
Reverse standardization using component prior statistics.
- Parameters:
samples (dict) – Mapping from parameter name to standardized samples.
- Returns:
De-standardized samples.
- Return type:
dict
- property maximums
Maximum bounds of the component priors.
- Returns:
Mapping from parameter name to maximum value.
- Return type:
dict
- property means
Means of the component priors.
- Returns:
Mapping from parameter name to mean.
- Return type:
dict
- property metadata
Metadata associated with the multivariate prior.
- Returns:
Dictionary containing:
means (dict) – Mapping from parameter name to mean.
stds (dict) – Mapping from parameter name to standard deviation.
bounds (dict) – Mapping from parameter name to (min, max).
inference_parameters (list of str) – Parameter names in order.
- Return type:
dict
- property minimums
Minimum bounds of the component priors.
- Returns:
Mapping from parameter name to minimum value.
- Return type:
dict
- property names
Names of the component priors.
- Returns:
Names of each prior in order.
- Return type:
list of str
- sample(num_samples, standardize=False)[source]
Sample from each component prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
dict
- sample_array(num_samples, standardize=False)[source]
Sample from the prior distribution and return a NumPy array.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior as an array of shape (N, D).
- Return type:
numpy.ndarray
- standardize(samples)[source]
Standardize samples using component prior statistics.
- Parameters:
samples (dict) – Mapping from parameter name to samples.
- Returns:
Standardized samples.
- Return type:
dict
- property stds
Standard deviations of the component priors.
- Returns:
Mapping from parameter name to standard deviation.
- Return type:
dict
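The container pattern behind MultivariatePrior can be sketched in a few lines: each 1D component is sampled independently, and the results are returned keyed by parameter name. The class and sampler names below are hypothetical stand-ins, not this module's API.

```python
import random

class TinyMultivariatePrior:
    """Minimal stand-in for a multivariate prior: a dict mapping parameter
    names to independent 1D samplers (callables num_samples -> list)."""

    def __init__(self, priors):
        self.priors = priors

    @property
    def names(self):
        # Component names in dict-key order.
        return list(self.priors)

    def sample(self, num_samples):
        # Each component is sampled independently; results keyed by name.
        return {name: p(num_samples) for name, p in self.priors.items()}

rng = random.Random(0)
prior = TinyMultivariatePrior({
    "amplitude": lambda n: [rng.uniform(0.0, 1.0) for _ in range(n)],
    "frequency": lambda n: [rng.uniform(1e-4, 1e-2) for _ in range(n)],
})
draws = prior.sample(100)
```

A dict-of-name-to-samples layout makes per-parameter standardization straightforward, since each component's mean and std can be applied independently.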
- class fast_lisa_subtraction.priors.analytical.Prior(minimum=None, maximum=None, name='Parameter', device='cpu')[source]
Bases: object
Base class for analytical priors.
- Parameters:
minimum (float or None, optional) – Lower bound of the prior support. If None, the bound is estimated from sampled values.
maximum (float or None, optional) – Upper bound of the prior support. If None, the bound is estimated from sampled values.
name (str, optional) – Parameter name.
device (str, optional) – Torch device used for sampling and computations.
- clip(samples)[source]
Filter samples to the prior support.
- Parameters:
samples (torch.Tensor) – Samples to clip.
- Returns:
Samples within [minimum, maximum].
- Return type:
torch.Tensor
- destandardize(samples)[source]
Reverse standardization to the original scale.
- Parameters:
samples (torch.Tensor) – Standardized samples.
- Returns:
Samples in the original scale.
- Return type:
torch.Tensor
- property maximum
Upper bound of the prior support.
- Returns:
Maximum value of the support.
- Return type:
float or torch.Tensor
- property mean
Mean of the prior.
- Returns:
Mean estimated from sampled values.
- Return type:
torch.Tensor
- property minimum
Lower bound of the prior support.
- Returns:
Minimum value of the support.
- Return type:
float or torch.Tensor
- property name
Parameter name.
- Returns:
Name of the parameter.
- Return type:
str
- sample(num_samples, standardize=False)[source]
Draw samples from the prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
- Raises:
NotImplementedError – If the subclass does not implement sampling.
- standardize(samples)[source]
Standardize samples using the prior mean and standard deviation.
- Parameters:
samples (torch.Tensor) – Samples to standardize.
- Returns:
Standardized samples.
- Return type:
torch.Tensor
- property std
Standard deviation of the prior.
- Returns:
Standard deviation estimated from sampled values.
- Return type:
torch.Tensor
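The standardize/destandardize pair is an exact affine inverse: standardize maps samples to zero mean and unit variance, and destandardize undoes it. A plain-Python sketch of this round trip (function names illustrative):

```python
def standardize(samples, mean, std):
    # Map samples to zero mean and unit variance (training convention).
    return [(x - mean) / std for x in samples]

def destandardize(samples, mean, std):
    # Exact inverse of standardize.
    return [z * std + mean for z in samples]

xs = [1.0, 2.5, 4.0]
roundtrip = destandardize(standardize(xs, 2.0, 0.5), 2.0, 0.5)
```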
- class fast_lisa_subtraction.priors.analytical.Uniform(minimum, maximum, name=None, device='cpu')[source]
Bases: Prior
Uniform prior distribution.
- Parameters:
minimum (float) – Minimum value of the prior.
maximum (float) – Maximum value of the prior.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
- sample(num_samples, standardize=False)[source]
Sample from the uniform prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
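For a uniform prior the standardization moments are known in closed form: mean (a + b)/2 and standard deviation (b − a)/√12. The stdlib sketch below (hypothetical helper, not the module's implementation) uses these rather than sample estimates.

```python
import math
import random

def sample_uniform(minimum, maximum, num_samples, standardize=False, seed=0):
    """Draw Uniform(minimum, maximum) samples, optionally standardized."""
    rng = random.Random(seed)
    xs = [minimum + (maximum - minimum) * rng.random()
          for _ in range(num_samples)]
    if standardize:
        # Closed-form moments of Uniform(a, b): mean (a+b)/2, std (b-a)/sqrt(12).
        mean = 0.5 * (minimum + maximum)
        std = (maximum - minimum) / math.sqrt(12.0)
        xs = [(x - mean) / std for x in xs]
    return xs

samples = sample_uniform(-1.0, 1.0, 1000)
std_samples = sample_uniform(-1.0, 1.0, 1000, standardize=True)
```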
- class fast_lisa_subtraction.priors.analytical.UniformCosine(minimum=0, maximum=3.141592653589793, name=None, device='cpu')[source]
Bases: Prior
Angle prior uniform in the cosine of the angle.
This is commonly used for inclination angles.
- Parameters:
minimum (float, optional) – Minimum angle in radians.
maximum (float, optional) – Maximum angle in radians.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
Notes
Uniformity in \(\cos(\theta)\) implies a density proportional to \(\sin(\theta)\) over the support.
- sample(num_samples, standardize=False)[source]
Sample from a cosine-uniform angle prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
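A cosine-uniform angle can be drawn by inverse transform: sample cos(θ) uniformly on [cos(maximum), cos(minimum)] (cos is decreasing on [0, π]) and invert with arccos, which yields a density proportional to sin(θ). The helper name is illustrative, not the module's implementation.

```python
import math
import random

def sample_uniform_cosine(minimum, maximum, num_samples, seed=0):
    """Draw cos(theta) uniformly, then invert via acos."""
    rng = random.Random(seed)
    c_lo, c_hi = math.cos(maximum), math.cos(minimum)
    return [math.acos(c_lo + (c_hi - c_lo) * rng.random())
            for _ in range(num_samples)]

angles = sample_uniform_cosine(0.0, math.pi, 1000)
```

Over the full range [0, π] the sin(θ) density is symmetric about π/2, so the sample mean sits near π/2.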
- class fast_lisa_subtraction.priors.analytical.UniformSine(minimum=-1.5707963267948966, maximum=1.5707963267948966, name=None, device='cpu')[source]
Bases: Prior
Angle prior uniform in the sine of the angle.
This is commonly used for ecliptic latitude angles.
- Parameters:
minimum (float, optional) – Minimum angle in radians.
maximum (float, optional) – Maximum angle in radians.
name (str, optional) – Name of the prior parameter.
device (str, optional) – Torch device used for sampling.
Notes
Uniformity in \(\sin(\theta)\) implies a density proportional to \(\cos(\theta)\) over the support.
- sample(num_samples, standardize=False)[source]
Sample from a sine-uniform angle prior.
- Parameters:
num_samples (int) – Number of samples to draw.
standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False
- Returns:
Samples drawn from the prior.
- Return type:
torch.Tensor
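The sine-uniform case mirrors the cosine one: sample sin(θ) uniformly on [sin(minimum), sin(maximum)] (sin is increasing on [−π/2, π/2]) and invert with arcsin, giving a density proportional to cos(θ). Again a hedged sketch with an illustrative name:

```python
import math
import random

def sample_uniform_sine(minimum, maximum, num_samples, seed=0):
    """Draw sin(theta) uniformly, then invert via asin."""
    rng = random.Random(seed)
    s_lo, s_hi = math.sin(minimum), math.sin(maximum)
    return [math.asin(s_lo + (s_hi - s_lo) * rng.random())
            for _ in range(num_samples)]

latitudes = sample_uniform_sine(-math.pi / 2, math.pi / 2, 1000)
```

For ecliptic latitude this concentrates samples near the equator (θ = 0), where cos(θ) peaks.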