fast_lisa_subtraction.priors.population module

Some standard population priors. See Abbott et al. (2021), arxiv.org/pdf/2010.14533.

class fast_lisa_subtraction.priors.population.BrokenPowerLaw(alpha, beta, b, minimum, maximum, delta_p=None, name=None, device='cpu')[source]

Bases: Prior

Broken power-law prior distribution.

Notes

The density is piecewise:

\[p(\theta) \propto \theta^{\alpha}, \quad \theta_{\min} \le \theta < \theta_{\mathrm{break}},\]
\[p(\theta) \propto \theta^{\beta}\, \theta_{\mathrm{break}}^{\alpha-\beta}, \quad \theta_{\mathrm{break}} \le \theta \le \theta_{\max}.\]

The break point is determined by b. If 0 <= b <= 1, then \(\theta_{\mathrm{break}} = \theta_{\min} + b(\theta_{\max}-\theta_{\min})\); otherwise b is treated as an absolute break value within the support.

Optional low-end smoothing \(S(\theta; \delta)\) can be applied in the interval \([\theta_{\min}, \theta_{\min}+\delta]\).
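The break-point rule and the low-end smoothing can be sketched as follows. This is an illustrative stand-alone helper, not the class implementation; the smoothing function follows the sigmoid-like form of Abbott et al. (2021), Eq. (B6), which this class is assumed to mirror.

```python
import math

def break_point(b, minimum, maximum):
    """Interpret the break parameter b as described above: a fraction
    of the support if 0 <= b <= 1, otherwise an absolute break value
    that must lie within [minimum, maximum]."""
    if 0.0 <= b <= 1.0:
        return minimum + b * (maximum - minimum)
    if not (minimum <= b <= maximum):
        raise ValueError("absolute break must lie within the support")
    return b

def smoothing(theta, minimum, delta):
    """Low-end smoothing S(theta; delta), rising from 0 to 1 over
    [minimum, minimum + delta] (Abbott et al. 2021-style form; the
    exact shape used by the class is an assumption)."""
    if theta < minimum:
        return 0.0
    if theta >= minimum + delta:
        return 1.0
    tp = theta - minimum
    f = math.exp(delta / tp + delta / (tp - delta))
    return 1.0 / (f + 1.0)
```

Note that a b value of, say, 0.5 is always read as a fraction, so an absolute break that happens to fall in [0, 1] cannot be expressed directly.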

Parameters:
  • alpha (float) – Power-law index for \(\theta < \theta_{\mathrm{break}}\).

  • beta (float) – Power-law index for \(\theta \ge \theta_{\mathrm{break}}\).

  • b (float) – Break parameter (fraction or absolute).

  • minimum (float) – Lower bound of the support.

  • maximum (float) – Upper bound of the support.

  • delta_p (float or None, optional) – Optional smoothing width \(\delta\). If None or 0, smoothing is disabled.

  • name (str or None, optional) – Parameter name.

  • device (str, optional) – Torch device used for sampling.

property cdf

Cumulative distribution function evaluated on the internal grid.

Returns:

Discretized CDF values over self.x.

Return type:

torch.Tensor

property pdf

Probability density function evaluated on the internal grid.

Returns:

Discretized PDF values over self.x.

Return type:

torch.Tensor

property pdf_max

Maximum value of the discretized PDF.

Returns:

Maximum PDF value.

Return type:

torch.Tensor

sample(num_samples: int, standardize: bool = False)[source]

Sample from the broken power-law prior.

Parameters:
  • num_samples (int) – Number of samples to draw.

  • standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False

Returns:

Samples drawn from the prior.

Return type:

torch.Tensor
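Since the broken power-law CDF is only available on the internal grid, sampling presumably inverts the discretized CDF. A minimal NumPy sketch of that approach (the class itself works with torch tensors):

```python
import numpy as np

def grid_inverse_cdf_sample(x, pdf, num_samples, rng):
    """Inverse-CDF sampling from a density tabulated on a grid,
    mirroring the discretized pdf/cdf properties above."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]           # normalize so the CDF ends at 1
    u = rng.random(num_samples)   # uniform draws in [0, 1)
    return np.interp(u, cdf, x)   # invert the CDF by interpolation

# e.g. an unnormalized theta**-1.5 density on [1, 4]
x = np.linspace(1.0, 4.0, 1001)
samples = grid_inverse_cdf_sample(x, x**-1.5, 10_000,
                                  np.random.default_rng(0))
```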

class fast_lisa_subtraction.priors.population.PowerLaw(alpha, minimum, maximum, name=None, device='cpu')[source]

Bases: Prior

Power-law prior distribution.

\[\begin{split}p(\theta) \propto \begin{cases} \theta^{\alpha}, & \theta_{\min} \le \theta \le \theta_{\max} \\ 0, & \text{otherwise} \end{cases}\end{split}\]

where \(\theta_{\min}\) and \(\theta_{\max}\) are the minimum and maximum values of the prior, respectively.
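This density has a closed-form inverse CDF, which is the standard way to sample it. A hypothetical helper (not part of the class API) illustrating the formula:

```python
import numpy as np

def power_law_ppf(u, alpha, minimum, maximum):
    """Analytic inverse CDF of the normalized p(theta) ∝ theta**alpha
    on [minimum, maximum], for u in [0, 1]."""
    if np.isclose(alpha, -1.0):
        # log-uniform special case
        return minimum * (maximum / minimum) ** u
    a1 = alpha + 1.0
    lo, hi = minimum ** a1, maximum ** a1
    return (lo + u * (hi - lo)) ** (1.0 / a1)
```

u = 0 maps to the minimum and u = 1 to the maximum; alpha = 0 recovers a uniform prior.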

Parameters:
  • alpha (float) – Power-law index \(\alpha\).

  • minimum (float) – Minimum value of the prior \(\theta_{\min}\).

  • maximum (float) – Maximum value of the prior \(\theta_{\max}\).

  • name (str, optional) – Name of the prior parameter.

  • device (str, optional) – Torch device used for sampling.

property cdf

Cumulative distribution function evaluated on the internal grid.

Returns:

Discretized CDF values over self.x.

Return type:

torch.Tensor

log_prob(x, standardize=False)[source]

Evaluate the log-probability of the power-law prior.

Parameters:
  • x (array-like or torch.Tensor) – Points at which to evaluate log p(x).

  • standardize (bool, optional) – If True, interpret x as standardized and de-standardize before evaluating.

Returns:

Log-probability values.

Return type:

torch.Tensor
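For the normalized power law the log-probability also has a closed form, useful for cross-checking log_prob outputs. A hypothetical NumPy reference (alpha != -1; the class evaluates this with torch tensors):

```python
import numpy as np

def power_law_log_prob(x, alpha, minimum, maximum):
    """log p(x) = alpha*log(x) + log((alpha+1) /
    (maximum**(alpha+1) - minimum**(alpha+1))), with -inf off-support."""
    x = np.asarray(x, dtype=float)
    a1 = alpha + 1.0
    log_norm = np.log(a1 / (maximum ** a1 - minimum ** a1))
    in_support = (x >= minimum) & (x <= maximum)
    # guard the log argument so off-support points don't emit warnings
    safe_x = np.where(in_support, x, 1.0)
    return np.where(in_support, alpha * np.log(safe_x) + log_norm, -np.inf)
```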

property pdf

Probability density function evaluated on the internal grid.

Returns:

Discretized PDF values over self.x.

Return type:

torch.Tensor

property pdf_max

Maximum value of the discretized PDF.

Returns:

Maximum PDF value.

Return type:

torch.Tensor

sample(num_samples, standardize=False)[source]

Sample from the power-law prior.

Parameters:
  • num_samples (int) – Number of samples to draw.

  • standardize (bool, optional) – Whether to standardize the samples to zero mean and unit variance (for training purposes). Default: False

Returns:

Samples drawn from the prior.

Return type:

torch.Tensor
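One plausible reading of standardize=True is an affine map of the draws to zero mean and unit variance; this is an assumption, since the class may instead use fixed statistics derived from the analytic prior. A NumPy sketch of that reading:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(1.0, 10.0, 50_000)

# Affine-map the draws to zero mean and unit variance, as used for
# training (assumed behavior of standardize=True).
standardized = (samples - samples.mean()) / samples.std()
```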

class fast_lisa_subtraction.priors.population.PowerLawPlusPeak(alpha, beta, minimum, maximum, peak, name='Parameter')[source]

Bases: Prior

Power-law plus peak prior distribution.

Parameters:
  • alpha (float) – Power-law index for the background.

  • beta (float) – Power-law index for the peak component.

  • minimum (float) – Lower bound of the support.

  • maximum (float) – Upper bound of the support.

  • peak (float) – Peak location or scale parameter (implementation-specific).

  • name (str, optional) – Name of the prior parameter.

sample(N)[source]

Sample from the power-law plus peak prior.

Parameters:

N (int) – Number of samples to draw.

Raises:

NotImplementedError – Sampling for this prior is not implemented.
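Until sampling is implemented, one way to sketch it is as a two-component mixture of a power-law background and a Gaussian peak, in the spirit of Abbott et al. (2021). sigma_peak and frac_peak below are hypothetical knobs not exposed by the class signature above:

```python
import numpy as np

def sample_plpp(N, alpha, minimum, maximum, peak,
                sigma_peak=2.0, frac_peak=0.1, seed=0):
    """Mixture sampling sketch: draw from the power-law background or
    the Gaussian peak per sample, truncating the peak to the support."""
    rng = np.random.default_rng(seed)
    a1 = alpha + 1.0
    u = rng.random(N)
    # analytic inverse-CDF draw from the power-law background
    background = (minimum ** a1 + u * (maximum ** a1 - minimum ** a1)) ** (1.0 / a1)
    gaussian = rng.normal(peak, sigma_peak, N)
    out = np.where(rng.random(N) < frac_peak, gaussian, background)
    # redraw any samples that fell outside the support
    bad = (out < minimum) | (out > maximum)
    while bad.any():
        out[bad] = rng.normal(peak, sigma_peak, bad.sum())
        bad = (out < minimum) | (out > maximum)
    return out

samples = sample_plpp(5_000, alpha=-2.3, minimum=5.0, maximum=80.0, peak=35.0)
```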