pycbc.inference.models package

Submodules

pycbc.inference.models.analytic module

This module provides models that have analytic solutions for the log likelihood.

class pycbc.inference.models.analytic.TestEggbox(variable_params, **kwargs)[source]

Bases: BaseModel

The test distribution is an ‘eggbox’ function:

\[\log \mathcal{L}(\Theta) = \left[ 2+\prod_{i=1}^{n}\cos\left(\frac{\theta_{i}}{2}\right)\right]^{5}\]

The number of dimensions is set by the number of variable_params that are passed.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • **kwargs – All other keyword arguments are passed to BaseModel.
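
Examples

For illustration, a minimal sketch in the style of the TestNormal example below (assuming the log likelihood follows the formula above; at the origin it evaluates to \((2+1)^{5} = 243\)):

>>> m = TestEggbox(['x', 'y'])
>>> m.update(x=0., y=0.)
>>> m.loglikelihood  # (2 + cos(0)*cos(0))**5 = 243.0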

name = 'test_eggbox'
class pycbc.inference.models.analytic.TestNormal(variable_params, mean=None, cov=None, **kwargs)[source]

Bases: BaseModel

The test distribution is a multivariate normal distribution.

The number of dimensions is set by the number of variable_params that are passed. For details on the distribution used, see scipy.stats.multivariate_normal.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • mean (array-like, optional) – The mean values of the parameters. If None is provided, 0 will be used for all parameters.

  • cov (array-like, optional) – The covariance matrix of the parameters. If None is provided, unit variance will be used for all parameters, with cross-terms set to 0.

  • **kwargs – All other keyword arguments are passed to BaseModel.

Examples

Create a 2D model with zero mean and unit variance:

>>> m = TestNormal(['x', 'y'])

Set the current parameters and evaluate the log posterior:

>>> m.update(x=-0.2, y=0.1)
>>> m.logposterior
-1.8628770664093453

See the current stats that were evaluated:

>>> m.current_stats
{'logjacobian': 0.0, 'loglikelihood': -1.8628770664093453, 'logprior': 0.0}

name = 'test_normal'
class pycbc.inference.models.analytic.TestPosterior(variable_params, posterior_file, nsamples, **kwargs)[source]

Bases: BaseModel

Build a test posterior from a set of samples using a KDE.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • posterior_file (hdf file) – A compatible pycbc inference output file from which posterior samples can be read.

  • nsamples (int) – Number of samples to draw from posterior file to build KDE.

  • **kwargs – All other keyword arguments are passed to BaseModel.

name = 'test_posterior'
class pycbc.inference.models.analytic.TestPrior(variable_params, **kwargs)[source]

Bases: BaseModel

Uses the prior as the test distribution.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied. Must have length 2.

  • **kwargs – All other keyword arguments are passed to BaseModel.

name = 'test_prior'
class pycbc.inference.models.analytic.TestRosenbrock(variable_params, **kwargs)[source]

Bases: BaseModel

The test distribution is the Rosenbrock function:

\[\log \mathcal{L}(\Theta) = -\sum_{i=1}^{n-1}[ (1-\theta_{i})^{2}+100(\theta_{i+1} - \theta_{i}^{2})^{2}]\]

The number of dimensions is set by the number of variable_params that are passed.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • **kwargs – All other keyword arguments are passed to BaseModel.
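
Examples

A minimal sketch in the style of the TestNormal example above (assuming the log likelihood follows the formula above, it peaks at zero when every parameter equals 1):

>>> m = TestRosenbrock(['x', 'y'])
>>> m.update(x=1., y=1.)
>>> m.loglikelihood  # -((1 - 1)**2 + 100*(1 - 1**2)**2) = 0.0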

name = 'test_rosenbrock'
class pycbc.inference.models.analytic.TestVolcano(variable_params, **kwargs)[source]

Bases: BaseModel

The test distribution is a two-dimensional ‘volcano’ function:

\[\Theta = \sqrt{\theta_{1}^{2} + \theta_{2}^{2}}\]

\[\log \mathcal{L}(\Theta) = 25\left(e^{\frac{-\Theta}{35}} + \frac{1}{2\sqrt{2\pi}} e^{-\frac{(\Theta-5)^{2}}{8}}\right)\]
Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied. Must have length 2.

  • **kwargs – All other keyword arguments are passed to BaseModel.
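
Examples

A minimal sketch in the style of the TestNormal example above (assuming the log likelihood follows the formula above; at the origin \(\Theta = 0\), so it evaluates to \(25\left(1 + e^{-25/8}/(2\sqrt{2\pi})\right) \approx 25.22\)):

>>> m = TestVolcano(['x', 'y'])
>>> m.update(x=0., y=0.)
>>> m.loglikelihood  # approximately 25.22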

name = 'test_volcano'

pycbc.inference.models.base module

Base class for models.

class pycbc.inference.models.base.BaseModel(variable_params, static_params=None, prior=None, sampling_transforms=None, waveform_transforms=None, **kwargs)[source]

Bases: object

Base class for all models.

Given some model \(h\) with parameters \(\Theta\), Bayes Theorem states that the probability of observing parameter values \(\vartheta\) is:

\[p(\vartheta|h) = \frac{p(h|\vartheta) p(\vartheta)}{p(h)}.\]

Here:

  • \(p(\vartheta|h)\) is the posterior probability;

  • \(p(h|\vartheta)\) is the likelihood;

  • \(p(\vartheta)\) is the prior;

  • \(p(h)\) is the evidence.

This class defines properties and methods for evaluating the log likelihood, log prior, and log posterior. A set of parameter values is set using the update method. Calling the class’s log(likelihood|prior|posterior) properties will then evaluate the model at those parameter values.

Classes that inherit from this class must implement a _loglikelihood function that can be called by loglikelihood.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • static_params (dict, optional) – A dictionary of parameter names -> values to keep fixed.

  • prior (callable, optional) – A callable class or function that computes the log of the prior. If None is provided, _noprior will be used, which returns 0 for all parameter values.

  • sampling_params (list, optional) – Replace one or more of the variable_params with the given parameters for sampling.

  • replace_parameters (list, optional) – The variable_params to replace with sampling parameters. Must be the same length as sampling_params.

  • sampling_transforms (list, optional) – List of transforms to use to go between the variable_params and the sampling parameters. Required if sampling_params is not None.

  • waveform_transforms (list, optional) – A list of transforms to convert the variable_params into something understood by the likelihood model. This is useful if the prior is more easily parameterized in parameters that are different than what the likelihood is most easily defined in. Since these are used solely for converting parameters, and not for rescaling the parameter space, a Jacobian is not required for these transforms.
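
Examples

Since subclasses need only implement _loglikelihood, a minimal subclass might look like the following sketch (the name Gaussian1D and the unit-variance likelihood are hypothetical; only the update/loglikelihood workflow described above is assumed):

>>> import numpy
>>> from pycbc.inference.models.base import BaseModel
>>> class Gaussian1D(BaseModel):
...     name = 'gaussian_1d'
...     def _loglikelihood(self):
...         # current_params holds the values set by the last update()
...         x = self.current_params['x']
...         return -0.5*x**2 - 0.5*numpy.log(2*numpy.pi)
>>> m = Gaussian1D(['x'])
>>> m.update(x=0.5)
>>> m.loglikelihood  # -0.5*0.25 - 0.5*log(2*pi)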

property current_params
property current_stats

Return the default_stats as a dict.

This does no computation. It only returns what has already been calculated. If a stat hasn’t been calculated, it will be returned as numpy.nan.

Returns:

Dictionary of stat names -> current stat values.

Return type:

dict

property default_stats

The stats that get_current_stats returns by default.

static extra_args_from_config(cp, section, skip_args=None, dtypes=None)[source]

Gets any additional keyword in the given config file.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • section (str) – The name of the section to read.

  • skip_args (list of str, optional) – Names of arguments to skip.

  • dtypes (dict, optional) – A dictionary of arguments -> data types. If an argument is found in the dict, it will be cast to the given datatype. Otherwise, the argument’s value will just be read from the config file (and thus be a string).

Returns:

Dictionary of keyword arguments read from the config file.

Return type:

dict

classmethod from_config(cp, **kwargs)[source]

Initializes an instance of this class from the given config file.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • **kwargs – All additional keyword arguments are passed to the class. Any provided keyword will override what is in the config file.

get_current_stats(names=None)[source]

Return one or more of the current stats as a tuple.

This function does no computation. It only returns what has already been calculated. If a stat hasn’t been calculated, it will be returned as numpy.nan.

Parameters:

names (list of str, optional) – Specify the names of the stats to retrieve. If None (the default), will return default_stats.

Returns:

The current values of the requested stats, as a tuple. The order of the stats is the same as the names.

Return type:

tuple

property logjacobian

The log jacobian of the sampling transforms at the current position.

If no sampling transforms were provided, will just return 0.

Parameters:

**params – The keyword arguments should specify values for all of the variable args and all of the sampling args.

Returns:

The value of the log jacobian.

Return type:

float

property loglikelihood

The log likelihood at the current parameters.

This will initially try to return the current_stats.loglikelihood. If that raises an AttributeError, will call _loglikelihood to calculate it and store it in current_stats.

property logposterior

Returns the log of the posterior of the current parameter values.

The logprior is calculated first. If the logprior returns -inf (possibly indicating a non-physical point), then the loglikelihood is not called.

property logprior

Returns the log prior at the current parameters.

name = None
static prior_from_config(cp, variable_params, static_params, prior_section, constraint_section)[source]

Gets arguments and keyword arguments from a config file.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • variable_params (list) – List of variable model parameter names.

  • static_params (dict) – Dictionary of static model parameters and their values.

  • prior_section (str) – Section to read prior(s) from.

  • constraint_section (str) – Section to read constraint(s) from.

Returns:

The prior.

Return type:

pycbc.distributions.JointDistribution

prior_rvs(size=1, prior=None)[source]

Returns random variates drawn from the prior.

If the sampling_params are different from the variable_params, the variates are transformed to the sampling_params parameter space before being returned.

Parameters:
  • size (int, optional) – Number of random values to return for each parameter. Default is 1.

  • prior (JointDistribution, optional) – Use the given prior to draw values rather than the saved prior.

Returns:

A field array of the random values.

Return type:

FieldArray

property sampling_params

Returns the sampling parameters.

If sampling_transforms is None, this is the same as the variable_params.

property static_params

Returns the model’s static arguments.

update(**params)[source]

Updates the current parameter positions and resets stats.

If any sampling transforms are specified, they are applied to the params before being stored.

property variable_params

Returns the model parameters.

write_metadata(fp, group=None)[source]

Writes metadata to the given file handler.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.base.ModelStats[source]

Bases: object

Class to hold model’s current stat values.

getstats(names, default=nan)[source]

Get the requested stats as a tuple.

If a requested stat is not an attribute (implying it hasn’t been stored), then the default value is returned for that stat.

Parameters:
  • names (list of str) – The names of the stats to get.

  • default (float, optional) – What to return if a requested stat is not an attribute of self. Default is numpy.nan.

Returns:

A tuple of the requested stats.

Return type:

tuple
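
For illustration, a minimal sketch (per the description above, stats are stored as plain attributes, so an unset stat falls back to the default):

>>> from pycbc.inference.models.base import ModelStats
>>> stats = ModelStats()
>>> stats.loglikelihood = -1.86
>>> stats.getstats(['loglikelihood', 'logprior'])  # logprior unset -> nan
(-1.86, nan)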

getstatsdict(names, default=nan)[source]

Get the requested stats as a dictionary.

If a requested stat is not an attribute (implying it hasn’t been stored), then the default value is returned for that stat.

Parameters:
  • names (list of str) – The names of the stats to get.

  • default (float, optional) – What to return if a requested stat is not an attribute of self. Default is numpy.nan.

Returns:

A dictionary of the requested stats.

Return type:

dict

property statnames

Returns the names of the stats that have been stored.

class pycbc.inference.models.base.SamplingTransforms(variable_params, sampling_params, replace_parameters, sampling_transforms)[source]

Bases: object

Provides methods for transforming between sampling parameter space and model parameter space.

apply(samples, inverse=False)[source]

Applies the sampling transforms to the given samples.

Parameters:
  • samples (dict or FieldArray) – The samples to apply the transforms to.

  • inverse (bool, optional) – Whether to apply the inverse transforms (i.e., go from the sampling args to the variable_params). Default is False.

Returns:

The transformed samples, along with the original samples.

Return type:

dict or FieldArray

classmethod from_config(cp, variable_params)[source]

Gets sampling transforms specified in a config file.

Sampling parameters and the parameters they replace are read from the sampling_params section, if it exists. Sampling transforms are read from the sampling_transforms section(s), using transforms.read_transforms_from_config.

An AssertionError is raised if no sampling_params section exists in the config file.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • variable_params (list) – List of parameter names of the original variable params.

Returns:

A sampling transforms class.

Return type:

SamplingTransforms

logjacobian(**params)[source]

Returns the log of the jacobian needed to transform pdfs in the variable_params parameter space to the sampling_params parameter space.

Let \(\mathbf{x}\) be the set of variable parameters, \(\mathbf{y} = f(\mathbf{x})\) the set of sampling parameters, and \(p_x(\mathbf{x})\) a probability density function defined over \(\mathbf{x}\). The corresponding pdf in \(\mathbf{y}\) is then:

\[p_y(\mathbf{y}) = p_x(\mathbf{x})\left|\mathrm{det}\,\mathbf{J}_{ij}\right|,\]

where \(\mathbf{J}_{ij}\) is the Jacobian of the inverse transform \(\mathbf{x} = g(\mathbf{y})\). This has elements:

\[\mathbf{J}_{ij} = \frac{\partial g_i}{\partial{y_j}}\]

This function returns \(\log \left|\mathrm{det}\,\mathbf{J}_{ij}\right|\).

Parameters:

**params – The keyword arguments should specify values for all of the variable args and all of the sampling args.

Returns:

The value of the log jacobian.

Return type:

float
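
As an illustrative check of the formula: for a single parameter with sampling transform \(y = \log x\), the inverse is \(x = g(y) = e^{y}\), so \(\mathbf{J} = \mathrm{d}g/\mathrm{d}y = e^{y}\) and this method would return \(\log\left|e^{y}\right| = y\).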

pycbc.inference.models.base.check_for_cartesian_spins(which, variable_params, static_params, waveform_transforms, cp, ignore)[source]

Checks that if any spin parameters exist, cartesian spins also exist.

This looks for parameters starting with spinN in the variable and static params, where N is either 1 or 2 (specified by the which argument). If any parameters are found with those names, the params and the output of the waveform transforms are checked to see that there is at least one of spinN(x|y|z). If not, a ValueError is raised.

This check will not be done if the config file has a section given by the ignore argument.

Parameters:
  • which ({1, 2}) – Which component to check for. Must be either 1 or 2.

  • variable_params (list) – List of the variable parameters.

  • static_params (dict) – The dictionary of static params.

  • waveform_transforms (list) – List of the transforms that will be applied to the variable and static params before being passed to the waveform generator.

  • cp (ConfigParser) – The config file.

  • ignore (str) – The section to check for in the config file. If the section is present in the config file, the check will not be done.

pycbc.inference.models.base.read_sampling_params_from_config(cp, section_group=None, section='sampling_params')[source]

Reads sampling parameters from the given config file.

Parameters are read from the [({section_group}_){section}] section. The options should list the variable args to transform; the parameters they point to should list the parameters they are to be transformed to for sampling. If multiple parameters are transformed together, they should be comma-separated. Example:

[sampling_params]
mass1, mass2 = mchirp, logitq
spin1_a = logitspin1_a

Note that only the final sampling parameters should be listed, even if multiple intermediate transforms are needed. (In the above example, a transform is needed to go from mass1, mass2 to mchirp, q, then another one needed to go from q to logitq.) These transforms should be specified in separate sections; see transforms.read_transforms_from_config for details.

Parameters:
  • cp (WorkflowConfigParser) – An open config parser to read from.

  • section_group (str, optional) – Append {section_group}_ to the section name. Default is None.

  • section (str, optional) – The name of the section. Default is ‘sampling_params’.

Returns:

  • sampling_params (list) – The list of sampling parameters to use instead.

  • replaced_params (list) – The list of variable args to replace in the sampler.
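
For illustration, a sketch of reading the example section above (assuming it is saved in a file config.ini; WorkflowConfigParser is in pycbc.workflow):

>>> from pycbc.workflow import WorkflowConfigParser
>>> from pycbc.inference.models.base import read_sampling_params_from_config
>>> cp = WorkflowConfigParser(['config.ini'])
>>> sampling_params, replaced_params = read_sampling_params_from_config(cp)
>>> # with the section above, sampling_params would contain mchirp, logitq,
>>> # and logitspin1_a; replaced_params would contain mass1, mass2, spin1_a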

pycbc.inference.models.base_data module

Base classes for models with data.

class pycbc.inference.models.base_data.BaseDataModel(variable_params, data, recalibration=None, gates=None, injection_file=None, no_save_data=False, **kwargs)[source]

Bases: BaseModel

Base class for models that require data and a waveform generator.

This adds properties for the log of the likelihood that the data contain noise, lognl, and the log likelihood ratio, loglr.

Classes that inherit from this class must define _loglr and _lognl functions, in addition to the _loglikelihood requirement inherited from BaseModel.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data.

  • recalibration (dict of pycbc.calibration.Recalibrate, optional) – Dictionary of detectors -> recalibration class instances for recalibrating data.

  • gates (dict of tuples, optional) – Dictionary of detectors -> tuples specifying gate times. The sort of thing returned by pycbc.gate.gates_from_cli.

  • injection_file (str, optional) – If an injection was added to the data, the name of the injection file used. If provided, the injection parameters will be written to file when write_metadata is called.

  • **kwargs – All other keyword arguments are passed to BaseModel.

See BaseModel for additional attributes and properties.

property data

Dictionary mapping detector names to data.

Type:

dict

property detectors

Returns the detectors used.

Type:

list

property loglr

The log likelihood ratio at the current parameters, or the inner products <s|h> and <h|h> if the flag self.return_sh_hh is set to True.

This will initially try to return the current_stats.loglr. If that raises an AttributeError, will call _loglr to calculate it and store it in current_stats.

property lognl

The log likelihood of the model assuming the data is noise.

This will initially try to return the current_stats.lognl. If that raises an AttributeError, will call _lognl to calculate it and store it in current_stats.

property logplr

Returns the log of the prior-weighted likelihood ratio at the current parameter values.

The logprior is calculated first. If the logprior returns -inf (possibly indicating a non-physical point), then loglr is not called.

write_metadata(fp, group=None)[source]

Adds data to the metadata that’s written.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

pycbc.inference.models.brute_marg module

This module provides model classes that do brute force marginalization at the likelihood level.

class pycbc.inference.models.brute_marg.BruteLISASkyModesMarginalize(variable_params, cores=1, loop_polarization=False, base_model=None, **kwds)[source]

Bases: BaseGaussianNoise

classmethod from_config(cp, **kwargs)[source]

Initializes an instance of this class from the given config file.

In addition to [model], a data_section (default [data]) must be in the configuration file. The data section specifies settings for loading data and estimating PSDs. See the online documentation for more details.

The following options are read from the [model] section, in addition to name (which must be set):

  • {{DET}}-low-frequency-cutoff = FLOAT : The low frequency cutoff to use for each detector {{DET}}. A cutoff must be provided for every detector that may be analyzed (any additional detectors are ignored).

  • {{DET}}-high-frequency-cutoff = FLOAT : (Optional) A high frequency cutoff for each detector. If not provided, the Nyquist frequency is used.

  • check-for-valid-times = : (Optional) If provided, will check that there are no data quality flags on during the analysis segment and the segment used for PSD estimation in each detector. To check for flags, pycbc.dq.query_flag() is used, with settings pulled from the dq-* options in the [data] section. If a detector has bad data quality during either the analysis segment or PSD segment, it will be removed from the analysis.

  • shift-psd-times-to-valid = : (Optional) If provided, the segment used for PSD estimation will automatically be shifted left or right until a continuous block of data with no data quality issues can be found. If no block can be found with a maximum shift of +/- the requested psd segment length, the detector will not be analyzed.

  • err-on-missing-detectors = : Raises an error if any detector is removed from the analysis because a valid time could not be found. Otherwise, a warning is printed to screen and the detector is removed from the analysis.

  • normalize = : (Optional) Turn on the normalization factor.

  • ignore-failed-waveforms = : Sets the ignore_failed_waveforms attribute.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • data_section (str, optional) – The name of the section to load data options from.

  • **kwargs – All additional keyword arguments are passed to the class. Any provided keyword will override what is in the config file.

name = 'brute_lisa_sky_modes_marginalize'
reconstruct(seed=None)[source]

Reconstruct a point from unwrapping the 8-fold sky symmetry

class pycbc.inference.models.brute_marg.BruteParallelGaussianMarginalize(variable_params, cores=10, base_model=None, marginalize_phase=None, **kwds)[source]

Bases: BaseGaussianNoise

name = 'brute_parallel_gaussian_marginalize'
class pycbc.inference.models.brute_marg.likelihood_wrapper(model)[source]

Bases: object

pycbc.inference.models.data_utils module

Utilities for loading data for models.

exception pycbc.inference.models.data_utils.NoValidDataError[source]

Bases: Exception

This should be raised if a continuous segment of valid data could not be found.

pycbc.inference.models.data_utils.check_for_nans(strain_dict)[source]

Checks if any data in a dictionary of strains has NaNs.

If any NaNs are found, a ValueError is raised.

Parameters:

strain_dict (dict) – Dictionary of detectors -> pycbc.types.timeseries.TimeSeries.

pycbc.inference.models.data_utils.check_validtimes(detector, gps_start, gps_end, shift_to_valid=False, max_shift=None, segment_name='DATA', **kwargs)[source]

Checks DQ server to see if the given times are in a valid segment.

If the shift_to_valid flag is provided, the times will be shifted left or right to try to find a continuous valid block nearby. The shifting starts by shifting the times left by 1 second. If that does not work, it shifts the times right by 1 second. This continues, increasing the shift time by 1 second, until a valid block is found, or until the shift size exceeds max_shift.

If the given times are not in a continuous valid segment, or a valid block cannot be found nearby, a NoValidDataError is raised.

Parameters:
  • detector (str) – The name of the detector to query; e.g., ‘H1’.

  • gps_start (int) – The GPS start time of the segment to query.

  • gps_end (int) – The GPS end time of the segment to query.

  • shift_to_valid (bool, optional) – If True, will try to shift the gps start and end times to the nearest continuous valid segment of data. Default is False.

  • max_shift (int, optional) – The maximum number of seconds to try to shift left or right to find a valid segment. Default is gps_end - gps_start.

  • segment_name (str, optional) – The status flag to query; passed to pycbc.dq.query_flag(). Default is “DATA”.

  • **kwargs – All other keyword arguments are passed to pycbc.dq.query_flag().

Returns:

  • use_start (int) – The start time to use. If shift_to_valid is True, this may differ from the given GPS start time.

  • use_end (int) – The end time to use. If shift_to_valid is True, this may differ from the given GPS end time.
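
For illustration, a minimal sketch (the GPS times are hypothetical, and the call queries the data-quality server, so it requires network access):

>>> from pycbc.inference.models.data_utils import check_validtimes
>>> start, end = check_validtimes('H1', 1126259446, 1126259478,
...                               shift_to_valid=True, max_shift=8)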

pycbc.inference.models.data_utils.create_data_parser()[source]

Creates an argument parser for loading GW data.

pycbc.inference.models.data_utils.data_from_cli(opts, check_for_valid_times=False, shift_psd_times_to_valid=False, err_on_missing_detectors=False)[source]

Loads the data needed for a model from the given command-line options.

Gates specified on the command line are also applied.

Parameters:
  • opts (ArgumentParser parsed args) – Argument options parsed from a command line string (the sort of thing returned by parser.parse_args).

  • check_for_valid_times (bool, optional) – Check that valid data exists in the requested gps times. Default is False.

  • shift_psd_times_to_valid (bool, optional) – If estimating the PSD from data, shift the PSD times to a valid segment if needed. Default is False.

  • err_on_missing_detectors (bool, optional) – Raise a NoValidDataError if any detector does not have valid data. Otherwise, a warning is printed, and that detector is skipped.

Returns:

  • strain_dict (dict) – Dictionary of detectors -> time series strain.

  • psd_strain_dict (dict or None) – If opts.psd_(start|end)_time were set, a dictionary of detectors -> time series data to use for PSD estimation. Otherwise, None.

pycbc.inference.models.data_utils.data_opts_from_config(cp, section, filter_flow)[source]

Loads data options from a section in a config file.

Parameters:
  • cp (WorkflowConfigParser) – Config file to read.

  • section (str) – The section to read. All options in the section will be loaded as if they were command-line arguments.

  • filter_flow (dict) – Dictionary of detectors -> inner product low frequency cutoffs. If a data-conditioning-low-freq cutoff wasn’t provided for any of the detectors, these values will be used. Otherwise, the data-conditioning-low-freq must be less than the inner product cutoffs. If any are not, a ValueError is raised.

Returns:

opts – An argument parser namespace that was constructed as if the options were specified on the command line.

Return type:

parsed argparse.ArgumentParser

pycbc.inference.models.data_utils.detectors_with_valid_data(detectors, gps_start_times, gps_end_times, pad_data=None, err_on_missing_detectors=False, **kwargs)[source]

Determines which detectors have valid data.

Parameters:
  • detectors (list of str) – Names of the detectors to check.

  • gps_start_times (dict) – Dictionary of detector name -> start time listing the GPS start times of the segment to check for each detector.

  • gps_end_times (dict) – Dictionary of detector name -> end time listing the GPS end times of the segment to check for each detector.

  • pad_data (dict, optional) – Dictionary of detector name -> pad time to add to the beginning/end of the GPS start/end times before checking. A pad time for every detector in detectors must be given. Default (None) is to apply no pad to the times.

  • err_on_missing_detectors (bool, optional) – If True, a NoValidDataError will be raised if any detector does not have continuous valid data in its requested segment. Otherwise, the detector will not be included in the returned list of detectors with valid data. Default is False.

  • **kwargs – All other keyword arguments are passed to check_validtimes.

Returns:

A dictionary of detector name -> valid times giving the detectors with valid data and their segments. If shift_to_valid was passed to check_validtimes, this may not be the same as the input segments. If no valid times could be found for a detector (and err_on_missing_detectors is False), it will not be included in the returned dictionary.

Return type:

dict

pycbc.inference.models.data_utils.fd_data_from_strain_dict(opts, strain_dict, psd_strain_dict=None)[source]

Converts a dictionary of time series to the frequency domain, and gets the PSDs.

Parameters:
  • opts (ArgumentParser parsed args) – Argument options parsed from a command line string (the sort of thing returned by parser.parse_args).

  • strain_dict (dict) – Dictionary of detectors -> time series data.

  • psd_strain_dict (dict, optional) – Dictionary of detectors -> time series data to use for PSD estimation. If not provided, will use the strain_dict. This is ignored if opts.psd_estimation is not set. See pycbc.psd.psd_from_cli_multi_ifos() for details.

Returns:

  • stilde_dict (dict) – Dictionary of detectors -> frequency series data.

  • psd_dict (dict) – Dictionary of detectors -> frequency-domain PSDs.

pycbc.inference.models.data_utils.gate_overwhitened_data(stilde_dict, psd_dict, gates)[source]

Applies gates to overwhitened data.

Parameters:
  • stilde_dict (dict) – Dictionary of detectors -> frequency series data to apply the gates to.

  • psd_dict (dict) – Dictionary of detectors -> PSD to use for overwhitening.

  • gates (dict) – Dictionary of detectors -> gates.

Returns:

Dictionary of detectors -> frequency series data with the gates applied after overwhitening. The returned data is not overwhitened.

Return type:

dict

pycbc.inference.models.data_utils.strain_from_cli_multi_ifos(*args, **kwargs)[source]

Wrapper around strain.from_cli_multi_ifos that tries a few times before quitting.

When running in a parallel environment, multiple concurrent queries to the segment database can cause time-out errors. If that happens, this will sleep for a few seconds, then try again a few times before giving up.

pycbc.inference.models.gated_gaussian_noise module

This module provides model classes that assume the noise is Gaussian and introduces a gate to remove given times from the data, using the inpainting method to fill the removed part such that it does not enter the likelihood.

class pycbc.inference.models.gated_gaussian_noise.BaseGatedGaussian(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, highpass_waveforms=False, **kwargs)[source]

Bases: BaseGaussianNoise

Base model for gated gaussian.

Provides additional routines for applying a time-domain gate to data. See GatedGaussianNoise for more details.

property data

Dictionary mapping detector names to data.

Type:

dict

det_lognl(det)[source]

Returns the log likelihood of the noise in the given detector:

\[\log p(d_i|n_i) = \log \alpha_i - \frac{1}{2} \left<d_i | d_i\right>.\]
Parameters:

det (str) – The name of the detector.

Returns:

The log likelihood of the noise in the requested detector.

Return type:

float

det_lognorm(det)[source]

The log of the likelihood normalization in the given detector.

If self.normalize is False, will just return 0.

classmethod from_config(cp, data_section='data', data=None, psds=None, **kwargs)[source]

Adds highpass filtering to keyword arguments based on config file.

get_data()[source]

Return a copy of the data.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

get_gate_times()[source]

Gets the time to apply a gate based on the current sky position.

If the parameter gatefunc is set to 'hmeco', the gate times will be calculated based on the hybrid MECO of the given set of parameters; see get_gate_times_hmeco for details. Otherwise, the gate times will just be retrieved from the t_gate_start and t_gate_end parameters.

Warning

Since the normalization of the likelihood is currently not being calculated, it is recommended that you do not use gatefunc, instead using fixed gate times.

Returns:

Dictionary of detector names -> (gate start, gate width)

Return type:

dict

get_gate_times_hmeco()[source]

Gets the time to apply a gate based on the current sky position.

Returns:

Dictionary of detector names -> (gate start, gate width)

Return type:

dict

get_gated_data()[source]

Return a copy of the gated data.

The gated data will be cached for faster retrieval.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

abstract get_gated_waveforms()[source]

Generates and gates waveforms using the current parameters.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

get_residuals()[source]

Generates the residuals d-h using the current parameters.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

get_waveforms()[source]

The waveforms generated using the current parameters.

If the waveforms haven’t been generated yet, they will be generated, resized to the same length as the data, and cached. If the highpass_waveforms attribute is set, a highpass filter will also be applied to the waveforms.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

property normalize

Determines if the loglikelihood includes the normalization term.

property psds

Dictionary of detectors -> PSD frequency series.

If no PSD was provided for a detector, this will just be a frequency series of ones.

property td_data

The data in the time domain.

whiten(data, whiten, inplace=False)[source]

Whitens the given data.

Parameters:
  • data (dict) – Dictionary of detector names -> FrequencySeries.

  • whiten ({0, 1, 2}) – Integer indicating what level of whitening to apply. Levels are: 0: no whitening; 1: whiten; 2: overwhiten.

  • inplace (bool, optional) – If True, modify the data in place. Otherwise, a copy will be created for whitening.

Returns:

Dictionary of FrequencySeries after the requested whitening has been applied.

Return type:

dict

write_metadata(fp, group=None)[source]

Adds writing the psds and analyzed detectors.

The analyzed detectors, their analysis segments, and the segments used for psd estimation are written as analyzed_detectors, {{detector}}_analysis_segment, and {{detector}}_psd_segment, respectively. These are either written to the specified group’s attrs, or to the top level attrs if group is None.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.gated_gaussian_noise.GatedGaussianMargPol(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, polarization_samples=1000, **kwargs)[source]

Bases: BaseGatedGaussian

Gated gaussian model with numerical marginalization over polarization.

This implements the GatedGaussian likelihood with an explicit numerical marginalization over polarization angle. This is accomplished using a fixed set of integration points distributed uniformly between 0 and 2π. By default, 1000 integration points are used. The ‘polarization_samples’ argument can be passed to set an alternate number of integration points.

get_gated_waveforms()[source]

Generates and gates waveforms using the current parameters.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

get_waveforms()[source]

The waveforms generated using the current parameters.

If the waveforms haven’t been generated yet, they will be generated, resized to the same length as the data, and cached. If the highpass_waveforms attribute is set, a highpass filter will also be applied to the waveforms.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

name = 'gated_gaussian_margpol'
class pycbc.inference.models.gated_gaussian_noise.GatedGaussianNoise(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, **kwargs)[source]

Bases: BaseGatedGaussian

Model that applies a time domain gate, assuming stationary Gaussian noise.

The gate start and end times are set by providing t_gate_start and t_gate_end parameters, respectively. This will cause the gated times to be excised from the analysis. For more details on the likelihood function and its derivation, see arXiv:2105.05238.

Warning

The normalization of the likelihood depends on the gate times. However, at the moment, the normalization is not calculated, as it depends on the determinant of the truncated covariance matrix (see Eq. 4 of arXiv:2105.05238). For this reason it is recommended that you only use this model for fixed gate times.

get_gated_residuals()[source]

Generates the gated residuals d-h using the current parameters.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

get_gated_waveforms()[source]

Generates and gates waveforms using the current parameters.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

name = 'gated_gaussian_noise'

pycbc.inference.models.gaussian_noise module

This module provides model classes that assume the noise is Gaussian.

class pycbc.inference.models.gaussian_noise.BaseGaussianNoise(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, ignore_failed_waveforms=False, no_save_data=False, **kwargs)[source]

Bases: BaseDataModel

Model for analyzing GW data assuming a wide-sense stationary Gaussian noise model.

This model will load gravitational wave data and calculate the log noise likelihood _lognl and normalization. It also implements the _loglikelihood function as the sum of the log likelihood ratio and the lognl. It does not implement a log likelihood ratio function _loglr, however, since that can differ depending on the signal model. Models that analyze GW data assuming it is stationary Gaussian should therefore inherit from this class and implement their own _loglr function.

For more details on the inner product used, the log likelihood of the noise, and the normalization factor, see GaussianNoise.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data (assumed to be unwhitened). All data must have the same frequency resolution.

  • low_frequency_cutoff (dict) – A dictionary of starting frequencies, in which the keys are the detector names and the values are the starting frequencies for the respective detectors to be used for computing inner products.

  • psds (dict, optional) – A dictionary of FrequencySeries keyed by the detector names. The dictionary must have a psd for each detector specified in the data dictionary. If provided, the inner products in each detector will be weighted by 1/psd of that detector.

  • high_frequency_cutoff (dict, optional) – A dictionary of ending frequencies, in which the keys are the detector names and the values are the ending frequencies for the respective detectors to be used for computing inner products. If not provided, the minimum of the largest frequency stored in the data and a given waveform will be used.

  • normalize (bool, optional) – If True, the normalization factor \(\alpha\) will be included in the log likelihood. See GaussianNoise for details. Default is to not include it.

  • static_params (dict, optional) – A dictionary of parameter names -> values to keep fixed.

  • ignore_failed_waveforms (bool, optional) – If the waveform generator raises an error when it tries to generate, treat the point as having zero likelihood. This allows the parameter estimation to continue. Otherwise, an error will be raised, stopping the run. Default is False.

  • **kwargs – All other keyword arguments are passed to BaseDataModel.

ignore_failed_waveforms

If True, points in parameter space that cause waveform generation to fail (i.e., they raise a FailedWaveformError) will be treated as points with zero likelihood. Otherwise, such points will cause the model to raise a FailedWaveformError.

Type:

bool

det_lognl(det)[source]

Returns the log likelihood of the noise in the given detector:

\[\log p(d_i|n_i) = \log \alpha_i - \frac{1}{2} \left<d_i | d_i\right>.\]
Parameters:

det (str) – The name of the detector.

Returns:

The log likelihood of the noise in the requested detector.

Return type:

float

det_lognorm(det)[source]

The log of the likelihood normalization in the given detector.

If self.normalize is False, will just return 0.

classmethod from_config(cp, data_section='data', data=None, psds=None, **kwargs)[source]

Initializes an instance of this class from the given config file.

In addition to [model], a data_section (default [data]) must be in the configuration file. The data section specifies settings for loading data and estimating PSDs. See the online documentation for more details.

The following options are read from the [model] section, in addition to name (which must be set):

  • {{DET}}-low-frequency-cutoff = FLOAT : The low frequency cutoff to use for each detector {{DET}}. A cutoff must be provided for every detector that may be analyzed (any additional detectors are ignored).

  • {{DET}}-high-frequency-cutoff = FLOAT : (Optional) A high frequency cutoff for each detector. If not provided, the Nyquist frequency is used.

  • check-for-valid-times = : (Optional) If provided, will check that there are no data quality flags on during the analysis segment and the segment used for PSD estimation in each detector. To check for flags, pycbc.dq.query_flag() is used, with settings pulled from the dq-* options in the [data] section. If a detector has bad data quality during either the analysis segment or PSD segment, it will be removed from the analysis.

  • shift-psd-times-to-valid = : (Optional) If provided, the segment used for PSD estimation will automatically be shifted left or right until a continuous block of data with no data quality issues can be found. If no block can be found with a maximum shift of +/- the requested psd segment length, the detector will not be analyzed.

  • err-on-missing-detectors = : Raises an error if any detector is removed from the analysis because a valid time could not be found. Otherwise, a warning is printed to screen and the detector is removed from the analysis.

  • normalize = : (Optional) Turn on the normalization factor.

  • ignore-failed-waveforms = : Sets the ignore_failed_waveforms attribute.
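
For illustration, a minimal sketch of such a [model] section (the detector names and cutoff values are examples only; options like normalize are flags and take no value):

[model]
name = gaussian_noise
H1-low-frequency-cutoff = 20
L1-low-frequency-cutoff = 20
normalize =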

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • data_section (str, optional) – The name of the section to load data options from.

  • **kwargs – All additional keyword arguments are passed to the class. Any provided keyword will override what is in the config file.

property high_frequency_cutoff

The high frequency cutoff of the inner product.

property kmax

Dictionary of ending indices for the inner product.

This is determined from the high frequency cutoff and the delta_f of the data using pycbc.filter.matchedfilter.get_cutoff_indices(). If no high frequency cutoff was provided, this will be the index corresponding to the Nyquist frequency.

property kmin

Dictionary of starting indices for the inner product.

This is determined from the lower frequency cutoff and the delta_f of the data using pycbc.filter.matchedfilter.get_cutoff_indices().

property lognorm

The log of the normalization of the log likelihood.

property low_frequency_cutoff

The low frequency cutoff of the inner product.

property normalize

Determines if the loglikelihood includes the normalization term.

property psd_segments

Dictionary giving times used for PSD estimation for each detector.

If a detector’s PSD was not estimated from data, or the segment wasn’t provided, that detector will not be in the dictionary.

property psds

Dictionary of detectors -> PSD frequency series.

If no PSD was provided for a detector, this will just be a frequency series of ones.

set_psd_segments(psds)[source]

Sets the PSD segments from a dictionary of PSDs.

This attempts to get the PSD segment from a psd_segment attribute of each detector’s PSD frequency series. If that attribute isn’t set, then that detector is not added to the dictionary of PSD segments.

Parameters:

psds (dict) – Dictionary of detector name -> PSD frequency series. The segment used for each PSD will try to be retrieved from the PSD’s .psd_segment attribute.

update(**params)[source]

Updates the current parameter positions and resets stats.

If any sampling transforms are specified, they are applied to the params before being stored.

property weight

Dictionary of detectors -> frequency series of inner-product weights.

The weights are \(\sqrt{4 \Delta f / S_n(f)}\). This is set when the PSDs are set.

property whitened_data

Dictionary of detectors -> whitened data frequency series.

The whitened data is the data multiplied by the inner-product weight. Note that this includes the \(\sqrt{4 \Delta f}\) factor. This is set when the PSDs are set.

write_metadata(fp, group=None)[source]

Adds writing the psds, analyzed detectors, and lognl.

The analyzed detectors, their analysis segments, and the segments used for psd estimation are written as analyzed_detectors, {{detector}}_analysis_segment, and {{detector}}_psd_segment, respectively. These are either written to the specified group’s attrs, or to the top level attrs if group is None.

The total and each detector’s lognl is written to the sample group’s attrs. If a group is specified, the group name will be prepended to the lognl labels with {group}__, with any / in the group path replaced with __. For example, if group is /a/b, the lognl will be written as a__b__lognl in the sample’s group attrs.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.gaussian_noise.GaussianNoise(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, det_frame_waveform=False, **kwargs)[source]

Bases: BaseGaussianNoise

Model that assumes data is stationary Gaussian noise.

With Gaussian noise the log likelihood functions for signal \(\log p(d|\Theta, h)\) and for noise \(\log p(d|n)\) are given by:

\[\begin{split}\log p(d|\Theta, h) &= \log\alpha -\frac{1}{2} \sum_i \left< d_i - h_i(\Theta) | d_i - h_i(\Theta) \right> \\ \log p(d|n) &= \log\alpha -\frac{1}{2} \sum_i \left<d_i | d_i\right>\end{split}\]

where the sum is over the number of detectors, \(d_i\) is the data in each detector, and \(h_i(\Theta)\) is the model signal in each detector. The (discrete) inner product is given by:

\[\left<a_i | b_i\right> = 4\Re \Delta f \sum_{k=k_{\mathrm{min}}}^{k_{\mathrm{max}}} \frac{\tilde{a}_i^{*}[k] \tilde{b}_i[k]}{S^{(i)}_n[k]},\]

where \(\Delta f\) is the frequency resolution (the inverse of the observation time \(T\)), \(k\) is an index over the discretely sampled frequencies \(f = k \Delta f\), and \(S^{(i)}_n[k]\) is the PSD in the given detector. The upper cutoff on the inner product \(k_{\max}\) is by default the Nyquist frequency \(k_{\max} = N/2+1\), where \(N = \lfloor T/\Delta t \rfloor\) is the number of samples in the time domain, but this can be set manually to a smaller value.

The normalization factor \(\alpha\) is:

\[\alpha = \prod_{i} \frac{1}{\left(\pi T\right)^{N/2} \prod_{k=k_\mathrm{min}}^{k_{\mathrm{max}}} S^{(i)}_n[k]},\]

where the product is over the number of detectors. By default, the normalization constant is not included in the log likelihood, but it can be turned on using the normalize keyword argument.

Note that the log likelihood ratio has fewer terms than the log likelihood, since the normalization and \(\left<d_i|d_i\right>\) terms cancel:

\[\log \mathcal{L}(\Theta) = \sum_i \left[ \left<h_i(\Theta)|d_i\right> - \frac{1}{2} \left<h_i(\Theta)|h_i(\Theta)\right> \right]\]

Upon initialization, the data is whitened using the given PSDs. If no PSDs are given the data and waveforms returned by the waveform generator are assumed to be whitened.

For more details on initialization parameters and definition of terms, see models.BaseDataModel.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data (assumed to be unwhitened). The list of keys must match the waveform generator’s detectors keys, and the epoch of every data set must be the same as the waveform generator’s epoch.

  • low_frequency_cutoff (dict) – A dictionary of starting frequencies, in which the keys are the detector names and the values are the starting frequencies for the respective detectors to be used for computing inner products.

  • psds (dict, optional) – A dictionary of FrequencySeries keyed by the detector names. The dictionary must have a psd for each detector specified in the data dictionary. If provided, the inner products in each detector will be weighted by 1/psd of that detector.

  • high_frequency_cutoff (dict, optional) – A dictionary of ending frequencies, in which the keys are the detector names and the values are the ending frequencies for the respective detectors to be used for computing inner products. If not provided, the minimum of the largest frequency stored in the data and a given waveform will be used.

  • normalize (bool, optional) – If True, the normalization factor \(\alpha\) will be included in the log likelihood. Default is to not include it.

  • static_params (dict, optional) – A dictionary of parameter names -> values to keep fixed.

  • det_frame_waveform (bool) – If True, the waveform will be generated directly in the detector frame using the FDomainDirectDetFrameGenerator. This requires the approximant be implemented in get_fd_det_waveform(). If False, the FDomainDetFrameGenerator will be used instead. Defaults to False.

  • **kwargs – All other keyword arguments are passed to BaseDataModel.

Examples

Create a signal, and set up the model using that signal:

>>> from pycbc import psd as pypsd
>>> from pycbc.inference.models import GaussianNoise
>>> from pycbc.waveform.generator import (FDomainDetFrameGenerator,
...                                       FDomainCBCGenerator)
>>> seglen = 4
>>> sample_rate = 2048
>>> N = seglen*sample_rate//2 + 1
>>> fmin = 30.
>>> static_params = {'approximant': 'IMRPhenomD', 'f_lower': fmin,
...                  'mass1': 38.6, 'mass2': 29.3,
...                  'spin1z': 0., 'spin2z': 0., 'ra': 1.37, 'dec': -1.26,
...                  'polarization': 2.76, 'distance': 3*500.}
>>> variable_params = ['tc']
>>> tsig = 3.1
>>> generator = FDomainDetFrameGenerator(
...     FDomainCBCGenerator, 0., detectors=['H1', 'L1'],
...     variable_args=variable_params,
...     delta_f=1./seglen, **static_params)
>>> signal = generator.generate(tc=tsig)
>>> psd = pypsd.aLIGOZeroDetHighPower(N, 1./seglen, 20.)
>>> psds = {'H1': psd, 'L1': psd}
>>> low_frequency_cutoff = {'H1': fmin, 'L1': fmin}
>>> model = GaussianNoise(variable_params, signal, low_frequency_cutoff,
...                           psds=psds, static_params=static_params)

Set the current position to the coalescence time of the signal:

>>> model.update(tc=tsig)

Now compute the log likelihood ratio and prior-weighted likelihood ratio; since we have not provided a prior, these should be equal to each other:

>>> print('{:.2f}'.format(model.loglr))
282.43
>>> print('{:.2f}'.format(model.logplr))
282.43

Print all of the default_stats:

>>> print(',\n'.join(['{}: {:.2f}'.format(s, v)
...                   for (s, v) in sorted(model.current_stats.items())]))
H1_cplx_loglr: 177.76+0.00j,
H1_optimal_snrsq: 355.52,
L1_cplx_loglr: 104.67+0.00j,
L1_optimal_snrsq: 209.35,
logjacobian: 0.00,
loglikelihood: 0.00,
loglr: 282.43,
logprior: 0.00

Compute the SNR; for this system and PSD, this should be approximately 24:

>>> from pycbc.conversions import snr_from_loglr
>>> x = snr_from_loglr(model.loglr)
>>> print('{:.2f}'.format(x))
23.77

Since there is no noise, the SNR should be the same as the quadrature sum of the optimal SNRs in each detector:

>>> x = (model.det_optimal_snrsq('H1') +
...      model.det_optimal_snrsq('L1'))**0.5
>>> print('{:.2f}'.format(x))
23.77

Toggle on the normalization constant:

>>> model.normalize = True
>>> model.loglikelihood
835397.8757405131

Using the same model, evaluate the log likelihood ratio at several points in time and check that the max is at tsig:

>>> import numpy
>>> times = numpy.linspace(tsig-1, tsig+1, num=101)
>>> loglrs = numpy.zeros(len(times))
>>> for (ii, t) in enumerate(times):
...     model.update(tc=t)
...     loglrs[ii] = model.loglr
>>> print('tsig: {:.2f}, time of max loglr: {:.2f}'.format(
...     tsig, times[loglrs.argmax()]))
tsig: 3.10, time of max loglr: 3.10

Create a prior and use it (see distributions module for more details):

>>> from pycbc import distributions
>>> uniform_prior = distributions.Uniform(tc=(tsig-0.2,tsig+0.2))
>>> prior = distributions.JointDistribution(variable_params, uniform_prior)
>>> model = GaussianNoise(variable_params,
...     signal, low_frequency_cutoff, psds=psds, prior=prior,
...     static_params=static_params)
>>> model.update(tc=tsig)
>>> print('{:.2f}'.format(model.logplr))
283.35
>>> print(',\n'.join(['{}: {:.2f}'.format(s, v)
...                   for (s, v) in sorted(model.current_stats.items())]))
H1_cplx_loglr: 177.76+0.00j,
H1_optimal_snrsq: 355.52,
L1_cplx_loglr: 104.67+0.00j,
L1_optimal_snrsq: 209.35,
logjacobian: 0.00,
loglikelihood: 0.00,
loglr: 282.43,
logprior: 0.92

det_cplx_loglr(det)[source]

Returns the complex log likelihood ratio in the given detector.

Parameters:

det (str) – The name of the detector.

Returns:

The complex log likelihood ratio.

Return type:

complex float

det_optimal_snrsq(det)[source]

Returns the optimal SNR squared in the given detector.

Parameters:

det (str) – The name of the detector.

Returns:

The optimal SNR squared.

Return type:

float

get_waveforms()[source]

The waveforms generated using the current parameters.

If the waveforms haven’t been generated yet, they will be generated.

Returns:

Dictionary of detector names -> FrequencySeries.

Return type:

dict

multi_loglikelihood(models)[source]

Calculate a multi-model (signal) likelihood

property multi_signal_support

The list of classes that this model supports in a multi-signal likelihood

name = 'gaussian_noise'
pycbc.inference.models.gaussian_noise.catch_waveform_error(method)[source]

Decorator that will catch no-waveform errors.

This can be added to a method in an inference model. The decorator will call the model’s _nowaveform_return method if either of the following happens when the wrapped method is executed:

  • A NoWaveformError is raised.

  • A RuntimeError or FailedWaveformError is raised and the model has an ignore_failed_waveforms attribute that is set to True.

This requires the model to have a _nowaveform_handler method.
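
For illustration, a sketch of how a model might apply it (the class and method body are hypothetical; only the decorator usage is the point):

from pycbc.inference.models.gaussian_noise import (BaseGaussianNoise,
                                                   catch_waveform_error)

class MyModel(BaseGaussianNoise):
    @catch_waveform_error
    def _loglr(self):
        # If waveform generation raises a NoWaveformError (or a
        # RuntimeError/FailedWaveformError while ignore_failed_waveforms
        # is True), the decorator hands off to the model's no-waveform
        # handler instead of propagating the error.
        wfs = self.waveform_generator.generate(**self.current_params)
        ...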

pycbc.inference.models.gaussian_noise.create_waveform_generator(variable_params, data, waveform_transforms=None, recalibration=None, gates=None, generator_class=<class 'pycbc.waveform.generator.FDomainDetFrameGenerator'>, **static_params)[source]

Creates a waveform generator for use with a model.

Parameters:
  • variable_params (list of str) – The names of the parameters varied.

  • data (dict) – Dictionary mapping detector names to either a pycbc.types.TimeSeries or a pycbc.types.FrequencySeries.

  • waveform_transforms (list, optional) – The list of transforms applied to convert variable parameters into parameters that will be understood by the waveform generator.

  • recalibration (dict, optional) – Dictionary mapping detector names to pycbc.calibration.Recalibrate instances for recalibrating data.

  • gates (dict of tuples, optional) – Dictionary of detectors -> tuples of specifying gate times. The sort of thing returned by pycbc.gate.gates_from_cli().

  • generator_class (detector-frame fdomain generator, optional) – Class to use for generating waveforms. Default is waveform.generator.FDomainDetFrameGenerator.

  • **static_params – All other keyword arguments are passed as static parameters to the waveform generator.

Returns:

A waveform generator for frequency domain generation.

Return type:

pycbc.waveform.FDomainDetFrameGenerator

pycbc.inference.models.gaussian_noise.get_values_from_injection(cp, injection_file, update_cp=True)[source]

Replaces all FROM_INJECTION values in a config file with the corresponding value from the injection.

This looks for any options that start with FROM_INJECTION[:ARG] in a config file. It then replaces that value with the corresponding value from the injection file. An argument may optionally be provided, in which case that argument will be retrieved from the injection file. Functions of parameters in the injection file may be used; the syntax and functions available are the same as for the --parameters argument in executables such as pycbc_inference_extract_samples. If no ARG is provided, an attempt will be made to retrieve the value using the option name.

For example,

mass1 = FROM_INJECTION

will cause mass1 to be retrieved from the injection file, while:

mass1 = FROM_INJECTION:'primary_mass(mass1, mass2)'

will cause the larger of mass1 and mass2 to be retrieved from the injection file. Note that if there are spaces in the argument, it must be enclosed in single quotes.

The injection file must contain exactly one injection; otherwise, a ValueError will be raised.

Parameters:
  • cp (ConfigParser) – The config file within which to replace values.

  • injection_file (str or None) – The injection file to get values from. A ValueError will be raised if the config file contains any FROM_INJECTION values but injection_file is None, or if the injection file contains more than one injection.

  • update_cp (bool, optional) – Update the config parser with the replaced parameters. If False, will just retrieve the parameter values to update, without updating the config file. Default is True.

Returns:

The parameters that were replaced, as a tuple of section name, option, value.

Return type:

list
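
A hedged usage sketch (the file names inference.ini and injection.hdf are hypothetical; the config file is assumed to contain FROM_INJECTION options and the injection file exactly one injection):

>>> from pycbc.workflow import WorkflowConfigParser
>>> from pycbc.inference.models.gaussian_noise import get_values_from_injection
>>> cp = WorkflowConfigParser(['inference.ini'])
>>> replaced = get_values_from_injection(cp, 'injection.hdf')
>>> for section, option, value in replaced:
...     print(section, option, value)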

pycbc.inference.models.hierarchical module

Hierarchical model definitions.

class pycbc.inference.models.hierarchical.HierarchicalModel(variable_params, submodels, **kwargs)[source]

Bases: BaseModel

Model that is a combination of other models.

Sub-models are treated as being independent of each other, although they can share parameters. In other words, the hierarchical likelihood is:

\[p(\mathbf{D}|\mathbf{\vartheta}, \mathbf{H}) = \prod_{I=1}^{K} p(\mathbf{d}_I|\mathbf{\vartheta}, H_{I})\]

Submodels are provided as a dictionary upon initialization, with a unique label assigned to each model, e.g., {'event1' -> model1, 'event2' -> model2}. Variable and static parameters that are specific to each submodel should be prepended with {label}__, where {label} is the label associated with the given submodel. Parameters shared across multiple models have no label prepended. To share a parameter over only a subset of models, separate the model labels with an underscore. For example, event1_event2__foo will result in foo being common to models event1 and event2. For more details on parameter naming, see HierarchicalParam.

All waveform and sampling transforms, as well as prior evaluation, are handled by this model, not the sub-models. Parameters created by waveform transforms should therefore also have sub-model names prepended to them, to indicate which models they should be provided to for likelihood evaluation.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • submodels (dict) – Dictionary of model labels -> model instances of all the submodels.

  • **kwargs – All other keyword arguments are passed to BaseModel.
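
As a minimal, untested sketch, two independent analytic submodels can be combined, with each submodel's parameter referred to by its label:

>>> from pycbc.inference.models.analytic import TestNormal
>>> from pycbc.inference.models.hierarchical import HierarchicalModel
>>> submodels = {'event1': TestNormal(['x']), 'event2': TestNormal(['x'])}
>>> model = HierarchicalModel(['event1__x', 'event2__x'], submodels)
>>> model.update(event1__x=0.1, event2__x=-0.2)
>>> lnl = model.loglikelihood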

classmethod from_config(cp, **kwargs)[source]

Initializes an instance of this class from the given config file.

Sub-models are initialized before initializing this class. The model section must have a submodels argument that lists the names of all the submodels to generate as a space-separated list. Each sub-model should have its own [{label}__model] section that sets up the model for that sub-model. For example:

[model]
name = hierarchical
submodels = event1 event2

[event1__model]
<event1 model options>

[event2__model]
<event2 model options>

Similarly, all other sections that are specific to a model should start with the model’s label. All sections starting with a model’s label will be passed to that model’s from_config method with the label removed from the section name. For example, if a sub-model requires a data section to be specified, it should be titled [{label}__data]. Upon initialization, the {label}__ will be stripped from the section header and passed to the model.

No model labels should precede the variable_params, static_params, waveform_transforms, or sampling_transforms sections. Instead, the parameters specified in these sections should follow the naming conventions described in HierarchicalParam to determine which sub-model(s) they belong to. (Sampling parameters can follow any naming convention, as they are only handled by the hierarchical model.) This is because the hierarchical model handles all transforms, communication with the sampler, file IO, and prior calculation. Only the sub-models' loglikelihood functions are called.
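
For example, a hedged sketch of how parameter sections might look under these conventions (parameter names are illustrative):

[variable_params]
; common to all submodels
foo =
; specific to event1
event1__bar =

[static_params]
; shared by event1 and event2 only
event1_event2__baz = 1.0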

Metadata for each sub-model is written to the output hdf file under groups given by the sub-model label. For example, if we have two submodels labelled event1 and event2, there will be groups with the same names in the top level of the output file that contain each model's data. For instance, if event1 used the gaussian_noise model, the GW data and PSDs will be found in event1/data, and the low frequency cutoff used for that model will be in the attrs of the event1 group.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • **kwargs – All additional keyword arguments are passed to the class. Any provided keyword will override what is in the config file.

property hstatic_params

The static params with HierarchicalParam instances used as dictionary keys.

property hvariable_params

The variable params as a tuple of HierarchicalParam instances.

name = 'hierarchical'
property static_params

Returns the model’s static arguments.

property variable_params

Returns the model parameters.

write_metadata(fp, group=None)[source]

Adds data to the metadata that’s written.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.hierarchical.HierarchicalParam(fullname, possible_models)[source]

Bases: str

Sub-class of str for hierarchical parameter names.

This adds attributes that keep track of the model label(s) the parameter is associated with, along with the name that is passed to the models.

The following conventions are used for parsing parameter names:

  • Model labels and parameter names are separated by the delim class attribute, which by default is __, e.g., event1__mass.

  • Multiple model labels can be provided by separating the model labels with the model_delim class attribute, which by default is _, e.g., event1_event2__mass. Note that this means that individual model labels cannot contain _, else they’ll be parsed as separate models.

  • Parameters that have no model labels prepended to them (i.e., there is no __ in the name) are common to all models.

These parsing rules are applied by the HierarchicalParam.parse() method.
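
A quick illustration of these rules (a hedged sketch; the outputs follow from the conventions described above):

>>> from pycbc.inference.models.hierarchical import HierarchicalParam
>>> p = HierarchicalParam('event1_event2__foo', {'event1', 'event2'})
>>> p.subname
'foo'
>>> sorted(p.models)
['event1', 'event2']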

Parameters:
  • fullname (str) – Name of the hierarchical parameter. Should have format {model1}[_{model2}[_{...}]]__{param}.

  • possible_models (set of str) – The possible sub-models a parameter can belong to. Should be a set of model labels.

fullname

The full name of the parameter, including model labels. For example, e1_e2__foo.

Type:

str

models

The model labels the parameter is associated with. For example, e1_e2__foo yields models e1, e2.

Type:

set

subname

The name of the parameter without the model labels prepended to it. For example, e1_e2__foo yields foo.

Type:

str

delim = '__'
classmethod from_subname(model_label, subname)[source]

Creates a HierarchicalParam from the given subname and model label.

model_delim = '_'
classmethod parse(fullname, possible_models)[source]

Parses the full parameter name into the models the parameter is associated with and the parameter name that is passed to the models.

Parameters:
  • fullname (str) – The full name of the parameter, which includes both the model label(s) and the parameter name.

  • possible_models (set) – Set of model labels the parameter can be associated with.

Returns:

  • models (list) – List of the model labels the parameter is associated with.

  • subp (str) – Parameter name that is passed to the models. This is the parameter name with the model label(s) stripped from it.

class pycbc.inference.models.hierarchical.JointPrimaryMarginalizedModel(variable_params, submodels, **kwargs)[source]

Bases: HierarchicalModel

This likelihood model can be used when one of the submodels can be marginalized to accelerate the total likelihood. It also allows for further acceleration of the other models during marginalization, if some extrinsic parameters can be tightly constrained by the primary model. For example, in joint EM + GW parameter estimation, the sky location can be well measured; similarly, for LISA + 3G multiband observation, the (tc, ra, dec) of SOBHB signals can be tightly constrained by the 3G network, so this model is also useful for that case.

classmethod from_config(cp, **kwargs)[source]

Initializes an instance of this class from the given config file. For more details, see from_config in HierarchicalModel.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • **kwargs – All additional keyword arguments are passed to the class. Any provided keyword will override what is in the config file.

name = 'joint_primary_marginalized'
others_lognl()[source]

Calculate the combined lognl from all other sub-models.

reconstruct(rec=None, seed=None)[source]

Reconstruct marginalized parameters by using the primary model’s reconstruct method, total_loglr, and others_lognl.

total_loglr()[source]

Computes the total log likelihood ratio,

\[\log \mathcal{L}(\Theta) = \sum_i \left[\left<h_i(\Theta)|d_i\right> - \frac{1}{2}\left<h_i(\Theta)|h_i(\Theta)\right>\right],\]

at the current parameter values \(\Theta\).

Returns:

The value of the log likelihood ratio.

Return type:

float

update_all_models(**params)[source]

Update the parameters of all submodels. This method is also used for loglr checking, since the original update method in the base module cannot correctly update parameters in submodels during loglr checking.

write_metadata(fp, group=None)[source]

Adds metadata to the output files

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.hierarchical.MultiSignalModel(variable_params, submodels, **kwargs)[source]

Bases: HierarchicalModel

Model for multiple signals which share data

Sub-models are treated as if the signals overlap in data. This requires constituent models to implement a specific method to handle this case. All models must be of the same type, or the specific model is responsible for implementing cross-compatibility with another model. Each model h_i is responsible for calculating its own log likelihood ratio, and must also implement a method to calculate cross terms of the form <h_i | h_j>, which arise from the full calculation of <d - h|d - h>. This model inherits from HierarchicalModel, so the syntax for configuration files is the same; see the sketch below. The primary model, which by default is the first model listed, is used to determine the noise terms <d | d>.
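
A hedged configuration sketch, reusing the placeholder convention from the HierarchicalModel documentation above (labels are illustrative):

[model]
name = multi_signal
submodels = event1 event2

[event1__model]
<event1 model options>

[event2__model]
<event2 model options>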

name = 'multi_signal'
write_metadata(fp, group=None)[source]

Adds metadata to the output files

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

pycbc.inference.models.hierarchical.hpiter(params, possible_models)[source]

Turns a list of parameter strings into a list of HierarchicalParams.

Parameters:
  • params (list of str) – List of parameter names.

  • possible_models (set) – Set of model labels the parameters can be associated with.

Returns:

Iterator of HierarchicalParam instances.

Return type:

iterator

pycbc.inference.models.hierarchical.map_params(params)[source]

Creates a map of models -> parameters.

Parameters:

params (list of HierarchicalParam instances) – The list of hierarchical parameter names to parse.

Returns:

Dictionary of model labels -> associated parameters.

Return type:

dict

pycbc.inference.models.marginalized_gaussian_noise module

This module provides model classes that assume the noise is Gaussian and allows for the likelihood to be marginalized over phase and/or time and/or distance.

class pycbc.inference.models.marginalized_gaussian_noise.MarginalizedHMPolPhase(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, polarization_samples=100, coa_phase_samples=100, static_params=None, **kwargs)[source]

Bases: BaseGaussianNoise

Numerically marginalizes waveforms with higher modes over polarization and phase.

This class implements the Gaussian likelihood with an explicit numerical marginalization over polarization angle and orbital phase. This is accomplished using a fixed set of integration points distributed uniformly between 0 and \(2\pi\) for both the polarization and phase. By default, 100 integration points are used for each parameter, giving \(10^4\) evaluation points in total. This can be modified using the polarization_samples and coa_phase_samples arguments.

This only works with waveforms that return separate spherical harmonic modes for each waveform. For a list of currently supported approximants, see pycbc.waveform.waveform_modes.fd_waveform_mode_approximants() and pycbc.waveform.waveform_modes.td_waveform_mode_approximants().

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data (assumed to be unwhitened). All data must have the same frequency resolution.

  • low_frequency_cutoff (dict) – A dictionary of starting frequencies, in which the keys are the detector names and the values are the starting frequencies for the respective detectors to be used for computing inner products.

  • psds (dict, optional) – A dictionary of FrequencySeries keyed by the detector names. The dictionary must have a psd for each detector specified in the data dictionary. If provided, the inner products in each detector will be weighted by 1/psd of that detector.

  • high_frequency_cutoff (dict, optional) – A dictionary of ending frequencies, in which the keys are the detector names and the values are the ending frequencies for the respective detectors to be used for computing inner products. If not provided, the minimum of the largest frequency stored in the data and a given waveform will be used.

  • normalize (bool, optional) – If True, the normalization factor \(\alpha\) will be included in the log likelihood. See GaussianNoise for details. Default is to not include it.

  • polarization_samples (int, optional) – How many points to use in polarization. Default is 100.

  • coa_phase_samples (int, optional) – How many points to use in phase. Default is 100.

  • **kwargs – All other keyword arguments are passed to BaseGaussianNoise.

name = 'marginalized_hmpolphase'
phase_fac(m)[source]

The phase \(\exp[i m \phi]\).

class pycbc.inference.models.marginalized_gaussian_noise.MarginalizedPhaseGaussianNoise(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, static_params=None, **kwargs)[source]

Bases: GaussianNoise

The likelihood is analytically marginalized over phase.

This class can be used with signal models that can be written as:

\[\tilde{h}(f; \Theta, \phi) = A(f; \Theta)e^{i\Psi(f; \Theta) + i \phi},\]

where \(\phi\) is an arbitrary phase constant. This phase constant can be analytically marginalized over with a uniform prior as follows: assuming the noise is stationary and Gaussian (see GaussianNoise for details), the posterior is:

\[\begin{split}p(\Theta,\phi|d) &\propto p(\Theta)p(\phi)p(d|\Theta,\phi) \\ &\propto p(\Theta)\frac{1}{2\pi}\exp\left[ -\frac{1}{2}\sum_{i}^{N_D} \left< h_i(\Theta,\phi) - d_i, h_i(\Theta,\phi) - d_i \right>\right].\end{split}\]

Here, the sum is over the number of detectors \(N_D\), \(d_i\) and \(h_i\) are the data and signal in detector \(i\), respectively, and we have assumed a uniform prior on \(\phi \in [0, 2\pi)\). With the form of the signal model given above, the inner product in the exponent can be written as:

\[\begin{split}-\frac{1}{2}\left<h_i - d_i, h_i- d_i\right> &= \left<h_i, d_i\right> - \frac{1}{2}\left<h_i, h_i\right> - \frac{1}{2}\left<d_i, d_i\right> \\ &= \Re\left\{O(h^0_i, d_i)e^{-i\phi}\right\} - \frac{1}{2}\left<h^0_i, h^0_i\right> - \frac{1}{2}\left<d_i, d_i\right>,\end{split}\]

where:

\[\begin{split}h_i^0 &\equiv \tilde{h}_i(f; \Theta, \phi=0); \\ O(h^0_i, d_i) &\equiv 4 \int_0^\infty \frac{\tilde{h}_i^*(f; \Theta,0)\tilde{d}_i(f)}{S_n(f)}\mathrm{d}f.\end{split}\]

Gathering all of the terms that are not dependent on \(\phi\) together:

\[\alpha(\Theta, d) \equiv \exp\left[-\frac{1}{2}\sum_i \left(\left<h^0_i, h^0_i\right> + \left<d_i, d_i\right>\right)\right],\]

we can marginalize the posterior over \(\phi\):

\[\begin{split}p(\Theta|d) &\propto p(\Theta)\alpha(\Theta,d)\frac{1}{2\pi} \int_{0}^{2\pi}\exp\left[\Re \left\{ e^{-i\phi} \sum_i O(h^0_i, d_i) \right\}\right]\mathrm{d}\phi \\ &\propto p(\Theta)\alpha(\Theta, d)\frac{1}{2\pi} \int_{0}^{2\pi}\exp\left[ x(\Theta,d)\cos(\phi) + y(\Theta, d)\sin(\phi) \right]\mathrm{d}\phi.\end{split}\]

The integral in the last line is equal to \(2\pi I_0(\sqrt{x^2+y^2})\), where \(I_0\) is the modified Bessel function of the first kind. Thus the marginalized posterior is:

\[p(\Theta|d) \propto I_0\left(\left|\sum_i O(h^0_i, d_i)\right|\right) p(\Theta)\exp\left[-\frac{1}{2}\sum_i\left( \left<h^0_i, h^0_i\right> + \left<d_i, d_i\right> \right)\right]\]
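
Dropping the \(\left<d_i, d_i\right>\) term, which cancels in the likelihood ratio, the phase-marginalized log likelihood ratio implied by this expression is \(\log I_0(|\sum_i O(h^0_i, d_i)|) - \frac{1}{2}\sum_i\left<h^0_i, h^0_i\right>\). As a numerical sketch with hypothetical inner-product values, this can be computed stably using scipy's exponentially scaled Bessel function:

>>> import numpy
>>> from scipy.special import i0e
>>> sh = 10.0 + 2.0j  # hypothetical sum_i O(h^0_i, d_i)
>>> hh = 100.0        # hypothetical sum_i <h^0_i, h^0_i>
>>> # log I0(x) = log(i0e(x)) + x, which avoids overflow for large x
>>> loglr = numpy.log(i0e(abs(sh))) + abs(sh) - 0.5 * hh
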
name = 'marginalized_phase'
class pycbc.inference.models.marginalized_gaussian_noise.MarginalizedPolarization(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, polarization_samples=1000, **kwargs)[source]

Bases: DistMarg, BaseGaussianNoise

This likelihood numerically marginalizes over polarization angle

This class implements the Gaussian likelihood with an explicit numerical marginalization over polarization angle. This is accomplished using a fixed set of integration points distributed uniformly between 0 and \(2\pi\). By default, 1000 integration points are used. The polarization_samples argument can be passed to set an alternate number of integration points.

name = 'marginalized_polarization'
class pycbc.inference.models.marginalized_gaussian_noise.MarginalizedTime(variable_params, data, low_frequency_cutoff, psds=None, high_frequency_cutoff=None, normalize=False, sample_rate=None, **kwargs)[source]

Bases: DistMarg, BaseGaussianNoise

This likelihood numerically marginalizes over time

This likelihood is optimized for marginalizing over time, but can also handle marginalization over polarization, phase (where appropriate), and sky location. The time series is interpolated using a quadratic approximation for sub-sample times.

name = 'marginalized_time'

pycbc.inference.models.relbin module

This module provides model classes and functions for implementing a relative binning likelihood for parameter estimation.

class pycbc.inference.models.relbin.Relative(variable_params, data, low_frequency_cutoff, fiducial_params=None, gammas=None, epsilon=0.5, earth_rotation=False, earth_rotation_mode=2, marginalize_phase=True, **kwargs)[source]

Bases: DistMarg, BaseGaussianNoise

Model that assumes the likelihood in a region around the peak is slowly varying such that a linear approximation can be made, and likelihoods can be calculated at a coarser frequency resolution. For more details on the implementation, see https://arxiv.org/abs/1806.08792.

This model requires the use of a fiducial waveform whose parameters are near the peak of the likelihood. The fiducial waveform and all template waveforms used in likelihood calculation are currently generated using the SPAtmplt approximant.

For more details on initialization parameters and definition of terms, see BaseGaussianNoise.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data (assumed to be unwhitened). All data must have the same frequency resolution.

  • low_frequency_cutoff (dict) – A dictionary of starting frequencies, in which the keys are the detector names and the values are the starting frequencies for the respective detectors to be used for computing inner products.

  • fiducial_params (dict) – A dictionary of waveform parameters to be used for generating the fiducial waveform. Keys must be parameter names in the form 'PARAM_ref', where PARAM is a recognized extrinsic parameter or an intrinsic parameter compatible with the chosen approximant.

  • gammas (array of floats, optional) – Frequency powerlaw indices to be used in computing frequency bins.

  • epsilon (float, optional) – Tuning parameter used in calculating the frequency bins. Lower values will result in higher resolution and more bins.

  • earth_rotation (boolean, optional) – Default is False. If True, then vary the fp/fc polarization values as a function of frequency bin, using a predetermined PN approximation for the time offsets.

  • **kwargs – All other keyword arguments are passed to BaseGaussianNoise.
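
In a configuration file, the fiducial parameters are typically supplied as PARAM_ref options in the [model] section (a hedged sketch; the values here are purely illustrative):

[model]
name = relative
low-frequency-cutoff = 20.0
epsilon = 0.03
mass1_ref = 36.0
mass2_ref = 29.0
tc_ref = 1126259462.4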

calculate_hihjs(models)[source]

Pre-calculate the hihj inner products on a grid

combine_layout()[source]
static extra_args_from_config(cp, section, skip_args=None, dtypes=None)[source]

Adds reading fiducial waveform parameters from config file.

get_waveforms(params)[source]

Get the waveform polarizations for each ifo

init_from_frequencies(data, h00, fbin_ind, ifo)[source]
property likelihood_function
max_curvature_from_reference()[source]

Return the maximum change in slope between frequency bins relative to the reference waveform.

multi_loglikelihood(models)[source]

Calculate a multi-model (signal) likelihood

property multi_signal_support

The list of classes that this model supports in a multi-signal likelihood

name = 'relative'
setup_antenna(earth_rotation, mode, fedges)[source]
summary_product(h1, h2, bins, ifo)[source]

Calculate the summary values for the inner product <h1|h2>

write_metadata(fp, group=None)[source]

Adds writing the fiducial parameters and epsilon to file’s attrs.

Parameters:
  • fp (pycbc.inference.io.BaseInferenceFile instance) – The inference file to write to.

  • group (str, optional) – If provided, the metadata will be written to the attrs specified by group, i.e., to fp[group].attrs. Otherwise, metadata is written to the top-level attrs (fp.attrs).

class pycbc.inference.models.relbin.RelativeTime(*args, sample_rate=4096, **kwargs)[source]

Bases: Relative

Heterodyne likelihood optimized for time marginalization. In addition it supports phase (dominant-mode), sky location, and polarization marginalization.

get_snr(wfs)[source]

Return hp/hc maximized SNR time series

name = 'relative_time'
property ref_snr
class pycbc.inference.models.relbin.RelativeTimeDom(*args, sample_rate=4096, **kwargs)[source]

Bases: RelativeTime

Heterodyne likelihood optimized for time marginalization and dominant-mode-only waveforms. This enables inclination marginalization in addition to the other forms supported by RelativeTime.

get_snr(wfs)[source]

Return hp/hc maximized SNR time series

name = 'relative_time_dom'
pycbc.inference.models.relbin.setup_bins(f_full, f_lo, f_hi, chi=1.0, eps=0.1, gammas=None)[source]

Construct frequency bins for use in a relative likelihood model. For details, see [Zackay, Dai & Venumadhav 2018].

Parameters:
  • f_full (array) – The full resolution array of frequencies being used in the analysis.

  • f_lo (float) – The starting frequency used in matched filtering. This will be the left edge of the first frequency bin.

  • f_hi (float) – The ending frequency used in matched filtering. This will be the right edge of the last frequency bin.

  • chi (float, optional) – Tunable parameter, see [Zackay, Dai & Venumadhav 2018].

  • eps (float, optional) – Tunable parameter, see [Zackay, Dai & Venumadhav 2018]. Lower values result in a larger number of bins.

  • gammas (array, optional) – Frequency powerlaw indices to be used in computing bins.

Returns:

  • nbin (int) – Number of bins.

  • fbin (numpy.array of floats) – Bin edge frequencies.

  • fbin_ind (numpy.array of ints) – Indices of bin edges in full frequency array.
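
A usage sketch (hedged; the frequency grid is hypothetical):

>>> import numpy
>>> from pycbc.inference.models.relbin import setup_bins
>>> f_full = numpy.arange(0, 1024.125, 0.125)  # hypothetical full-resolution grid
>>> nbin, fbin, fbin_ind = setup_bins(f_full, 20.0, 1024.0, eps=0.5)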

pycbc.inference.models.relbin_cpu module

Optimized inner loop functions for the relative likelihood model

pycbc.inference.models.relbin_cpu.likelihood_parts(freqs, fp, fc, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_det(freqs, dtc, hp, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_det_multi(freqs, dtc, hp, h00, dtc2, hp2, h002, a0, a1)
pycbc.inference.models.relbin_cpu.likelihood_parts_multi(freqs, fp, fc, dtc, hp, hc, h00, fp2, fc2, dtc2, hp2, hc2, h002, a0, a1)
pycbc.inference.models.relbin_cpu.likelihood_parts_multi_v(freqs, fp, fc, dtc, hp, hc, h00, fp2, fc2, dtc2, hp2, hc2, h002, a0, a1)
pycbc.inference.models.relbin_cpu.likelihood_parts_v(freqs, fp, fc, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_v_pol(freqs, fp, fc, dtc, pol_phase, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_v_pol_time(freqs, fp, fc, times, dtc, pol_phase, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_v_time(freqs, fp, fc, times, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_vector(freqs, fp, fc, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_vectorp(freqs, fp, fc, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.likelihood_parts_vectort(freqs, fp, fc, dtc, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.snr_predictor(freqs, tstart, delta_t, num_samples, hp, hc, h00, a0, a1, b0, b1)
pycbc.inference.models.relbin_cpu.snr_predictor_dom(freqs, tstart, delta_t, num_samples, hp, h00, a0, a1, b0, b1)

pycbc.inference.models.single_template module

This module provides model classes that assume the noise is Gaussian.

class pycbc.inference.models.single_template.SingleTemplate(variable_params, data, low_frequency_cutoff, sample_rate=32768, marginalize_phase=True, **kwargs)[source]

Bases: DistMarg, BaseGaussianNoise

Model that assumes we know all the intrinsic parameters.

This model assumes we know all the intrinsic parameters, and are only maximizing over the extrinsic ones. We also assume a dominant-mode, non-precessing waveform approximant.

Parameters:
  • variable_params ((tuple of) string(s)) – A tuple of parameter names that will be varied.

  • data (dict) – A dictionary of data, in which the keys are the detector names and the values are the data (assumed to be unwhitened). All data must have the same frequency resolution.

  • low_frequency_cutoff (dict) – A dictionary of starting frequencies, in which the keys are the detector names and the values are the starting frequencies for the respective detectors to be used for computing inner products.

  • sample_rate (int, optional) – The sample rate to use. Default is 32768.

  • polarization_samples (int, optional) – Parameter to specify how finely to marginalize over polarization angle. If None, then polarization must be a parameter.

  • **kwargs – All other keyword arguments are passed to BaseGaussianNoise; see that class for details.

calculate_hihjs(models)[source]

Pre-calculate the hihj inner products on a grid

multi_loglikelihood(models)[source]

Calculate a multi-model (signal) likelihood

property multi_signal_support

The list of classes that this model supports in a multi-signal likelihood

name = 'single_template'

pycbc.inference.models.tools module

Common utility functions for calculation of likelihoods

class pycbc.inference.models.tools.DistMarg[source]

Bases: object

Helper class to add bookkeeping for likelihood marginalization

property current_params

The current parameters

If a parameter has been vector marginalized, the likelihood should expect an array for the given parameter. This allows transparent vectorization for many models.

draw_ifos(snrs, peak_snr_threshold=4.0, log=True, precalculate_marginalization_points=False, **kwargs)[source]

Helper utility to determine which ifos we should use based on the reference SNR time series.

draw_sky_times(snrs, size=None)[source]

Draw ra, dec, and tc together, using the SNR time series to determine Monte Carlo weights.

draw_times(snrs, size=None)[source]

Draw times consistent with the incoherent network SNR

Parameters:

snrs (dict of TimeSeries)

get_precalc_antenna_factors(ifo)[source]

Get the antenna factors for marginalized samples if they exist

marginalize_loglr(sh_total, hh_total, skip_vector=False, return_peak=False)[source]

Return the marginal likelihood

Parameters:
  • sh_total (float or ndarray) – The total <s|h> inner product summed over detectors

  • hh_total (float or ndarray) – The total <h|h> inner product summed over detectors

  • skip_vector (bool, False) – If True and the input is a vector, do not marginalize over that vector; instead, return the likelihood values as a vector.

premarg_draw()[source]

Choose random samples from a prechosen set

reconstruct(rec=None, seed=None, set_loglr=None)[source]

Reconstruct the distance or vector-marginalized parameters of this class.

reset_vector_params()[source]

Redraw vector params from their priors

setup_marginalization(variable_params, marginalize_phase=False, marginalize_distance=False, marginalize_distance_param='distance', marginalize_distance_samples=10000, marginalize_distance_interpolator=False, marginalize_distance_snr_range=None, marginalize_distance_density=None, marginalize_vector_params=None, marginalize_vector_samples=1000.0, marginalize_sky_initial_samples=1000000.0, **kwargs)[source]

Setup the model for use with distance marginalization

This function sets up precalculations for distance / phase marginalization. For distance marginalization, it modifies the model to internally remove distance as a parameter.

Parameters:
  • variable_params (list of strings) – The set of variable parameters

  • marginalize_phase (bool, False) – Do analytic marginalization (appropriate only for 22-mode waveforms)

  • marginalize_distance (bool, False) – Marginalize over distance

  • marginalize_distance_param (str) – Name of the parameter that is used to determine the distance. This might be ‘distance’ or a parameter which can be converted to distance by a provided univariate transformation.

  • marginalize_distance_interpolator (bool) – Use a pre-calculated interpolating function for the distance marginalized likelihood.

  • marginalize_distance_snr_range (tuple of floats, (1, 50)) – The SNR range for the interpolating function to be defined in. If a sampler goes outside this range, the logl will be returned as -numpy.inf.

  • marginalize_distance_density (tuple of ints, (1000, 1000)) – The dimensions of the interpolation grid over (sh, hh).

Returns:

  • variable_params (list of strings) – Set of variable params (missing distance-related parameter).

  • kwargs (dict) – The keyword arguments to the model initialization; may be modified from the original set by this function.
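
In practice, these options are typically supplied through the [model] section of a configuration file; a hedged sketch (option values are illustrative):

[model]
name = marginalized_polarization
marginalize_phase = True
marginalize_distance = True
marginalize_distance_param = distance
marginalize_distance_samples = 10000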

setup_peak_lock(sample_rate=4096, snrs=None, peak_lock_snr=None, peak_lock_ratio=10000.0, peak_lock_region=4, **kwargs)[source]

Determine where to constrain marginalization based on the observed reference SNR peaks.

Parameters:
  • sample_rate (float) – The SNR sample rate

  • snrs (Dict of SNR time series) – Either provide this or the model needs a function to get the reference SNRs.

  • peak_lock_snr (float) – The minimum SNR to bother restricting from the prior range

  • peak_lock_ratio (float) – The likelihood ratio (not log) relative to the peak to act as a threshold bounding region.

  • peak_lock_region (int) – Number of samples to include beyond the strict region determined by the relative likelihood

snr_draw(wfs=None, snrs=None, size=None)[source]

Improve the Monte Carlo vector marginalization using the SNR time series of each detector

pycbc.inference.models.tools.draw_sample(loglr, size=None)[source]

Draw a random index from a 1-d vector with loglr weights
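
A quick sketch (the weights here are illustrative):

>>> import numpy
>>> from pycbc.inference.models.tools import draw_sample
>>> loglr = numpy.array([-1.0, 0.0, 2.0, 0.5])
>>> idx = draw_sample(loglr)  # index drawn with probability proportional to exp(loglr)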

pycbc.inference.models.tools.marginalize_likelihood(sh, hh, logw=None, phase=False, distance=False, skip_vector=False, interpolator=None, return_peak=False, return_complex=False)[source]

Return the marginalized likelihood.

Apply various marginalizations to the data, including phase, distance, and brute-force vector marginalizations. Several options relate to how the distance marginalization is approximated and others allow for special return products to aid in parameter reconstruction.

Parameters:
  • sh (complex float or numpy.ndarray) – The data-template inner product

  • hh (complex float or numpy.ndarray) – The template-template inner product

  • logw – Log weighting factors if vector marginalization is used; if not given, each sample is assumed to be equally weighted.

  • phase (bool, False) – Enable phase marginalization. Only use if the orbital phase can be related to just a single overall phase (e.g., not true for waveforms with sub-dominant modes)

  • skip_vector (bool, False) – Don’t apply marginalization of vector component of input (i.e. leave as vector).

  • interpolator (function, None) – If provided, internal calculation is skipped in favor of a precalculated interpolating function which takes in sh/hh and returns the likelihood.

  • return_peak (bool, False) – If an array is passed as input, also return the peak likelihood and its index, in addition to the likelihood marginalized over the array.

  • return_complex (bool, False) – Return the sh / hh data products before applying phase marginalization. This option is intended to aid in reconstructing phase marginalization and is unlikely to be useful for other purposes.

Returns:

loglr – The marginalized log likelihood ratio

Return type:

float
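
A hedged usage sketch with hypothetical scalar inner products, using phase marginalization only:

>>> from pycbc.inference.models.tools import marginalize_likelihood
>>> sh = 10.0 + 2.0j  # hypothetical <s|h> summed over detectors
>>> hh = 100.0        # hypothetical <h|h> summed over detectors
>>> loglr = marginalize_likelihood(sh, hh, phase=True)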

pycbc.inference.models.tools.setup_distance_marg_interpolant(dist_marg, phase=False, snr_range=(1, 50), density=(1000, 1000))[source]

Create the interpolant for distance marginalization

Parameters:
  • dist_marg (tuple of two arrays) – The (dist_loc, dist_weight) tuple which defines the grid for integrating over distance

  • snr_range (tuple of (float, float)) – Tuple of min, max SNR that the interpolant is expected to work for.

  • density (tuple of (float, float)) – The number of samples in either dimension of the 2d interpolant

Returns:

interp – Function which returns the precalculated likelihood for a given inner product sh/hh.

Return type:

function

pycbc.inference.models.tools.str_to_bool(sval)[source]

Ensure value is a bool if it can be converted

pycbc.inference.models.tools.str_to_tuple(sval, ftype)[source]

Convenience parsing to convert str to tuple

Module contents

This package provides classes and functions for evaluating Bayesian statistics assuming various noise models.

class pycbc.inference.models.CallModel(model, callstat, return_all_stats=True)[source]

Bases: object

Wrapper class for calling models from a sampler.

This class can be called like a function, with the parameter values to evaluate provided as a list in the same order as the model’s variable_params. In that case, the model is updated with the provided parameters and then the callstat retrieved. If return_all_stats is set to True, then all of the stats specified by the model’s default_stats will be returned as a tuple, in addition to the stat value.

The model’s attributes are promoted to this class’s namespace, so that any attribute and method of model may be called directly from this class.

This class must be initialized prior to the creation of a Pool object.

Parameters:
  • model (Model instance) – The model to call.

  • callstat (str) – The statistic to call.

  • return_all_stats (bool, optional) – Whether or not to return all of the other statistics along with the callstat value.

Examples

Create a wrapper around an instance of the TestNormal model, with the callstat set to logposterior:

>>> from pycbc.inference.models import TestNormal, CallModel
>>> model = TestNormal(['x', 'y'])
>>> call_model = CallModel(model, 'logposterior')

Now call on a set of parameter values:

>>> call_model([0.1, -0.2])
(-1.8628770664093453, (0.0, 0.0, -1.8628770664093453))

Note that a tuple of all of the model’s default_stats were returned in addition to the logposterior value. We can shut this off by toggling return_all_stats:

>>> call_model.return_all_stats = False
>>> call_model([0.1, -0.2])
-1.8628770664093453

Attributes of the model can be called from the call model. For example:

>>> call_model.variable_params
('x', 'y')
pycbc.inference.models.available_models()[source]

List the currently available models.

pycbc.inference.models.get_model(model_name)[source]

Retrieve the given model.

Parameters:

model_name (str) – The name of the model to get.

Returns:

The requested model.

Return type:

model

pycbc.inference.models.get_models()[source]

Returns the dictionary of current models.

Ensures that plugins are added to the dictionary first.

pycbc.inference.models.read_from_config(cp, **kwargs)[source]

Initializes a model from the given config file.

The [model] section must have a name argument. The name argument corresponds to the name of the model class to initialize.

Parameters:
  • cp (WorkflowConfigParser) – Config file parser to read.

  • **kwargs – All other keyword arguments are passed to the from_config method of the class specified by the name argument.

Returns:

The initialized model.

Return type:

cls
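
A hedged usage sketch, assuming model.ini is a complete inference configuration file:

>>> from pycbc.workflow import WorkflowConfigParser
>>> from pycbc.inference.models import read_from_config
>>> cp = WorkflowConfigParser(['model.ini'])
>>> model = read_from_config(cp)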

pycbc.inference.models.register_model(model)[source]

Makes a custom model available to PyCBC.

The provided model will be added to the dictionary of models that PyCBC knows about, using the model’s name attribute. If the name is the same as a model that already exists in PyCBC, a warning will be printed.

Parameters:

model (pycbc.inference.models.base.BaseModel) – The model to use. The model should be a sub-class of BaseModel to ensure it has the correct API for use within pycbc_inference.
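
A minimal, untested sketch of registering a custom model (the class here is purely illustrative):

>>> from pycbc.inference.models import register_model, get_model
>>> from pycbc.inference.models.base import BaseModel
>>> class MyModel(BaseModel):
...     name = 'my_model'
...     def _loglikelihood(self):
...         # toy likelihood; a real model would compute something meaningful
...         return 0.0
>>> register_model(MyModel)
>>> model_class = get_model('my_model')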