PyCBC inference documentation (pycbc.inference)

Introduction
This page gives details on how to use the various parameter estimation executables and modules available in PyCBC. The pycbc.inference subpackage contains classes and functions for evaluating probability distributions and likelihoods, and for running Bayesian samplers.
Sampling the parameter space (pycbc_inference)

Overview
The executable pycbc_inference is designed to sample the parameter space and save the samples in an HDF file. A high-level description of the pycbc_inference algorithm is:

1. Read priors from a configuration file.
2. Set up the model to use. If the model uses data, then:
   * read gravitational-wave strain from a frame file or use recolored fake strain;
   * estimate a PSD.
3. Run a sampler to estimate the posterior distribution of the model.
4. Write the samples and metadata to an HDF file.
The model, data, sampler, parameters to vary, and their priors are specified in one or more configuration files, which are passed to the program using the --config-file option. Other command-line options determine what parallelization settings to use. For a full listing of all options run pycbc_inference --help. Below, we give details on how to set up a configuration file and provide examples of how to run pycbc_inference.
Configuring the model, sampler, priors, and data
The configuration file(s) use WorkflowConfigParser syntax. The required sections are: [model], [sampler], and [variable_params]. In addition, multiple [prior] sections must be provided that define the prior distribution to use for the parameters in [variable_params]. If a model uses data, a [data] section must also be provided.

These sections may be split up over multiple files. In that case, all of the files should be provided as space-separated arguments to --config-file. Providing multiple files is equivalent to providing a single file with everything across the files combined. If the same section is specified in multiple files, all of the options will be combined.

Configuration files allow for referencing values in other sections using the syntax ${section|option}. See the examples below for an example of this. When providing multiple configuration files, sections in other files may be referenced, since the multiple files are combined into a single file in memory when the files are loaded.
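As a sketch of the cross-section referencing syntax, the following reuses a value defined in another section (the section and option names here are illustrative, not required names):

```ini
[data]
trigger-time = 1126259462.43

[static_params]
; reuse the trigger time defined in the [data] section
trigger_time = ${data|trigger-time}
```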
Configuring the model

The [model] section sets up what model to use for the analysis. At minimum, a name argument must be provided, specifying which model to use. For example:
[model]
name = gaussian_noise
In this case, the GaussianNoise model would be used. (Examples of using this model on a BBH injection and on GW150914 are given below.) Other arguments to configure the model may also be set in this section. The recognized arguments depend on the model. The currently available models are:
Name                  Class
'gaussian_noise'      pycbc.inference.models.gaussian_noise.GaussianNoise
'marginalized_phase'  pycbc.inference.models.marginalized_gaussian_noise.MarginalizedPhaseGaussianNoise
'relative'            pycbc.inference.models.relbin.Relative
'single_template'     pycbc.inference.models.single_template.SingleTemplate
'test_eggbox'         pycbc.inference.models.analytic.TestEggbox
'test_normal'         pycbc.inference.models.analytic.TestNormal
'test_prior'          pycbc.inference.models.analytic.TestPrior
'test_rosenbrock'     pycbc.inference.models.analytic.TestRosenbrock
'test_volcano'        pycbc.inference.models.analytic.TestVolcano
Refer to the models’ from_config method to see what configuration arguments are available.

Any model name that starts with test_ is an analytic test distribution that requires no data or waveform generation. See the section below on running on an analytic distribution for more details.
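For instance, a minimal configuration for an analytic test distribution might look like the following (the parameter names and prior bounds are illustrative):

```ini
[model]
name = test_normal

[variable_params]
x =
y =

[prior-x]
name = uniform
min-x = -10
max-x = 10

[prior-y]
name = uniform
min-y = -10
max-y = 10
```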
Configuring the sampler

The [sampler] section sets up what sampler to use for the analysis. As with the [model] section, a name must be provided to specify which sampler to use. The currently available samplers are:
Name          Class
'cpnest'      pycbc.inference.sampler.cpnest.CPNestSampler
'dynesty'     pycbc.inference.sampler.dynesty.DynestySampler
'emcee'       pycbc.inference.sampler.emcee.EmceeEnsembleSampler
'emcee_pt'    pycbc.inference.sampler.emcee_pt.EmceePTSampler
'epsie'       pycbc.inference.sampler.epsie.EpsieSampler
'multinest'   pycbc.inference.sampler.multinest.MultinestSampler
'ultranest'   pycbc.inference.sampler.ultranest.UltranestSampler
See the example below of trying different samplers.

Configuration options for the sampler should also be specified in the [sampler] section. For example:
[sampler]
name = emcee
nwalkers = 5000
niterations = 1000
checkpoint-interval = 100
This would tell pycbc_inference to run the EmceeEnsembleSampler with 5000 walkers for 1000 iterations, checkpointing every 100th iteration. Refer to the samplers’ from_config method to see what configuration options are available.
Burn-in tests may also be configured for MCMC samplers in the config file. The options for the burn-in should be placed in [sampler-burn_in]. At minimum, a burn-in-test argument must be given in this section. This argument specifies which test(s) to apply. Multiple tests may be combined using standard python logic operators. For example:

[sampler-burn_in]
burn-in-test = nacl & max_posterior
In this case, the sampler would be considered to be burned in when both the nacl and max_posterior tests were satisfied. Setting this to nacl | max_posterior would instead consider the sampler to be burned in when either the nacl or max_posterior test was satisfied. For more information on what tests are available, see the pycbc.inference.burn_in module.
Thinning samples (MCMC only)

The default behavior for the MCMC samplers (emcee, emcee_pt) is to save every iteration of the Markov chains to the output file. This can quickly lead to very large files. For example, a BBH analysis (~15 parameters) with 200 walkers and 20 temperatures may take ~50 000 iterations to acquire ~5000 independent samples. This leads to a file that is ~50 000 iterations x 200 walkers x 20 temperatures x 15 parameters x 8 bytes ~ 20 GB. Quieter signals can take an order of magnitude more iterations to converge, leading to O(100 GB) files. Clearly, since we only obtain 5000 independent samples from such a run, the vast majority of these samples are of little interest.
To prevent large file size growth, samples may be thinned before they are written to disk. Two thinning options are available, both of which are set in the [sampler] section of the configuration file. They are:

* thin-interval: This will thin the samples by the given integer before writing the samples to disk. File sizes can still grow unbounded, but at a slower rate. The interval must be less than the checkpoint interval.
* max-samples-per-chain: This will cap the maximum number of samples per walker and per temperature to the given integer. This ensures that file sizes never exceed ~ max-samples-per-chain x nwalkers x ntemps x nparameters x 8 bytes. Once the limit is reached, samples will be thinned on disk, and new samples will be thinned to match. The thinning interval will grow with longer runs as a result. To ensure that enough samples exist to determine burn in and to measure an autocorrelation length, max-samples-per-chain must be greater than or equal to 100.
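For example, a [sampler] section using both thinning options might look like this (the sampler settings and values are illustrative):

```ini
[sampler]
name = emcee_pt
nwalkers = 200
ntemps = 20
checkpoint-interval = 2000
; write only every 10th iteration to disk
thin-interval = 10
; cap the on-disk samples per walker and per temperature
max-samples-per-chain = 1000
```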
The thinning interval that was used for thinning samples is saved to the output file’s thinned_by attribute (stored in the HDF file’s .attrs). Note that this is not the autocorrelation length (ACL), which is the amount by which the samples need to be further thinned to obtain independent samples.
Note

In the output files created by the MCMC samplers, we adopt the convention that “iteration” means iteration of the sampler, not index of the samples. For example, if a burn-in test is used, burn_in_iteration will be stored to the sampler_info group in the output file. This gives the iteration of the sampler at which burn in occurred, not the sample on disk. To determine which sample an iteration corresponds to in the file, divide the iteration by thinned_by.

Likewise, we adopt the convention that the autocorrelation length (ACL) is the autocorrelation length of the thinned samples (the number of samples on disk that you need to skip to get independent samples), whereas the autocorrelation time (ACT) is the autocorrelation length in terms of iterations (the number of iterations that you need to skip to get independent samples); i.e., ACT = thinned_by x ACL. The ACT is (up to measurement resolution) independent of the thinning used, and is thus useful for comparing the performance of the sampler.
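The iteration/sample-index conventions above can be sketched in a few lines of Python (the numbers here are made up for illustration):

```python
# Convert between sampler iterations and on-disk sample indices,
# assuming the samples were thinned by a fixed interval.
thinned_by = 10           # stored in the output file's .attrs
burn_in_iteration = 4300  # iteration at which burn in occurred

# Sample index on disk corresponding to the burn-in iteration:
burn_in_index = burn_in_iteration // thinned_by

# ACT = thinned_by x ACL; the ACL is measured on the thinned samples.
acl = 43
act = thinned_by * acl

print(burn_in_index, act)  # -> 430 430
```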
Configuring the prior

What parameters to vary to obtain a posterior distribution are determined by the [variable_params] section. For example:

[variable_params]
x =
y =

This would tell pycbc_inference to sample a posterior over two parameters called x and y.
A prior must be provided for every parameter in [variable_params]. This is done by adding sections named [prior-{param}], where {param} is the name of the parameter the prior is for. For example, to provide a prior for the x parameter in the above example, you would need to add a section called [prior-x]. If the prior couples more than one parameter together in a joint distribution, the parameters should be provided as a +-separated list, e.g., [prior-x+y+z].
The prior sections specify what distribution to use for the parameter’s prior, along with any settings for that distribution. Similar to the model and sampler sections, each prior section must have a name argument that identifies the distribution to use. Distributions are defined in the pycbc.distributions module. The currently available distributions are:
Name                       Class
'arbitrary'                pycbc.distributions.arbitrary.Arbitrary
'cos_angle'                pycbc.distributions.angular.CosAngle
'external'                 pycbc.distributions.external.External
'fromfile'                 pycbc.distributions.arbitrary.FromFile
'gaussian'                 pycbc.distributions.gaussian.Gaussian
'independent_chip_chieff'  pycbc.distributions.spins.IndependentChiPChiEff
'sin_angle'                pycbc.distributions.angular.SinAngle
'uniform'                  pycbc.distributions.uniform.Uniform
'uniform_angle'            pycbc.distributions.angular.UniformAngle
'uniform_f0_tau'           pycbc.distributions.qnm.UniformF0Tau
'uniform_log10'            pycbc.distributions.uniform_log.UniformLog10
'uniform_power_law'        pycbc.distributions.power_law.UniformPowerLaw
'uniform_radius'           pycbc.distributions.power_law.UniformRadius
'uniform_sky'              pycbc.distributions.sky_location.UniformSky
'uniform_solidangle'       pycbc.distributions.angular.UniformSolidAngle
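Putting the pieces together, a prior block might look like the following (the parameter names and bounds are illustrative; an angular distribution such as sin_angle falls back to its default range when no options are given):

```ini
[variable_params]
distance =
inclination =

[prior-distance]
name = uniform
min-distance = 10
max-distance = 1000

[prior-inclination]
name = sin_angle
```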
Static parameters

A [static_params] section may be provided to list any parameters that will remain fixed throughout the run. For example:
[static_params]
approximant = IMRPhenomPv2
f_lower = 18
Setting data

Many models, such as the GaussianNoise model, require data to be provided. To do so, a [data] section must be included that provides information about what data to load and how to condition it.

The type of data to be loaded depends on the model. For example, if you are using the GaussianNoise or MarginalizedPhaseGaussianNoise models (the typical case), one will need to load gravitational-wave data. This is accomplished using tools provided in the pycbc.strain module. The full set of options are:
instruments INSTRUMENTS [INSTRUMENTS ...]
    Instruments to analyze, e.g. H1 L1.
trigger-time TRIGGER_TIME
    Reference GPS time (at geocenter) from which the (analysis|psd)-(start|end)-time options are measured. The integer seconds will be used. Default is 0; i.e., if not provided, the analysis and psd times should be in GPS seconds.
analysis-start-time IFO:TIME [IFO:TIME ...]
    The start time to use for the analysis, measured with respect to the trigger-time. If psd-inverse-length is provided, the given start time will be padded by half that length to account for wrap-around effects.
analysis-end-time IFO:TIME [IFO:TIME ...]
    The end time to use for the analysis, measured with respect to the trigger-time. If psd-inverse-length is provided, the given end time will be padded by half that length to account for wrap-around effects.
psd-start-time IFO:TIME [IFO:TIME ...]
    Start time to use for PSD estimation, measured with respect to the trigger-time.
psd-end-time IFO:TIME [IFO:TIME ...]
    End time to use for PSD estimation, measured with respect to the trigger-time.
data-conditioning-low-freq IFO:FLOW [IFO:FLOW ...]
    Low frequency cutoff of the data. Needed for PSD estimation and when creating fake strain. If not provided, will use the model's low-frequency-cutoff.

Options to select the method of PSD generation (psd-model, psd-file, asd-file, and psd-estimation are mutually exclusive):

psd-model IFO:MODEL [IFO:MODEL ...]
    Get PSD from given analytical model. Choose from any available PSD model.
psd-file IFO:FILE [IFO:FILE ...]
    Get PSD using given PSD ASCII file.
asd-file IFO:FILE [IFO:FILE ...]
    Get PSD using given ASD ASCII file.
psd-estimation IFO:FILE [IFO:FILE ...]
    Measure PSD from the data, using given average method. Choose from mean, median or median-mean.
psd-segment-length IFO:LENGTH [IFO:LENGTH ...]
    (Required for psd-estimation) The segment length for PSD estimation (s).
psd-segment-stride IFO:STRIDE [IFO:STRIDE ...]
    (Required for psd-estimation) The separation between consecutive segments (s).
psd-num-segments IFO:NUM [IFO:NUM ...]
    (Optional, used only with psd-estimation.) If given, PSDs will be estimated using only this number of segments. If more data is given than needed to make this number of segments, the excess data will not be used in the PSD estimate. If not enough data is given, the code will fail.
psd-inverse-length IFO:LENGTH [IFO:LENGTH ...]
    (Optional) The maximum length of the impulse response of the overwhitening filter (s).
psd-output IFO:FILE [IFO:FILE ...]
    (Optional) Write PSD to specified file.
psd-var-segment SECONDS
    Length of segment when calculating the PSD variability.
psd-var-short-segment SECONDS
    Length of short segment for outlier removal in the PSD variability calculation.
psd-var-long-segment SECONDS
    Length of long segment when calculating the PSD variability.
psd-var-psd-duration SECONDS
    Duration of short segments for PSD estimation.
psd-var-psd-stride SECONDS
    Separation between PSD estimation segments.
psd-var-low-freq HERTZ
    Minimum frequency to consider in strain bandpass.
psd-var-high-freq HERTZ
    Maximum frequency to consider in strain bandpass.

Options for obtaining h(t) (these options are used for generating h(t) either by reading from a file or by generating it; they are only needed if the PSD is to be estimated from the data, i.e. if the psd-estimation option is given; this group supports reading from multiple ifos simultaneously):

strain-high-pass IFO:FREQUENCY [IFO:FREQUENCY ...]
    High pass frequency.
pad-data IFO:LENGTH [IFO:LENGTH ...]
    Extra padding to remove highpass corruption (integer seconds).
taper-data IFO:LENGTH [IFO:LENGTH ...]
    Taper ends of data to zero using the supplied length as a window (integer seconds).
sample-rate IFO:RATE [IFO:RATE ...]
    The sample rate to use for h(t) generation (integer Hz).
channel-name IFO:CHANNEL [IFO:CHANNEL ...]
    The channel containing the gravitational strain data.
frame-cache IFO:FRAME_CACHE [IFO:FRAME_CACHE ...]
    Cache file containing the frame locations.
frame-files IFO:FRAME_FILES [IFO:FRAME_FILES ...]
    List of frame files.
hdf-store IFO:HDF_STORE_FILE [IFO:HDF_STORE_FILE ...]
    Store of time series data in hdf format.
frame-type IFO:FRAME_TYPE [IFO:FRAME_TYPE ...]
    (Optional) Replaces frame-files. Use datafind to get the needed frame file(s) of this type.
frame-sieve IFO:FRAME_SIEVE [IFO:FRAME_SIEVE ...]
    (Optional) Only use frame files where the URL matches the regular expression given.
fake-strain IFO:CHOICE [IFO:CHOICE ...]
    Name of model PSD for generating fake gaussian noise. Choose from any available PSD model, or zeroNoise.
fake-strain-seed IFO:SEED [IFO:SEED ...]
    Seed value for the generation of fake colored gaussian noise.
fake-strain-from-file IFO:FILE [IFO:FILE ...]
    File containing an ASD from which to generate fake noise.
injection-file IFO:FILE [IFO:FILE ...]
    (Optional) Injection file used to add waveforms into the strain.
sgburst-injection-file IFO:FILE [IFO:FILE ...]
    (Optional) Injection file used to add sine-Gaussian burst waveforms into the strain.
injection-scale-factor IFO:VAL [IFO:VAL ...]
    Multiply injections by this factor before injecting into the data.
injection-f-ref IFO:VALUE
    Reference frequency in Hz for creating CBC injections from an XML file.
injection-f-final IFO:VALUE
    Override the f_final field of a CBC XML injection file.
gating-file IFO:FILE [IFO:FILE ...]
    (Optional) Text file of gating segments to apply. Format of each line (units s): gps_time zeros_half_width pad_half_width
autogating-threshold IFO:SIGMA [IFO:SIGMA ...]
    If given, find and gate glitches producing a deviation larger than SIGMA in the whitened strain time series.
autogating-max-iterations SIGMA
    If given, iteratively apply autogating.
autogating-cluster IFO:SECONDS [IFO:SECONDS ...]
    Length of clustering window for detecting glitches for autogating.
autogating-width IFO:SECONDS [IFO:SECONDS ...]
    Half-width of the gating window.
autogating-taper IFO:SECONDS [IFO:SECONDS ...]
    Taper the strain before and after each gating window over a duration of SECONDS.
autogating-pad IFO:SECONDS [IFO:SECONDS ...]
    Ignore the given length of whitened strain at the ends of a segment, to avoid filter ringing.
normalize-strain IFO:VALUE [IFO:VALUE ...]
    (Optional) Divide frame data by constant.
zpk-z IFO:VALUE [IFO:VALUE ...]
    (Optional) Zero-pole-gain (zpk) filter strain. A list of zeros for the transfer function.
zpk-p IFO:VALUE [IFO:VALUE ...]
    (Optional) Zero-pole-gain (zpk) filter strain. A list of poles for the transfer function.
zpk-k IFO:VALUE [IFO:VALUE ...]
    (Optional) Zero-pole-gain (zpk) filter strain. Transfer function gain.

Options for gating data:

gate IFO:CENTRALTIME:HALFDUR:TAPERDUR [IFO:CENTRALTIME:HALFDUR:TAPERDUR ...]
    Apply one or more gates to the data before filtering.
gate-overwhitened
    Overwhiten data first, then apply the gates specified in gate. Overwhitening allows for sharper tapers to be used, since lines are not blurred.
psd-gate IFO:CENTRALTIME:HALFDUR:TAPERDUR [IFO:CENTRALTIME:HALFDUR:TAPERDUR ...]
    Apply one or more gates to the data used for computing the PSD. Gates are applied prior to FFTing the data for PSD estimation.

Options for querying data quality (DQ):

dq-segment-name DQ_SEGMENT_NAME
    The status flag to query for data quality. Default is "DATA".
dq-source {any,GWOSC,dqsegdb}
    Where to look for DQ information. If "any" (the default), will first try GWOSC, then dqsegdb.
dq-server DQ_SERVER
    The server to use for dqsegdb.
veto-definer VETO_DEFINER
    Path to a veto definer file that defines groups of flags, which themselves define a set of DQ segments.
As indicated in the table, the psd-model and fake-strain options can accept the name of an analytical PSD model as an argument; any of the analytical PSD models available in PyCBC may be chosen.
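For instance, a [data] section that analyzes fake Gaussian noise generated from an analytic PSD might look like the following (the times, seeds, channel names, and PSD model name are illustrative):

```ini
[data]
instruments = H1 L1
trigger-time = 1126259462.43
analysis-start-time = -6
analysis-end-time = 2
; generate fake colored Gaussian noise from an analytic PSD
fake-strain = H1:aLIGOZeroDetHighPower L1:aLIGOZeroDetHighPower
fake-strain-seed = H1:44 L1:45
psd-model = H1:aLIGOZeroDetHighPower L1:aLIGOZeroDetHighPower
data-conditioning-low-freq = H1:18 L1:18
channel-name = H1:STRAIN L1:STRAIN
sample-rate = 2048
strain-high-pass = 15
pad-data = 8
```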
Advanced configuration settings

The following are additional settings that may be provided in the configuration file in order to do more sophisticated analyses.
Sampling transforms

One or more of the variable_params may be transformed to a different parameter space for purposes of sampling. This is done by specifying a [sampling_params] section. This section specifies which variable_params to replace with which parameters for sampling. This must be followed by one or more [sampling_transforms-{sampling_params}] sections that provide the transform class to use. For example, the following would cause the sampler to sample in chirp mass (mchirp) and mass ratio (q) instead of mass1 and mass2:

[sampling_params]
mass1, mass2: mchirp, q

[sampling_transforms-mchirp+q]
name = mass1_mass2_to_mchirp_q

Transforms are provided by the pycbc.transforms module; see that module for the currently available transforms.
Note

Both a jacobian and inverse_jacobian must be defined in order to use a transform class for a sampling transform. Not all transform classes in pycbc.transforms have these defined. Check the class documentation to see if a Jacobian is defined.
Waveform transforms

There can be any number of variable_params with any name. No parameter name is special (with the exception of parameters that start with calib_; see below).

However, when doing parameter estimation with CBC waveforms, certain parameter names must be provided for waveform generation. The parameter names recognized by the CBC waveform generators are:
'mass1'
    The mass of the first component object in the binary (in solar masses).
'mass2'
    The mass of the second component object in the binary (in solar masses).
'spin1x'
    The x component of the first binary component's dimensionless spin.
'spin1y'
    The y component of the first binary component's dimensionless spin.
'spin1z'
    The z component of the first binary component's dimensionless spin.
'spin2x'
    The x component of the second binary component's dimensionless spin.
'spin2y'
    The y component of the second binary component's dimensionless spin.
'spin2z'
    The z component of the second binary component's dimensionless spin.
'eccentricity'
    Eccentricity.
'lambda1'
    The dimensionless tidal deformability parameter of object 1.
'lambda2'
    The dimensionless tidal deformability parameter of object 2.
'dquad_mon1'
    Quadrupole-monopole parameter / m_1^5 - 1.
'dquad_mon2'
    Quadrupole-monopole parameter / m_2^5 - 1.
'lambda_octu1'
    The octupolar tidal deformability parameter of object 1.
'lambda_octu2'
    The octupolar tidal deformability parameter of object 2.
'quadfmode1'
    The quadrupolar f-mode angular frequency of object 1.
'quadfmode2'
    The quadrupolar f-mode angular frequency of object 2.
'octufmode1'
    The octupolar f-mode angular frequency of object 1.
'octufmode2'
    The octupolar f-mode angular frequency of object 2.
'distance'
    Luminosity distance to the binary (in Mpc).
'coa_phase'
    Coalescence phase of the binary (in rad).
'inclination'
    Inclination (rad), defined as the angle between the total angular momentum J and the line-of-sight.
'long_asc_nodes'
    Longitude of ascending nodes axis (rad).
'mean_per_ano'
    Mean anomaly of the periastron (rad).
'delta_t'
    The time step used to generate the waveform (in s).
'f_lower'
    The starting frequency of the waveform (in Hz).
'approximant'
    A string that indicates the chosen approximant.
'f_ref'
    The reference frequency.
'phase_order'
    The pN order of the orbital phase. The default of -1 indicates that all implemented orders are used.
'spin_order'
    The pN order of the spin corrections. The default of -1 indicates that all implemented orders are used.
'tidal_order'
    The pN order of the tidal corrections. The default of -1 indicates that all implemented orders are used.
'amplitude_order'
    The pN order of the amplitude. The default of -1 indicates that all implemented orders are used.
'eccentricity_order'
    The pN order of the eccentricity corrections. The default of -1 indicates that all implemented orders are used.
'frame_axis'
    Allow to choose among orbital_l, view and total_j.
'modes_choice'
    Allow to turn on among orbital_l, view and total_j.
'side_bands'
    Flag for generating sidebands.
'mode_array'
    Choose which (l,m) modes to include when generating a waveform. Only if the approximant supports this feature. By default pass None and let lalsimulation use its default behaviour. Example: mode_array = [ [2,2], [2,-2] ].
'numrel_data'
    Sets the NR flags; only needed for NR waveforms.
'delta_f'
    The frequency step used to generate the waveform (in Hz).
'f_final'
    The ending frequency of the waveform. The default (0) indicates that the choice is made by the respective approximant.
'f_final_func'
    Use the given frequency function to compute f_final based on the parameters of the waveform.
'tc'
    Coalescence time (s).
'ra'
    Right ascension (rad).
'dec'
    Declination (rad).
'polarization'
    Polarization (rad).
It is possible to specify a variable_param that is not one of these parameters. To do so, you must provide one or more [waveform_transforms-{param}] section(s) that define transform(s) from the arbitrary variable_params to the needed waveform parameter(s) {param}. For example, in the following we provide a prior on chirp_distance. Since distance, not chirp_distance, is recognized by the CBC waveforms module, we provide a transform to go from chirp_distance to distance:
[variable_params]
chirp_distance =

[prior-chirp_distance]
name = uniform
min-chirp_distance = 1
max-chirp_distance = 200

[waveform_transforms-distance]
name = chirp_distance_to_distance
A useful transform for these purposes is the CustomTransform, which allows for arbitrary transforms using any function in the pycbc.conversions, pycbc.coordinates, or pycbc.cosmology modules, along with numpy math functions. For example, the following would use the I-Love-Q relationship pycbc.conversions.dquadmon_from_lambda() to relate the quadrupole moment of a neutron star dquad_mon1 to its tidal deformation lambda1:
[variable_params]
lambda1 =

[waveform_transforms-dquad_mon1]
name = custom
inputs = lambda1
dquad_mon1 = dquadmon_from_lambda(lambda1)
Note
A Jacobian is not necessary for waveform transforms, since the transforms are only being used to convert a set of parameters into something that the waveform generator understands. This is why in the above example we are able to use a custom transform without needing to provide a Jacobian.
Some common transforms are predefined in the code. These are: the mass parameters mass1 and mass2 can be substituted with mchirp and eta, or mchirp and q; and the component spin parameters spin1x, spin1y, and spin1z can be substituted with the polar coordinates spin1_a, spin1_azimuthal, and spin1_polar (ditto for spin2).
Calibration parameters

If any calibration parameters are used (prefix calib_), a [calibration] section must be included. This section must have a name option that identifies what calibration model to use. The models are described in pycbc.calibration. The [calibration] section must also include reference values fc0, fs0, and qinv0, as well as paths to ASCII transfer function files for the test mass actuation, penultimate mass actuation, sensing function, and digital filter for each IFO being used in the analysis. E.g. for an analysis using H1 only, the required options would be h1-fc0, h1-fs0, h1-qinv0, h1-transfer-function-a-tst, h1-transfer-function-a-pu, h1-transfer-function-c, and h1-transfer-function-d.
Constraints

One or more constraints may be applied to the parameters; these are specified by the [constraint] section(s). Additional constraints may be supplied by adding more [constraint-{tag}] sections. Any tag may be used; the only requirement is that they be unique. If multiple constraint sections are provided, the union of all constraints is applied. Alternatively, multiple constraints may be joined in a single argument using numpy's logical operators.

The parameter that constraints are applied to may be any parameter in variable_params or any output parameter of the transforms. Functions may be applied to these parameters to obtain constraints on derived parameters. Any function in the conversions, coordinates, or cosmology module may be used, along with any numpy ufunc. So, in the following example, the mass ratio (q) is constrained to be <= 4 by using a function from the conversions module.
[variable_params]
mass1 =
mass2 =

[prior-mass1]
name = uniform
min-mass1 = 3
max-mass1 = 12

[prior-mass2]
name = uniform
min-mass2 = 1
max-mass2 = 3

[constraint-1]
name = custom
constraint_arg = q_from_mass1_mass2(mass1, mass2) <= 4
Checkpointing and output files

While pycbc_inference is running, it will create a checkpoint file named {output-file}.checkpoint, where {output-file} is the name of the file you specified with the --output-file option. When it checkpoints, it will dump results to this file; when finished, the file is renamed to {output-file}. A {output-file}.bkup is also created, which is a copy of the checkpoint file. This is kept in case the checkpoint file gets corrupted during writing. The .bkup file is deleted at the end of the run, unless --save-backup is turned on.

When pycbc_inference starts, it checks if either {output-file}.checkpoint or {output-file}.bkup exists (in that order). If at least one of them exists, pycbc_inference will attempt to load it and continue to run from the last checkpoint state.
The output/checkpoint files are HDF files. To peruse the structure of a file you can use the h5ls command-line utility. More advanced utilities for reading and writing from/to them are provided by the sampler IO classes in pycbc.inference.io. To load one of these files in python do:

from pycbc.inference import io
fp = io.loadfile(filename, "r")
Here, fp is an instance of a sampler IO class. Basically, this is an instance of an h5py.File handler, with additional convenience functions added on top. For example, if you want all of the samples of all of the variable parameters in the file, you can do:

samples = fp.read_samples(fp.variable_params)

This will return a FieldArray of all of the samples.

Each sampler has its own sampler IO class that adds different convenience functions, depending on the sampler that was used. For more details on these classes, see the pycbc.inference.io module.
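As a rough analogy for working with the returned samples (the field names and values below are made up; a FieldArray behaves much like a numpy structured array, with one field per parameter):

```python
import numpy as np

# Hypothetical stand-in for the samples returned by fp.read_samples:
# a structured array with one field per variable parameter.
rng = np.random.default_rng(0)
samples = np.zeros(1000, dtype=[("mchirp", float), ("q", float)])
samples["mchirp"] = rng.normal(30.0, 1.0, size=1000)
samples["q"] = rng.uniform(1.0, 4.0, size=1000)

# Fields can be accessed by name, e.g. to get credible intervals:
lo, med, hi = np.percentile(samples["mchirp"], [5, 50, 95])
print(samples.dtype.names)  # -> ('mchirp', 'q')
```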