# Simulated BBH example

This example recovers the parameters of a simulated binary black hole (BBH) with parameters similar to GW150914.

## 1. Create the injection

First, we need to create an `injection.hdf` file that specifies the
parameters of the simulated signal. To do that we will use
`pycbc_create_injections`. Like `pycbc_inference`,
`pycbc_create_injections` uses a configuration file to set the parameters of
the injections it will create. To create a binary black hole with parameters
similar to GW150914, use the following configuration file:

```
[variable_params]
[static_params]
tc = 1126259462.420
mass1 = 37
mass2 = 32
ra = 2.2
dec = -1.25
inclination = 2.5
coa_phase = 1.5
polarization = 1.75
distance = 100
f_ref = 20
f_lower = 18
approximant = IMRPhenomPv2
taper = start
```

Note the similarity to the configuration file for `pycbc_inference`: you must
have a `[variable_params]` section. If we wanted to randomize one or more
of the parameters, we would list them there, then add `[prior]` sections to
specify what distribution to draw the parameters from. In this case, however,
we want to fix the parameters, so we just put all of the necessary parameters
in the `[static_params]` section.

To create the injection file, run:

```
#!/bin/sh
pycbc_create_injections --verbose \
--config-files injection.ini \
--ninjections 1 \
--seed 10 \
--output-file injection.hdf \
--variable-params-section variable_params \
--static-params-section static_params \
--dist-section prior \
--force
```

This will create the `injection.hdf` file, which we will give to
`pycbc_inference`. For more information on generating injection files, run
`pycbc_create_injections --help`.

## 2. Setup the configuration files

Now we need to set up the configuration for `pycbc_inference`. Since we
will be analyzing data, we will need to provide several additional options in a
`[data]` section. To keep the configuration files easy to read, we will split
the data, sampler, and prior settings into their own configuration files.

Here are the model and prior settings we will use:

```
[model]
name = gaussian_noise
low-frequency-cutoff = 20.0
[variable_params]
; waveform parameters that will vary in MCMC
delta_tc =
mass1 =
mass2 =
spin1_a =
spin1_azimuthal =
spin1_polar =
spin2_a =
spin2_azimuthal =
spin2_polar =
distance =
coa_phase =
inclination =
polarization =
ra =
dec =
[static_params]
; waveform parameters that will not change in MCMC
approximant = IMRPhenomPv2
f_lower = 20
f_ref = 20
; we'll set the tc by using the trigger time in the data
; section of the config file + delta_tc
trigger_time = ${data|trigger-time}
[prior-delta_tc]
; coalescence time prior
name = uniform
min-delta_tc = -0.1
max-delta_tc = 0.1
[waveform_transforms-tc]
; we need to provide tc to the waveform generator
name = custom
inputs = delta_tc
tc = ${data|trigger-time} + delta_tc
[prior-mass1]
name = uniform
min-mass1 = 10.
max-mass1 = 80.
[prior-mass2]
name = uniform
min-mass2 = 10.
max-mass2 = 80.
[prior-spin1_a]
name = uniform
min-spin1_a = 0.0
max-spin1_a = 0.99
[prior-spin1_polar+spin1_azimuthal]
name = uniform_solidangle
polar-angle = spin1_polar
azimuthal-angle = spin1_azimuthal
[prior-spin2_a]
name = uniform
min-spin2_a = 0.0
max-spin2_a = 0.99
[prior-spin2_polar+spin2_azimuthal]
name = uniform_solidangle
polar-angle = spin2_polar
azimuthal-angle = spin2_azimuthal
; The waveform generator expects spins to be in cartesian coordinates, with
; names spin(1|2)(x|y|z). We therefore need to provide a waveform transform
; that converts the spherical coordinates that we have defined the spin prior
; in to cartesian coordinates.
[waveform_transforms-spin1x+spin1y+spin1z]
name = spherical_to_cartesian
x = spin1x
y = spin1y
z = spin1z
radial = spin1_a
polar = spin1_polar
azimuthal = spin1_azimuthal
[waveform_transforms-spin2x+spin2y+spin2z]
name = spherical_to_cartesian
x = spin2x
y = spin2y
z = spin2z
radial = spin2_a
polar = spin2_polar
azimuthal = spin2_azimuthal
[prior-distance]
; following gives a uniform volume prior
name = uniform_radius
min-distance = 10
max-distance = 1000
[prior-coa_phase]
; coalescence phase prior
name = uniform_angle
[prior-inclination]
; inclination prior
name = sin_angle
[prior-ra+dec]
; sky position prior
name = uniform_sky
[prior-polarization]
; polarization prior
name = uniform_angle
```
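The `spherical_to_cartesian` waveform transforms declared above convert each spin from the spherical coordinates used by the prior to the Cartesian components the waveform generator expects. A minimal sketch of the underlying conversion (the function name here is illustrative, not a PyCBC API):

```python
import math

def spherical_to_cartesian(a, polar, azimuthal):
    """Convert a spin magnitude and angles (radians) to Cartesian
    components, as the spherical_to_cartesian transform does."""
    sx = a * math.sin(polar) * math.cos(azimuthal)
    sy = a * math.sin(polar) * math.sin(azimuthal)
    sz = a * math.cos(polar)
    return sx, sy, sz

# An aligned spin (polar angle 0) has no in-plane components:
print(spherical_to_cartesian(0.5, 0.0, 1.2))  # -> (0.0, 0.0, 0.5)
```

Because the prior on the angles is `uniform_solidangle`, the spin directions are distributed isotropically on the sphere before conversion.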

In the `[model]` section we have set the model to be `gaussian_noise`. As
described above, this is the standard model to use for CBC signals. It assumes
that the noise is wide-sense stationary Gaussian noise. Notice that we provide
a `low-frequency-cutoff` argument, which is the lower bound for the likelihood
integral. (See the `GaussianNoise` docs for details.)

The `GaussianNoise` model will need to generate model waveforms in order to
evaluate the likelihood. This means that we need to provide it with a waveform
approximant to use. Which model to use is set by the `approximant` argument
in the `[static_params]` section. Here, we are using `IMRPhenomPv2`. This
is a frequency-domain, precessing model that uses the dominant,
\(\ell=|m|=2\) mode. For this reason, we are varying all three components
of each object's spin, along with the masses, location, orientation, and phase
of the signal. We also need to provide a lower frequency cutoff (`f_lower`),
which is the starting frequency of the waveform. This must be \(\leq\) the
smallest low-frequency cutoff set in the model section.

Note

In this example we have to sample over a reference phase for the waveform
(`coa_phase`). This is because we are using the `GaussianNoise` model. For
dominant-mode-only waveforms, it is possible to analytically marginalize
over the phase using the `MarginalizedPhaseGaussianNoise` model. This can
speed up the convergence of the sampler by a factor of 3 or more. To use
the marginalized phase model, change the model name to
`marginalized_phase`, and remove `coa_phase` from the
`variable_params` and the prior. However, the marginalized phase model
should not be used with fully precessing models or models that include
higher modes. You can use it with `IMRPhenomPv2` due to some
simplifications that the approximant makes.

We also need a prior for the coalescence time `tc`. We have done this by
setting a reference time in the `static_params` section and varying a
±0.1 s window around it with the `delta_tc` parameter. Notice that the
trigger time is not set to a value; instead, we reference the `trigger-time`
option that is set in the `[data]` section. This way, we only need to set the
trigger time in one place; we can reuse this prior file for different BBH
events by simply providing a different data configuration file.
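The `tc` construction above amounts to simple arithmetic: draw `delta_tc` uniformly from the ±0.1 s window, then add the fixed trigger time, as the `custom` waveform transform specifies. A sketch (the helper name is illustrative):

```python
import random

TRIGGER_TIME = 1126259462.42  # GPS time from the [data] section

def draw_tc(rng):
    """Sketch of the delta_tc prior combined with the custom transform:
    tc = trigger-time + delta_tc, with delta_tc ~ Uniform(-0.1, 0.1)."""
    delta_tc = rng.uniform(-0.1, 0.1)
    return TRIGGER_TIME + delta_tc

tc = draw_tc(random.Random(10))
assert abs(tc - TRIGGER_TIME) <= 0.1  # always within the prior window
```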

Here is the data configuration file we will use:

```
[data]
instruments = H1 L1
trigger-time = 1126259462.42
analysis-start-time = -6
analysis-end-time = 2
; strain settings
sample-rate = 2048
fake-strain = H1:aLIGOaLIGODesignSensitivityT1800044 L1:aLIGOaLIGODesignSensitivityT1800044
fake-strain-seed = H1:44 L1:45
; psd settings
psd-estimation = median-mean
psd-inverse-length = 8
psd-segment-length = 8
psd-segment-stride = 4
psd-start-time = -256
psd-end-time = 256
; even though we're making fake strain, the strain
; module requires a channel to be provided, so we'll
; just make one up
channel-name = H1:STRAIN L1:STRAIN
; Providing an injection file will cause a simulated
; signal to be added to the data
injection-file = injection.hdf
; We'll use a high-pass filter so as not to get numerical errors from the large
; amplitude low frequency noise. Here we use 15 Hz, which is safely below the
; low frequency cutoff of our likelihood integral (20 Hz)
strain-high-pass = 15
; The pad-data argument is for the high-pass filter: 8s are added to the
; beginning/end of the analysis/psd times when the data is loaded. After the
; high pass filter is applied, the additional time is discarded. This pad is
; *in addition to* the time added to the analysis start/end time for the PSD
; inverse length. Since it is discarded before the data is transformed for the
; likelihood integral, it has little effect on the run time.
pad-data = 8
```

In this case, we are generating fake Gaussian noise (via the `fake-strain`
option) that is colored by the Advanced LIGO updated design sensitivity curve.
(Note that this is ~3 times more sensitive than the LIGO detectors were when
GW150914 was detected.) The duration of data that will be analyzed is set by
the `analysis-(start|end)-time` arguments. These values are measured with
respect to the `trigger-time`. The analyzed data should be long enough that it
encompasses the longest waveform admitted by our prior, plus our timing
uncertainty (which is determined by the prior on `delta_tc`). Waveform
duration is approximately determined by the total mass of a system. The lowest
total mass (`= mass1 + mass2`) admitted by our prior is 20 solar masses. This
corresponds to a duration of ~6 seconds, so we start the analysis time 6
seconds before the trigger time. (See the `pycbc.waveform` module for
utilities to estimate waveform duration.) Since the trigger time is
approximately where we expect the merger to happen, we only need a small amount
of time afterward to account for the timing uncertainty and ringdown. Here, we
choose 2 seconds, which is a good safety margin.
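The ~6 second figure can be sanity-checked with the leading-order (Newtonian) chirp time from the 20 Hz starting frequency. This is only a rough 0PN estimate, not what the `pycbc.waveform` utilities compute, but it reproduces the number used above:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
MSUN = 1.989e30   # kg

def chirp_time(mass1, mass2, f_lower):
    """Leading-order (Newtonian) inspiral duration from f_lower, in
    seconds. A rough estimate; real waveforms differ at higher order."""
    mchirp = (mass1 * mass2) ** 0.6 / (mass1 + mass2) ** 0.2 * MSUN
    return (5.0 / 256.0) * (math.pi * f_lower) ** (-8.0 / 3.0) \
        * (G * mchirp / C ** 3) ** (-5.0 / 3.0)

# Lightest system admitted by the prior: 10 + 10 solar masses from 20 Hz
print(chirp_time(10, 10, 20))  # roughly 6 seconds
```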

We also have to provide arguments for estimating a PSD. Although we know the
exact shape of the PSD in this case, we will still estimate it from the
generated data, as this most closely resembles what you would do with a real
event. To do this, we have set `psd-estimation` to `median-mean` and we have
set `psd-segment-length`, `psd-segment-stride`, and `psd-(start|end)-time`
(which are with respect to the trigger time). This means that a Welch-like
method will be used to estimate the PSD. Specifically, we will use 512 s of
data centered on the trigger time to estimate the PSD. This time will be
divided into 8 s-long segments (the segment length), each overlapping by 4 s
(the segment stride). The data in each segment will be transformed to the
frequency domain. Two median values will be determined in each frequency bin
from across all even/odd segments, then averaged to obtain the PSD.
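The segmenting arithmetic can be sketched as follows (this is illustrative only, not the actual PyCBC implementation): with 512 s of data, 8 s segments, and a 4 s stride, the PSD is estimated from 127 overlapping segments.

```python
def psd_segment_starts(total_time, seg_len, stride):
    """Start times (seconds) of the overlapping segments used for a
    Welch-like PSD estimate. Sketch of the settings above, not PyCBC code."""
    starts = []
    t = 0
    while t + seg_len <= total_time:
        starts.append(t)
        t += stride
    return starts

# 512 s of data, 8 s segments, overlapping by 4 s:
segs = psd_segment_starts(512, 8, 4)
print(len(segs))  # -> 127
```

For `median-mean`, these segments are then split into the even- and odd-indexed sets, a median is taken in each frequency bin within each set, and the two medians are averaged.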

The beginning and end of the analysis segment will be corrupted by the
convolution of the inverse PSD with the data. To limit the amount of time that
is corrupted, we set `psd-inverse-length` to `8`. This limits the
corruption to at most the first and last four seconds of the data segment. To
account for the corruption, `psd-inverse-length/2` seconds are
subtracted/added by the code from/to the analysis start/end times specified by
the user before the data are transformed to the frequency domain.
Consequently, our data will have a frequency resolution of \(\Delta f =
1/16\,\) Hz. The 4 s at the beginning/end of the segment are effectively
ignored since the waveform is contained entirely in the -6/+2 s we set with the
analysis start/end times.
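The \(\Delta f = 1/16\) Hz figure follows directly from this padding arithmetic:

```python
# Settings from the [data] section above
analysis_start, analysis_end = -6, 2  # seconds, relative to trigger time
psd_inverse_length = 8                # seconds

# psd-inverse-length / 2 is added to each side before the FFT:
pad = psd_inverse_length / 2
segment_length = (analysis_end - analysis_start) + 2 * pad  # seconds
delta_f = 1.0 / segment_length                              # Hz

print(segment_length, delta_f)  # -> 16.0 0.0625
```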

Finally, we will use the `emcee_pt` sampler with the following settings:

```
[sampler]
name = emcee_pt
nwalkers = 200
ntemps = 20
effective-nsamples = 1000
checkpoint-interval = 2000
max-samples-per-chain = 1000
[sampler-burn_in]
burn-in-test = nacl & max_posterior
;
; Sampling transforms
;
[sampling_params]
; parameters on the left will be sampled in
; the parameters on the right
mass1, mass2 : mchirp, q
[sampling_transforms-mchirp+q]
; inputs mass1, mass2
; outputs mchirp, q
name = mass1_mass2_to_mchirp_q
```
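The `mass1_mass2_to_mchirp_q` sampling transform above moves the sampler into chirp mass and mass ratio, which the data constrain more directly than the component masses. A minimal sketch of the forward conversion (function name mirrors the config entry; here q is defined as mass1/mass2):

```python
def mass1_mass2_to_mchirp_q(mass1, mass2):
    """Component masses to chirp mass and mass ratio (q = mass1/mass2).
    Sketch of the sampling transform named in the config above."""
    mchirp = (mass1 * mass2) ** 0.6 / (mass1 + mass2) ** 0.2
    q = mass1 / mass2
    return mchirp, q

# The injected values from Section 1:
mchirp, q = mass1_mass2_to_mchirp_q(37.0, 32.0)
print(mchirp, q)  # chirp mass near 30 solar masses, q near 1.16
```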

Here, we will use 200 walkers and 20 temperatures. We will checkpoint (i.e.,
dump results to file) every 2000 iterations. Since we have provided an
`effective-nsamples` argument and a `[sampler-burn_in]` section,
`pycbc_inference` will run until it has acquired 1000 independent samples
after burn-in, which is determined by a combination of the `nacl` and
`max_posterior` tests; i.e., the sampler will be considered converged when
both of these tests are satisfied.

The number of independent samples is checked at each checkpoint: after dumping
the results, the burn-in test is applied and an autocorrelation length (ACL) is
calculated. The number of independent samples is then
`nwalkers x (the number of iterations since burn-in)/ACL`. If this number
exceeds `effective-nsamples`, `pycbc_inference` will finalize the results
and exit.
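As arithmetic, the termination check described above looks like this (a sketch of the formula, not the actual implementation; the ACL value is hypothetical):

```python
def effective_nsamples(nwalkers, iterations_since_burnin, acl):
    """Rough independent-sample count: every ACL iterations contributes
    one independent sample per walker."""
    return int(nwalkers * iterations_since_burnin / acl)

# 200 walkers, 2000 iterations since burn-in, and a hypothetical ACL of 400
# would just reach the effective-nsamples = 1000 target:
print(effective_nsamples(200, 2000, 400))  # -> 1000
```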

## 3. Run

To perform the analysis, run:

```
#!/bin/sh
# sampler parameters
PRIOR_CONFIG=gw150914_like.ini
DATA_CONFIG=data.ini
SAMPLER_CONFIG=emcee_pt-gw150914_like.ini
OUTPUT_PATH=inference.hdf
# the following sets the number of cores to use; adjust as needed to
# your computer's capabilities
NPROCS=10
# run sampler
# Running with OMP_NUM_THREADS=1 stops lalsimulation
# from spawning multiple threads, which would otherwise compete
# with the processes spawned by pycbc_inference and increase the runtime.
OMP_NUM_THREADS=1 \
pycbc_inference --verbose \
--seed 12 \
--config-file ${PRIOR_CONFIG} ${DATA_CONFIG} ${SAMPLER_CONFIG} \
--output-file ${OUTPUT_PATH} \
--nprocesses ${NPROCS} \
--force
```

Since we are generating waveforms and analyzing a 15-dimensional parameter
space, this run will be much more computationally expensive than the analytic
example. We recommend running this on a cluster or a computer with a
large number of cores. In the example, we have set the parallelization to use
10 cores. With these settings, it should checkpoint approximately every hour or
two. The run should complete in a few hours. If you would like to acquire more
samples, increase `effective-nsamples`.

Note

We have found that the `emcee_pt` sampler struggles to accumulate more
than ~2000 independent samples with 200 walkers and 20 temperatures. The
basic issue is that the autocorrelation time (ACT) starts to grow at the
same rate as new iterations are added, so that the number of independent
samples remains the same. Increasing the number of walkers and decreasing
the number of temperatures can help, but this sometimes leads to the sampler
not fully converging before passing the burn-in tests. If you want more than
2000 samples, we currently recommend doing multiple independent runs with
different seed values, then combining posterior samples after they have
finished.