GW150914 example with Gaussian noise model
To run on GW150914, we can use the same sampler, prior, and model configuration files as were used for the simulated BBH example. We only need to change the data configuration file so that we run on real gravitational-wave data.
First, we need to download the data from the Gravitational Wave Open Science Center. Run:
wget https://www.gwosc.org/eventapi/html/GWTC-1-confident/GW150914/v3/H-H1_GWOSC_16KHZ_R1-1126257415-4096.gwf
wget https://www.gwosc.org/eventapi/html/GWTC-1-confident/GW150914/v3/L-L1_GWOSC_16KHZ_R1-1126257415-4096.gwf
This will download the appropriate data (“frame”) files to your current working directory. You can now use the following data configuration file:
[data]
instruments = H1 L1
trigger-time = 1126259462.43
; See the documentation at
; http://pycbc.org/pycbc/latest/html/inference.html#simulated-bbh-example
; for details on the following settings:
analysis-start-time = -6
analysis-end-time = 2
psd-estimation = median-mean
psd-start-time = -256
psd-end-time = 256
psd-inverse-length = 8
psd-segment-length = 8
psd-segment-stride = 4
; The frame files must be downloaded from GWOSC before running. Here, we
; assume that the files have been downloaded to the same directory. Adjust
; the file path as necessary if not.
frame-files = H1:H-H1_GWOSC_16KHZ_R1-1126257415-4096.gwf L1:L-L1_GWOSC_16KHZ_R1-1126257415-4096.gwf
channel-name = H1:GWOSC-16KHZ_R1_STRAIN L1:GWOSC-16KHZ_R1_STRAIN
; this will cause the data to be resampled to 2048 Hz:
sample-rate = 2048
; We'll use a high-pass filter so as not to get numerical errors from the
; large-amplitude, low-frequency noise. Here we use 15 Hz, which is safely
; below the low-frequency cutoff of our likelihood integral (20 Hz).
strain-high-pass = 15
; The pad-data argument is for the high-pass filter: 8s are added to the
; beginning/end of the analysis/psd times when the data is loaded. After the
; high-pass filter is applied, the additional time is discarded. This pad is
; *in addition to* the time added to the analysis start/end time for the PSD
; inverse length. Since it is discarded before the data is transformed for the
; likelihood integral, it has little effect on the run time.
pad-data = 8
The frame-files argument points to the data files that we just downloaded from GWOSC. If you downloaded the files to a different directory, modify this argument accordingly to point to the correct location.
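Before starting a long run, you can optionally check that the frame files and channel names are readable. The sketch below is only illustrative (it is not part of the pycbc_inference workflow itself): it loads the H1 strain with PyCBC's frame reader and applies a high-pass and resampling matching the settings above, assuming the frame file sits in the current working directory.

# Optional sanity check: load the H1 strain and condition it with the same
# settings used in the data configuration file above.
from pycbc import frame
from pycbc.filter import highpass, resample_to_delta_t

strain = frame.read_frame('H-H1_GWOSC_16KHZ_R1-1126257415-4096.gwf',
                          'H1:GWOSC-16KHZ_R1_STRAIN')
strain = highpass(strain, 15.0)                  # strain-high-pass = 15
strain = resample_to_delta_t(strain, 1.0/2048)   # sample-rate = 2048
print(strain.start_time, strain.duration)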
Note
If you are running on a cluster that has a LIGO_DATAFIND_SERVER (e.g., LIGO Data Grid clusters, Atlas), you do not need to copy frame files. Instead, replace the frame-files argument with frame-type, and set it to H1:H1_LOSC_16_V1 L1:L1_LOSC_16_V1 (see the sketch after this note).
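For illustration, the data file line in the [data] section above would then be replaced with:

; used instead of frame-files on clusters with a LIGO_DATAFIND_SERVER;
; the data are located automatically through the datafind server
frame-type = H1:H1_LOSC_16_V1 L1:L1_LOSC_16_V1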
Now run:
#!/bin/sh
# configuration files
PRIOR_CONFIG=gw150914_like.ini
DATA_CONFIG=data.ini
SAMPLER_CONFIG=emcee_pt-gw150914_like.ini
OUTPUT_PATH=inference.hdf
# the following sets the number of cores to use; adjust as needed to
# your computer's capabilities
NPROCS=10
# run sampler
# Setting OMP_NUM_THREADS=1 stops lalsimulation from spawning multiple
# threads per process; those extra threads would otherwise compete with the
# processes spawned by pycbc_inference and slow the run down.
OMP_NUM_THREADS=1 \
pycbc_inference --verbose \
--seed 1897234 \
--config-file ${PRIOR_CONFIG} ${DATA_CONFIG} ${SAMPLER_CONFIG} \
--output-file ${OUTPUT_PATH} \
--nprocesses ${NPROCS} \
--force
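Once the run finishes, a quick way to confirm that inference.hdf contains posterior samples is to open it with h5py. This is only a sketch, assuming the output layout of recent PyCBC releases, in which each sampled parameter is stored as a dataset in a samples group:

# List the sampled parameters stored in the output file.
import h5py

with h5py.File('inference.hdf', 'r') as fp:
    print(list(fp['samples'].keys()))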