pycbc.types package¶
Submodules¶
pycbc.types.aligned module¶
This module provides a class derived from numpy.ndarray that also indicates whether or not its memory is aligned. It further provides functions for creating zeros and empty (uninitialized) arrays with this class.
pycbc.types.array module¶
This module provides a device-independent Array class based on PyCUDA and Numpy.

class
pycbc.types.array.
Array
(initial_array, dtype=None, copy=True)[source]¶ Bases:
object
Array used to do numeric calculations on various compute devices. It is a convenience wrapper around numpy and pycuda.

abs_max_loc
()[source]¶ Return the maximum elementwise norm in the array along with the index location

almost_equal_elem
(other, tol, relative=True)[source]¶ Compare whether two array types are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the array.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the array.
Other metadata (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
Returns: ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.
Return type: boolean
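The tolerance rule above can be sketched in plain numpy (a hypothetical helper that illustrates the comparison, not the pycbc implementation):

```python
import numpy as np

def almost_equal_elem(a, b, tol, relative=True):
    """Element-by-element near-equality following the rule above.

    relative=True:  |a[i] - b[i]| <= tol * |a[i]| for every i
    relative=False: |a[i] - b[i]| <= tol          for every i
    """
    if a.dtype != b.dtype or len(a) != len(b):
        return False                  # metadata must match exactly
    diff = np.abs(a - b)
    bound = tol * np.abs(a) if relative else tol
    return bool(np.all(diff <= bound))
```

Note that the relative bound is scaled per element by abs(self[i]), so large and small entries are held to proportionally different absolute errors.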

almost_equal_norm
(other, tol, relative=True)[source]¶ Compare whether two array types are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
Other metadata (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, based on their norms.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
Returns: ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.
Return type: boolean

data
¶ Returns the internal python array

dtype
¶

itemsize
¶

kind
¶

multiply_and_add
(other, mult_fac)[source]¶ Multiply other by mult_fac and add the result to self in place; self is returned as the output. Precisions of the inputs must match.
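A numpy sketch of the documented in-place behaviour (pycbc dispatches this to an optimized backend, but the arithmetic is equivalent):

```python
import numpy as np

def multiply_and_add(self_data, other, mult_fac):
    """In-place fused update: self_data += other * mult_fac (numpy sketch)."""
    self_data += other * mult_fac   # modifies self_data's buffer directly
    return self_data
```

Because the update is in place, the returned array is the same object as the input, not a copy.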

nbytes
¶

ndim
¶

precision
¶

ptr
¶ Returns a pointer to the memory of this array

save
(path, group=None)[source]¶ Save array to a Numpy .npy, hdf, or text file. When saving a complex array as text, the real and imaginary parts are saved as the first and second column respectively. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters:  path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.

shape
¶

view
(dtype)[source]¶ Return a ‘view’ of the array with its bytes now interpreted according to ‘dtype’. The location in memory is unchanged and changing elements in a view of an array will also change the original array.
Parameters: dtype (numpy dtype (one of float32, float64, complex64 or complex128)) – The new dtype that should be used to interpret the bytes of self
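Since Array wraps numpy, the view semantics match numpy's: the bytes are reinterpreted in place, and writing through a view also changes the original. A minimal numpy illustration:

```python
import numpy as np

# two complex64 values occupy the same bytes as four float32 values
x = np.array([1 + 2j, 3 + 4j], dtype=np.complex64)
v = x.view(np.float32)   # same memory, reinterpreted: [1.0, 2.0, 3.0, 4.0]
v[0] = 9.0               # writes through to the original array
```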


pycbc.types.array.
empty
(length, dtype=<class 'numpy.float64'>)[source]¶ Return an empty Array (no initialization)

pycbc.types.array.
load_array
(path, group=None)[source]¶ Load an Array from a .hdf, .txt or .npy file. The default data types will be double precision floating point.
Parameters:  path (string) – Source file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.
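For the .npy branch, a save/load round trip reduces to numpy's own file format; a minimal sketch not using pycbc itself (the path is a throwaway temp file):

```python
import os
import tempfile
import numpy as np

data = np.arange(4, dtype=np.float64)
path = os.path.join(tempfile.mkdtemp(), 'vec.npy')
np.save(path, data)        # what the .npy case boils down to
loaded = np.load(path)     # returns a double-precision ndarray
```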
pycbc.types.array_cpu module¶
Numpy-based CPU backend for PyCBC Array

pycbc.types.array_cpu.
abs_arg_max
()¶

pycbc.types.array_cpu.
abs_arg_max_complex
¶

pycbc.types.array_cpu.
abs_max_loc
()¶

pycbc.types.array_cpu.
clear
()¶

pycbc.types.array_cpu.
cumsum
()¶

pycbc.types.array_cpu.
dot
()¶

pycbc.types.array_cpu.
empty
()¶

pycbc.types.array_cpu.
inner
()¶ Return the inner product of the array with complex conjugation.

pycbc.types.array_cpu.
inner_real
¶

pycbc.types.array_cpu.
max
()¶

pycbc.types.array_cpu.
max_loc
()¶

pycbc.types.array_cpu.
min
()¶

pycbc.types.array_cpu.
multiply_and_add
()¶ Multiply other by mult_fac and add the result to self in place. This requires all inputs to be of the same precision.

pycbc.types.array_cpu.
numpy
()¶

pycbc.types.array_cpu.
ptr
()¶

pycbc.types.array_cpu.
squared_norm
()¶ Return the elementwise squared norm of the array

pycbc.types.array_cpu.
sum
()¶

pycbc.types.array_cpu.
take
()¶

pycbc.types.array_cpu.
vdot
()¶ Return the inner product of the array with complex conjugation.

pycbc.types.array_cpu.
weighted_inner
()¶ Return the inner product of the array with complex conjugation.

pycbc.types.array_cpu.
zeros
()¶
pycbc.types.frequencyseries module¶
Provides a class representing a frequency series.

class
pycbc.types.frequencyseries.
FrequencySeries
(initial_array, delta_f=None, epoch='', dtype=None, copy=True)[source]¶ Bases:
pycbc.types.array.Array
Models a frequency series consisting of uniformly sampled scalar values.
Parameters:  initial_array (array-like) – Array containing sampled data.
 delta_f (float) – Frequency between consecutive samples in Hertz.
 epoch ({None, lal.LIGOTimeGPS}, optional) – Start time of the associated time domain data in seconds.
 dtype ({None, datatype}, optional) – Sample data type.
 copy (boolean, optional) – If True, samples are copied to a new array.

epoch
¶ Time at 0 index.
Type: lal.LIGOTimeGPS

almost_equal_elem
(other, tol, relative=True, dtol=0.0)[source]¶ Compare whether two frequency series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other metadata (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
 dtol – A non-negative number, the tolerance for delta_f. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.
Returns: ‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean

almost_equal_norm
(other, tol, relative=True, dtol=0.0)[source]¶ Compare whether two frequency series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other metadata (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, based on their norms.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
 dtol – A non-negative number, the tolerance for delta_f. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.
Returns: ‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean

cyclic_time_shift
(dt)[source]¶ Shift the data and timestamps by a given number of seconds
Shift the data and timestamps in the time domain by a given number of seconds. To change only the time stamps, do ts.start_time += dt. The time shift may be a fraction of the sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector.
Returns: data – The time shifted frequency series.
Return type: pycbc.types.FrequencySeries
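In the frequency domain, a cyclic time shift is a multiplication by the phase ramp exp(-2*pi*i*f*dt). A numpy sketch (illustrative, not the pycbc code) checking the whole-sample case against np.roll:

```python
import numpy as np

n, delta_t = 8, 0.5
x = np.arange(n, dtype=float)
dt_shift = 2 * delta_t                        # shift by exactly two samples

f = np.fft.fftfreq(n, d=delta_t)              # frequency of each FFT bin
phase_ramp = np.exp(-2j * np.pi * f * dt_shift)
shifted = np.fft.ifft(np.fft.fft(x) * phase_ramp).real
# the final samples wrap around to the front, as described above
```

For non-integer multiples of the sample spacing the same phase ramp gives a sub-sample shift, which is why the shift "may be a fraction of the sample spacing".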

delta_f
Frequency between consecutive samples in Hertz.

delta_t
¶ Return the time between samples if this were a time series. This assumes the time series is even in length!

duration
¶ Return the time duration of this vector

end_time
¶ Return the end time of this vector

epoch
Frequency series epoch as a LIGOTimeGPS.

lal
()[source]¶ Produces a LAL frequency series object equivalent to self.
Returns: lal_data – LAL frequency series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self was ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.
Return type: {lal.*FrequencySeries}
Raises: TypeError
– If frequency series is stored in GPU memory.

match
(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]¶ Return the match between the two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. Beware, this may remove high frequency content or the end of the vector.
Parameters:  other (TimeSeries or FrequencySeries) – The input vector containing a waveform.
 psd (Frequency Series) – A power spectral density to weight the overlap.
 low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match.
 high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match.
Returns:  match (float)
 index (int) – The number of samples to shift to get the match.

sample_frequencies
Array of the sample frequencies.
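Because the series is uniformly sampled, sample i sits at frequency i * delta_f; a minimal numpy sketch of what this attribute contains:

```python
import numpy as np

delta_f = 0.25                                  # Hz between samples
n = 5                                           # number of frequency samples
sample_frequencies = np.arange(n) * delta_f     # [0.0, 0.25, 0.5, 0.75, 1.0]
```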

sample_rate
¶ Return the sample rate this would have in the time domain. This assumes an even-length time series!

save
(path, group=None, ifo='P1')[source]¶ Save frequency series to a Numpy .npy, hdf, or text file. The first column contains the sample frequencies, the second contains the values. In the case of a complex frequency series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters:  path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.

start_time
¶ Return the start time of this vector

to_timeseries
(delta_t=None)[source]¶ Return the inverse Fourier transform of this frequency series as a time series.
Note that this assumes an even-length time series!
Parameters: delta_t ({None, float}, optional) – The time resolution of the returned series. By default the resolution is determined by the length and delta_f of this frequency series.
Returns: The inverse Fourier transform of this frequency series.
Return type: TimeSeries

pycbc.types.frequencyseries.
load_frequencyseries
(path, group=None)[source]¶ Load a FrequencySeries from a .hdf, .txt or .npy file. The default data types will be double precision floating point.
Parameters:  path (string) – Source file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.
pycbc.types.optparse module¶
This module contains extensions for use with argparse.

class
pycbc.types.optparse.
DictWithDefaultReturn
[source]¶ Bases:
collections.defaultdict

default_set
= False¶

ifo_set
= False¶


class
pycbc.types.optparse.
MultiDetOptionAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
argparse.Action

class
pycbc.types.optparse.
MultiDetOptionActionSpecial
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
pycbc.types.optparse.MultiDetOptionAction
This class is an extension of the MultiDetOptionAction class to handle cases where the : is already a special character. For example, the channel name is something like H1:CHANNEL_NAME. Here the channel name must be provided uniquely for each ifo. The dictionary key is set to H1 and the value to H1:CHANNEL_NAME for this example.
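A minimal argparse action in the same spirit (a hypothetical sketch, not the pycbc implementation): split each value on its first colon and key the result by detector, keeping the full string as the value:

```python
import argparse

class PerIfoAction(argparse.Action):
    """Collect 'H1:CHANNEL_NAME' style values into {'H1': 'H1:CHANNEL_NAME'}."""
    def __call__(self, parser, namespace, values, option_string=None):
        out = {}
        for item in values:
            ifo = item.split(':', 1)[0]   # detector prefix before the first colon
            out[ifo] = item               # keep the full string as the value
        setattr(namespace, self.dest, out)

parser = argparse.ArgumentParser()
parser.add_argument('--channel-name', nargs='+', action=PerIfoAction)
opts = parser.parse_args(['--channel-name', 'H1:CHANNEL_NAME', 'L1:CHANNEL_NAME'])
```

Splitting on only the first colon is what makes channel names containing further colons safe to pass through unchanged.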

class
pycbc.types.optparse.
MultiDetOptionAppendAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶

pycbc.types.optparse.
convert_to_process_params_dict
(opt)[source]¶ Takes the namespace object (opt) from the multi-detector interface and returns a dictionary of command line options that will be handled correctly by the register_to_process_params ligolw function.

pycbc.types.optparse.
copy_opts_for_single_ifo
(opt, ifo)[source]¶ Takes the namespace object (opt) from the multi-detector interface and returns a namespace object for a single ifo that can be used with functions expecting output from the single-detector interface.

pycbc.types.optparse.
ensure_one_opt
(opt, parser, opt_list)[source]¶ Check that one and only one in the opt_list is defined in opt
Parameters:

pycbc.types.optparse.
ensure_one_opt_multi_ifo
(opt, parser, ifo, opt_list)[source]¶ Check that one and only one in the opt_list is defined in opt
Parameters:

pycbc.types.optparse.
nonnegative_float
(s)[source]¶ Ensure argument is a positive real number or zero and return it as float.
To be used as type in argparse arguments.

pycbc.types.optparse.
positive_float
(s)[source]¶ Ensure argument is a positive real number and return it as float.
To be used as type in argparse arguments.
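Such argparse type validators typically look like the following (a sketch of the documented behaviour, not the exact pycbc code):

```python
import argparse

def positive_float(s):
    """argparse type: a strictly positive real number."""
    value = float(s)
    if value <= 0:
        raise argparse.ArgumentTypeError('%r is not a positive number' % s)
    return value

def nonnegative_float(s):
    """argparse type: a positive real number or zero."""
    value = float(s)
    if value < 0:
        raise argparse.ArgumentTypeError('%r is negative' % s)
    return value
```

Raising ArgumentTypeError (rather than ValueError) lets argparse report the failure as a clean usage error.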

pycbc.types.optparse.
required_opts
(opt, parser, opt_list, required_by=None)[source]¶ Check that all the opts are defined
Parameters:
pycbc.types.timeseries module¶
Provides a class representing a time series.

class
pycbc.types.timeseries.
TimeSeries
(initial_array, delta_t=None, epoch='', dtype=None, copy=True)[source]¶ Bases:
pycbc.types.array.Array
Models a time series consisting of uniformly sampled scalar values.
Parameters:  initial_array (array-like) – Array containing sampled data.
 delta_t (float) – Time between consecutive samples in seconds.
 epoch ({None, lal.LIGOTimeGPS}, optional) – Time of the first sample in seconds.
 dtype ({None, datatype}, optional) – Sample data type.
 copy (boolean, optional) – If True, samples are copied to a new array.

delta_t
¶

duration
¶

start_time
¶

end_time
¶

sample_times
¶

sample_rate
¶

add_into
(other)[source]¶ Return the sum of the two time series accounting for the time stamp.
The other vector will be resized and time shifted with sub-sample precision before adding. This assumes the data are zero outside of the original vector range.

almost_equal_elem
(other, tol, relative=True, dtol=0.0)[source]¶ Compare whether two time series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other metadata (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
 dtol – A non-negative number, the tolerance for delta_t. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.
Returns: ‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean

almost_equal_norm
(other, tol, relative=True, dtol=0.0)[source]¶ Compare whether two time series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other metadata (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters:  other – Another Python object, that should be tested for almost-equality with ‘self’, based on their norms.
 tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.
 relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).
 dtol – A non-negative number, the tolerance for delta_t. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.
Returns: ‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean

crop
(left, right)[source]¶ Remove the given number of seconds from either end of the time series.
Parameters:  left (float) – Number of seconds to remove from the start of the time series.
 right (float) – Number of seconds to remove from the end of the time series.
Returns: cropped – The reduced time series.
Return type: pycbc.types.TimeSeries

cyclic_time_shift
(dt)[source]¶ Shift the data and timestamps by a given number of seconds
Shift the data and timestamps in the time domain by a given number of seconds. To change only the time stamps, do ts.start_time += dt. The time shift may be a fraction of the sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector.
Returns: data – The time shifted time series.
Return type: pycbc.types.TimeSeries

delta_f
¶ Return the delta_f this time series would have in the frequency domain.

delta_t
Time between consecutive samples in seconds.

detrend
(type='linear')[source]¶ Remove linear trend from the data
Remove a linear trend from the data to improve the approximation that the data are circularly convolved; this helps reduce the size of filter transients from a circular convolution / filter.
Parameters: type (str) – The choice of detrending. The default (‘linear’) removes a linear least-squares fit; ‘constant’ removes only the mean of the data.
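Removing a least-squares linear trend can be sketched with numpy alone (an illustrative helper for the ‘linear’ case, not the pycbc code):

```python
import numpy as np

def linear_detrend(y):
    """Subtract the least-squares straight-line fit from y (numpy-only sketch)."""
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # best-fit line coefficients
    return y - (slope * x + intercept)
```

Applied to a pure ramp, the residual is (numerically) zero, which is exactly the transient-reduction effect described above.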

duration
Duration of time series in seconds.

end_time
Time series end time as a LIGOTimeGPS.

filter_psd
(segment_duration, delta_f, flow)[source]¶ Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the PSD of this time segment. The PSD is then truncated in the time domain to the segment duration and interpolated to the requested sample frequency.
Parameters:  segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
 delta_f (float) – Frequency spacing of the returned frequency series.
 flow (float) – Low frequency cutoff used in the truncation.
Returns: psd – Frequency series containing the estimated PSD.
Return type: FrequencySeries

fir_zero_filter
(coeff)[source]¶ Filter the time series with a set of FIR coefficients.
Parameters: coeff (numpy.ndarray) – FIR coefficients. Should be of odd length and symmetric.
Returns: filtered_series – The filtered time series, which has been properly shifted to account for the FIR filter delay and has the corrupted regions zeroed out.
Return type: pycbc.types.TimeSeries
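The delay compensation can be sketched with numpy: convolve, then realign by the (L-1)/2-sample group delay of a symmetric odd-length filter (an illustrative helper that omits zeroing the corrupted edge regions):

```python
import numpy as np

def fir_zero_phase(x, coeff):
    """Apply an odd-length symmetric FIR and undo its group delay (sketch)."""
    assert len(coeff) % 2 == 1, "coeff should be of odd length"
    delay = (len(coeff) - 1) // 2          # group delay of a symmetric FIR
    full = np.convolve(x, coeff)           # length len(x) + len(coeff) - 1
    return full[delay:delay + len(x)]      # realigned with the input
```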

gate
(time, zero_width=0.25, taper_width=0.25)[source]¶ Gate out a portion of the time series.
Parameters:  time (float) – Central time of the gate in seconds.
 zero_width (float) – Half-width of the region to zero out, in seconds.
 taper_width (float) – Width of the taper applied on each side of the zeroed region, in seconds.
Returns: data – Gated time series.
Return type: pycbc.types.TimeSeries

highpass_fir
(frequency, order, beta=5.0, remove_corrupted=True)[source]¶ Highpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).
Parameters:  Time Series – The time series to be highpassed.
 frequency (float) – The frequency below which the signal is suppressed.
 order (int) – Number of corrupted samples on each side of the time series.
 beta (float) – Beta parameter of the Kaiser window that sets the side lobe attenuation.
 remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.
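One standard way to build such taps is a windowed-sinc lowpass followed by spectral inversion; a numpy sketch (the helper name, normalization, and default sample_rate are illustrative assumptions, not pycbc's exact design):

```python
import numpy as np

def highpass_fir_taps(frequency, order, beta=5.0, sample_rate=256.0):
    """Kaiser-windowed highpass taps via spectral inversion (sketch).

    'order' corrupted samples on each side gives 2*order + 1 taps; the
    sample_rate default here is only an illustrative assumption.
    """
    n = 2 * order + 1
    m = np.arange(n) - order                   # symmetric tap indices
    fc = frequency / sample_rate               # cutoff in cycles per sample
    h_lp = 2 * fc * np.sinc(2 * fc * m) * np.kaiser(n, beta)
    h_lp /= h_lp.sum()                         # unity DC gain for the lowpass
    h_hp = -h_lp
    h_hp[order] += 1.0                         # spectral inversion -> highpass
    return h_hp
```

The taps sum to zero, so the DC component (and frequencies well below the cutoff) are suppressed, matching the behaviour described above.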

lal
()[source]¶ Produces a LAL time series object equivalent to self.
Returns: lal_data – LAL time series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self is ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.
Return type: {lal.*TimeSeries}
Raises: TypeError
– If time series is stored in GPU memory.

lowpass_fir
(frequency, order, beta=5.0, remove_corrupted=True)[source]¶ Lowpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).
Parameters:  Time Series – The time series to be lowpassed.
 frequency (float) – The frequency above which the signal is suppressed.
 order (int) – Number of corrupted samples on each side of the time series.
 beta (float) – Beta parameter of the Kaiser window that sets the side lobe attenuation.
 remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.

match
(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]¶ Return the match between the two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. This may remove high frequency content or the end of the vector.
Parameters:  other (TimeSeries or FrequencySeries) – The input vector containing a waveform.
 psd (Frequency Series) – A power spectral density to weight the overlap.
 low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match.
 high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match.
Returns:  match (float)
 index (int) – The number of samples to shift to get the match.

notch_fir
(f1, f2, order, beta=5.0, remove_corrupted=True)[source]¶ Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain Kaiser window (beta = 5.0).
The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band.
Parameters:  Time Series – The time series to be notched.
 f1 (float) – The start of the frequency suppression.
 f2 (float) – The end of the frequency suppression.
 order (int) – Number of corrupted samples on each side of the time series.
 beta (float) – Beta parameter of the Kaiser window that sets the side lobe attenuation.

prepend_zeros
(num)[source]¶ Prepend num zeros onto the beginning of this TimeSeries. The epoch is also updated to account for the prepended samples.

psd
(segment_duration, **kwds)[source]¶ Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the psd of this time segment. For more complete options, please see that function.
Parameters:  segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
 kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.
Returns: psd – Frequency series containing the estimated PSD.
Return type: FrequencySeries
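The Welch estimate averages windowed periodograms over segments; a simplified numpy sketch (non-overlapping segments and a Hann window only; pycbc.psd.welch supports many more options):

```python
import numpy as np

def welch_psd(x, seg_len, delta_t):
    """Average of one-sided Hann-windowed periodograms (simplified sketch)."""
    win = np.hanning(seg_len)
    norm = delta_t / (win ** 2).sum()          # periodogram normalization
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    pgrams = [norm * np.abs(np.fft.rfft(win * s)) ** 2 for s in segs]
    psd = 2.0 * np.mean(pgrams, axis=0)        # factor 2 for one-sided spectrum
    freqs = np.fft.rfftfreq(seg_len, d=delta_t)
    return freqs, psd
```

Averaging over segments trades frequency resolution (set by seg_len) for a lower-variance estimate.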

qtransform
(delta_t=None, delta_f=None, logfsteps=None, frange=None, qrange=(4, 64), mismatch=0.2, return_complex=False)[source]¶ Return the interpolated 2d qtransform of this data
Parameters:  delta_t ({self.delta_t, float}) – The time resolution to interpolate to
 delta_f (float, Optional) – The frequency resolution to interpolate to
 logfsteps (int) – Do a log interpolation (incompatible with delta_f option) and set the number of steps to take.
 frange ({(30, nyquist*0.8), tuple of ints}) – frequency range
 qrange ({(4, 64), tuple}) – q range
 mismatch (float) – Mismatch between frequency tiles
 return_complex ({False, bool}) – return the raw complex series instead of the normalized power.
Returns:  times (numpy.ndarray) – The time that the qtransform is sampled.
 freqs (numpy.ndarray) – The frequencies that the qtransform is sampled.
 qplane (numpy.ndarray (2d)) – The two dimensional interpolated qtransform of this time series.

sample_rate
The sample rate of the time series.

sample_times
Array containing the sample times.

save
(path, group=None)[source]¶ Save time series to a Numpy .npy, hdf, or text file. The first column contains the sample times, the second contains the values. In the case of a complex time series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters:  path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.

save_to_wav
(file_name)[source]¶ Save this time series to a wav format audio file.
Parameters: file_name (string) – The output file name

start_time
Return time series start time as a LIGOTimeGPS.

time_slice
(start, end)[source]¶ Return the slice of the time series that contains the time range in GPS seconds.
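Index arithmetic for such a slice follows directly from the epoch and delta_t; a numpy sketch with hypothetical values:

```python
import numpy as np

delta_t, epoch = 0.25, 100.0        # hypothetical sample spacing and GPS start
data = np.arange(16, dtype=float)   # stand-in for the time series values

start, end = 101.0, 102.0           # GPS range to keep
i0 = int(round((start - epoch) / delta_t))
i1 = int(round((end - epoch) / delta_t))
chunk = data[i0:i1]                 # the samples falling in [start, end)
```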

to_frequencyseries
(delta_f=None)[source]¶ Return the Fourier transform of this time series.
Parameters: delta_f ({None, float}, optional) – The frequency resolution of the returned frequency series. By default the resolution is determined by the duration of the time series.
Returns: The Fourier transform of this time series.
Return type: FrequencySeries
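The shapes and resolution follow the usual real-FFT conventions; a numpy sketch (the delta_t scaling shown is one common continuous-transform normalization, not necessarily pycbc's exact one):

```python
import numpy as np

n, delta_t = 16, 0.5
x = np.random.default_rng(0).normal(size=n)

freq_data = np.fft.rfft(x) * delta_t   # one common normalization convention
delta_f = 1.0 / (n * delta_t)          # resolution fixed by the total duration
```

A real input of n samples yields n//2 + 1 frequency samples, and the frequency spacing is the reciprocal of the duration n * delta_t.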

whiten
(segment_duration, max_filter_duration, trunc_method='hann', remove_corrupted=True, low_frequency_cutoff=None, return_psd=False, **kwds)[source]¶ Return a whitened time series
Parameters:  segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
 max_filter_duration (int) – Maximum length of the time-domain filter in seconds.
 trunc_method ({None, 'hann'}) – Function used for truncating the timedomain filter. None produces a hard truncation at max_filter_len.
 remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the whitening is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.
 low_frequency_cutoff ({None, float}) – Low frequency cutoff to pass to the inverse spectrum truncation. This should be matched to a known low frequency cutoff of the data if there is one.
 return_psd ({False, Boolean}) – Return the estimated and conditioned PSD that was used to whiten the data.
 kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.
Returns: whitened_data – The whitened time series.
Return type: TimeSeries

pycbc.types.timeseries.
load_timeseries
(path, group=None)[source]¶ Load a TimeSeries from a .hdf, .txt or .npy file. The default data types will be double precision floating point.
Parameters:  path (string) – Source file path. Must end with either .hdf, .npy or .txt.
 group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError
– If path does not end in .hdf, .npy or .txt.