pycbc.types package

Submodules

pycbc.types.aligned module

This module provides a class derived from numpy.ndarray that also indicates whether or not its memory is aligned. It further provides functions for creating zeros and empty (uninitialized) arrays with this class.

pycbc.types.aligned.check_aligned(ndarr)[source]
pycbc.types.aligned.empty(n, dtype)[source]
pycbc.types.aligned.zeros(n, dtype)[source]
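
A minimal usage sketch (check_aligned is assumed here to raise on misaligned memory; that behaviour is not spelled out above):

    import numpy
    from pycbc.types import aligned

    # Allocate an uninitialized and a zero-initialized aligned array.
    a = aligned.empty(1024, dtype=numpy.float64)
    b = aligned.zeros(1024, dtype=numpy.float32)

    # Verify that the underlying buffer satisfies the alignment requirement.
    aligned.check_aligned(b)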

pycbc.types.array module

This module provides a device-independent Array class based on PyCUDA and Numpy.

class pycbc.types.array.Array(initial_array, dtype=None, copy=True)[source]

Bases: object

Array used to do numeric calculations on various compute devices. It is a convenience wrapper around numpy and pycuda.

abs_arg_max()[source]

Return the index of the element with the largest absolute value.

abs_max_loc()[source]

Return the maximum elementwise norm in the array along with the index location

almost_equal_elem(other, tol, relative=True)[source]

Compare whether two array types are almost equal, element by element.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the array.

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the array.

Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element.

  • tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.

  • relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).

Returns:

‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.

Return type:

boolean
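
A minimal sketch of the two comparison modes (the values are invented for illustration):

    import numpy
    from pycbc.types import Array

    a = Array(numpy.array([1.0, 2.0, 3.0]))
    b = Array(numpy.array([1.0, 2.0, 3.0 + 1e-9]))

    # Relative: passes if abs(a[i]-b[i]) <= tol*abs(a[i]) for all i.
    a.almost_equal_elem(b, tol=1e-6)                   # True
    # Absolute: passes if abs(a[i]-b[i]) <= tol for all i.
    a.almost_equal_elem(b, tol=1e-12, relative=False)  # False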

almost_equal_norm(other, tol, relative=True)[source]

Compare whether two array types are almost equal, normwise.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol

Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other – another Python object, that should be tested for almost-equality with ‘self’, based on their norms.

  • tol – a non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.

  • relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).

Returns:

‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.

Return type:

boolean

astype(dtype)[source]
clear()[source]

Clear out the values of the array.

conj()[source]

Return complex conjugate of Array.

copy()[source]

Return copy of this array

cumsum()[source]

Return the cumulative sum of the array.

property data

Returns the internal python array

dot(other)[source]

Return the dot product

property dtype
fill(value)[source]
imag()[source]

Return imaginary part of Array

inner(other)[source]

Return the inner product of the array with complex conjugation.

property itemsize
property kind
lal()[source]

Returns a LAL Object that contains this data

max()[source]

Return the maximum value in the array.

max_loc()[source]

Return the maximum value in the array along with the index location

min()[source]

Return the minimum value in the array.

multiply_and_add(other, mult_fac)[source]

Return other multiplied by mult_fac and with self added. Self is modified in place and returned as output. Precisions of inputs must match.

property nbytes
property ndim
numpy()[source]

Returns a Numpy Array that contains this data

property precision
property ptr

Returns a pointer to the memory of this array

real()[source]

Return real part of Array

resize(new_size)[source]

Resize self to new_size

roll(shift)[source]

Cyclically shift the vector by shift samples.

save(path, group=None)[source]

Save array to a Numpy .npy, hdf, or text file. When saving a complex array as text, the real and imaginary parts are saved as the first and second column respectively. When using hdf format, the data is stored as a single vector, along with relevant attributes.

Parameters:
  • path (string) – Destination file path. Must end with either .hdf, .npy or .txt.

  • group (string) – Additional name for internal storage use; e.g. HDF storage uses this as the dataset key.

Raises:

ValueError – If path does not end in .npy, .txt or .hdf.

property shape
squared_norm()[source]

Return the elementwise squared norm of the array

sum()[source]

Return the sum of the array.

take(indices)[source]
trim_zeros()[source]

Remove the leading and trailing zeros.

vdot(other)[source]

Return the inner product of the array with complex conjugation.

view(dtype)[source]

Return a ‘view’ of the array with its bytes now interpreted according to ‘dtype’. The location in memory is unchanged and changing elements in a view of an array will also change the original array.

Parameters:

dtype (numpy dtype (one of float32, float64, complex64 or complex128)) – The new dtype that should be used to interpret the bytes of self
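
For example, a sketch reinterpreting interleaved (real, imaginary) float32 data as complex64; as noted above, the view shares memory with the original:

    import numpy
    from pycbc.types import Array

    a = Array(numpy.array([1.0, 2.0, 3.0, 4.0], dtype=numpy.float32))
    c = a.view(numpy.complex64)   # two elements: 1+2j, 3+4j
    # c and a share the same bytes, so modifying one affects the other.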

weighted_inner(other, weight)[source]

Return the weighted inner product of the array with complex conjugation.

pycbc.types.array.check_same_len_precision(a, b)[source]

Check that the two arguments have the same length and precision. Raises ValueError if they do not.

pycbc.types.array.common_kind(*dtypes)[source]
pycbc.types.array.complex_same_precision_as(data)[source]
pycbc.types.array.empty(length, dtype=<class 'numpy.float64'>)[source]

Return an empty Array (no initialization)

pycbc.types.array.force_precision_to_match(scalar, precision)[source]
pycbc.types.array.load_array(path, group=None)[source]

Load an Array from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.

For ASCII and Numpy files with a single column, a real array is returned. For files with two columns, the columns are assumed to contain the real and imaginary parts of a complex array respectively.

The default data types will be double precision floating point.

Parameters:
  • path (string) – Input file path. Must end with either .npy, .txt or .hdf.

  • group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read.

Raises:

ValueError – If path does not end with a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 1 or 2 dimensions.

pycbc.types.array.real_same_precision_as(data)[source]
pycbc.types.array.zeros(length, dtype=<class 'numpy.float64'>)[source]

Return an Array filled with zeros.
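
A round-trip sketch using save() and load_array() (the file name is invented for illustration):

    import numpy
    from pycbc.types import Array
    from pycbc.types.array import load_array

    a = Array(numpy.arange(8, dtype=numpy.float64))
    a.save('example.npy')            # .txt and .hdf are also supported
    b = load_array('example.npy')
    numpy.array_equal(a.numpy(), b.numpy())   # True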

pycbc.types.array_cpu module

Numpy based CPU backend for PyCBC Array

pycbc.types.array_cpu.abs_arg_max(self)
pycbc.types.array_cpu.abs_arg_max_complex(a)
pycbc.types.array_cpu.abs_max_loc(self)
pycbc.types.array_cpu.clear(self)
pycbc.types.array_cpu.cumsum(self)
pycbc.types.array_cpu.dot(self, other)
pycbc.types.array_cpu.empty(length, dtype=<class 'numpy.float64'>)
pycbc.types.array_cpu.inner(self, other)

Return the inner product of the array with complex conjugation.

pycbc.types.array_cpu.inner_real(a, b)
pycbc.types.array_cpu.max(self)
pycbc.types.array_cpu.max_loc(self)
pycbc.types.array_cpu.min(self)
pycbc.types.array_cpu.multiply_and_add(self, other, mult_fac)

Return other multiplied by mult_fac and with self added. Self will be modified in place. This requires all inputs to be of the same precision.

pycbc.types.array_cpu.numpy(self)
pycbc.types.array_cpu.ptr(self)
pycbc.types.array_cpu.squared_norm(self)

Return the elementwise squared norm of the array

pycbc.types.array_cpu.sum(self)
pycbc.types.array_cpu.take(self, indices)
pycbc.types.array_cpu.vdot(self, other)

Return the inner product of the array with complex conjugation.

pycbc.types.array_cpu.weighted_inner(self, other, weight)

Return the weighted inner product of the array with complex conjugation.

pycbc.types.array_cpu.zeros(length, dtype=<class 'numpy.float64'>)

pycbc.types.config module

This module provides a wrapper around the ConfigParser utilities for pycbc.

class pycbc.types.config.DeepCopyableConfigParser(defaults=None, dict_type=<class 'dict'>, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section='DEFAULT', interpolation=<object object>, converters=<object object>)[source]

Bases: ConfigParser

The standard SafeConfigParser no longer supports deepcopy() as of python 2.7 (see http://bugs.python.org/issue16058). This subclass restores that functionality.

class pycbc.types.config.InterpolatingConfigParser(configFiles=None, overrideTuples=None, parsedFilePath=None, deleteTuples=None, skip_extended=False, sanitize_newline=True)[source]

Bases: DeepCopyableConfigParser

This is a sub-class of DeepCopyableConfigParser, which lets us add a few additional helper features that are useful in workflows.

add_options_to_section(section, items, overwrite_options=False)[source]

Add a set of options and values to a section of a ConfigParser object. An error is raised if any of the options being added already exist; this behaviour can be overridden with the overwrite_options argument.

Parameters:
  • section (string) – The name of the section to add options+values to

  • items (list of tuples) – Each tuple contains (at [0]) the option and (at [1]) the value to add to the section of the ini file

  • overwrite_options (Boolean, optional) – By default this function will throw a ValueError if an option exists in both the original section in the ConfigParser and in the provided items. This will override so that the options+values given in items will replace the original values if the value is set to True. Default = False

check_duplicate_options(section1, section2, raise_error=False)[source]

Check for duplicate options in two sections, section1 and section2. Will return a list of the duplicate options.

Parameters:
  • section1 (string) – The name of the first section to compare

  • section2 (string) – The name of the second section to compare

  • raise_error (Boolean, optional (default=False)) – If True, raise an error if duplicates are present.

Returns:

duplicates – List of duplicate options

Return type:

List

classmethod from_cli(opts)[source]

Initialize the config parser using options parsed from the command line.

The parsed options opts must include options provided by add_workflow_command_line_group().

Parameters:

opts (argparse.Namespace) – The command line arguments parsed by argparse

get_opt_tag(section, option, tag)[source]

Convenience function accessing get_opt_tags() for a single tag: see documentation for that function. NB calling get_opt_tags() directly is preferred for simplicity.

Parameters:
  • self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)

  • section (string) – The section of the ConfigParser object to read

  • option (string) – The ConfigParser option to look for

  • tag (string) – The name of the subsection to look in, if not found in [section]

Returns:

The value of the options being searched for

Return type:

string

get_opt_tags(section, option, tags)[source]

Supplement to ConfigParser.ConfigParser.get(). This will search for an option in [section] and if it doesn’t find it will also try in [section-defaultvalues], and [section-tag] for every value of tag in tags. [section-tag] will be preferred to [section-defaultvalues] values. Will raise a ConfigParser.Error if it cannot find a value.

Parameters:
  • self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)

  • section (string) – The section of the ConfigParser object to read

  • option (string) – The ConfigParser option to look for

  • tags (list of strings) – The names of subsections to look in, if not found in [section]

Returns:

The value of the options being searched for

Return type:

string
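
A sketch of the fallback order (the file name, sections, and options are made up for illustration):

    from pycbc.types.config import InterpolatingConfigParser

    # A made-up ini file demonstrating the tagged-section fallback.
    ini_text = """\
    [inspiral]
    segment-length = 256

    [inspiral-h1]
    channel-name = H1:STRAIN
    """
    with open('example.ini', 'w') as f:
        f.write(ini_text)

    cp = InterpolatingConfigParser(configFiles=['example.ini'])
    cp.get_opt_tags('inspiral', 'segment-length', ['h1'])  # '256', from [inspiral]
    cp.get_opt_tags('inspiral', 'channel-name', ['h1'])    # 'H1:STRAIN', from [inspiral-h1]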

get_subsections(section_name)[source]

Return a list of subsections for the given section name

has_option_tag(section, option, tag)[source]

Convenience function accessing has_option_tags() for a single tag: see documentation for that function. NB calling has_option_tags() directly is preferred for simplicity.

Parameters:
  • self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)

  • section (string) – The section of the ConfigParser object to read

  • option (string) – The ConfigParser option to look for

  • tag (string) – The name of the subsection to look in, if not found in [section]

Returns:

Is the option in the section or [section-tag]

Return type:

Boolean

has_option_tags(section, option, tags)[source]

Supplement to ConfigParser.ConfigParser.has_option(). This will search for an option in [section] and if it doesn’t find it will also try in [section-tag] for each value in tags. Returns True if the option is found and False if not.

Parameters:
  • self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)

  • section (string) – The section of the ConfigParser object to read

  • option (string) – The ConfigParser option to look for

  • tags (list of strings) – The names of the subsections to look in, if not found in [section]

Returns:

Is the option in the section or [section-tag] (for tag in tags)

Return type:

Boolean

interpolate_string(test_string, section)[source]

Take a string and replace all examples of ExtendedInterpolation formatting within the string with the exact value.

For values like ${example} this is replaced with the value that corresponds to the option called example *in the same section*

For values like ${common|example} this is replaced with the value that corresponds to the option example in the section [common]. Note that in the python3 config parser this would be ${common:example}, but python2.7 interprets the : in the same way as a = and this breaks things.

Nested interpolation is not supported here.

Parameters:
  • test_string (String) – The string to parse and interpolate

  • section (String) – The current section of the ConfigParser object

Returns:

test_string – Interpolated string

Return type:

String

perform_extended_interpolation()[source]

Filter through an ini file and replace all examples of ExtendedInterpolation formatting with the exact value. For values like ${example} this is replaced with the value that corresponds to the option called example *in the same section*

For values like ${common|example} this is replaced with the value that corresponds to the option example in the section [common]. Note that in the python3 config parser this would be ${common:example}, but python2.7 interprets the : in the same way as a = and this breaks things.

Nested interpolation is not supported here.

populate_shared_sections()[source]

Parse the [sharedoptions] section of the ini file.

That section should contain entries according to:

  • massparams = inspiral, tmpltbank

  • dataparams = tmpltbank

This will result in all options in [sharedoptions-massparams] being copied into the [inspiral] and [tmpltbank] sections, and the options in [sharedoptions-dataparams] being copied into [tmpltbank]. In the case of duplicates an error will be raised. A hypothetical ini layout matching the entries above is sketched below.
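
For illustration (all option names below are invented):

    [sharedoptions]
    massparams = inspiral, tmpltbank
    dataparams = tmpltbank

    [sharedoptions-massparams]
    min-mass = 1.0

    [sharedoptions-dataparams]
    sample-rate = 4096

After parsing, min-mass appears in both [inspiral] and [tmpltbank], while sample-rate appears only in [tmpltbank].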

read_ini_file(fpath)[source]

Read a .ini file and return it as a ConfigParser class. This function does none of the parsing/combining of sections. It simply reads the file and returns it unedited

Stub awaiting more functionality - see configparser_test.py

Parameters:

fpath (Path to .ini file, or list of paths) – The path(s) to a .ini file to be read in

Returns:

cp – The ConfigParser class containing the read in .ini file

Return type:

ConfigParser

sanitize_newline()[source]

Filter through an ini file and replace all examples of newlines with spaces. This is useful for command line conversion and allows multiline configparser inputs without added backslashes.

sanity_check_subsections()[source]

This function goes through the ConfigParser and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections.

split_multi_sections()[source]

Parse through the WorkflowConfigParser instance and split any sections labelled with an “&” sign (e.g. [inspiral&tmpltbank]) into [inspiral] and [tmpltbank] sections. If these individual sections already exist, they will be appended to. If an option exists in both the [inspiral] and [inspiral&tmpltbank] sections, an error will be raised.

pycbc.types.frequencyseries module

Provides a class representing a frequency series.

class pycbc.types.frequencyseries.FrequencySeries(initial_array, delta_f=None, epoch='', dtype=None, copy=True)[source]

Bases: Array

Models a frequency series consisting of uniformly sampled scalar values.

Parameters:
  • initial_array (array-like) – Array containing sampled data.

  • delta_f (float) – Frequency between consecutive samples in Hertz.

  • epoch ({None, lal.LIGOTimeGPS}, optional) – Start time of the associated time domain data in seconds.

  • dtype ({None, data-type}, optional) – Sample data type.

  • copy (boolean, optional) – If True, samples are copied to a new array.
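
A minimal construction sketch (the values are invented for illustration):

    import numpy
    from pycbc.types import FrequencySeries

    # 128 real samples spaced 1 Hz apart.
    fs = FrequencySeries(numpy.zeros(128), delta_f=1.0)
    fs.delta_f               # 1.0
    fs.sample_frequencies    # 0, 1, ..., 127 Hz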

almost_equal_elem(other, tol, relative=True, dtol=0.0)[source]

Compare whether two frequency series are almost equal, element by element.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.

The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.

Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other – Another Python object, that should be tested for almost-equality with ‘self’, element-by-element.

  • tol – A non-negative number, the tolerance, which is interpreted as either a relative tolerance (the default) or an absolute tolerance.

  • relative – A boolean, indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if ‘relative’ is False).

  • dtol – A non-negative number, the tolerance for delta_f. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.

Returns:

‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.

Return type:

boolean

almost_equal_norm(other, tol, relative=True, dtol=0.0)[source]

Compare whether two frequency series are almost equal, normwise.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol

The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.

Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other (another Python object, that should be tested for) – almost-equality with ‘self’, based on their norms.

  • tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance.

  • relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False).

  • dtol (a non-negative number, the tolerance for delta_f. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.

Returns:

‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.

Return type:

boolean

at_frequency(freq)[source]

Return the value at the specified frequency

cyclic_time_shift(dt)[source]

Shift the data and timestamps by a given number of seconds

Shift the data and timestamps in the time domain by a given number of seconds. To just change the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.

Parameters:

dt (float) – Amount of time to shift the vector.

Returns:

data – The time shifted frequency series.

Return type:

pycbc.types.FrequencySeries

property delta_f

Frequency between consecutive samples in Hertz.

property delta_t

Return the time between samples if this were a time series. This assumes the time series is even in length!

property duration

Return the time duration of this vector

property end_time

Return the end time of this vector

property epoch

Frequency series epoch as a LIGOTimeGPS.

get_delta_f()[source]

Return frequency between consecutive samples in Hertz.

get_epoch()[source]

Return frequency series epoch as a LIGOTimeGPS.

get_sample_frequencies()[source]

Return an Array containing the sample frequencies.

lal()[source]

Produces a LAL frequency series object equivalent to self.

Returns:

lal_data – LAL frequency series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self was ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.

Return type:

{lal.*FrequencySeries}

Raises:

TypeError – If frequency series is stored in GPU memory.

match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]

Return the match between the two TimeSeries or FrequencySeries.

Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. Beware, this may remove high frequency content or the end of the vector.

Parameters:
  • other (TimeSeries or FrequencySeries) – The input vector containing a waveform.

  • psd (Frequency Series) – A power spectral density to weight the overlap.

  • low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match.

  • high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match.

Returns:

  • match (float)

  • index (int) – The number of samples to shift to get the match.

plot(**kwds)[source]

Basic plot of this frequency series

property sample_frequencies

Array of the sample frequencies.

property sample_rate

Return the sample rate this would have in the time domain. This assumes an even-length time series!

save(path, group=None, ifo='P1')[source]

Save frequency series to a Numpy .npy, hdf, or text file. The first column contains the sample frequencies, the second contains the values. In the case of a complex frequency series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.

Parameters:
  • path (string) – Destination file path. Must end with either .hdf, .npy or .txt.

  • group (string) – Additional name for internal storage use; e.g. HDF storage uses this as the dataset key.

Raises:

ValueError – If path does not end in .npy, .txt or .hdf.

property start_time

Return the start time of this vector

to_frequencyseries()[source]

Return frequency series

to_timeseries(delta_t=None)[source]

Return the inverse Fourier transform of this frequency series.

Note that this assumes an even-length time series!

Parameters:

delta_t ({None, float}, optional) – The time resolution of the returned series. By default the resolution is determined by the length and delta_f of this frequency series.

Returns:

The inverse Fourier transform of this frequency series.

Return type:

TimeSeries
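
A round-trip sketch, assuming an even-length real time series:

    import numpy
    from pycbc.types import TimeSeries

    ts = TimeSeries(numpy.random.normal(size=4096), delta_t=1.0 / 4096)
    fs = ts.to_frequencyseries()   # forward Fourier transform
    ts2 = fs.to_timeseries()       # inverse transform; recovers ts up to
                                   # floating-point round-off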

pycbc.types.frequencyseries.load_frequencyseries(path, group=None)[source]

Load a FrequencySeries from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.

For ASCII and Numpy files, the first column of the array is assumed to contain the frequency. If the array has two columns, a real frequency series is returned. If the array has three columns, the second and third ones are assumed to contain the real and imaginary parts of a complex frequency series.

For HDF files, the dataset is assumed to contain the attribute delta_f giving the frequency resolution in Hz. The attribute epoch, if present, is taken as the start GPS time (epoch) of the data in the series.

The default data types will be double precision floating point.

Parameters:
  • path (string) – Input file path. Must end with either .npy, .txt or .hdf.

  • group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read.

Raises:

ValueError – If the path does not end in a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 2 or 3 dimensions.

pycbc.types.optparse module

This module contains extensions for use with argparse.

class pycbc.types.optparse.DictOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: Action

class pycbc.types.optparse.DictWithDefaultReturn[source]

Bases: defaultdict

default_set = False
ifo_set = False
class pycbc.types.optparse.MultiDetDictOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: DictOptionAction

A special case of DictOptionAction which allows one to use arguments containing the detector (channel) name, such as DETECTOR:PARAM:VALUE; the part before the first colon is the detector name, the part between the colons is the parameter name, and the rest is the value. Alternatively, as with DictOptionAction, arguments may omit the detector name, such as PARAM:VALUE, in which case each detector is assumed to share the same value of that parameter.

class pycbc.types.optparse.MultiDetMultiColonOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: MultiDetOptionAction

A special case of MultiDetOptionAction which allows one to use arguments containing colons, such as V1:FOOBAR:1. The first colon is assumed to be the separator between the detector and the argument. All subsequent colons are kept as part of the argument. Unlike MultiDetOptionAction, all arguments must be prefixed by the corresponding detector.

class pycbc.types.optparse.MultiDetOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: Action

class pycbc.types.optparse.MultiDetOptionActionSpecial(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: MultiDetOptionAction

This class is an extension of the MultiDetOptionAction class to handle cases where the : is already a special character. For example, the channel name is something like H1:CHANNEL_NAME. Here the channel name must be provided uniquely for each ifo. The dictionary key is set to H1 and the value to H1:CHANNEL_NAME for this example.
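
A sketch of the behaviour described above (the option name --channel-name is illustrative):

    import argparse
    from pycbc.types.optparse import MultiDetOptionActionSpecial

    parser = argparse.ArgumentParser()
    parser.add_argument('--channel-name', nargs='+',
                        action=MultiDetOptionActionSpecial)

    opts = parser.parse_args(
        ['--channel-name', 'H1:GDS-CALIB_STRAIN', 'L1:GDS-CALIB_STRAIN'])
    # opts.channel_name == {'H1': 'H1:GDS-CALIB_STRAIN',
    #                       'L1': 'L1:GDS-CALIB_STRAIN'}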

class pycbc.types.optparse.MultiDetOptionAppendAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: MultiDetOptionAction

pycbc.types.optparse.convert_to_process_params_dict(opt)[source]

Takes the namespace object (opt) from the multi-detector interface and returns a dictionary of command line options that will be handled correctly by the register_to_process_params ligolw function.

pycbc.types.optparse.copy_opts_for_single_ifo(opt, ifo)[source]

Takes the namespace object (opt) from the multi-detector interface and returns a namespace object for a single ifo that can be used with functions expecting output from the single-detector interface.

pycbc.types.optparse.ensure_one_opt(opt, parser, opt_list)[source]

Check that exactly one option in opt_list is defined in opt.

Parameters:
  • opt (object) – Result of option parsing

  • parser (object) – OptionParser instance.

  • opt_list (list of strings) –

pycbc.types.optparse.ensure_one_opt_multi_ifo(opt, parser, ifo, opt_list)[source]

Check that exactly one option in opt_list is defined in opt.

Parameters:
  • opt (object) – Result of option parsing

  • parser (object) – OptionParser instance.

  • opt_list (list of strings) –

pycbc.types.optparse.nonnegative_float(s)[source]

Ensure argument is a positive real number or zero and return it as float.

To be used as type in argparse arguments.

pycbc.types.optparse.nonnegative_int(s)[source]

Ensure argument is a positive integer or zero and return it as int.

To be used as type in argparse arguments.

pycbc.types.optparse.positive_float(s)[source]

Ensure argument is a positive real number and return it as float.

To be used as type in argparse arguments.

pycbc.types.optparse.positive_int(s)[source]

Ensure argument is a positive integer and return it as int.

To be used as type in argparse arguments.
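
A sketch of these functions as argparse type converters (the option names are invented):

    import argparse
    from pycbc.types.optparse import positive_float, nonnegative_int

    parser = argparse.ArgumentParser()
    parser.add_argument('--sample-rate', type=positive_float)
    parser.add_argument('--pad-samples', type=nonnegative_int)

    opts = parser.parse_args(['--sample-rate', '4096', '--pad-samples', '0'])
    # '--sample-rate 0' would be rejected at parse time,
    # while '--pad-samples 0' is accepted.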

pycbc.types.optparse.required_opts(opt, parser, opt_list, required_by=None)[source]

Check that all the opts are defined

Parameters:
  • opt (object) – Result of option parsing

  • parser (object) – OptionParser instance.

  • opt_list (list of strings) –

  • required_by (string, optional) – the option that requires these options (if applicable)

pycbc.types.optparse.required_opts_multi_ifo(opt, parser, ifo, opt_list, required_by=None)[source]

Check that all the opts are defined

Parameters:
  • opt (object) – Result of option parsing

  • parser (object) – OptionParser instance.

  • ifo (string) –

  • opt_list (list of strings) –

  • required_by (string, optional) – the option that requires these options (if applicable)

pycbc.types.timeseries module

Provides a class representing a time series.

class pycbc.types.timeseries.TimeSeries(initial_array, delta_t=None, epoch=None, dtype=None, copy=True)[source]

Bases: Array

Models a time series consisting of uniformly sampled scalar values.

Parameters:
  • initial_array (array-like) – Array containing sampled data.

  • delta_t (float) – Time between consecutive samples in seconds.

  • epoch ({None, lal.LIGOTimeGPS}, optional) – Time of the first sample in seconds.

  • dtype ({None, data-type}, optional) – Sample data type.

  • copy (boolean, optional) – If True, samples are copied to a new array.
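
A minimal construction sketch (the GPS epoch is given here as a plain number, which is assumed to be accepted alongside lal.LIGOTimeGPS):

    import numpy
    from pycbc.types import TimeSeries

    # One second of data sampled at 4096 Hz.
    ts = TimeSeries(numpy.zeros(4096), delta_t=1.0 / 4096,
                    epoch=1126259462)
    ts.duration      # 1.0 second
    ts.sample_rate   # 4096 samples per second
    ts.start_time    # GPS 1126259462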

add_into(other, copy=True)

Return copy of self with other injected into it.

The other vector will be resized and time shifted with sub-sample precision before adding. This assumes that the vector is zero outside of its original range.

almost_equal_elem(other, tol, relative=True, dtol=0.0)[source]

Compare whether two time series are almost equal, element by element.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.

The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.

Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other (another Python object, that should be tested for) – almost-equality with ‘self’, element-by-element.

  • tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance.

  • relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False).

  • dtol (a non-negative number, the tolerance for delta_t. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.

Returns:

‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.

Return type:

boolean

almost_equal_norm(other, tol, relative=True, dtol=0.0)[source]

Compare whether two time series are almost equal, normwise.

If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).

If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol

The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.

Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.

Parameters:
  • other (another Python object, that should be tested for) – almost-equality with ‘self’, based on their norms.

  • tol (a non-negative number, the tolerance, which is interpreted) – as either a relative tolerance (the default) or an absolute tolerance.

  • relative (A boolean, indicating whether 'tol' should be interpreted) – as a relative tolerance (if True, the default if this argument is omitted) or as an absolute tolerance (if tol is False).

  • dtol (a non-negative number, the tolerance for delta_t. Like 'tol',) – it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.

Returns:

‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.

Return type:

boolean

append_zeros(num)[source]

Append num zeros onto the end of this TimeSeries.

at_time(time, nearest_sample=False, interpolate=None, extrapolate=None)[source]

Return the value at the specified gps time

Parameters:
  • nearest_sample (bool) – Return the sample at the time nearest to the chosen time rather than rounded down.

  • interpolate (str, None) – Return the interpolated value of the time series. Choices are simple linear or quadratic interpolation.

  • extrapolate (str or float, None) – Value to return if time is outside the range of the vector, or method of extrapolating the value.

at_times(time, nearest_sample=False, interpolate=None, extrapolate=None)

Return the value at the specified gps time

Parameters:
  • nearest_sample (bool) – Return the sample at the time nearest to the chosen time rather than rounded down.

  • interpolate (str, None) – Return the interpolated value of the time series. Choices are simple linear or quadratic interpolation.

  • extrapolate (str or float, None) – Value to return if time is outside the range of the vector, or method of extrapolating the value.

crop(left, right)[source]

Remove given seconds from either end of time series

Parameters:
  • left (float) – Number of seconds of data to remove from the left of the time series.

  • right (float) – Number of seconds of data to remove from the right of the time series.

Returns:

cropped – The reduced time series

Return type:

pycbc.types.TimeSeries

cyclic_time_shift(dt)[source]

Shift the data and timestamps by a given number of seconds

Shift the data and timestamps in the time domain by a given number of seconds. To just change the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.

Parameters:

dt (float) – Amount of time to shift the vector.

Returns:

data – The time shifted time series.

Return type:

pycbc.types.TimeSeries
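
A sketch of the rotation for a whole-sample shift (the values are invented for illustration):

    import numpy
    from pycbc.types import TimeSeries

    ts = TimeSeries(numpy.arange(8, dtype=numpy.float64), delta_t=1.0)
    shifted = ts.cyclic_time_shift(2)
    # The final 2 seconds (samples 6, 7) now sit at the beginning:
    # [6, 7, 0, 1, 2, 3, 4, 5]; the timestamps are shifted accordingly.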

property delta_f

Return the delta_f this time series would have in the frequency domain.

property delta_t

Time between consecutive samples in seconds.

detrend(type='linear')[source]

Remove linear trend from the data

Remove a linear trend from the data to improve the approximation that the data is circularly convolved; this helps reduce the size of filter transients from a circular convolution / filter.

Parameters:

type (str) – The choice of detrending. The default (‘linear’) removes a linear least-squares fit; ‘constant’ removes only the mean of the data.

property duration

Duration of time series in seconds.

property end_time

Time series end time as a LIGOTimeGPS.

epoch_close(other)[source]

Check if the epoch is close enough to allow operations

filter_psd(segment_duration, delta_f, flow)[source]

Calculate the power spectral density of this time series.

Use the pycbc.psd.welch method to estimate the PSD of this time segment. The PSD is then truncated in the time domain to the segment duration and interpolated to the requested frequency resolution.

Parameters:
  • segment_duration (float) – Duration in seconds to use for each sample of the spectrum.

  • delta_f (float) – Frequency spacing to return psd at.

  • flow (float) – The low frequency cutoff to apply when truncating the inverse spectrum.

Returns:

psd – Frequency series containing the estimated PSD.

Return type:

FrequencySeries

fir_zero_filter(coeff)[source]

Filter the timeseries with a set of FIR coefficients

Parameters:

coeff (numpy.ndarray) – FIR coefficients. Should be of odd length and symmetric.

Returns:

filtered_series (pycbc.types.TimeSeries) – The filtered time series, properly shifted to account for the FIR filter delay, and with the corrupted regions zeroed out.

gate(time, window=0.25, method='taper', copy=True, taper_width=0.25, invpsd=None)[source]

Gate out portion of time series

Parameters:
  • time (float) – Central time of the gate in seconds

  • window (float) – Half-length in seconds to remove data around gate time.

  • method (str) – Method to apply gate, options are ‘hard’, ‘taper’, and ‘paint’.

  • copy (bool) – If False, do operations inplace to this time series, else return new time series.

  • taper_width (float) – Length of tapering region on either side of excised data. Only applies to the taper gating method.

  • invpsd (pycbc.types.FrequencySeries) – The inverse PSD to use for painting method. If not given, a PSD is generated using default settings.

Returns:

data – Gated time series

Return type:

pycbc.types.TimeSeries

get_delta_t()[source]

Return time between consecutive samples in seconds.

get_duration()[source]

Return duration of time series in seconds.

get_end_time()[source]

Return time series end time as a LIGOTimeGPS.

get_sample_rate()[source]

Return the sample rate of the time series.

get_sample_times()[source]

Return an Array containing the sample times.

highpass_fir(frequency, order, beta=5.0, remove_corrupted=True)[source]

Highpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).

Parameters:
  • Time Series – The time series to be high-passed.

  • frequency (float) – The frequency below which the signal is suppressed.

  • order (int) – Number of corrupted samples on each side of the time series

  • beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation.

  • remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.

inject(other, copy=True)[source]

Return copy of self with other injected into it.

The other vector will be resized and time shifted with sub-sample precision before adding. This assumes that the vector is zero outside of its original range.

lal()[source]

Produces a LAL time series object equivalent to self.

Returns:

lal_data – LAL time series object containing the same data as self. The actual type depends on the sample’s dtype. If the epoch of self is ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.

Return type:

{lal.*TimeSeries}

Raises:

TypeError – If time series is stored in GPU memory.

lowpass_fir(frequency, order, beta=5.0, remove_corrupted=True)[source]

Lowpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).

Parameters:
  • Time Series – The time series to be low-passed.

  • frequency (float) – The frequency above which the signal is suppressed.

  • order (int) – Number of corrupted samples on each side of the time series

  • beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation.

  • remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.

match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)[source]

Return the match between the two TimeSeries or FrequencySeries.

Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. This may remove high frequency content or the end of the vector.

Parameters:
  • other (TimeSeries or FrequencySeries) – The input vector containing a waveform.

  • psd (Frequency Series) – A power spectral density to weight the overlap.

  • low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match.

  • high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match.

Returns:

  • match (float)

  • index (int) – The number of samples to shift to get the match.

notch_fir(f1, f2, order, beta=5.0, remove_corrupted=True)[source]

Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain Kaiser window (beta = 5.0).

The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band.

Parameters:
  • Time Series – The time series to be notched.

  • f1 (float) – The start of the frequency suppression.

  • f2 (float) – The end of the frequency suppression.

  • order (int) – Number of corrupted samples on each side of the time series

  • beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation.

plot(**kwds)[source]

Basic plot of this time series

prepend_zeros(num)[source]

Prepend num zeros onto the beginning of this TimeSeries. The epoch is also updated to account for the prepended samples.

psd(segment_duration, **kwds)[source]

Calculate the power spectral density of this time series.

Use the pycbc.psd.welch method to estimate the psd of this time segment. For more complete options, please see that function.

Parameters:
  • segment_duration (float) – Duration in seconds to use for each sample of the spectrum.

  • kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.

Returns:

psd – Frequency series containing the estimated PSD.

Return type:

FrequencySeries
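
A sketch of Welch-based estimation on 16 s of synthetic data (the values are invented; additional options are passed through to pycbc.psd.welch):

    import numpy
    from pycbc.types import TimeSeries

    ts = TimeSeries(numpy.random.normal(size=16 * 4096),
                    delta_t=1.0 / 4096)
    psd = ts.psd(4)     # 4 s analysis segments
    psd.delta_f         # 0.25 Hz for 4 s segments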

qtransform(delta_t=None, delta_f=None, logfsteps=None, frange=None, qrange=(4, 64), mismatch=0.2, return_complex=False)[source]

Return the interpolated 2d qtransform of this data

Parameters:
  • delta_t ({self.delta_t, float}) – The time resolution to interpolate to

  • delta_f (float, Optional) – The frequency resolution to interpolate to

  • logfsteps (int) – Do a log interpolation (incompatible with delta_f option) and set the number of steps to take.

  • frange ({(30, nyquist*0.8), tuple of ints}) – frequency range

  • qrange ({(4, 64), tuple}) – q range

  • mismatch (float) – Mismatch between frequency tiles

  • return_complex ({False, bool}) – return the raw complex series instead of the normalized power.

Returns:

  • times (numpy.ndarray) – The time that the qtransform is sampled.

  • freqs (numpy.ndarray) – The frequencies that the qtransform is sampled.

  • qplane (numpy.ndarray (2d)) – The two dimensional interpolated qtransform of this time series.

resample(delta_t)[source]

Resample this time series to the new delta_t

Parameters:

delta_t (float) – The time step to resample the times series to.

Returns:

resampled_ts – The resampled time series at the new time interval delta_t.

Return type:

pycbc.types.TimeSeries

property sample_rate

The sample rate of the time series.

sample_rate_close(other)[source]

Check if the sample rate is close enough to allow operations

property sample_times

Array containing the sample times.

save(path, group=None)[source]

Save time series to a Numpy .npy, hdf, or text file. The first column contains the sample times, the second contains the values. In the case of a complex time series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.

Parameters:
  • path (string) – Destination file path. Must end with either .hdf, .npy or .txt.

  • group (string) – Additional name for internal storage use; e.g. HDF storage uses this as the dataset key.

Raises:

ValueError – If path does not end in .npy, .txt or .hdf.

save_to_wav(file_name)[source]

Save this time series to a wav format audio file.

Parameters:

file_name (string) – The output file name

property start_time

Return time series start time as a LIGOTimeGPS.

time_slice(start, end, mode='floor')[source]

Return the slice of the time series that contains the time range in GPS seconds.

to_astropy(name='pycbc')[source]

Return an astropy.timeseries.TimeSeries instance

to_frequencyseries(delta_f=None)[source]

Return the Fourier transform of this time series

Parameters:

delta_f ({None, float}, optional) – The frequency resolution of the returned frequency series. By default the resolution is determined by the duration of the timeseries.

Returns:

The Fourier transform of this time series.

Return type:

FrequencySeries

to_timeseries()[source]

Return time series

whiten(segment_duration, max_filter_duration, trunc_method='hann', remove_corrupted=True, low_frequency_cutoff=None, return_psd=False, **kwds)[source]

Return a whitened time series

Parameters:
  • segment_duration (float) – Duration in seconds to use for each sample of the spectrum.

  • max_filter_duration (int) – Maximum length of the time-domain filter in seconds.

  • trunc_method ({None, 'hann'}) – Function used for truncating the time-domain filter. None produces a hard truncation at max_filter_len.

  • remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the whitening is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.

  • low_frequency_cutoff ({None, float}) – Low frequency cutoff to pass to the inverse spectrum truncation. This should be matched to a known low frequency cutoff of the data if there is one.

  • return_psd ({False, Boolean}) – Return the estimated and conditioned PSD that was used to whiten the data.

  • kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.

Returns:

whitened_data – The whitened time series

Return type:

TimeSeries

pycbc.types.timeseries.load_timeseries(path, group=None)[source]

Load a TimeSeries from an HDF5, ASCII or Numpy file. The file type is inferred from the file extension, which must be .hdf, .txt or .npy.

For ASCII and Numpy files, the first column of the array is assumed to contain the sample times. If the array has two columns, a real-valued time series is returned. If the array has three columns, the second and third ones are assumed to contain the real and imaginary parts of a complex time series.

For HDF files, the dataset is assumed to contain the attributes delta_t and start_time, which should contain respectively the sampling period in seconds and the start GPS time of the data.

The default data types will be double precision floating point.

Parameters:
  • path (string) – Input file path. Must end with either .npy, .txt or .hdf.

  • group (string) – Additional name for internal storage use. When reading HDF files, this is the path to the HDF dataset to read.

Raises:

ValueError – If path does not end in a supported extension. For Numpy and ASCII input files, this is also raised if the array does not have 2 or 3 dimensions.
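
A round-trip sketch using the HDF format (the file name is invented; the delta_t and start_time attributes are written by save() and recovered here):

    import numpy
    from pycbc.types import TimeSeries
    from pycbc.types.timeseries import load_timeseries

    ts = TimeSeries(numpy.arange(8, dtype=numpy.float64),
                    delta_t=0.5, epoch=0)
    ts.save('example.hdf')
    ts2 = load_timeseries('example.hdf')
    ts2.delta_t      # 0.5
    ts2.start_time   # GPS 0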

Module contents