pycbc.inference package
Subpackages
- pycbc.inference.io package
- Submodules
- pycbc.inference.io.base_hdf module
BaseInferenceFile
BaseInferenceFile.cmd
BaseInferenceFile.config_group
BaseInferenceFile.copy()
BaseInferenceFile.copy_info()
BaseInferenceFile.copy_metadata()
BaseInferenceFile.copy_samples()
BaseInferenceFile.data_group
BaseInferenceFile.effective_nsamples
BaseInferenceFile.extra_args_parser()
BaseInferenceFile.get_slice()
BaseInferenceFile.getattrs()
BaseInferenceFile.injections_group
BaseInferenceFile.log_evidence
BaseInferenceFile.name
BaseInferenceFile.parse_parameters()
BaseInferenceFile.read_config_file()
BaseInferenceFile.read_data()
BaseInferenceFile.read_injections()
BaseInferenceFile.read_psds()
BaseInferenceFile.read_random_state()
BaseInferenceFile.read_raw_samples()
BaseInferenceFile.read_samples()
BaseInferenceFile.sampler_group
BaseInferenceFile.samples_from_cli()
BaseInferenceFile.samples_group
BaseInferenceFile.static_params
BaseInferenceFile.thin_end
BaseInferenceFile.thin_interval
BaseInferenceFile.thin_start
BaseInferenceFile.write_command_line()
BaseInferenceFile.write_config_file()
BaseInferenceFile.write_data()
BaseInferenceFile.write_effective_nsamples()
BaseInferenceFile.write_injections()
BaseInferenceFile.write_kwargs_to_attrs()
BaseInferenceFile.write_logevidence()
BaseInferenceFile.write_psd()
BaseInferenceFile.write_random_state()
BaseInferenceFile.write_samples()
BaseInferenceFile.write_stilde()
BaseInferenceFile.write_strain()
format_attr()
- pycbc.inference.io.base_mcmc module
CommonMCMCMetadataIO
CommonMCMCMetadataIO.acl
CommonMCMCMetadataIO.act
CommonMCMCMetadataIO.burn_in_index
CommonMCMCMetadataIO.burn_in_iteration
CommonMCMCMetadataIO.extra_args_parser()
CommonMCMCMetadataIO.is_burned_in
CommonMCMCMetadataIO.iterations()
CommonMCMCMetadataIO.last_iteration()
CommonMCMCMetadataIO.nchains
CommonMCMCMetadataIO.niterations
CommonMCMCMetadataIO.nwalkers
CommonMCMCMetadataIO.raw_acls
CommonMCMCMetadataIO.raw_acts
CommonMCMCMetadataIO.thin()
CommonMCMCMetadataIO.thinned_by
CommonMCMCMetadataIO.write_niterations()
CommonMCMCMetadataIO.write_resume_point()
CommonMCMCMetadataIO.write_sampler_metadata()
EnsembleMCMCMetadataIO
MCMCMetadataIO
ensemble_read_raw_samples()
nsamples_in_chain()
thin_samples_for_writing()
write_samples()
- pycbc.inference.io.base_multitemper module
- pycbc.inference.io.base_nested_sampler module
- pycbc.inference.io.base_sampler module
- pycbc.inference.io.dynesty module
- pycbc.inference.io.emcee module
- pycbc.inference.io.emcee_pt module
- pycbc.inference.io.epsie module
EpsieFile
EpsieFile.betas
EpsieFile.name
EpsieFile.nchains
EpsieFile.read_acceptance_fraction()
EpsieFile.read_acceptance_rate()
EpsieFile.read_acceptance_ratio()
EpsieFile.read_raw_samples()
EpsieFile.seed
EpsieFile.swap_interval
EpsieFile.thin()
EpsieFile.validate()
EpsieFile.write_acceptance_ratio()
EpsieFile.write_sampler_metadata()
EpsieFile.write_samples()
EpsieFile.write_temperature_data()
- pycbc.inference.io.multinest module
- pycbc.inference.io.nessai module
- pycbc.inference.io.posterior module
- pycbc.inference.io.ptemcee module
- pycbc.inference.io.snowline module
- pycbc.inference.io.txt module
- pycbc.inference.io.ultranest module
- Module contents
- pycbc.inference.jump package
- Submodules
- pycbc.inference.jump.angular module
- pycbc.inference.jump.bounded_normal module
- pycbc.inference.jump.discrete module
- pycbc.inference.jump.normal module
- Module contents
- pycbc.inference.models package
- Submodules
- pycbc.inference.models.analytic module
- pycbc.inference.models.base module
BaseModel
BaseModel.current_params
BaseModel.current_stats
BaseModel.default_stats
BaseModel.extra_args_from_config()
BaseModel.from_config()
BaseModel.get_current_stats()
BaseModel.logjacobian
BaseModel.loglikelihood
BaseModel.logposterior
BaseModel.logprior
BaseModel.name
BaseModel.prior_from_config()
BaseModel.prior_rvs()
BaseModel.sampling_params
BaseModel.static_params
BaseModel.update()
BaseModel.variable_params
BaseModel.write_metadata()
ModelStats
SamplingTransforms
check_for_cartesian_spins()
read_sampling_params_from_config()
- pycbc.inference.models.base_data module
- pycbc.inference.models.brute_marg module
- pycbc.inference.models.data_utils module
- pycbc.inference.models.gated_gaussian_noise module
BaseGatedGaussian
BaseGatedGaussian.data
BaseGatedGaussian.det_lognl()
BaseGatedGaussian.det_lognorm()
BaseGatedGaussian.from_config()
BaseGatedGaussian.get_data()
BaseGatedGaussian.get_gate_times()
BaseGatedGaussian.get_gate_times_hmeco()
BaseGatedGaussian.get_gated_data()
BaseGatedGaussian.get_gated_waveforms()
BaseGatedGaussian.get_residuals()
BaseGatedGaussian.get_waveforms()
BaseGatedGaussian.normalize
BaseGatedGaussian.psds
BaseGatedGaussian.td_data
BaseGatedGaussian.whiten()
BaseGatedGaussian.write_metadata()
GatedGaussianMargPol
GatedGaussianNoise
- pycbc.inference.models.gaussian_noise module
BaseGaussianNoise
BaseGaussianNoise.ignore_failed_waveforms
BaseGaussianNoise.det_lognl()
BaseGaussianNoise.det_lognorm()
BaseGaussianNoise.from_config()
BaseGaussianNoise.high_frequency_cutoff
BaseGaussianNoise.kmax
BaseGaussianNoise.kmin
BaseGaussianNoise.lognorm
BaseGaussianNoise.low_frequency_cutoff
BaseGaussianNoise.normalize
BaseGaussianNoise.psd_segments
BaseGaussianNoise.psds
BaseGaussianNoise.set_psd_segments()
BaseGaussianNoise.update()
BaseGaussianNoise.weight
BaseGaussianNoise.whitened_data
BaseGaussianNoise.write_metadata()
GaussianNoise
catch_waveform_error()
create_waveform_generator()
get_values_from_injection()
- pycbc.inference.models.hierarchical module
HierarchicalModel
HierarchicalParam
JointPrimaryMarginalizedModel
JointPrimaryMarginalizedModel.from_config()
JointPrimaryMarginalizedModel.name
JointPrimaryMarginalizedModel.others_lognl()
JointPrimaryMarginalizedModel.reconstruct()
JointPrimaryMarginalizedModel.total_loglr()
JointPrimaryMarginalizedModel.update_all_models()
JointPrimaryMarginalizedModel.write_metadata()
MultiSignalModel
hpiter()
map_params()
- pycbc.inference.models.marginalized_gaussian_noise module
- pycbc.inference.models.relbin module
Relative
Relative.calculate_hihjs()
Relative.combine_layout()
Relative.extra_args_from_config()
Relative.get_waveforms()
Relative.init_from_frequencies()
Relative.likelihood_function
Relative.max_curvature_from_reference()
Relative.multi_loglikelihood()
Relative.multi_signal_support
Relative.name
Relative.setup_antenna()
Relative.summary_product()
Relative.write_metadata()
RelativeTime
RelativeTimeDom
setup_bins()
- pycbc.inference.models.relbin_cpu module
likelihood_parts()
likelihood_parts_det()
likelihood_parts_det_multi()
likelihood_parts_multi()
likelihood_parts_multi_v()
likelihood_parts_v()
likelihood_parts_v_pol()
likelihood_parts_v_pol_time()
likelihood_parts_v_time()
likelihood_parts_vector()
likelihood_parts_vectorp()
likelihood_parts_vectort()
snr_predictor()
snr_predictor_dom()
- pycbc.inference.models.single_template module
- pycbc.inference.models.tools module
DistMarg
DistMarg.current_params
DistMarg.draw_ifos()
DistMarg.draw_sky_times()
DistMarg.draw_times()
DistMarg.get_precalc_antenna_factors()
DistMarg.marginalize_loglr()
DistMarg.premarg_draw()
DistMarg.reconstruct()
DistMarg.reset_vector_params()
DistMarg.setup_marginalization()
DistMarg.setup_peak_lock()
DistMarg.snr_draw()
draw_sample()
marginalize_likelihood()
setup_distance_marg_interpolant()
str_to_bool()
str_to_tuple()
- Module contents
- pycbc.inference.sampler package
- Submodules
- pycbc.inference.sampler.base module
- pycbc.inference.sampler.base_cube module
- pycbc.inference.sampler.base_mcmc module
BaseMCMC
BaseMCMC.acl()
BaseMCMC.act
BaseMCMC.base_shape
BaseMCMC.burn_in
BaseMCMC.checkpoint()
BaseMCMC.checkpoint_from_config()
BaseMCMC.checkpoint_interval
BaseMCMC.checkpoint_signal
BaseMCMC.ckpt_signal_from_config()
BaseMCMC.clear_samples()
BaseMCMC.compute_acf()
BaseMCMC.compute_acl()
BaseMCMC.effective_nsamples()
BaseMCMC.get_thin_interval()
BaseMCMC.max_samples_per_chain
BaseMCMC.nchains
BaseMCMC.niterations
BaseMCMC.p0
BaseMCMC.pos
BaseMCMC.raw_acls
BaseMCMC.raw_acts
BaseMCMC.resume_from_checkpoint()
BaseMCMC.run()
BaseMCMC.run_mcmc()
BaseMCMC.set_burn_in()
BaseMCMC.set_burn_in_from_config()
BaseMCMC.set_p0()
BaseMCMC.set_start_from_config()
BaseMCMC.set_state_from_file()
BaseMCMC.set_target()
BaseMCMC.set_target_from_config()
BaseMCMC.set_thin_interval_from_config()
BaseMCMC.target_eff_nsamples
BaseMCMC.target_niterations
BaseMCMC.thin_interval
BaseMCMC.thin_safety_factor
BaseMCMC.write_results()
EnsembleSupport
blob_data_to_dict()
ensemble_compute_acf()
ensemble_compute_acl()
get_optional_arg_from_config()
raw_samples_to_dict()
- pycbc.inference.sampler.base_multitemper module
- pycbc.inference.sampler.dummy module
- pycbc.inference.sampler.dynesty module
DynestySampler
DynestySampler.checkpoint()
DynestySampler.finalize()
DynestySampler.from_config()
DynestySampler.io
DynestySampler.logz
DynestySampler.logz_err
DynestySampler.model_stats
DynestySampler.name
DynestySampler.niterations
DynestySampler.resume_from_checkpoint()
DynestySampler.run()
DynestySampler.samples
DynestySampler.set_initial_conditions()
DynestySampler.set_state_from_file()
DynestySampler.write_results()
estimate_nmcmc()
sample_rwalk_mod()
- pycbc.inference.sampler.emcee module
EmceeEnsembleSampler
EmceeEnsembleSampler.base_shape
EmceeEnsembleSampler.burn_in_class
EmceeEnsembleSampler.clear_samples()
EmceeEnsembleSampler.compute_acf()
EmceeEnsembleSampler.compute_acl()
EmceeEnsembleSampler.finalize()
EmceeEnsembleSampler.from_config()
EmceeEnsembleSampler.io
EmceeEnsembleSampler.model_stats
EmceeEnsembleSampler.name
EmceeEnsembleSampler.run_mcmc()
EmceeEnsembleSampler.samples
EmceeEnsembleSampler.set_state_from_file()
EmceeEnsembleSampler.write_results()
- pycbc.inference.sampler.emcee_pt module
EmceePTSampler
EmceePTSampler.base_shape
EmceePTSampler.betas
EmceePTSampler.burn_in_class
EmceePTSampler.calculate_logevidence()
EmceePTSampler.clear_samples()
EmceePTSampler.compute_acf()
EmceePTSampler.compute_acl()
EmceePTSampler.finalize()
EmceePTSampler.from_config()
EmceePTSampler.io
EmceePTSampler.model_stats
EmceePTSampler.name
EmceePTSampler.run_mcmc()
EmceePTSampler.samples
EmceePTSampler.set_state_from_file()
EmceePTSampler.write_results()
- pycbc.inference.sampler.epsie module
EpsieSampler
EpsieSampler.acl
EpsieSampler.base_shape
EpsieSampler.betas
EpsieSampler.burn_in_class
EpsieSampler.clear_samples()
EpsieSampler.compute_acf()
EpsieSampler.compute_acl()
EpsieSampler.effective_nsamples
EpsieSampler.finalize()
EpsieSampler.from_config()
EpsieSampler.io
EpsieSampler.model_stats
EpsieSampler.name
EpsieSampler.pos
EpsieSampler.run_mcmc()
EpsieSampler.samples
EpsieSampler.seed
EpsieSampler.set_p0()
EpsieSampler.set_state_from_file()
EpsieSampler.swap_interval
EpsieSampler.write_results()
- pycbc.inference.sampler.games module
- pycbc.inference.sampler.multinest module
MultinestSampler
MultinestSampler.check_if_finished()
MultinestSampler.checkpoint()
MultinestSampler.checkpoint_interval
MultinestSampler.dlogz
MultinestSampler.finalize()
MultinestSampler.from_config()
MultinestSampler.get_posterior_samples()
MultinestSampler.importance_dlogz
MultinestSampler.importance_logz
MultinestSampler.io
MultinestSampler.loglikelihood()
MultinestSampler.logz
MultinestSampler.model_stats
MultinestSampler.name
MultinestSampler.niterations
MultinestSampler.nlivepoints
MultinestSampler.resume_from_checkpoint()
MultinestSampler.run()
MultinestSampler.samples
MultinestSampler.set_initial_conditions()
MultinestSampler.set_state_from_file()
MultinestSampler.setup_output()
MultinestSampler.transform_prior()
MultinestSampler.write_results()
- pycbc.inference.sampler.nessai module
NessaiModel
NessaiSampler
NessaiSampler.checkpoint()
NessaiSampler.checkpoint_callback()
NessaiSampler.finalize()
NessaiSampler.from_config()
NessaiSampler.get_default_kwds()
NessaiSampler.io
NessaiSampler.model_stats
NessaiSampler.name
NessaiSampler.resume_from_checkpoint()
NessaiSampler.run()
NessaiSampler.samples
NessaiSampler.set_initial_conditions()
NessaiSampler.write_results()
- pycbc.inference.sampler.ptemcee module
PTEmceeSampler
PTEmceeSampler.adaptation_lag
PTEmceeSampler.adaptation_time
PTEmceeSampler.adaptive
PTEmceeSampler.base_shape
PTEmceeSampler.betas
PTEmceeSampler.burn_in_class
PTEmceeSampler.calculate_logevidence()
PTEmceeSampler.chain
PTEmceeSampler.clear_samples()
PTEmceeSampler.compute_acf()
PTEmceeSampler.compute_acl()
PTEmceeSampler.ensemble
PTEmceeSampler.finalize()
PTEmceeSampler.from_config()
PTEmceeSampler.io
PTEmceeSampler.model_stats
PTEmceeSampler.name
PTEmceeSampler.ntemps
PTEmceeSampler.run_mcmc()
PTEmceeSampler.samples
PTEmceeSampler.scale_factor
PTEmceeSampler.set_state_from_file()
PTEmceeSampler.starting_betas
PTEmceeSampler.write_results()
- pycbc.inference.sampler.refine module
- pycbc.inference.sampler.snowline module
SnowlineSampler
SnowlineSampler.checkpoint()
SnowlineSampler.finalize()
SnowlineSampler.from_config()
SnowlineSampler.io
SnowlineSampler.logz
SnowlineSampler.logz_err
SnowlineSampler.model_stats
SnowlineSampler.name
SnowlineSampler.niterations
SnowlineSampler.resume_from_checkpoint()
SnowlineSampler.run()
SnowlineSampler.samples
SnowlineSampler.write_results()
- pycbc.inference.sampler.ultranest module
UltranestSampler
UltranestSampler.checkpoint()
UltranestSampler.finalize()
UltranestSampler.from_config()
UltranestSampler.io
UltranestSampler.logz
UltranestSampler.logz_err
UltranestSampler.model_stats
UltranestSampler.name
UltranestSampler.niterations
UltranestSampler.resume_from_checkpoint()
UltranestSampler.run()
UltranestSampler.samples
UltranestSampler.write_results()
- Module contents
Submodules
pycbc.inference.burn_in module
This module provides classes and functions for determining when Markov chains have burned in.
- class pycbc.inference.burn_in.BaseBurnInTests(sampler, burn_in_test, **kwargs)[source]
Bases: object
Base class for burn in tests.
- available_tests = ('halfchain', 'min_iterations', 'max_posterior', 'posterior_step', 'nacl')
- abstract burn_in_index(filename)[source]
The burn in index (retrieved from the iteration).
This is an abstract method because how this is evaluated depends on whether or not this is an ensemble MCMC.
- abstract evaluate(filename)[source]
Performs all tests and evaluates the results to determine if and when all tests pass.
- abstract max_posterior(filename)[source]
Carries out the max posterior test and stores the results.
- min_iterations(filename)[source]
Just checks that the sampler has been run for the minimum number of iterations.
- abstract posterior_step(filename)[source]
Carries out the posterior step test and stores the results.
- write(fp, path=None)[source]
Writes burn-in info to an open HDF file.
- Parameters:
fp (pycbc.inference.io.base.BaseInferenceFile) – Open HDF file to write the data to. The HDF file should be an instance of a pycbc BaseInferenceFile.
path (str, optional) – Path in the HDF file to write the data to. The default (None) is to write to the path given by the file's sampler_group attribute.
- class pycbc.inference.burn_in.EnsembleMCMCBurnInTests(sampler, burn_in_test, **kwargs)[source]
Bases: BaseBurnInTests
Provides methods for estimating burn-in of an ensemble MCMC.
- available_tests = ('halfchain', 'min_iterations', 'max_posterior', 'posterior_step', 'nacl', 'ks_test')
- class pycbc.inference.burn_in.EnsembleMultiTemperedMCMCBurnInTests(sampler, burn_in_test, **kwargs)[source]
Bases: EnsembleMCMCBurnInTests
Adds support for multiple temperatures to EnsembleMCMCBurnInTests.
- class pycbc.inference.burn_in.MCMCBurnInTests(sampler, burn_in_test, **kwargs)[source]
Bases: BaseBurnInTests
Burn-in tests for collections of independent MCMC chains.
This differs from EnsembleMCMCBurnInTests in that chains are treated as being independent of each other. The is_burned_in attribute will be True if any chain passes the burn-in tests (whereas in EnsembleMCMCBurnInTests, all chains must pass the burn-in tests). In other words, independent samples can be collected even if not all of the chains are burned in.
- write(fp, path=None)[source]
Writes burn-in info to an open HDF file.
- Parameters:
fp (pycbc.inference.io.base.BaseInferenceFile) – Open HDF file to write the data to. The HDF file should be an instance of a pycbc BaseInferenceFile.
path (str, optional) – Path in the HDF file to write the data to. The default (None) is to write to the path given by the file's sampler_group attribute.
- class pycbc.inference.burn_in.MultiTemperedMCMCBurnInTests(sampler, burn_in_test, **kwargs)[source]
Bases: MCMCBurnInTests
Adds support for multiple temperatures to MCMCBurnInTests.
- pycbc.inference.burn_in.evaluate_tests(burn_in_test, test_is_burned_in, test_burn_in_iter)[source]
Evaluates burn in data from multiple tests.
The iteration to use for burn-in depends on the logic in the burn-in test string. For example, if the test was ‘max_posterior | nacl’ and max_posterior burned-in at iteration 5000 while nacl burned in at iteration 6000, we’d want to use 5000 as the burn-in iteration. However, if the test was ‘max_posterior & nacl’, we’d want to use 6000 as the burn-in iteration. This function handles all cases by doing the following: first, take the collection of burn in iterations from all the burn in tests that were applied. Next, cycle over the iterations in increasing order, checking which tests have burned in by that point. Then evaluate the burn-in string at that point to see if it passes, and if so, what the iteration is. The first point that the test passes is used as the burn-in iteration.
- Parameters:
burn_in_test (str) – The burn-in test string to evaluate (e.g., 'max_posterior & nacl').
test_is_burned_in (dict) – Dictionary of test name -> boolean indicating whether each test has passed.
test_burn_in_iter (dict) – Dictionary of test name -> the iteration at which each test burned in.
- Returns:
is_burned_in (bool) – Whether or not the data passes all burn in tests.
burn_in_iteration – The iteration at which all the tests pass. If the tests did not all pass (is_burned_in is false), then returns NOT_BURNED_IN_ITER.
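For illustration, a minimal sketch of the combination logic described above, using hypothetical test results (this is not the library's internal implementation):
# Hypothetical results from two burn-in tests:
test_is_burned_in = {'max_posterior': True, 'nacl': True}
test_burn_in_iter = {'max_posterior': 5000, 'nacl': 6000}
burn_in_test = 'max_posterior & nacl'

# Cycle over the burn-in iterations in increasing order; at each point,
# mark which tests have burned in and evaluate the test string.
burn_in_iteration = None
for it in sorted(test_burn_in_iter.values()):
    passed = {t: test_is_burned_in[t] and test_burn_in_iter[t] <= it
              for t in test_is_burned_in}
    if eval(burn_in_test, {'__builtins__': None}, passed):
        burn_in_iteration = it  # 6000 here; 5000 if the test were '|'
        break
is_burned_in = burn_in_iteration is not None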
- pycbc.inference.burn_in.ks_test(samples1, samples2, threshold=0.9)[source]
Applies a KS test to determine if two sets of samples are the same.
The KS test is applied parameter-by-parameter. If the two-tailed p-value returned by the test is greater than threshold, the samples are considered to be the same.
- Parameters:
samples1 (dict) – Dictionary of parameter name -> array of samples.
samples2 (dict) – Dictionary of parameter name -> array of samples.
threshold (float, optional) – The p-value above which two sets of samples are considered to be the same. Default is 0.9.
- Returns:
Dictionary mapping parameter names to booleans indicating whether the given parameter passes the KS test.
- Return type:
dict
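A minimal usage sketch, assuming (per the return description) that the samples are given as dictionaries mapping parameter names to arrays:
import numpy
from pycbc.inference.burn_in import ks_test

rng = numpy.random.default_rng(0)
samples1 = {'mass1': rng.normal(30., 2., size=1000)}
samples2 = {'mass1': rng.normal(30., 2., size=1000)}

# True for a parameter if the two-tailed p-value exceeds the threshold
is_the_same = ks_test(samples1, samples2, threshold=0.9)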
- pycbc.inference.burn_in.max_posterior(lnps_per_walker, dim)[source]
Burn in based on samples being within dim/2 of maximum posterior.
- Parameters:
lnps_per_walker (2D array) – Array of values that are proportional to the log posterior values. Must have shape nwalkers x niterations.
dim (int) – The dimension of the parameter space.
- Returns:
burn_in_idx (array of int) – The burn-in indices of each walker. If a walker is not burned in, its index will be equal to the length of the chain.
is_burned_in (array of bool) – Whether or not a walker is burned in.
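A sketch of the test as described, assuming a walker is burned in from the first iteration at which its log posterior comes within dim/2 of the global maximum (illustrative, not the library's exact implementation):
import numpy

def max_posterior_sketch(lnps_per_walker, dim):
    max_lnp = lnps_per_walker.max()  # global maximum over all walkers
    nwalkers, niterations = lnps_per_walker.shape
    burn_in_idx = numpy.full(nwalkers, niterations, dtype=int)
    for ii, lnps in enumerate(lnps_per_walker):
        # first iteration within dim/2 of the maximum posterior
        passed = numpy.where(lnps >= max_lnp - dim / 2.)[0]
        if passed.size > 0:
            burn_in_idx[ii] = passed[0]
    return burn_in_idx, burn_in_idx < niterations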
- pycbc.inference.burn_in.nacl(nsamples, acls, nacls=5)[source]
Burn in based on ACL.
This applies the following test to determine burn-in:
- The first half of the chain is ignored.
- An ACL is calculated from the second half.
- If nacls times the ACL is less than half the length of the chain, the chain is considered to be burned in at the half-way point.
- Parameters:
nsamples (int) – The number of samples in the chain(s).
acls (dict) – Dictionary of parameter -> ACL(s). The ACLs for each parameter may be an integer or an array of integers (for multiple chains).
nacls (int, optional) – The number of ACLs the chain(s) must have gone past the halfway point in order to be considered burned in. Default is 5.
- Returns:
Dictionary of parameter -> boolean(s) indicating if the chain(s) pass the test. If an array of values was provided for the acls, the values will be arrays of booleans.
- Return type:
dict
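The test reduces to a simple comparison; a sketch following the description above:
import numpy

def nacl_sketch(nsamples, acls, nacls=5):
    # Burned in at the halfway point if nacls * ACL (computed from the
    # second half) is less than half the length of the chain.
    return {param: nacls * numpy.asarray(acl) < nsamples / 2
            for param, acl in acls.items()}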
pycbc.inference.entropy module
The module contains functions for calculating the Kullback-Leibler divergence.
- pycbc.inference.entropy.check_hist_params(samples, hist_min, hist_max, hist_bins)[source]
Checks that the bound values given for the histogram are consistent, returning the range if they are, or raising an error if they are not. Also checks that, if hist_bins is a str, it corresponds to a binning method available in numpy.histogram.
- Parameters:
samples (numpy.array) – Set of samples to get the min/max if only one of the bounds is given.
hist_min (numpy.float64) – Minimum value for the histogram.
hist_max (numpy.float64) – Maximum value for the histogram.
hist_bins (int or str) – If int, number of equal-width bins to use in numpy.histogram. If str, it should be one of the methods to calculate the optimal bin width available in numpy.histogram: [‘auto’, ‘fd’, ‘doane’, ‘scott’, ‘stone’, ‘rice’, ‘sturges’, ‘sqrt’]. Default is ‘fd’ (Freedman Diaconis Estimator). This option will be ignored if kde=True.
- Returns:
hist_range (tuple or None) – The bounds (hist_min, hist_max) or None.
hist_bins (int or str) – Number of bins or method for optimal width bin calculation.
- pycbc.inference.entropy.compute_pdf(samples, method, bins, hist_min, hist_max)[source]
Computes the probability density function for a set of samples.
- Parameters:
samples (numpy.array) – Set of samples to calculate the pdf.
method (str) – Method to calculate the pdf. Options are ‘kde’ for the Kernel Density Estimator, and ‘hist’ to use numpy.histogram
bins (str or int, optional) – This option will be ignored if method is kde. If int, number of equal-width bins to use when calculating probability density function from a set of samples of the distribution. If str, it should be one of the methods to calculate the optimal bin width available in numpy.histogram: [‘auto’, ‘fd’, ‘doane’, ‘scott’, ‘stone’, ‘rice’, ‘sturges’, ‘sqrt’]. Default is ‘fd’ (Freedman Diaconis Estimator).
hist_min (numpy.float64, optional) – Minimum of the distributions’ values to use. This will be ignored if kde=True.
hist_max (numpy.float64, optional) – Maximum of the distributions’ values to use. This will be ignored if kde=True.
- Returns:
pdf – Discrete probability distribution calculated from samples.
- Return type:
numpy.array
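A usage sketch with the arguments named as in the signature above:
import numpy
from pycbc.inference.entropy import compute_pdf

samples = numpy.random.normal(size=10000)
# Histogram-based pdf using Freedman-Diaconis binning over [-5, 5]
pdf = compute_pdf(samples, method='hist', bins='fd',
                  hist_min=-5., hist_max=5.)
# KDE-based pdf; bins and bounds are ignored in this case
pdf_kde = compute_pdf(samples, method='kde', bins=None,
                      hist_min=None, hist_max=None)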
- pycbc.inference.entropy.entropy(pdf1, base=2.718281828459045)[source]
Computes the information entropy for a single parameter from one probability density function.
- Parameters:
pdf1 (numpy.array) – Probability density function.
base ({numpy.e, numpy.float64}, optional) – The logarithmic base to use (choose base 2 for information measured in bits, default is nats).
- Returns:
The information entropy value.
- Return type:
numpy.float64
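The quantity computed is the Shannon entropy of the discretized distribution; a sketch of the standard definition, assuming pdf1 is normalized to sum to one:
import numpy

def entropy_sketch(pdf1, base=numpy.e):
    # H = -sum_i p_i * log_base(p_i), skipping empty bins
    p = pdf1[pdf1 > 0]
    return -numpy.sum(p * numpy.log(p) / numpy.log(base))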
- pycbc.inference.entropy.js(samples1, samples2, kde=False, bins=None, hist_min=None, hist_max=None, base=2.718281828459045)[source]
Computes the Jensen-Shannon divergence for a single parameter from two distributions.
- Parameters:
samples1 (numpy.array) – Samples.
samples2 (numpy.array) – Samples.
kde (bool) – Set to True to estimate the probability density function using kernel density estimation (KDE).
bins (int or str, optional) – If int, number of equal-width bins to use when calculating probability density function from a set of samples of the distribution. If str, it should be one of the methods to calculate the optimal bin width available in numpy.histogram: [‘auto’, ‘fd’, ‘doane’, ‘scott’, ‘stone’, ‘rice’, ‘sturges’, ‘sqrt’]. Default is ‘fd’ (Freedman Diaconis Estimator). This option will be ignored if kde=True.
hist_min (numpy.float64) – Minimum of the distributions’ values to use. This will be ignored if kde=True.
hist_max (numpy.float64) – Maximum of the distributions’ values to use. This will be ignored if kde=True.
base (numpy.float64) – The logarithmic base to use (choose base 2 for information measured in bits, default is nats).
- Returns:
The Jensen-Shannon divergence value.
- Return type:
numpy.float64
- pycbc.inference.entropy.kl(samples1, samples2, pdf1=False, pdf2=False, kde=False, bins=None, hist_min=None, hist_max=None, base=2.718281828459045)[source]
Computes the Kullback-Leibler divergence for a single parameter from two distributions.
- Parameters:
samples1 (numpy.array) – Samples or probability density function (for the latter must also set pdf1=True).
samples2 (numpy.array) – Samples or probability density function (for the latter must also set pdf2=True).
pdf1 (bool) – Set to True if samples1 is already a probability density function.
pdf2 (bool) – Set to True if samples2 is already a probability density function.
kde (bool) – Set to True if at least one of pdf1 or pdf2 is False to estimate the probability density function using kernel density estimation (KDE).
bins (int or str, optional) – If int, number of equal-width bins to use when calculating probability density function from a set of samples of the distribution. If str, it should be one of the methods to calculate the optimal bin width available in numpy.histogram: [‘auto’, ‘fd’, ‘doane’, ‘scott’, ‘stone’, ‘rice’, ‘sturges’, ‘sqrt’]. Default is ‘fd’ (Freedman Diaconis Estimator). This option will be ignored if kde=True.
hist_min (numpy.float64) – Minimum of the distributions’ values to use. This will be ignored if kde=True.
hist_max (numpy.float64) – Maximum of the distributions’ values to use. This will be ignored if kde=True.
base (numpy.float64) – The logarithmic base to use (choose base 2 for information measured in bits, default is nats).
- Returns:
The Kullback-Leibler divergence value.
- Return type:
numpy.float64
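A usage sketch comparing two sets of samples, with the keyword names taken from the signatures above:
import numpy
from pycbc.inference.entropy import js, kl

rng = numpy.random.default_rng(1)
samples1 = rng.normal(0.0, 1.0, size=10000)
samples2 = rng.normal(0.5, 1.0, size=10000)

# Histogram-based estimates (in nats by default)
d_kl = kl(samples1, samples2, bins='fd', hist_min=-5., hist_max=5.)
d_js = js(samples1, samples2, bins='fd', hist_min=-5., hist_max=5.)
# KDE-based estimate instead
d_kl_kde = kl(samples1, samples2, kde=True)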
pycbc.inference.evidence module
This module provides functions for estimating the marginal likelihood or evidence of a model.
- pycbc.inference.evidence.arithmetic_mean_estimator(log_likelihood)[source]
Returns the log evidence via the prior arithmetic mean estimator (AME).
The logarithm form of AME is used. This is the most basic evidence estimator, and often requires O(billions) of samples from the prior.
- Parameters:
log_likelihood (1d array of floats) – The log likelihood of the data sampled from the prior distribution.
- Returns:
Estimation of the log of the evidence.
- Return type:
float
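The estimator is the log of the mean likelihood over the prior samples; a numerically stable sketch using logsumexp:
import numpy
from scipy.special import logsumexp

def ame_sketch(log_likelihood):
    # log Z ~= log( (1/N) * sum_i exp(logL_i) ), logL_i sampled from the prior
    return logsumexp(log_likelihood) - numpy.log(len(log_likelihood))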
- pycbc.inference.evidence.harmonic_mean_estimator(log_likelihood)[source]
Returns the log evidence via posterior harmonic mean estimator (HME).
The logarithm form of HME is used. This method is not recommended for general use: it is very slow to converge, formally has infinite variance, and is very error prone.
- Parameters:
log_likelihood (1d array of floats) – The log likelihood of the data sampled from the posterior distribution.
- Returns:
Estimation of the log of the evidence.
- Return type:
float
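For contrast with the AME above, a sketch of the harmonic mean identity 1/Z ~ (1/N) * sum_i exp(-logL_i) in logarithmic form:
import numpy
from scipy.special import logsumexp

def hme_sketch(log_likelihood):
    # log Z ~= log N - log( sum_i exp(-logL_i) ), logL_i from the posterior
    return numpy.log(len(log_likelihood)) - logsumexp(-log_likelihood)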
- pycbc.inference.evidence.stepping_stone_algorithm(log_likelihood, betas)[source]
Returns the log evidence of the model via stepping stone algorithm. Also returns an estimated standard deviation for the log evidence.
- Parameters:
log_likelihood (3d array of shape (betas, walker, iteration)) – The log likelihood for each temperature separated by temperature, walker, and iteration.
betas (1d array) – The inverse temperatures used in the MCMC.
- Returns:
log_evidence (float) – Estimation of the log of the evidence.
mcmc_std (float) – The standard deviation of the log evidence estimate from Monte-Carlo spread.
- pycbc.inference.evidence.thermodynamic_integration(log_likelihood, betas, method='simpsons')[source]
Returns the log evidence of the model via thermodynamic integration. Also returns an estimated standard deviation for the log evidence.
Current options are integration through the trapezoid rule, a first-order corrected trapezoid rule, and Simpson’s rule.
- Parameters:
log_likelihood (3d array of shape (betas, walker, iteration)) – The log likelihood for each temperature separated by temperature, walker, and iteration.
betas (1d array) – The inverse temperatures used in the MCMC.
method ({"trapzoid", "trapezoid_corrected", "simpsons"},) – optional. The numerical integration method to use for the thermodynamic integration. Choices include: “trapezoid”, “trapezoid_corrected”, “simpsons”, for the trapezoid rule, the first-order correction to the trapezoid rule, and Simpson’s rule. [Default = “simpsons”]
- Returns:
log_evidence (float) – Estimation of the log of the evidence.
mcmc_std (float) – The standard deviation of the log evidence estimate from Monte-Carlo spread.
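A usage sketch showing the expected input shapes (the inverse temperatures and log likelihoods here are fabricated placeholders):
import numpy
from pycbc.inference.evidence import (stepping_stone_algorithm,
                                      thermodynamic_integration)

# 16 inverse temperatures; 64 walkers; 1000 iterations per temperature
betas = numpy.geomspace(1e-4, 1.0, 16)
logl = numpy.random.normal(-100., 5., size=(len(betas), 64, 1000))

log_z, mcmc_std = thermodynamic_integration(logl, betas, method='simpsons')
log_z_ss, mcmc_std_ss = stepping_stone_algorithm(logl, betas)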
pycbc.inference.gelman_rubin module
This module provides functions for evaluating the Gelman-Rubin convergence diagnostic statistic.
- pycbc.inference.gelman_rubin.gelman_rubin(chains, auto_burn_in=True)[source]
Calculates the univariate Gelman-Rubin convergence statistic, which compares the evolution of multiple chains in a Markov-chain Monte Carlo process to determine whether they have converged. The between-chain and within-chain variances are computed for each sampling parameter, and a weighted combination of the two is used to assess convergence. As the chains converge, the potential scale reduction factor should go to 1.
- Parameters:
chains (iterable) – An iterable of numpy.array instances that contain the samples for each chain. Each chain has shape (nparameters, niterations).
auto_burn_in (bool) – If True, use only the later half of the samples provided.
- Returns:
psrf – A numpy.array of shape (nparameters) that has the point estimates of the potential scale reduction factor.
- Return type:
numpy.array
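A usage sketch with four fabricated chains, each shaped (nparameters, niterations) as required above:
import numpy
from pycbc.inference.gelman_rubin import gelman_rubin

rng = numpy.random.default_rng(2)
chains = [rng.normal(size=(3, 2000)) for _ in range(4)]

# One PSRF per parameter; values near 1 indicate convergence
psrf = gelman_rubin(chains, auto_burn_in=True)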
- pycbc.inference.gelman_rubin.walk(chains, start, end, step)[source]
Calculates the Gelman-Rubin convergence statistic along chains of data. This function will advance along the chains and calculate the statistic for each step.
- Parameters:
chains (iterable) – An iterable of numpy.array instances that contain the samples for each chain. Each chain has shape (nparameters, niterations).
start (float) – Start index of blocks to calculate all statistics.
end (float) – Last index of blocks to calculate statistics.
step (float) – Step size to take for next block.
- Returns:
starts (numpy.array) – 1-D array of start indexes of calculations.
ends (numpy.array) – 1-D array of end indexes of calculations.
stats (numpy.array) – Array with convergence statistic. It has shape (nparameters, ncalculations).
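A usage sketch evaluating the statistic on growing blocks of fabricated chains:
import numpy
from pycbc.inference.gelman_rubin import walk

rng = numpy.random.default_rng(3)
chains = [rng.normal(size=(3, 2000)) for _ in range(4)]

# Statistic computed for blocks ending at 500, 1000, 1500, and 2000
starts, ends, stats = walk(chains, start=500, end=2000, step=500)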
pycbc.inference.geweke module
Functions for computing the Geweke convergence statistic.
- pycbc.inference.geweke.geweke(x, seg_length, seg_stride, end_idx, ref_start, ref_end=None, seg_start=0)[source]
Calculates the Geweke convergence statistic for a chain of data. This function will advance along the chain and calculate the statistic for each step.
- Parameters:
x (numpy.array) – A one-dimensional array of data.
seg_length (int) – Number of samples to use for each Geweke calculation.
seg_stride (int) – Number of samples to advance before next Geweke calculation.
end_idx (int) – Index of last start.
ref_start (int) – Index of beginning of end reference segment.
ref_end (int) – Index of end of end reference segment. Default is None which will go to the end of the data array.
seg_start (int) – What index to start computing the statistic. Default is 0 which will go to the beginning of the data array.
- Returns:
starts (numpy.array) – The start index of each segment in the chain.
ends (numpy.array) – The end index of each segment in the chain.
stats (numpy.array) – The Geweke convergence diagnostic statistic for each segment.
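A usage sketch on a fabricated chain, with the keyword names taken from the signature above:
import numpy
from pycbc.inference.geweke import geweke

x = numpy.random.normal(size=5000)

# Compare 500-sample segments (advancing by 250) against a reference
# segment spanning the end of the chain
starts, ends, stats = geweke(x, seg_length=500, seg_stride=250,
                             end_idx=3000, ref_start=4000)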
pycbc.inference.option_utils module
This module contains standard options used for inference-related programs.
- class pycbc.inference.option_utils.ParseLabelArg(type=<class 'str'>, nargs=None, **kwargs)[source]
Bases:
Action
Argparse action that will parse arguments that can accept labels.
This assumes that the values set on the command line for its assigned argument are strings formatted like PARAM[:LABEL]. When the arguments are parsed, the LABEL bit is stripped off and added to a dictionary mapping PARAM -> LABEL. This dictionary is stored to the parsed namespace called {dest}_labels, where {dest} is the argument's dest setting (by default, this is the same as the option string). Likewise, the argument's dest in the parsed namespace is updated so that it is just PARAM.
If no LABEL is provided, then PARAM will be used for LABEL.
This action can work on arguments that have nargs != 0 and type set to str.
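A hypothetical example following the behavior described above (the --params option name is illustrative):
>>> import argparse
>>> from pycbc.inference.option_utils import ParseLabelArg
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--params', type=str, nargs='+', action=ParseLabelArg)
>>> opts = parser.parse_args(['--params', 'mass1:m1', 'mass2'])
>>> opts.params
['mass1', 'mass2']
>>> opts.params_labels
{'mass1': 'm1', 'mass2': 'mass2'}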
- class pycbc.inference.option_utils.ParseParametersArg(type=<class 'str'>, nargs=None, **kwargs)[source]
Bases:
ParseLabelArg
Argparse action that will parse parameters and labels from an option.
Does the same as ParseLabelArg, with the additional functionality that if LABEL is a known parameter in pycbc.waveform.parameters, then the label attribute there will be used in the labels dictionary. Otherwise, LABEL will be used.
Examples
Create a parser and add two arguments that use this action (note that the first argument accepts multiple inputs while the second only accepts a single input):
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('--parameters', type=str, nargs="+", action=ParseParametersArg)
>>> parser.add_argument('--z-arg', type=str, action=ParseParametersArg)
Parse a command line that uses these options:
>>> import shlex
>>> cli = "--parameters 'mass1+mass2:mtotal' ra ni --z-arg foo:bar"
>>> opts = parser.parse_args(shlex.split(cli))
>>> opts.parameters
['mass1+mass2', 'ra', 'ni']
>>> opts.parameters_labels
{'mass1+mass2': '$M~(\mathrm{M}_\odot)$', 'ni': 'ni', 'ra': '$\alpha$'}
>>> opts.z_arg
'foo'
>>> opts.z_arg_labels
{'foo': 'bar'}
In the above, the label provided for the first argument to --parameters was mtotal. Since mtotal is a recognized parameter in pycbc.waveform.parameters, the label dictionary contains the latex string associated with it. A label was not provided for the second argument, and so ra was used. Since ra is also a recognized parameter, its associated latex string was used in the labels dictionary. Since ni and bar (the label for z-arg) are not recognized parameters, they were just used as-is in the labels dictionaries.
- pycbc.inference.option_utils.add_density_option_group(parser)[source]
Adds the options needed to configure contours and density colour map.
- Parameters:
parser (object) – ArgumentParser instance.
- pycbc.inference.option_utils.add_injsamples_map_opt(parser)[source]
Adds an option to the parser to specify a mapping between injection parameters and sample parameters.
- pycbc.inference.option_utils.add_plot_posterior_option_group(parser)[source]
Adds the options needed to configure plots of posterior results.
- Parameters:
parser (object) – ArgumentParser instance.
- pycbc.inference.option_utils.add_scatter_option_group(parser)[source]
Adds the options needed to configure scatter plots.
- Parameters:
parser (object) – ArgumentParser instance.
- pycbc.inference.option_utils.expected_parameters_from_cli(opts)[source]
Parses the --expected-parameters arguments from the plot_posterior option group.
- Parameters:
opts (ArgumentParser) – The parsed arguments from the command line.
- Returns:
Dictionary of parameter name -> expected value. Only parameters that were specified in the --expected-parameters option will be included; if no parameters were provided, will return an empty dictionary.
- Return type:
dict
- pycbc.inference.option_utils.plot_ranges_from_cli(opts)[source]
Parses the mins and maxs arguments from the plot_posterior option group.
- Parameters:
opts (ArgumentParser) – The parsed arguments from the command line.
- Returns:
mins (dict) – Dictionary of parameter name -> specified mins. Only parameters that were specified in the --mins option will be included; if no parameters were provided, will return an empty dictionary.
maxs (dict) – Dictionary of parameter name -> specified maxs. Only parameters that were specified in the --maxs option will be included; if no parameters were provided, will return an empty dictionary.