Lecture 19
PyMC is a probabilistic programming library for Python that allows users to build Bayesian models with a simple Python API and fit them using Markov chain Monte Carlo (MCMC) methods.
Modern - Includes state-of-the-art inference algorithms, including MCMC (NUTS) and variational inference (ADVI).
User friendly - Write your models using friendly Python syntax. Learn Bayesian modeling from the many example notebooks.
Fast - Uses PyTensor (a fork of Aesara) as its computational backend to compile to C and JAX, run your models on the GPU, and benefit from complex graph optimizations.
Batteries included - Includes probability distributions, Gaussian processes, ABC, SMC and much more. It integrates nicely with ArviZ for visualizations and diagnostics, as well as Bambi for high-level mixed-effect models.
Community focused - Ask questions on discourse, join MeetUp events, follow us on Twitter, and start contributing.
ArviZ is a Python package for exploratory analysis of Bayesian models. Includes functions for posterior analysis, data storage, sample diagnostics, model checking, and comparison.
Interoperability - Integrates with all major probabilistic programming libraries: PyMC, CmdStanPy, PyStan, Pyro, NumPyro, and emcee.
Large Suite of Visualizations - Provides over 25 plotting functions for all parts of Bayesian workflow: visualizing distributions, diagnostics, and model checking. See the gallery for examples.
State of the Art Diagnostics - Latest published diagnostics and statistics are implemented, tested and distributed with ArviZ.
Flexible Model Comparison - Includes functions for comparing models with information criteria, and cross validation (both approximate and brute force).
Built for Collaboration - Designed for flexible cross-language serialization using netCDF or Zarr formats. ArviZ also has a Julia version that uses the same data schema.
Labeled Data - Builds on top of xarray to work with labeled dimensions and coordinates.
All models are derived from the Model() class. Unlike the libraries we have seen previously, PyMC makes heavy use of Python's context managers: model components are added to a model inside a with statement.
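A minimal sketch of this pattern (the standard normal prior on x is an assumption for illustration; this is the model referred to as norm below):

import pymc as pm

with pm.Model() as norm:                  # components created inside the block are added to `norm`
    x = pm.Normal("x", mu=0, sigma=1)     # assumed prior on x, for illustration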
pm.Normal() is an example of a PyMC distribution. Distributions are used to construct models and are implemented via the TensorVariable class, which underlies all of the built-in distributions (and can be used to create custom distributions). Generally you will not interact with these objects directly, but some useful methods and attributes are worth knowing:
If you really want to construct a TensorVariable outside of a model, this can be done via the dist() method of each distribution.
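For example (a sketch; pm.draw() and pm.logp() are the standard PyMC helpers for sampling from and evaluating such variables):

z = pm.Normal.dist(mu=0, sigma=1)   # a TensorVariable not attached to any model

pm.draw(z, draws=5)                 # five random draws from N(0, 1)
pm.logp(z, 0.5).eval()              # log-density of N(0, 1) evaluated at 0.5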
Because of this construction it is possible to add additional components to an existing (named) model via subsequent with statements (only the first block needs pm.Model()).
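Continuing the sketch from above, y can be added to the existing norm model in a later block:

with norm:                               # reuses the existing model, no new pm.Model() needed
    y = pm.Normal("y", mu=x, sigma=1)    # y | x ~ N(x, 1)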
Note that we defined \(y|x \sim \mathcal{N}(x, 1)\), so what is happening when we use pm.draw(norm.y)? Each time we ask for a draw from y, PyMC first draws a value of x for us.
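So a call like the following returns a draw from the marginal distribution of y:

pm.draw(norm.y)             # internally draws x first, then y | x
pm.draw(norm.y, draws=3)    # three independent draws, each with its own draw of x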
We will now build a basic model where we know what the solution should look like and compare the results. In order to sample from the posterior, we add a call to sample() within the model context.
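As a sketch of the kind of model meant here, for example a Beta-Bernoulli model for a success probability p (the model name, prior, and data below are assumptions, not the lecture's actual code):

import numpy as np

data = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 0])   # hypothetical observed 0/1 outcomes

with pm.Model() as beta_binom:
    p = pm.Beta("p", alpha=1, beta=1)             # hypothetical uniform prior on p
    pm.Bernoulli("obs", p=p, observed=data)       # likelihood for the 0/1 outcomes
    trace = pm.sample(progressbar=False)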
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [p]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 0 seconds.
pm.sample() results
Xarray (formerly xray) is an open source project and Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!
Xarray introduces labels in the form of dimensions, coordinates and attributes on top of raw NumPy-like arrays, which allows for a more intuitive, more concise, and less error-prone developer experience. The package includes a large and growing library of domain-agnostic functions for advanced analytics and visualization with these data structures.
Xarray is inspired by and borrows heavily from pandas, the popular data analysis package focused on labelled tabular data. It is particularly tailored to working with netCDF files, which were the source of xarray’s data model, and integrates tightly with dask for parallel computing.
trace
<xarray.Dataset>
Dimensions: (chain: 4, draw: 1000)
Coordinates:
* chain (chain) int64 0 1 2 3
* draw (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
Data variables:
p (chain, draw) float64 0.5068 0.4518 0.3853 ... 0.166 0.3242 0.3242
Attributes:
created_at: 2023-03-24T16:28:00.773567
arviz_version: 0.15.1
inference_library: pymc
inference_library_version: 5.1.2
sampling_time: 0.32303428649902344
tuning_steps: 1000
Posterior values, or subsets of them, can be converted to DataFrames via the to_dataframe() method.
Standard MCMC diagnostic statistics are available via summary() from ArviZ; individual functions are also available for each statistic.
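For example, with the trace from above and ArviZ imported as az:

import arviz as az

trace.posterior.to_dataframe()   # all posterior draws as a pandas DataFrame
az.summary(trace)                # mean, sd, HDI, MCSE, ESS, and r_hat in one table
az.ess(trace)                    # individual diagnostics are also available
az.rhat(trace)
az.mcse(trace)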
We want to fit a linear regression model to the following synthetic data:
with pm.Model() as lm:
    m = pm.Normal('m', mu=0, sigma=50)
    b = pm.Normal('b', mu=0, sigma=50)
    sigma = pm.HalfNormal('sigma', sigma=5)
    pm.Normal('y', mu=m*x + b, sigma=sigma, observed=y)
    trace = pm.sample(progressbar=False, random_seed=1234)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [m, b, sigma]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 1 seconds.
post_m = trace.posterior['m'].sel(chain=0, draw=slice(0,None,10))
post_b = trace.posterior['b'].sel(chain=0, draw=slice(0,None,10))
plt.figure(layout="constrained")
plt.scatter(x, y, s=30, label='data')
for m, b in zip(post_m.values, post_b.values):
    plt.plot(x, m*x + b, c='gray', alpha=0.1)
plt.plot(x, 6*x + 2, label='true regression line', lw=3., c='red')
plt.legend(loc='best')
plt.show()
Draws for observed variables (posterior predictive draws) can also be generated via the sample_posterior_predictive() method.
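A sketch of such a call for the linear model above (the name pp is illustrative); the result is an InferenceData object whose posterior_predictive and observed_data groups are shown below:

with lm:
    pp = pm.sample_posterior_predictive(trace)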
Sampling: [y]
<xarray.Dataset>
Dimensions:  (chain: 4, draw: 1000, y_dim_2: 11)
Coordinates:
  * chain    (chain) int64 0 1 2 3
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
  * y_dim_2  (y_dim_2) int64 0 1 2 3 4 5 6 7 8 9 10
Data variables:
    y        (chain, draw, y_dim_2) float64 2.857 2.19 4.669 ... 7.195 5.406
Attributes:
    created_at:                 2023-03-24T16:28:16.241911
    arviz_version:              0.15.1
    inference_library:          pymc
    inference_library_version:  5.1.2
<xarray.Dataset>
Dimensions:  (y_dim_0: 11)
Coordinates:
  * y_dim_0  (y_dim_0) int64 0 1 2 3 4 5 6 7 8 9 10
Data variables:
    y        (y_dim_0) float64 2.471 1.409 4.633 3.487 ... 6.816 5.157 9.15
Attributes:
    created_at:                 2023-03-24T16:28:16.242300
    arviz_version:              0.15.1
    inference_library:          pymc
    inference_library_version:  5.1.2
with pm.Model() as lm2:
    m = pm.Normal('m', mu=0, sigma=50)
    b = pm.Normal('b', mu=0, sigma=50)
    sigma = pm.HalfNormal('sigma', sigma=5)
    y_hat = pm.Deterministic("y_hat", m*x + b)
    pm.Normal('y', mu=y_hat, sigma=sigma, observed=y)
    trace = pm.sample(random_seed=1234, progressbar=False)
    pp = pm.sample_posterior_predictive(trace, var_names=["y_hat"], progressbar=False)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [m, b, sigma]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 1 seconds.
Sampling: []
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
m 5.573 1.344 2.969 8.042 0.040 0.028 1182.0 1229.0 1.0
b 2.189 0.791 0.659 3.624 0.024 0.018 1106.0 1153.0 1.0
sigma 1.366 0.378 0.788 2.075 0.010 0.007 1452.0 1722.0 1.0
y_hat[0] 2.189 0.791 0.659 3.624 0.024 0.018 1106.0 1153.0 1.0
y_hat[1] 2.746 0.681 1.427 4.009 0.021 0.015 1155.0 1245.0 1.0
y_hat[2] 3.303 0.583 2.228 4.424 0.017 0.012 1254.0 1319.0 1.0
y_hat[3] 3.861 0.501 2.933 4.821 0.013 0.010 1481.0 1786.0 1.0
y_hat[4] 4.418 0.446 3.543 5.218 0.010 0.007 2031.0 2364.0 1.0
y_hat[5] 4.975 0.427 4.187 5.791 0.007 0.005 3266.0 2804.0 1.0
y_hat[6] 5.532 0.449 4.671 6.389 0.007 0.005 4478.0 2810.0 1.0
y_hat[7] 6.090 0.508 5.090 7.017 0.008 0.006 3929.0 2889.0 1.0
y_hat[8] 6.647 0.591 5.568 7.810 0.011 0.008 2963.0 2707.0 1.0
y_hat[9] 7.204 0.691 5.924 8.542 0.014 0.010 2389.0 2337.0 1.0
y_hat[10] 7.761 0.801 6.358 9.405 0.018 0.013 2087.0 2291.0 1.0
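One way to use these y_hat draws (a sketch, not from the lecture, assuming the pp object from the code above) is to plot the posterior mean regression line with a credible band:

y_hat = pp.posterior_predictive["y_hat"]
mean  = y_hat.mean(dim=("chain", "draw"))
lower = y_hat.quantile(0.03, dim=("chain", "draw"))    # 94% equal-tailed interval
upper = y_hat.quantile(0.97, dim=("chain", "draw"))

plt.figure(layout="constrained")
plt.scatter(x, y, s=30, label="data")
plt.plot(x, mean, c="k", label="posterior mean of y_hat")
plt.fill_between(x, lower, upper, color="k", alpha=0.2, label="94% interval")
plt.legend(loc="best")
plt.show()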
with pm.Model() as bayes_naive:
    b = pm.Flat("beta", shape=k)
    s = pm.HalfNormal('sigma', sigma=2)
    pm.Normal("y", mu=X @ b, sigma=s, observed=y)
    trace = pm.sample(progressbar=False, random_seed=12345)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta, sigma]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 70 seconds.
The rhat statistic is larger than 1.01 for some parameters. This indicates problems during sampling. See https://
arxiv.org/abs/1903.08008 for details
The effective sample size per chain is smaller than 100 for some parameters. A higher number is needed for reliable
rhat and ess computation. See https://arxiv.org/abs/1903.08008 for details
There were 113 divergences after tuning. Increase `target_accept` or reparameterize.
Chain 0 reached the maximum tree depth. Increase `max_treedepth`, increase `target_accept` or reparameterize.
Chain 1 reached the maximum tree depth. Increase `max_treedepth`, increase `target_accept` or reparameterize.
Chain 2 reached the maximum tree depth. Increase `max_treedepth`, increase `target_accept` or reparameterize.
Chain 3 reached the maximum tree depth. Increase `max_treedepth`, increase `target_accept` or reparameterize.
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
beta[0] 643.620 606.139 -622.272 1402.288 261.586 196.065 6.0 14.0 1.95
beta[1] -410.200 341.106 -1078.665 124.358 101.833 76.145 12.0 20.0 1.31
beta[2] 717.738 1008.215 -680.198 2520.985 482.446 376.947 5.0 12.0 2.40
beta[3] -519.149 1119.439 -2671.450 1043.628 549.247 419.469 5.0 12.0 2.94
beta[4] 1471.100 990.544 -179.163 2808.723 471.027 364.160 5.0 21.0 2.43
... ... ... ... ... ... ... ... ... ...
beta[96] 420.720 1056.327 -1260.449 2382.936 487.813 369.004 5.0 12.0 2.58
beta[97] -143.535 1203.422 -2170.522 1927.823 574.093 436.428 5.0 14.0 3.01
beta[98] -818.080 685.229 -2423.653 190.270 313.078 245.187 6.0 12.0 1.89
beta[99] 261.936 912.018 -1070.744 1629.130 440.188 335.261 5.0 15.0 2.59
sigma 2.432 1.316 0.461 4.800 0.467 0.343 7.0 27.0 1.54
[101 rows x 9 columns]
with pm.Model() as bayes_weak:
    b = pm.Normal("beta", mu=0, sigma=10, shape=k)
    s = pm.HalfNormal('sigma', sigma=2)
    pm.Normal("y", mu=X @ b, sigma=s, observed=y)
    trace = pm.sample(progressbar=False, random_seed=12345)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta, sigma]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 71 seconds.
The rhat statistic is larger than 1.01 for some parameters. This indicates problems during sampling. See https://
arxiv.org/abs/1903.08008 for details
The effective sample size per chain is smaller than 100 for some parameters. A higher number is needed for reliable
rhat and ess computation. See https://arxiv.org/abs/1903.08008 for details
There were 40 divergences after tuning. Increase `target_accept` or reparameterize.
Chain 2 reached the maximum tree depth. Increase `max_treedepth`, increase `target_accept` or reparameterize.
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
beta[0] -0.077 6.620 -12.749 12.799 0.200 0.548 830.0 1795.0 1.07
beta[1] 0.527 5.875 -10.243 12.733 0.693 0.525 63.0 2169.0 1.13
beta[2] 2.191 8.614 -11.691 19.642 2.457 1.780 13.0 21.0 1.23
beta[3] -0.756 6.533 -14.000 10.236 0.958 0.681 43.0 1570.0 1.06
beta[4] 0.399 7.131 -12.064 14.597 1.102 0.785 40.0 1274.0 1.06
... ... ... ... ... ... ... ... ... ...
beta[96] 0.085 6.255 -10.955 12.770 0.510 0.462 162.0 1586.0 1.03
beta[97] -2.220 6.716 -14.092 11.428 1.059 0.754 40.0 234.0 1.07
beta[98] -1.101 7.490 -14.253 11.607 2.305 1.677 10.0 31.0 1.29
beta[99] -1.003 6.763 -14.138 10.319 0.633 0.449 125.0 1170.0 1.02
sigma 1.606 1.172 0.013 3.646 0.393 0.288 7.0 11.0 1.57
[101 rows x 9 columns]
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
beta[10] 4.967 6.752 -9.493 15.785 1.221 0.872 31.0 2230.0 1.09
beta[20] -4.413 6.705 -15.987 8.051 1.157 0.825 34.0 1600.0 1.08
beta[30] 4.986 5.798 -6.198 15.950 0.606 0.560 95.0 1727.0 1.06
beta[40] -3.742 7.697 -19.201 9.502 1.128 0.802 48.0 263.0 1.05
beta[50] 6.186 6.122 -6.222 16.288 0.635 0.451 122.0 1311.0 1.04
beta[60] -6.317 6.212 -17.585 6.570 0.327 0.231 364.0 1647.0 1.10
beta[70] 4.856 6.688 -6.852 18.421 0.461 0.423 217.0 1495.0 1.04
beta[80] -9.648 6.434 -19.370 2.766 1.712 1.256 15.0 217.0 1.18
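Subsets of the summary like the table above (and the corresponding one for the lasso model below) can be pulled out of the summary() DataFrame by row label, e.g. (a sketch, with ArviZ imported as az):

summ = az.summary(trace, var_names=["beta"])
summ.loc[[f"beta[{i}]" for i in range(10, 90, 10)]]    # beta[10], beta[20], ..., beta[80]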
def plot_slope(trace, prior="beta", chain=0):
    post = (trace.posterior[prior]
            .to_dataframe()
            .reset_index()
            .query(f"chain == {chain}")
           )
    sns.catplot(x="beta_dim_0", y="beta", data=post, kind="boxen", linewidth=0, color='blue', aspect=2, showfliers=False)
    plt.tight_layout()
    plt.xticks(range(0,110,10))
    plt.show()
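For example:

plot_slope(trace)    # boxen plot of the posterior for each slope coefficient, chain 0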
with pm.Model() as bayes_lasso:
    b = pm.Laplace("beta", 0, 1, shape=k)
    s = pm.HalfNormal('sigma', sigma=1)
    pm.Normal("y", mu=X @ b, sigma=s, observed=y)
    trace = pm.sample(progressbar=False, random_seed=1234)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta, sigma]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 23 seconds.
The rhat statistic is larger than 1.01 for some parameters. This indicates problems during sampling. See https://
arxiv.org/abs/1903.08008 for details
The effective sample size per chain is smaller than 100 for some parameters. A higher number is needed for reliable
rhat and ess computation. See https://arxiv.org/abs/1903.08008 for details
There were 239 divergences after tuning. Increase `target_accept` or reparameterize.
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
beta[0] 0.067 0.785 -1.402 1.716 0.011 0.018 4345.0 1972.0 1.02
beta[1] 0.308 0.829 -1.179 1.710 0.152 0.119 38.0 533.0 1.07
beta[2] -0.045 0.783 -1.634 1.478 0.012 0.015 4101.0 2327.0 1.02
beta[3] -0.215 0.804 -1.865 1.157 0.075 0.053 95.0 2069.0 1.03
beta[4] 0.063 0.775 -1.508 1.570 0.012 0.013 4381.0 2580.0 1.11
... ... ... ... ... ... ... ... ... ...
beta[96] 0.173 0.774 -1.322 1.427 0.136 0.097 38.0 2541.0 1.07
beta[97] -0.139 0.721 -1.532 1.351 0.011 0.014 3961.0 2368.0 1.11
beta[98] 0.244 0.708 -1.038 1.689 0.028 0.020 570.0 2040.0 1.01
beta[99] -0.277 0.750 -1.850 1.061 0.020 0.016 1298.0 1953.0 1.01
sigma 0.825 0.522 0.171 1.790 0.129 0.093 10.0 8.0 1.30
[101 rows x 9 columns]
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
beta[10] 8.306 1.184 6.066 10.577 0.025 0.018 2301.0 2176.0 1.04
beta[20] -8.298 1.253 -10.672 -5.908 0.029 0.021 1860.0 2316.0 1.01
beta[30] 8.673 0.979 6.683 10.340 0.050 0.035 551.0 2361.0 1.01
beta[40] -8.781 1.517 -11.710 -5.931 0.031 0.022 2408.0 2496.0 1.12
beta[50] 9.076 0.981 7.097 10.773 0.041 0.029 1055.0 2812.0 1.01
beta[60] -9.313 1.133 -11.377 -7.166 0.133 0.098 69.0 2392.0 1.04
beta[70] 8.636 1.228 6.214 10.546 0.167 0.125 53.0 2107.0 1.05
beta[80] -9.984 0.888 -11.761 -8.366 0.017 0.012 2629.0 2425.0 1.05
Sta 663 - Spring 2023