1 Introduction to Bayesian Models in Pumas
This notebook will provide the basics of how to specify and fit Bayesian models in Pumas.

using Pumas
1.1 Bayesian Statistics
Bayesian statistics is a data analysis approach based on Bayes’ theorem where available knowledge about the parameters of a statistical model is updated with the information of observed data (Gelman et al., 2013; McElreath, 2020).
Previous knowledge is expressed as a prior distribution and combined with the observed data in the form of a likelihood function to generate a posterior distribution. The posterior can also be used to make predictions about future events.
1.2 Advantages of Bayesian analysis
The Bayesian workflow allows analysts to:
- Incorporate domain knowledge and insights from previous studies using prior distributions.
- Quantify the epistemic uncertainty in the model parameters’ values. Parameter values can be genuinely uncertain due to structural non-identifiability of the model, or due to practical non-identifiability caused by a lack of data.

This epistemic uncertainty can then be propagated forward via simulation to obtain a distribution of predictions. Decisions based on this distribution, e.g. dosing decisions, are more robust to changes in the parameter values. These are advantages for which the traditional frequentist workflow typically has no satisfactory answer: bootstrapping and asymptotic estimates of standard errors are somewhat ad hoc, assumption-laden ways to quantify uncertainty in parameter estimates, whereas Bayesian inference uses the established theory of probability to quantify that uncertainty more rigorously and with fewer assumptions about the model.
Note that, thanks to this flexibility, you can still reap the second benefit of Bayesian analysis even when little to no domain knowledge is imposed on the model. The Bayesian workflow doesn’t force you to incorporate domain knowledge, but it empowers you to do so if you want.
Let’s formalize some of the advantages.
First, Bayesian Statistics uses probabilistic statements in terms of:
- one or more parameters \(\theta\)
- unobserved data \(\tilde{y}\)
These statements are conditioned on the observed values of \(y\):
- \(P(\theta \mid y)\)
- \(P(\tilde{y} \mid y)\)
We also, implicitly, condition on the observed values of any covariates \(x\).
Generally, we are interested in:
- the expected response of a new subject to a drug, e.g. \(\operatorname{E}[\tilde{y} \mid y]\)
- the probability that the drug effect is greater than zero, e.g. \(P(\theta > 0 \mid y) \geq 0.95\)
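To make this concrete, here is a minimal sketch of how such statements are computed from posterior draws. The vector θ_draws below is a simulated stand-in for MCMC samples of a hypothetical drug-effect parameter, not output from any Pumas fit:

using Statistics

θ_draws = randn(4_000) .* 0.5 .+ 1.0   # stand-in for posterior draws of θ

mean(θ_draws)                # estimate of E[θ | y]
mean(θ_draws .> 0) ≥ 0.95    # the decision rule P(θ > 0 | y) ≥ 0.95

Once a model is fitted with MCMC, the same one-line summaries apply to the real posterior samples.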
1.3 How to specify Bayesian models in Pumas?
Let’s revisit the 1-compartment model we specified in the Introduction to Pumas tutorial:
@model begin
    @param begin
        tvcl ∈ RealDomain(lower = 0, init = 3.2)
        tvv ∈ RealDomain(lower = 0, init = 16.4)
        tvka ∈ RealDomain(lower = 0, init = 3.8)
        Ω ∈ PDiagDomain(init = [0.04, 0.04, 0.04])
        σ_p ∈ RealDomain(lower = 0.0001, init = 0.2)
    end
    @random begin
        η ~ MvNormal(Ω)
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(η[1])
        Vc = tvv * exp(η[2])
        Ka = tvka * exp(η[3])
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end
Specifying Bayesian models in Pumas is quite easy: you just need to specify priors for all parameters in the @param block.
Instead of having:
@param begin
    tvcl ∈ RealDomain(lower = 0, init = 3.2)
    ...
end
We would say that tvcl has a prior:
@param begin
    tvcl ~ LogNormal(log(1.5), 1)
    ...
end
where tvcl (the typical value of clearance) is the parameter we want the model to estimate, and LogNormal(log(1.5), 1) is the prior we are giving to tvcl.
You can use any prior you want. Here we are giving tvcl a LogNormal prior since its support is restricted to positive values, and it allows us to give more weight to certain values while its wide right tail still makes higher values possible.
There is no one prior that fits all cases, and it is generally good practice to follow the priors of a similar previous study when a good reference can be found.
However, if you have the task of choosing good prior distributions for an all-new model, it will generally be a multi-step process consisting of:
- Deciding the support of the prior. The support of the prior distribution must match the domain of the parameter. For example, different priors should be used for positive parameters than for parameters between 0 and 1.
- Deciding the center of the prior, e.g. mean, median or mode.
- Deciding the strength (aka informativeness) of the prior. This is often controlled by a standard deviation or scale parameter in the distribution constructor. A small standard deviation or scale parameter implies low uncertainty in the parameter value which leads to a stronger (aka more informative) prior. A large standard deviation or scale parameter implies high uncertainty in the parameter value which leads to a weak (aka less informative) prior. You should study each prior distribution you are considering before using it to make sure the strength of the prior reflects your confidence level in the parameter values.
- Deciding the shape of the probability density function (PDF) of the prior. Some distributions are left skewed, others are right skewed and some are symmetric. Some have heavier tails than others. You should make sure the shape of the PDF reflects your knowledge about the parameter value prior to observing the data.
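The strength of a prior in particular is worth checking numerically. Below is a sketch of how to interrogate candidate priors with quantiles before committing to one, using the Distributions.jl API that Pumas models build on; the two LogNormal priors are illustrative choices, not recommendations:

using Distributions

strong = LogNormal(log(1.5), 0.25)   # small scale → informative prior
weak = LogNormal(log(1.5), 1.0)      # large scale → weakly informative prior

quantile.(strong, [0.025, 0.975])    # ≈ [0.92, 2.45]: tight around 1.5
quantile.(weak, [0.025, 0.975])      # ≈ [0.21, 10.65]: very spread out

The implied 95% intervals make the strength of each prior explicit before any data are involved.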
1.3.0.1 Data preparation for modeling
We’ll use the PharmaDatasets.jl package to load the data:
using PharmaDatasets
pkpain_df = dataset("pk_painrelief")
first(pkpain_df, 5)
| Row | Subject | Time | Conc | PainRelief | PainScore | RemedStatus | Dose |
|---|---|---|---|---|---|---|---|
| | Int64 | Float64 | Float64 | Int64 | Int64 | Int64 | String7 |
| 1 | 1 | 0.0 | 0.0 | 0 | 3 | 1 | 20 mg |
| 2 | 1 | 0.5 | 1.15578 | 1 | 1 | 0 | 20 mg |
| 3 | 1 | 1.0 | 1.37211 | 1 | 0 | 0 | 20 mg |
| 4 | 1 | 1.5 | 1.30058 | 1 | 0 | 0 | 20 mg |
| 5 | 1 | 2.0 | 1.19195 | 1 | 1 | 0 | 20 mg |
Let’s filter out the placebo data as we don’t need that for the PK analysis:
using DataFramesMeta
pkpain_noplb_df = @rsubset pkpain_df :Dose != "Placebo";
If you want to learn more about data wrangling, don’t forget to check our Data Wrangling in Julia tutorials!
We also need to add the :amt column:
@rtransform! pkpain_noplb_df :amt = :Time == 0 ? parse(Int, chop(:Dose; tail = 3)) : missing;
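To see what this transformation does to a single dose string:

# chop removes the trailing " mg" (3 characters); parse converts the rest to an integer.
parse(Int, chop("20 mg"; tail = 3))   # 20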
PumasNDF requires the presence of :evid and :cmt columns in the dataset:
@rtransform! pkpain_noplb_df begin
    :evid = :Time == 0 ? 1 : 0
    :cmt = :Time == 0 ? 1 : 2
end;
Further, observations at the time of dosing, i.e. when evid = 1, have to be missing:
@rtransform! pkpain_noplb_df :Conc = :evid == 1 ? missing : :Conc;
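As a quick sanity check (an optional step, not part of the original workflow), we can verify that every dosing row now has a missing concentration:

# All rows with evid == 1 should have a missing :Conc after the transform above.
all(ismissing, pkpain_noplb_df[pkpain_noplb_df.evid .== 1, :Conc])   # true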
Here is the final result:
first(pkpain_noplb_df, 10)
| Row | Subject | Time | Conc | PainRelief | PainScore | RemedStatus | Dose | amt | evid | cmt |
|---|---|---|---|---|---|---|---|---|---|---|
| | Int64 | Float64 | Float64? | Int64 | Int64 | Int64 | String7 | Int64? | Int64 | Int64 |
| 1 | 1 | 0.0 | missing | 0 | 3 | 1 | 20 mg | 20 | 1 | 1 |
| 2 | 1 | 0.5 | 1.15578 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 3 | 1 | 1.0 | 1.37211 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 4 | 1 | 1.5 | 1.30058 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 5 | 1 | 2.0 | 1.19195 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 6 | 1 | 2.5 | 1.13602 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 7 | 1 | 3.0 | 0.873224 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 8 | 1 | 4.0 | 0.739963 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 9 | 1 | 5.0 | 0.600143 | 0 | 2 | 0 | 20 mg | missing | 0 | 2 |
| 10 | 1 | 6.0 | 0.425624 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
Finally, we’ll import the pkpain_noplb_df DataFrame into a Population with read_pumas:
pkpain_noplb = read_pumas(
    pkpain_noplb_df,
    id = :Subject,
    time = :Time,
    amt = :amt,
    observations = [:Conc],
    covariates = [:Dose],
    evid = :evid,
    cmt = :cmt,
)
Population
Subjects: 120
Covariates: Dose
Observations: Conc
1.3.1 Pumas Bayesian 1-compartment Model
Let’s then add the priors in the @param block. Since the original model in the Introduction to Pumas tutorial has Ω as a PDiagDomain, we’ll be using scalars and not a matrix for our between-subject variability parameters. Hence, we’ll define an ω² parameter for every diagonal element of the original Ω. We’ll cover covariance matrices in the Random Effects in Bayesian Models tutorial.
We will then fit the model using the fit function. Instead of an estimation method such as FOCE or NaivePooled, we will use BayesMCMC, which runs the No-U-Turn Sampler (NUTS), a very fast, state-of-the-art sampler based on Hamiltonian Monte Carlo (HMC).

If you want to learn more about Markov chain Monte Carlo (MCMC) methods, Hamiltonian Monte Carlo (HMC), and the No-U-Turn Sampler (NUTS), please check the Pumas Bayesian Workflow Documentation at docs.pumas.ai.
pk_1cmp = @model begin
    @param begin
        tvcl ~ LogNormal(log(3.2), 1)
        tvv ~ LogNormal(log(16.4), 1)
        tvka ~ LogNormal(log(3.8), 1)
        ω²cl ~ LogNormal(log(0.04), 0.25)
        ω²v ~ LogNormal(log(0.04), 0.25)
        ω²ka ~ LogNormal(log(0.04), 0.25)
        σ_p ~ LogNormal(log(0.2), 0.25)
    end
    @random begin
        ηcl ~ Normal(0, sqrt(ω²cl))
        ηv ~ Normal(0, sqrt(ω²v))
        ηka ~ Normal(0, sqrt(ω²ka))
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(ηcl)
        Vc = tvv * exp(ηv)
        Ka = tvka * exp(ηka)
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end
┌ Warning: Covariate Dose is not used in the model.
└ @ Pumas ~/run/_work/PumasTutorials.jl/PumasTutorials.jl/custom_julia_depot/packages/Pumas/aZRyj/src/dsl/model_macro.jl:2856
PumasModel
Parameters: tvcl, tvv, tvka, ω²cl, ω²v, ω²ka, σ_p
Random effects: ηcl, ηv, ηka
Covariates: Dose
Dynamical system variables: Depot, Central
Dynamical system type: Closed form
Derived: Conc
Observed: Conc
By default, BayesMCMC uses nsamples = 2_000 as the number of MCMC samples to generate (including the adaptation samples) and nadapts = 1_000 as the number of adaptation steps in the NUTS algorithm, which must be less than nsamples. Additionally, BayesMCMC uses 4 chains by default (nchains = 4). Multithreading is enabled by default (ensemblealg = EnsembleThreads()): if Julia runs multithreaded, sampling of the chains is performed in parallel (parallel_chains = true), and if the number of threads is at least twice the number of chains, the parallelization is extended to subjects (parallel_subjects = true).
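Written out as explicit keyword arguments, the defaults described above correspond to a constructor call like the following (a sketch; BayesMCMC() with no arguments behaves the same):

BayesMCMC(;
    nsamples = 2_000,                 # MCMC samples per chain, including adaptation
    nadapts = 1_000,                  # NUTS adaptation steps; must be < nsamples
    nchains = 4,                      # number of MCMC chains
    ensemblealg = EnsembleThreads(),  # multithreaded sampling
    parallel_chains = true,           # parallel chains (needs a multithreaded Julia)
    parallel_subjects = true,         # parallel subjects (needs ≥ 2 × nchains threads)
)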
Here we’ll use nsamples = 1_000 and nadapts = 500.
Similar to the original model fit in the Introduction to Pumas tutorial, we’ll fix tvka to 2, as we don’t have a lot of information before tmax.
pk_1cmp_fit = fit(
    pk_1cmp,
    pkpain_noplb,
    init_params(pk_1cmp),
    BayesMCMC(; nsamples = 1_000, nadapts = 500, constantcoef = (; tvka = 2)),
)
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
Chains MCMC chain (1000×6×4 Array{Float64, 3}):
Iterations = 1:1:1000
Number of chains = 4
Samples per chain = 1000
Wall duration = 510.91 seconds
Compute duration = 1929.91 seconds
parameters = tvcl, tvv, ω²cl, ω²v, ω²ka, σ_p
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
tvcl 3.1978 0.0780 0.0058 181.2387 447.7544 1.0145 ⋯
tvv 13.2583 0.2928 0.0113 577.5371 1037.1251 1.0056 ⋯
ω²cl 0.0733 0.0091 0.0005 366.8807 705.6506 1.0245 ⋯
ω²v 0.0460 0.0057 0.0002 863.1270 1538.8892 1.0034 ⋯
ω²ka 1.0956 0.1657 0.0048 1900.3357 1732.1927 1.0008 ⋯
σ_p 0.1051 0.0135 0.0005 1735.9736 1009.8418 1.0036 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
tvcl 3.0440 3.1435 3.1957 3.2524 3.3472
tvv 12.7509 13.0735 13.2593 13.4423 13.7860
ω²cl 0.0578 0.0678 0.0731 0.0786 0.0916
ω²v 0.0359 0.0422 0.0457 0.0494 0.0576
ω²ka 0.8425 0.9962 1.0914 1.1977 1.3989
σ_p 0.0995 0.1025 0.1041 0.1057 0.1098
Fitting a Bayesian model in Pumas returns a different kind of object than other Pumas fits. It can be:
- exported to a DataFrame containing all of the MCMC samples with DataFrame(Chains(fit_result))
- summarized to a DataFrame containing summary statistics of the MCMC samples with DataFrame(summarystats(fit_result))
- summarized to a DataFrame containing quantiles of the MCMC samples with DataFrame(quantile(fit_result))
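For example, the first option exports every posterior draw from our fit:

# One row per MCMC sample per chain, one column per parameter
# (plus iteration and chain indices).
df_draws = DataFrame(Chains(pk_1cmp_fit))
first(df_draws, 5)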
Here are our estimates, both as summary statistics with summarystats and as quantiles with quantile:
DataFrame(summarystats(pk_1cmp_fit))
| Row | parameters | mean | std | mcse | ess_bulk | ess_tail | rhat | ess_per_sec |
|---|---|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | tvcl | 3.19782 | 0.0780312 | 0.00579441 | 181.239 | 447.754 | 1.01454 | 0.0939105 |
| 2 | tvv | 13.2583 | 0.292843 | 0.0112928 | 577.537 | 1037.13 | 1.00555 | 0.299256 |
| 3 | ω²cl | 0.0733496 | 0.00908349 | 0.000477156 | 366.881 | 705.651 | 1.02449 | 0.190103 |
| 4 | ω²v | 0.0459634 | 0.00567866 | 0.000195322 | 863.127 | 1538.89 | 1.0034 | 0.447237 |
| 5 | ω²ka | 1.09558 | 0.165699 | 0.00478086 | 1900.34 | 1732.19 | 1.00077 | 0.984677 |
| 6 | σ_p | 0.105083 | 0.0135448 | 0.000531491 | 1735.97 | 1009.84 | 1.00359 | 0.899511 |
DataFrame(quantile(pk_1cmp_fit))
| Row | parameters | 2.5% | 25.0% | 50.0% | 75.0% | 97.5% |
|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | tvcl | 3.04402 | 3.14347 | 3.19573 | 3.25244 | 3.3472 |
| 2 | tvv | 12.7509 | 13.0735 | 13.2593 | 13.4423 | 13.786 |
| 3 | ω²cl | 0.0578156 | 0.0677732 | 0.0730968 | 0.078619 | 0.0916429 |
| 4 | ω²v | 0.0358798 | 0.0421644 | 0.0457145 | 0.0494455 | 0.057631 |
| 5 | ω²ka | 0.84251 | 0.996203 | 1.09142 | 1.19774 | 1.39894 |
| 6 | σ_p | 0.0994696 | 0.102471 | 0.104115 | 0.10573 | 0.109751 |
1.4 References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. Chapman and Hall/CRC.
McElreath, R. (2020). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. CRC Press.