Introduction to Bayesian Models in Pumas
1 Introduction to Bayesian Models in Pumas
This notebook will provide the basics of how to specify and fit Bayesian models in Pumas.
using Pumas
1.1 Bayesian Statistics
Bayesian statistics is a data analysis approach based on Bayes’ theorem where available knowledge about the parameters of a statistical model is updated with the information of observed data (Gelman et al., 2013; McElreath, 2020).
Previous knowledge is expressed as a prior distribution and combined with the observed data in the form of a likelihood function to generate a posterior distribution. The posterior can also be used to make predictions about future events.
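In symbols, with model parameters \(\theta\), observed data \(y\), prior \(P(\theta)\), and likelihood \(P(y \mid \theta)\), Bayes’ theorem gives the posterior as:
\[
P(\theta \mid y) = \frac{P(y \mid \theta) \, P(\theta)}{P(y)}
\]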
1.2 Advantages of Bayesian analysis
The Bayesian workflow allows analysts to:
- Incorporate domain knowledge and insights from previous studies using prior distributions.
- Quantify the epistemic uncertainty in the model parameters’ values. Parameter values can be uncertain in reality due to model non-identifiability or practical non-identifiability issues because of lack of data.
This epistemic uncertainty can be propagated forward via simulation to obtain a distribution of predictions which, when used to make decisions (e.g. dosing decisions), makes those decisions more robust to changes in the parameter values. These are advantages of Bayesian analysis for which the traditional frequentist workflow typically doesn’t have a satisfactory answer. Bootstrapping and asymptotic estimates of standard errors can be considered somewhat ad-hoc and assumption-laden methods to quantify uncertainty in the parameter estimates, whereas Bayesian inference uses the established theory of probability to quantify said uncertainty more rigorously and with fewer assumptions about the model.
It is important to note that, thanks to this flexibility, one can still reap the second benefit of Bayesian analysis even when little to no domain knowledge is imposed on the model. The Bayesian workflow doesn’t force you to incorporate your domain knowledge, but it empowers you to do so if you want.
Let’s formalize some of the advantages.
First, Bayesian Statistics uses probabilistic statements in terms of:
- one or more parameters \(\theta\)
- unobserved data \(\tilde{y}\)
These statements are conditioned on the observed values of \(y\):
- \(P(\theta \mid y)\)
- \(P(\tilde{y} \mid y)\)
We also, implicitly, condition on the observed data from any covariate \(x\).
Generally, we are interested in:
- the expected response of a new subject to a drug, e.g. \(\operatorname{E}[\tilde{y} \mid y]\)
- the probability that the drug effect is higher than zero, e.g. \(P(\theta > 0 \mid y) \geq 0.95\)
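As a minimal sketch of how such a probabilistic statement is evaluated in practice: once MCMC gives us posterior draws of \(\theta\), the probability \(P(\theta > 0 \mid y)\) is simply the fraction of draws above zero. The draws below are made-up placeholders, not from a real fit:
using Statistics

# Placeholder posterior draws of a drug-effect parameter θ
# (a real analysis would take these from the MCMC fit).
θ_draws = randn(4_000) .* 0.2 .+ 0.3

# Monte Carlo estimate of P(θ > 0 | y): the fraction of draws above zero.
prob_positive = mean(θ_draws .> 0)

# Decision rule from the text: is the posterior probability at least 0.95?
prob_positive >= 0.95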
1.3 How to specify Bayesian models in Pumas?
Let’s revisit the 1-compartment model we specified in the Introduction to Pumas tutorial:
@model begin
    @param begin
        tvcl ∈ RealDomain(lower = 0, init = 3.2)
        tvv ∈ RealDomain(lower = 0, init = 16.4)
        tvka ∈ RealDomain(lower = 0, init = 3.8)
        Ω ∈ PDiagDomain(init = [0.04, 0.04, 0.04])
        σ_p ∈ RealDomain(lower = 0.0001, init = 0.2)
    end
    @random begin
        η ~ MvNormal(Ω)
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(η[1])
        Vc = tvv * exp(η[2])
        Ka = tvka * exp(η[3])
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end
Bayesian models in Pumas are quite easy: you just need to specify priors for all parameters in the @param
block.
Instead of having:
@param begin
    tvcl ∈ RealDomain(lower = 0, init = 3.2)
    ...
end
We would say that tvcl has a prior:
@param begin
    tvcl ~ LogNormal(log(1.5), 1)
    ...
end
where tvcl (typical value for clearance) is the parameter that we want the model to estimate, and LogNormal(log(1.5), 1) is the prior we are giving to tvcl.
You can use any prior you want. Here we are giving tvcl a LogNormal prior since its support covers only positive values, and it lets us put more weight on plausible values while its heavy right tail still allows for larger ones.
No single prior fits all cases; generally, it is good practice to follow the priors of a previous, similar study when a good reference can be found.
However, if you have the task of choosing good prior distributions for an all-new model, it will generally be a multi-step process consisting of:
- Deciding the support of the prior. The support of the prior distribution must match the domain of the parameter. For example, different priors should be used for positive parameters than for parameters between 0 and 1.
- Deciding the center of the prior, e.g. mean, median or mode.
- Deciding the strength (aka informativeness) of the prior. This is often controlled by a standard deviation or scale parameter in the distribution constructor. A small standard deviation or scale parameter implies low uncertainty in the parameter value which leads to a stronger (aka more informative) prior. A large standard deviation or scale parameter implies high uncertainty in the parameter value which leads to a weak (aka less informative) prior. You should study each prior distribution you are considering before using it to make sure the strength of the prior reflects your confidence level in the parameter values.
- Deciding the shape of the probability density function (PDF) of the prior. Some distributions are left skewed, others are right skewed and some are symmetric. Some have heavier tails than others. You should make sure the shape of the PDF reflects your knowledge about the parameter value prior to observing the data.
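For example, to help with the strength decision, it can be useful to look at a candidate prior’s quantiles before committing to it. Here is a small sketch using Distributions.jl (the same distribution types used inside @param), with the LogNormal prior for tvcl discussed above:
using Distributions

# Candidate prior for tvcl from the text.
prior_tvcl = LogNormal(log(1.5), 1)

# Central 95% prior interval and median: a wide interval implies a weak
# (less informative) prior, a narrow one implies a strong (more informative) prior.
quantile(prior_tvcl, 0.025), median(prior_tvcl), quantile(prior_tvcl, 0.975)

# A smaller scale parameter tightens the interval, i.e. strengthens the prior.
quantile(LogNormal(log(1.5), 0.25), 0.025), quantile(LogNormal(log(1.5), 0.25), 0.975)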
1.3.0.1 Data preparation for modeling
We’ll use the PharmaDatasets.jl
package to load the data:
using PharmaDatasets
pkpain_df = dataset("pk_painrelief")
first(pkpain_df, 5)
| Row | Subject | Time | Conc | PainRelief | PainScore | RemedStatus | Dose |
|---|---|---|---|---|---|---|---|
| | Int64 | Float64 | Float64 | Int64 | Int64 | Int64 | String7 |
| 1 | 1 | 0.0 | 0.0 | 0 | 3 | 1 | 20 mg |
| 2 | 1 | 0.5 | 1.15578 | 1 | 1 | 0 | 20 mg |
| 3 | 1 | 1.0 | 1.37211 | 1 | 0 | 0 | 20 mg |
| 4 | 1 | 1.5 | 1.30058 | 1 | 0 | 0 | 20 mg |
| 5 | 1 | 2.0 | 1.19195 | 1 | 1 | 0 | 20 mg |
Let’s filter out the placebo data as we don’t need that for the PK analysis:
using DataFramesMeta
pkpain_noplb_df = @rsubset pkpain_df :Dose != "Placebo";
If you want to learn more about data wrangling, don’t forget to check our Data Wrangling in Julia tutorials!
Also we need to add the :amt
column:
@rtransform! pkpain_noplb_df :amt = :Time == 0 ? parse(Int, chop(:Dose; tail = 3)) : missing;
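To see what this transformation does to a single value: chop with tail = 3 drops the trailing " mg", and parse converts the remaining string to an integer:
dose_string = "20 mg"
chop(dose_string; tail = 3)              # "20"
parse(Int, chop(dose_string; tail = 3))  # 20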
PumasNDF requires the presence of :evid
and :cmt
columns in the dataset:
@rtransform! pkpain_noplb_df begin
:evid = :Time == 0 ? 1 : 0
:cmt = :Time == 0 ? 1 : 2
end;
Further, observations at the time of dosing, i.e., when evid = 1, have to be missing:
@rtransform! pkpain_noplb_df :Conc = :evid == 1 ? missing : :Conc;
Here is the final result:
first(pkpain_noplb_df, 10)
| Row | Subject | Time | Conc | PainRelief | PainScore | RemedStatus | Dose | amt | evid | cmt |
|---|---|---|---|---|---|---|---|---|---|---|
| | Int64 | Float64 | Float64? | Int64 | Int64 | Int64 | String7 | Int64? | Int64 | Int64 |
| 1 | 1 | 0.0 | missing | 0 | 3 | 1 | 20 mg | 20 | 1 | 1 |
| 2 | 1 | 0.5 | 1.15578 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 3 | 1 | 1.0 | 1.37211 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 4 | 1 | 1.5 | 1.30058 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 5 | 1 | 2.0 | 1.19195 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 6 | 1 | 2.5 | 1.13602 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 7 | 1 | 3.0 | 0.873224 | 1 | 0 | 0 | 20 mg | missing | 0 | 2 |
| 8 | 1 | 4.0 | 0.739963 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
| 9 | 1 | 5.0 | 0.600143 | 0 | 2 | 0 | 20 mg | missing | 0 | 2 |
| 10 | 1 | 6.0 | 0.425624 | 1 | 1 | 0 | 20 mg | missing | 0 | 2 |
Finally, we’ll import the pkpain_noplb_df DataFrame into a Population with read_pumas:
pkpain_noplb = read_pumas(
    pkpain_noplb_df,
    id = :Subject,
    time = :Time,
    amt = :amt,
    observations = [:Conc],
    covariates = [:Dose],
    evid = :evid,
    cmt = :cmt,
)
Population
Subjects: 120
Covariates: Dose
Observations: Conc
1.3.1 Pumas Bayesian 1-compartment Model
Let’s then add the priors in the @param
block.
Since the original model in the Introduction to Pumas tutorial has Ω as a PDiagDomain, we’ll be using scalars and not a matrix for our between-subject variability parameters. Hence, we’ll define an ω² for every diagonal element of the original Ω.
We’ll cover covariance matrices in the Random Effects in Bayesian Models tutorial.
Now we fit our model by using the fit function. Instead of an estimation method such as FOCE or NaivePooled, we will be using BayesMCMC, which uses the No-U-Turn Sampler (NUTS), a very fast and state-of-the-art Hamiltonian Monte Carlo (HMC) based sampler.
If you want to learn more about Markov chain Monte Carlo (MCMC) methods, Hamiltonian Monte Carlo (HMC), and the No-U-Turn Sampler (NUTS), please check the Pumas Bayesian Workflow documentation at docs.pumas.ai.
pk_1cmp = @model begin
    @param begin
        tvcl ~ LogNormal(log(3.2), 1)
        tvv ~ LogNormal(log(16.4), 1)
        tvka ~ LogNormal(log(3.8), 1)
        ω²cl ~ LogNormal(log(0.04), 0.25)
        ω²v ~ LogNormal(log(0.04), 0.25)
        ω²ka ~ LogNormal(log(0.04), 0.25)
        σ_p ~ LogNormal(log(0.2), 0.25)
    end
    @random begin
        ηcl ~ Normal(0, sqrt(ω²cl))
        ηv ~ Normal(0, sqrt(ω²v))
        ηka ~ Normal(0, sqrt(ω²ka))
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(ηcl)
        Vc = tvv * exp(ηv)
        Ka = tvka * exp(ηka)
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end
PumasModel
Parameters: tvcl, tvv, tvka, ω²cl, ω²v, ω²ka, σ_p
Random effects: ηcl, ηv, ηka
Covariates: Dose
Dynamical system variables: Depot, Central
Dynamical system type: Closed form
Derived: Conc
Observed: Conc
By default, BayesMCMC uses nsamples = 10_000 as the number of MCMC samples to generate (including the adaptation samples) and nadapts = 2_000 as the number of adaptation steps in the NUTS algorithm, which must be less than nsamples. Additionally, BayesMCMC by default runs 4 parallel chains with nchains = 4 and parallel_chains = true. Parallelism is also extended to subjects with parallel_subjects = true by default.
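As a sketch, these defaults correspond to constructing the sampler explicitly with the keyword arguments mentioned above (shown here only to illustrate the option names; below we override nsamples and nadapts):
default_alg = BayesMCMC(;
    nsamples = 10_000,         # MCMC samples per chain, including adaptation
    nadapts = 2_000,           # NUTS adaptation steps, must be < nsamples
    nchains = 4,               # number of chains
    parallel_chains = true,    # run the chains in parallel
    parallel_subjects = true,  # parallelize across subjects
)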
Here we’ll use nsamples = 2_000 and nadapts = 1_000.
Similar to the original model fit in the Introduction to Pumas tutorial, we’ll fix tvka to 2 as we don’t have a lot of information before tmax.
pk_1cmp_fit = fit(
    pk_1cmp,
    pkpain_noplb,
    init_params(pk_1cmp),
    BayesMCMC(; nsamples = 2_000, nadapts = 1_000, constantcoef = (; tvka = 2)),
)
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
Chains MCMC chain (2000×6×4 Array{Float64, 3}):
Iterations = 1:1:2000
Number of chains = 4
Samples per chain = 2000
Wall duration = 873.76 seconds
Compute duration = 872.91 seconds
parameters = tvcl, tvv, ω²cl, ω²v, ω²ka, σ_p
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float64 ⋯
tvcl 3.2016 0.0811 0.0039 394.6287 910.4521 1.0125 ⋯
tvv 13.2539 0.3017 0.0092 961.9019 1676.8707 1.0028 ⋯
ω²cl 0.0732 0.0086 0.0003 1097.6390 1906.8388 1.0022 ⋯
ω²v 0.0462 0.0056 0.0001 1839.3773 3237.6978 1.0032 ⋯
ω²ka 1.0990 0.1632 0.0033 4290.4760 3087.3921 1.0008 ⋯
σ_p 0.1048 0.0180 0.0004 4184.2730 2743.7841 1.0020 ⋯
1 column omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
tvcl 3.0505 3.1495 3.2005 3.2523 3.3544
tvv 12.7104 13.0642 13.2503 13.4382 13.8342
ω²cl 0.0584 0.0675 0.0728 0.0786 0.0910
ω²v 0.0360 0.0423 0.0459 0.0498 0.0580
ω²ka 0.8410 0.9983 1.0945 1.1964 1.4122
σ_p 0.0994 0.1024 0.1040 0.1056 0.1091
Fitting a Bayesian model in Pumas returns a different kind of object than other Pumas model fits, which can be:
- exported to a DataFrame containing all of the MCMC samples with DataFrame(Chains(fit_result))
- summarized to a DataFrame containing summary statistics of the MCMC samples with DataFrame(summarystats(fit_result))
- summarized to a DataFrame containing quantiles of the MCMC samples with DataFrame(quantile(fit_result))
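For example, exporting every MCMC sample of our fit to a table looks like this (the exact columns depend on the fit, but typically there is one column per parameter plus bookkeeping columns such as the iteration and chain):
post_draws = DataFrame(Chains(pk_1cmp_fit))
first(post_draws, 5)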
Here are our estimates, both as summary statistics with summarystats and as quantiles with quantile:
DataFrame(summarystats(pk_1cmp_fit))
| Row | parameters | mean | std | mcse | ess_bulk | ess_tail | rhat | ess_per_sec |
|---|---|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | tvcl | 3.20158 | 0.0811209 | 0.00391905 | 394.629 | 910.452 | 1.01245 | 0.452086 |
| 2 | tvv | 13.2539 | 0.301661 | 0.00918247 | 961.902 | 1676.87 | 1.00277 | 1.10195 |
| 3 | ω²cl | 0.0731858 | 0.00861853 | 0.000261842 | 1097.64 | 1906.84 | 1.00219 | 1.25745 |
| 4 | ω²v | 0.0461996 | 0.00560941 | 0.000131005 | 1839.38 | 3237.7 | 1.00319 | 2.10719 |
| 5 | ω²ka | 1.09902 | 0.163238 | 0.0033104 | 4290.48 | 3087.39 | 1.00084 | 4.91516 |
| 6 | σ_p | 0.104771 | 0.0180441 | 0.000402647 | 4184.27 | 2743.78 | 1.00196 | 4.7935 |
DataFrame(quantile(pk_1cmp_fit))
| Row | parameters | 2.5% | 25.0% | 50.0% | 75.0% | 97.5% |
|---|---|---|---|---|---|---|
| | Symbol | Float64 | Float64 | Float64 | Float64 | Float64 |
| 1 | tvcl | 3.05046 | 3.14948 | 3.20052 | 3.25227 | 3.35441 |
| 2 | tvv | 12.7104 | 13.0642 | 13.2503 | 13.4382 | 13.8342 |
| 3 | ω²cl | 0.0584284 | 0.067459 | 0.0728335 | 0.0785798 | 0.0909928 |
| 4 | ω²v | 0.036031 | 0.0422773 | 0.0459159 | 0.0497899 | 0.0580445 |
| 5 | ω²ka | 0.840969 | 0.998313 | 1.0945 | 1.19643 | 1.41225 |
| 6 | σ_p | 0.0994474 | 0.102422 | 0.103974 | 0.1056 | 0.109093 |
1.4 References
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. Chapman and Hall/CRC.
McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan. CRC Press.