Introduction to Bayesian Models in Pumas

Authors

Jose Storopoli

Mohamed Tarek

using Pumas

1 Introduction to Bayesian Models in Pumas

This notebook will provide the basics on how to specify and fit Bayesian models in Pumas.

1.1 Bayesian Statistics

Bayesian statistics is a data analysis approach based on Bayes’ theorem where available knowledge about the parameters of a statistical model is updated with the information of observed data (Gelman et al., 2013; McElreath, 2020).

Previous knowledge is expressed as a prior distribution and combined with the observed data in the form of a likelihood function to generate a posterior distribution. The posterior can also be used to make predictions about future events.
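
Formally, this update is Bayes' theorem: the posterior is proportional to the likelihood times the prior,

\[
P(\theta \mid y) = \frac{P(y \mid \theta) \, P(\theta)}{P(y)} \propto P(y \mid \theta) \, P(\theta)
\]

where \(\theta\) denotes the model parameters and \(y\) the observed data.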

1.2 Advantages of Bayesian analysis

The Bayesian workflow allows analysts to:

  1. Incorporate domain knowledge and insights from previous studies using prior distributions.
  2. Quantify the epistemic uncertainty in the model parameters’ values. Parameter values can be uncertain in reality due to model non-identifiability, or due to practical non-identifiability caused by a lack of data.

This epistemic uncertainty can be further propagated forward via simulation to get a distribution of predictions. When those predictions are used to make decisions, e.g. dosing decisions, the decisions become more robust to changes in the parameter values. These are advantages of Bayesian analysis for which the traditional frequentist workflow typically has no satisfactory answer: bootstrapping and asymptotic estimates of standard errors are somewhat ad hoc, assumption-laden methods for quantifying uncertainty in parameter estimates, whereas Bayesian inference uses the established theory of probability to quantify said uncertainty more rigorously and with fewer assumptions about the model.

It is important to note that, thanks to this flexibility, one can still reap the second benefit of Bayesian analysis even when little to no domain knowledge is imposed on the model. The Bayesian workflow doesn’t force you to incorporate your domain knowledge; it empowers you to do so if you want.

Let’s formalize some of the advantages.

First, Bayesian Statistics uses probabilistic statements in terms of:

  • one or more parameters \(\theta\)
  • unobserved data \(\tilde{y}\)

These statements are conditioned on the observed values of \(y\):

  • \(P(\theta \mid y)\)
  • \(P(\tilde{y} \mid y)\)

We also implicitly condition on the observed values of any covariates \(x\).

Generally, we are interested in:

  • the expected response of a new subject to a drug, e.g. \(\operatorname{E}[\tilde{y} \mid y]\)
  • the probability that the drug effect is greater than zero, e.g. \(P(\theta > 0 \mid y) \geq 0.95\)
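
The second quantity comes from the posterior predictive distribution, which averages the predictions for the unobserved data \(\tilde{y}\) over the posterior of \(\theta\):

\[
P(\tilde{y} \mid y) = \int P(\tilde{y} \mid \theta) \, P(\theta \mid y) \, d\theta
\]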

1.3 How to specify Bayesian models in Pumas?

Let’s revisit the 1-compartment model we specified in the Introduction to Pumas tutorial:

@model begin
    @param begin
        tvcl ∈ RealDomain(lower = 0, init = 3.2)
        tvv ∈ RealDomain(lower = 0, init = 16.4)
        tvka ∈ RealDomain(lower = 0, init = 3.8)
        Ω ∈ PDiagDomain(init = [0.04, 0.04, 0.04])
        σ_p ∈ RealDomain(lower = 0.0001, init = 0.2)
    end
    @random begin
        η ~ MvNormal(Ω)
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(η[1])
        Vc = tvv * exp(η[2])
        Ka = tvka * exp(η[3])
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end

Bayesian models in Pumas are quite easy: you just need to specify priors for all parameters in the @param block.

Instead of having:

@param begin
    tvcl ∈ RealDomain(lower = 0, init = 3.2)
    ...
end

We would say that tvcl has a prior:

@param begin
    tvcl ~ LogNormal(log(1.5), 1)
    ...
end

where tvcl (typical value for clearance) is the parameter that we want the model to estimate and LogNormal(log(1.5), 1) is the prior we are giving to tvcl.

Note

You can use any prior you want. Here we give tvcl a LogNormal prior since its support is restricted to positive values, it lets us put more weight on certain values, and its heavy right tail still leaves room for higher values.

There is no one prior that fits all cases, and it is generally good practice to follow the priors of a similar previous study when a good reference can be found.

However, if you have the task of choosing good prior distributions for an all-new model, it will generally be a multi-step process (see the code sketch after this list) consisting of:

  1. Deciding the support of the prior. The support of the prior distribution must match the domain of the parameter. For example, different priors can be used for positive parameters than for parameters between 0 and 1.
  2. Deciding the center of the prior, e.g. mean, median or mode.
  3. Deciding the strength (aka informativeness) of the prior. This is often controlled by a standard deviation or scale parameter in the distribution constructor. A small standard deviation or scale parameter implies low uncertainty in the parameter value which leads to a stronger (aka more informative) prior. A large standard deviation or scale parameter implies high uncertainty in the parameter value which leads to a weak (aka less informative) prior. You should study each prior distribution you are considering before using it to make sure the strength of the prior reflects your confidence level in the parameter values.
  4. Deciding the shape of the probability density function (PDF) of the prior. Some distributions are left skewed, others are right skewed and some are symmetric. Some have heavier tails than others. You should make sure the shape of the PDF reflects your knowledge about the parameter value prior to observing the data.
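
As a quick sanity check for steps 2–4, you can inspect the quantiles of a candidate prior before adopting it. Below is a minimal sketch using the LogNormal(log(1.5), 1) example from above (Distributions.jl is loaded explicitly in case it is not already in scope):

using Distributions

# candidate prior for tvcl from the example above
prior_tvcl = LogNormal(log(1.5), 1)

# median and central 95% prior interval: approximately 0.21, 1.5, and 10.6,
# i.e. a weakly informative prior centered at 1.5
quantile.(prior_tvcl, [0.025, 0.5, 0.975])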

1.3.0.1 Data preparation for modeling

We’ll use the PharmaDatasets.jl package to load the data:

using PharmaDatasets
pkpain_df = dataset("pk_painrelief")
first(pkpain_df, 5)
5×7 DataFrame
 Row │ Subject  Time     Conc     PainRelief  PainScore  RemedStatus  Dose
     │ Int64    Float64  Float64  Int64       Int64      Int64        String7
─────┼─────────────────────────────────────────────────────────────────────────
   1 │       1      0.0  0.0               0          3            1  20 mg
   2 │       1      0.5  1.15578           1          1            0  20 mg
   3 │       1      1.0  1.37211           1          0            0  20 mg
   4 │       1      1.5  1.30058           1          0            0  20 mg
   5 │       1      2.0  1.19195           1          1            0  20 mg

Let’s filter out the placebo data as we don’t need that for the PK analysis:

using DataFramesMeta
pkpain_noplb_df = @rsubset pkpain_df :Dose != "Placebo";
Tip

If you want to learn more about data wrangling, don’t forget to check our Data Wrangling in Julia tutorials!

Also, we need to add the :amt column:

@rtransform! pkpain_noplb_df :amt = :Time == 0 ? parse(Int, chop(:Dose; tail = 3)) : missing;
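
To see what this row-wise transformation does to a single dose string (a minimal check, assuming doses are formatted like "20 mg"):

dose_str = "20 mg"
chop(dose_str; tail = 3)               # drops the trailing " mg", leaving "20"
parse(Int, chop(dose_str; tail = 3))   # 20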

PumasNDF requires the presence of :evid and :cmt columns in the dataset:

@rtransform! pkpain_noplb_df begin
    :evid = :Time == 0 ? 1 : 0
    :cmt = :Time == 0 ? 1 : 2
end;

Further, observations at the time of dosing, i.e., when evid = 1, have to be missing:

@rtransform! pkpain_noplb_df :Conc = :evid == 1 ? missing : :Conc;

Here is the final result:

first(pkpain_noplb_df, 10)
10×10 DataFrame
 Row │ Subject  Time     Conc      PainRelief  PainScore  RemedStatus  Dose     amt      evid   cmt
     │ Int64    Float64  Float64?  Int64       Int64      Int64        String7  Int64?   Int64  Int64
─────┼────────────────────────────────────────────────────────────────────────────────────────────────
   1 │       1      0.0  missing            0          3            1  20 mg         20      1      1
   2 │       1      0.5  1.15578            1          1            0  20 mg    missing      0      2
   3 │       1      1.0  1.37211            1          0            0  20 mg    missing      0      2
   4 │       1      1.5  1.30058            1          0            0  20 mg    missing      0      2
   5 │       1      2.0  1.19195            1          1            0  20 mg    missing      0      2
   6 │       1      2.5  1.13602            1          1            0  20 mg    missing      0      2
   7 │       1      3.0  0.873224           1          0            0  20 mg    missing      0      2
   8 │       1      4.0  0.739963           1          1            0  20 mg    missing      0      2
   9 │       1      5.0  0.600143           0          2            0  20 mg    missing      0      2
  10 │       1      6.0  0.425624           1          1            0  20 mg    missing      0      2

Finally, we’ll import the pkpain_noplb_df DataFrame into a Population with read_pumas:

pkpain_noplb = read_pumas(
    pkpain_noplb_df,
    id = :Subject,
    time = :Time,
    amt = :amt,
    observations = [:Conc],
    covariates = [:Dose],
    evid = :evid,
    cmt = :cmt,
)
Population
  Subjects: 120
  Covariates: Dose
  Observations: Conc
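
A Population behaves like a vector of Subjects, so you can index into it to inspect an individual, e.g.:

pkpain_noplb[1]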

1.3.1 Pumas Bayesian 1-compartment Model

Let’s then add the priors in the @param block.

Note

Since the original model in the Introduction to Pumas tutorial has Ω as a PDiagDomain, we’ll be using scalars and not a matrix for our between-subject variability parameters. Hence, we’ll define ω² for every diagonal element of the original Ω.

We’ll cover covariance matrices in the Random Effects in Bayesian Models tutorial.

Now we fit our model using the fit function. Instead of an estimation method such as FOCE or NaivePooled, we will be using BayesMCMC, which uses the No-U-Turn Sampler (NUTS), a fast, state-of-the-art Hamiltonian Monte Carlo (HMC) based sampler:

Note

If you want to learn more about Markov chain Monte Carlo (MCMC) methods, Hamiltonian Monte Carlo (HMC), and the No-U-Turn Sampler (NUTS), please check the Pumas Bayesian Workflow Documentation at docs.pumas.ai.

pk_1cmp = @model begin
    @param begin
        tvcl ~ LogNormal(log(3.2), 1)
        tvv ~ LogNormal(log(16.4), 1)
        tvka ~ LogNormal(log(3.8), 1)
        ω²cl ~ LogNormal(log(0.04), 0.25)
        ω²v ~ LogNormal(log(0.04), 0.25)
        ω²ka ~ LogNormal(log(0.04), 0.25)
        σ_p ~ LogNormal(log(0.2), 0.25)
    end
    @random begin
        ηcl ~ Normal(0, sqrt(ω²cl))
        ηv ~ Normal(0, sqrt(ω²v))
        ηka ~ Normal(0, sqrt(ω²ka))
    end
    @covariates begin
        Dose
    end
    @pre begin
        CL = tvcl * exp(ηcl)
        Vc = tvv * exp(ηv)
        Ka = tvka * exp(ηka)
    end
    @dynamics Depots1Central1
    @derived begin
        cp := @. Central / Vc
        Conc ~ @. Normal(cp, abs(cp) * σ_p)
    end
end
PumasModel
  Parameters: tvcl, tvv, tvka, ω²cl, ω²v, ω²ka, σ_p
  Random effects: ηcl, ηv, ηka
  Covariates: Dose
  Dynamical variables: Depot, Central
  Derived: Conc
  Observed: Conc
Tip

By default, BayesMCMC uses nsamples = 10_000 as the number of MCMC samples to generate (including the adaptation samples) and nadapts = 2_000 as the number of adaptation steps in the NUTS algorithm, which must be less than nsamples. Additionally, BayesMCMC by default runs 4 parallel chains with nchains = 4 and parallel_chains = true. Parallelism is also extended to subjects with parallel_subjects = true by default.
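
Spelled out explicitly, these defaults correspond to the following constructor call (a sketch based only on the keyword arguments named above):

BayesMCMC(;
    nsamples = 10_000,        # MCMC samples per chain, including adaptation
    nadapts = 2_000,          # NUTS adaptation steps; must be less than nsamples
    nchains = 4,              # number of MCMC chains
    parallel_chains = true,   # run the chains in parallel
    parallel_subjects = true, # parallelize across subjects
)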

Here we’ll use nsamples = 2_000 and nadapts = 1_000.

Caution

Similar to the original model fit in the Introduction to Pumas tutorial, we’ll fix tvka to 2 as we don’t have a lot of information before tmax.

pk_1cmp_fit = fit(
    pk_1cmp,
    pkpain_noplb,
    init_params(pk_1cmp),
    BayesMCMC(; nsamples = 2_000, nadapts = 1_000);
    constantcoef = (; tvka = 2),
)
[ Info: Checking the initial parameter values.
[ Info: The initial log probability and its gradient are finite. Check passed.
Chains MCMC chain (2000×6×4 Array{Float64, 3}):

Iterations        = 1:1:2000
Number of chains  = 4
Samples per chain = 2000
Wall duration     = 622.42 seconds
Compute duration  = 622.33 seconds
parameters        = tvcl, tvv, ω²cl, ω²v, ω²ka, σ_p

Summary Statistics
  parameters      mean       std      mcse    ess_bulk    ess_tail      rhat   ⋯
      Symbol   Float64   Float64   Float64     Float64     Float64   Float64   ⋯
        tvcl    3.1969    0.0844    0.0053    260.5018    405.0524    1.0170   ⋯
         tvv   13.2735    0.2745    0.0098    776.1424   1702.8245    1.0044   ⋯
        ω²cl    0.0738    0.0084    0.0003    784.8098   1817.2489    1.0036   ⋯
         ω²v    0.0460    0.0056    0.0001   1444.4659   2806.2990    1.0014   ⋯
        ω²ka    1.0949    0.1522    0.0026   4520.7441   4521.7609    1.0005   ⋯
         σ_p    0.1045    0.0099    0.0002   6141.0857   4069.0225    1.0006   ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

        tvcl    3.0389    3.1397    3.1946    3.2509    3.3688
         tvv   12.7459   13.0845   13.2814   13.4592   13.8038
        ω²cl    0.0592    0.0680    0.0733    0.0790    0.0913
         ω²v    0.0363    0.0421    0.0456    0.0493    0.0579
        ω²ka    0.8402    0.9977    1.0889    1.1880    1.4041
         σ_p    0.0995    0.1024    0.1040    0.1056    0.1090
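
All rhat values are close to 1 and the bulk and tail effective sample sizes are in the hundreds to thousands, which suggests that the four chains mixed well.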
Note

Fitting a Bayesian model in Pumas returns a different object than other Pumas model fits. It can be:

  • exported to a DataFrame containing all of the MCMC samples with DataFrame(Chains(fit_result))
  • summarized to a DataFrame containing summary statistics of the MCMC samples with DataFrame(summarystats(fit_result))
  • summarized to a DataFrame containing quantiles of the MCMC samples with DataFrame(quantile(fit_result))
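
For example, to get the raw posterior draws as a table with one row per MCMC sample, you can use the first option above:

DataFrame(Chains(pk_1cmp_fit))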

Here are our estimates both as summary statistics with summarystats and quantiles with quantile:

DataFrame(summarystats(pk_1cmp_fit))
6×8 DataFrame
 Row │ parameters  mean       std         mcse         ess_bulk  ess_tail  rhat     ess_per_sec
     │ Symbol      Float64    Float64     Float64      Float64   Float64   Float64  Float64
─────┼──────────────────────────────────────────────────────────────────────────────────────────
   1 │ tvcl        3.19689    0.0844071   0.00527122    260.502   405.052  1.01699     0.418594
   2 │ tvv        13.2735     0.274455    0.00980839    776.142  1702.82   1.00444     1.24716
   3 │ ω²cl        0.0737777  0.00839555  0.000296916   784.81   1817.25   1.00359     1.26109
   4 │ ω²v         0.0459515  0.0055725   0.000145784  1444.47   2806.3    1.0014      2.32108
   5 │ ω²ka        1.09489    0.152242    0.00261075   4520.74   4521.76   1.00049     7.26427
   6 │ σ_p         0.10449    0.00994056  0.000240379  6141.09   4069.02   1.0006      9.86796
DataFrame(quantile(pk_1cmp_fit))
6×6 DataFrame
 Row │ parameters  2.5%       25.0%      50.0%      75.0%      97.5%
     │ Symbol      Float64    Float64    Float64    Float64    Float64
─────┼───────────────────────────────────────────────────────────────────
   1 │ tvcl        3.03886    3.13967    3.1946     3.25091    3.36883
   2 │ tvv        12.7459    13.0845    13.2814    13.4592    13.8038
   3 │ ω²cl        0.0592421  0.0680212  0.0733442  0.0789529  0.0913143
   4 │ ω²v         0.0362687  0.0420672  0.045589   0.0493273  0.0579137
   5 │ ω²ka        0.840245   0.997697   1.08885    1.18801    1.40414
   6 │ σ_p         0.0995248  0.102417   0.104034   0.105619   0.108986

1.4 References

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis. Chapman and Hall/CRC.

McElreath, R. (2020). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. CRC Press.