using Pumas
using PharmaDatasets
using DataFramesMeta
using CairoMakie
using AlgebraOfGraphics
using PumasUtilities
using Markdown
Fitting and Inferring a Compartmental PK Model
1 Introduction
A typical workflow for fitting a Pumas model and deriving parameter precision involves:
- Preparing the data and the model.
- Checking model-data compatibility.
- Obtaining initial parameter estimates.
- Fitting the model via a chosen estimation method.
- Interpreting the fit results.
- Computing parameter precision.
- (Optionally) proceeding with more advanced techniques like bootstrapping or SIR for robust uncertainty quantification.
The following sections will walk through these steps using a one-compartment PK model for Warfarin, focusing on the PK aspects only. Exploratory data analysis (EDA), although extremely important, is out of scope here. Readers interested in EDA are encouraged to consult other tutorials.
2 Model and Data
2.1 Model Definition
Below is the PK model, named warfarin_pk_model, defined in Pumas. This model contains:
- Fixed effects (population parameters): pop_CL, pop_Vc, pop_tabs, pop_lag
- Inter-individual variability (IIV) components: pk_Ω
- Residual error model parameters: σ_prop, σ_add
- Covariates for scaling: FSZCL and FSZV
- Differential equations describing the PK behavior in the compartments.
warfarin_pk_model = @model begin
    @metadata begin
        desc = "Warfarin 1-compartment PK model (PD removed)"
        timeu = u"hr"
    end
    @param begin
        # PK parameters
        """
        Clearance (L/hr)
        """
        pop_CL ∈ RealDomain(lower = 0.0, init = 0.134)
        """
        Central volume (L)
        """
        pop_Vc ∈ RealDomain(lower = 0.0, init = 8.11)
        """
        Absorption lag time (hr)
        """
        pop_tabs ∈ RealDomain(lower = 0.0, init = 0.523)
        """
        Lag time (hr)
        """
        pop_lag ∈ RealDomain(lower = 0.0, init = 0.1)
        # Inter-individual variability
        """
        - ΩCL: Clearance
        - ΩVc: Central volume
        - Ωtabs: Absorption lag time
        """
        pk_Ω ∈ PDiagDomain([0.01, 0.01, 0.01])
        # Residual variability
        """
        σ_prop: Proportional error
        """
        σ_prop ∈ RealDomain(lower = 0.0, init = 0.00752)
        """
        σ_add: Additive error
        """
        σ_add ∈ RealDomain(lower = 0.0, init = 0.0661)
    end
    @random begin
        pk_η ~ MvNormal(pk_Ω) # mean = 0, covariance = pk_Ω
    end
    @covariates begin
        """
        FSZCL: Clearance scaling factor
        """
        FSZCL
        """
        FSZV: Volume scaling factor
        """
        FSZV
    end
    @pre begin
        CL = FSZCL * pop_CL * exp(pk_η[1])
        Vc = FSZV * pop_Vc * exp(pk_η[2])

        tabs = pop_tabs * exp(pk_η[3])
        Ka = log(2) / tabs
    end
    @dosecontrol begin
        lags = (Depot = pop_lag,)
    end
    @vars begin
        cp := Central / Vc
    end
    @dynamics begin
        Depot' = -Ka * Depot
        Central' = Ka * Depot - (CL / Vc) * Central
    end
    @derived begin
        """
        Concentration (ng/mL)
        """
        conc ~ @. Normal(cp, sqrt((σ_prop * cp)^2 + σ_add^2))
    end
end
PumasModel
Parameters: pop_CL, pop_Vc, pop_tabs, pop_lag, pk_Ω, σ_prop, σ_add
Random effects: pk_η
Covariates: FSZCL, FSZV
Dynamical system variables: Depot, Central
Dynamical system type: Matrix exponential
Derived: conc
Observed: conc
2.2 Data Preparation
The Warfarin data used in this tutorial is pulled from PharmaDatasets
for demonstration purposes. Note how the code reshapes and prepares the data in “wide” format for reading into Pumas. Only the conc
column is treated as observations for the PK model.
warfarin_data = dataset("paganz2024/warfarin_long")
# Step 2: Fix Duplicate Time Points
# -------------------------------
# Some subjects have duplicate time points for DVID = 1
# For this dataset, the triple (ID, TIME, DVID) should define
# a row uniquely, but the two row counts below show that it does not:
nrow(warfarin_data)
nrow(unique(warfarin_data, ["ID", "TIME", "DVID"]))
# We can identify the problematic rows by grouping on the index variables
@chain warfarin_data begin
@groupby :ID :TIME :DVID
@transform :tmp = length(:ID)
@rsubset :tmp > 1
end
# It is important to understand the reason for the duplicate values.
# Sometimes the duplication is caused by recording errors, sometimes
# it is a data processing error, e.g. when joining tables, or it can
# be genuine records, e.g. when samples have been analyzed in multiple
# labs. The next step depends on which of the causes are behind the
# duplications.
#
# In this case, we will assume that both values are informative and
# we will therefore just adjust the time stamp a bit for the second
# observation.
warfarin_data_processed = @chain warfarin_data begin
    @groupby :ID :TIME :DVID
    @transform :tmp = 1:length(:ID)
    @rtransform :TIME = :tmp == 1 ? :TIME : :TIME + 1e-6
    @select Not(:tmp)
end
# Transform the data in a single chain of operations
warfarin_data_wide = @chain warfarin_data_processed begin
    @rsubset !contains(:ID, "#")
    @rtransform begin
        # Scaling factors
        :FSZV = :WEIGHT / 70         # volume scaling
        :FSZCL = (:WEIGHT / 70)^0.75 # clearance scaling (allometric)
        # Column name for the DV
        :DVNAME = "DV$(:DVID)"
        # Dosing indicator columns
        :CMT = ismissing(:AMOUNT) ? missing : 1
        :EVID = ismissing(:AMOUNT) ? 0 : 1
    end
    unstack(Not([:DVID, :DVNAME, :DV]), :DVNAME, :DV)
    rename!(:DV1 => :conc, :DV2 => :pca)
end
Row | ID | TIME | WEIGHT | AGE | SEX | AMOUNT | FSZV | FSZCL | CMT | EVID | DV0 | pca | conc |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
String3 | Float64 | Float64 | Int64 | Int64 | Float64? | Float64 | Float64 | Int64? | Int64 | Float64? | Float64? | Float64? | |
1 | 1 | 0.0 | 66.7 | 50 | 1 | 100.0 | 0.952857 | 0.96443 | 1 | 1 | missing | missing | missing |
2 | 1 | 0.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 100.0 | missing |
3 | 1 | 24.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 49.0 | 9.2 |
4 | 1 | 36.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 32.0 | 8.5 |
5 | 1 | 48.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 26.0 | 6.4 |
6 | 1 | 72.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 22.0 | 4.8 |
7 | 1 | 96.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 28.0 | 3.1 |
8 | 1 | 120.0 | 66.7 | 50 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 33.0 | 2.5 |
9 | 2 | 0.0 | 66.7 | 31 | 1 | 100.0 | 0.952857 | 0.96443 | 1 | 1 | missing | missing | missing |
10 | 2 | 0.0 | 66.7 | 31 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | 100.0 | missing |
11 | 2 | 0.5 | 66.7 | 31 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | missing | 0.0 |
12 | 2 | 2.0 | 66.7 | 31 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | missing | 8.4 |
13 | 2 | 3.0 | 66.7 | 31 | 1 | missing | 0.952857 | 0.96443 | missing | 0 | missing | missing | 9.7 |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
306 | 31 | 48.0 | 83.3 | 24 | 1 | missing | 1.19 | 1.13936 | missing | 0 | missing | 24.0 | 6.4 |
307 | 31 | 72.0 | 83.3 | 24 | 1 | missing | 1.19 | 1.13936 | missing | 0 | missing | 22.0 | 4.5 |
308 | 31 | 96.0 | 83.3 | 24 | 1 | missing | 1.19 | 1.13936 | missing | 0 | missing | 28.0 | 3.4 |
309 | 31 | 120.0 | 83.3 | 24 | 1 | missing | 1.19 | 1.13936 | missing | 0 | missing | 42.0 | 2.5 |
310 | 32 | 0.0 | 62.0 | 21 | 1 | 93.0 | 0.885714 | 0.912999 | 1 | 1 | missing | missing | missing |
311 | 32 | 0.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 100.0 | missing |
312 | 32 | 24.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 36.0 | 8.9 |
313 | 32 | 36.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 27.0 | 7.7 |
314 | 32 | 48.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 24.0 | 6.9 |
315 | 32 | 72.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 23.0 | 4.4 |
316 | 32 | 96.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 20.0 | 3.5 |
317 | 32 | 120.0 | 62.0 | 21 | 1 | missing | 0.885714 | 0.912999 | missing | 0 | missing | 22.0 | 2.5 |
2.3 Creating a Pumas Population
Below is the creation of a population object in Pumas using read_pumas. Only the conc data are treated as the observation variable:
pop = read_pumas(
    warfarin_data_wide;
    id = :ID,
    time = :TIME,
    amt = :AMOUNT,
    cmt = :CMT,
    evid = :EVID,
    covariates = [:SEX, :WEIGHT, :FSZV, :FSZCL],
    observations = [:conc],
)
Population
Subjects: 31
Covariates: SEX, WEIGHT, FSZV, FSZCL
Observations: conc
The same data can contain multiple endpoints or PD observations. In this tutorial, the focus is solely on PK fitting. PKPD modeling on this warfarin dataset will be introduced later.
2.4 Checking Model-Data Compatibility
Before performing any fit, it is recommended to verify whether the defined model is consistent with the provided dataset. Pumas offers functions such as loglikelihood
and findinfluential
for these checks.
2.4.1 The loglikelihood Function
The loglikelihood
function computes the log-likelihood of the model given data and parameters. In Pumas, the signature typically looks like:
loglikelihood(model, population, params, approx)
where:
- model: The Pumas model definition (e.g., warfarin_pk_model).
- population: A Pumas population object (e.g., pop).
- params: A named tuple or dictionary containing parameter values.
- approx: The approximation method to use. Common options include FOCE(), FO(), LaplaceI(), etc.
For example, one might write:
# A named tuple of parameter values
param_vals = (
    pop_CL = 0.134,
    pop_Vc = 8.11,
    pop_tabs = 0.523,
    pop_lag = 0.1,
    pk_Ω = Diagonal([0.01, 0.01, 0.01]),
    σ_prop = 0.00752,
    σ_add = 0.0661,
)
ll_value = loglikelihood(warfarin_pk_model, pop, param_vals, FOCE())
-11626.613157863481
The initial loglikelihood of the warfarin PK model given the data and parameter values is `LL = -11626.613157863481`.
If the model and data are incompatible (e.g., missing doses for compartments, or out-of-range parameter values), loglikelihood
might return an error or produce a warning. A successful computation is a good sign that the model can handle the data.
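As a quick compatibility check, the same evaluation can be repeated with the different approximation methods listed above. The loop below is a minimal sketch (not part of the original workflow); it assumes the warfarin_pk_model, pop, and param_vals objects defined above:
for approx in (FO(), FOCE(), LaplaceI())
    # Evaluate the log-likelihood under each approximation; a finite value for each
    # is a good indication that the model and data are compatible.
    ll = loglikelihood(warfarin_pk_model, pop, param_vals, approx)
    println(nameof(typeof(approx)), ": ", ll)
end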
2.4.2 The findinfluential Function
The findinfluential
function helps identify subjects that disproportionately influence the fit. It can be used before or after fitting, but even with initial parameter guesses it can highlight potentially problematic data. The most notable case is when the loglikelihood cannot be evaluated; in that case the value returned for the problematic subject is not finite.
Typical usage is:
findinfluential(warfarin_pk_model, pop, params; approx)
This will list the individual loglikelihoods for each subject. Readers can then inspect these observations further, potentially re-checking the data quality or adjusting the model assumptions if needed for each subject.
fi = findinfluential(warfarin_pk_model, pop, param_vals, FOCE())
31-element Vector{@NamedTuple{id::String, nll::Float64}}:
(id = "8", nll = 2609.1127547470246)
(id = "14", nll = 2113.146717564112)
(id = "12", nll = 1928.5840732530462)
(id = "13", nll = 1627.4380743700433)
(id = "2", nll = 780.9109417396234)
(id = "7", nll = 512.7488860179473)
(id = "11", nll = 470.0423353858021)
(id = "6", nll = 311.9666456226985)
(id = "3", nll = 192.83907815907168)
(id = "22", nll = 187.1431308413218)
⋮
(id = "16", nll = 22.434033734725986)
(id = "5", nll = 18.475981181576064)
(id = "32", nll = 17.87557264312157)
(id = "17", nll = 17.341505598982042)
(id = "26", nll = 8.191625356662307)
(id = "24", nll = 6.4001549755272755)
(id = "31", nll = 6.363630970170725)
(id = "23", nll = 5.48597914877157)
(id = "29", nll = 4.343343958293616)
The default output is a vector of NamedTuples, which can easily be converted into a DataFrame for further analysis. One can then plot the distribution of the individual loglikelihoods to identify any potential problematic subjects.
fidf = DataFrame(fi)
hist(
    fidf.nll,
    axis = (xlabel = "Negative log-likelihood", ylabel = "Frequency"),
    color = (:blue, 0.5),
)
The plot above shows that some subjects have very high initial negative log-likelihoods, which might be worth investigating, as sketched below. Note that a high initial negative log-likelihood does not necessarily imply a model or data problem; it may simply be that, for the chosen initial population parameters, the model does not fit that subject's observations well.
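To follow up on the subjects with the poorest initial fits, one can cross-reference the findinfluential output with the prepared data. The snippet below is a sketch that reuses the fidf and warfarin_data_wide objects created above:
# Take the three subjects with the largest initial negative log-likelihoods
worst_ids = first(sort(fidf, :nll, rev = true), 3).id
# Inspect their rows in the wide data set
@rsubset(warfarin_data_wide, :ID in worst_ids)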
2.5 Getting Good Initial Estimates Using Naive Pooled
A reliable starting point for nonlinear mixed-effects modeling is to get reasonable initial parameter estimates. Pumas provides approaches such as:
- Naive Pooled: Treats all data as if there were no inter-individual variability.
- NCA (Non-compartmental Analysis): Uses PK metrics (CL, V, etc.) from a non-compartmental approach to seed the parameter values (not showcased in this tutorial).
Here, the Naive Pooled approach is demonstrated. This method estimates population parameters by ignoring inter-individual differences, effectively pooling data as if from a single “super-subject.”
Below is the signature and documentation for the Pumas fit
function. For more details, see the Fitting in Pumas Documentation.
fit(
model::PumasModel,
population::Population,
param::NamedTuple,
approx::Union{LikelihoodApproximation, MAP};
optim_alg = Optim.BFGS(linesearch = Optim.LineSearches.BackTracking(), initial_stepnorm = 1.0),
optim_options::NamedTuple = NamedTuple(),
optimize_fn = nothing,
constantcoef::Tuple = (),
ensemblealg::SciMLBase.EnsembleAlgorithm = EnsembleThreads(),
checkidentification = true,
diffeq_options = NamedTuple(),
verbose = true,
ignore_numerical_error = true,
)
Fit the Pumas model model to the dataset population with starting values param using the estimation method approx. Currently supported values for the `approx` argument are `FO`, `FOCE`,
`LaplaceI`, `NaivePooled`, and `BayesMCMC`. See the online documentation for more details about the different methods.
To control the optimization procedure for finding population parameters (fixed effects), use the `optim_alg` and `optim_options` keyword arguments. In previous versions of Pumas the argument
`optimize_fn` was used, but is now discouraged and will be removed in a later version of Pumas. These options control the optimization of all `approx` methods except BayesMCMC. The default
optimization function uses the quasi-Newton routine BFGS method from the `Optim` package. It can be changed by setting the `optim_alg` to an algorithm implemented in Optim.jl as long as it
does not use second order derivatives. Optimization-specific options can be passed in using the `optim_options` keyword and have to be given as a `NamedTuple` with names and values that match the
`Optim.Options` type. For example, the optimization trace can be disabled and the algorithm can be changed to L-BFGS by passing `optim_alg = Optim.LBFGS(), optim_options = (; show_trace = false)`
to fit. See [Optim](https://docs.pumas.ai/stable/basics/estimation/#Optimization-option) for more details.
It is possible to fix one or more parameters of the fit by passing a Tuple of Symbols as the `constantcoef` argument with elements corresponding to the names of the fixed parameters, e.g.
`constantcoef=(:σ,)`.
When models include an `@random` block and fitting with NaivePooled is requested, it is required that the user sets all variability parameters to zero with `constantcoef` such that these can
be ignored in the optimization, e.g. `constantcoef = (:Ω,)` while overwriting the corresponding values in `param` with `(; init_params..., Ω = zeros(3, 3))`.
Parallelization of the optimization is supported for most estimation methods via the ensemble interface of DifferentialEquations.jl. Currently, the only supported options are:
• `EnsembleThreads()`: the default. Accelerate fits by using multiple threads.
• `EnsembleSerial()`: fit using a single thread.
• `EnsembleDistributed()`: fit by using multiple worker processes.
The `fit` function will check the gradient and throw an exception if any of its elements are exactly zero, unless `checkidentification` is set to false.
Further keyword arguments can be passed via the `diffeq_options` argument. This allows for passing arguments to the differential equations solver such as `alg`, `abstol`, and `reltol`. The
default values for these are `AutoVern7(Rodas5P(autodiff=true)), 1e-12, and 1e-8` respectively. See the [documentation](https://docs.pumas.ai/stable/basics/simulation/#Options-and-settings-for-simobs) for more details.
The keyword `verbose` controls if info statements about initial evaluations of the loglikelihood function and gradient should be printed or not. Defaults to true.
Since the numerical optimization routine can sometimes visit extreme regions of the parameter space during its exploration, we automatically handle the situation where the input
parameters result in errors when solving the dynamical system. This allows the algorithm to recover and continue in many situations that would have otherwise stalled early. Sometimes it
is useful to turn this error handling off when debugging a model fit, and this can be done by setting `ignore_numerical_error = false`.
Below is how to run a Naive Pooled analysis:
naive_fit = fit(warfarin_pk_model, pop, param_vals, NaivePooled(); constantcoef = (:pk_Ω,))
[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.
Iter Function value Gradient norm
0 5.070332e+04 5.741309e+04
* time: 0.025236129760742188
1 1.990291e+04 3.525936e+04
* time: 0.961428165435791
2 3.850566e+03 7.927044e+03
* time: 0.9631240367889404
3 2.326447e+03 4.958575e+03
* time: 0.964752197265625
4 1.147033e+03 2.356630e+03
* time: 0.9663791656494141
5 7.221759e+02 1.181586e+03
* time: 0.9680171012878418
6 5.432686e+02 5.064601e+02
* time: 0.969628095626831
7 4.864460e+02 1.677683e+02
* time: 0.9712140560150146
8 4.724264e+02 7.050247e+01
* time: 0.9728631973266602
9 4.702526e+02 6.322024e+01
* time: 0.9744641780853271
10 4.699078e+02 6.864635e+01
* time: 0.9760980606079102
11 4.695689e+02 7.516235e+01
* time: 0.9777112007141113
12 4.688608e+02 7.294852e+01
* time: 0.9793810844421387
13 4.679633e+02 5.227783e+01
* time: 0.981032133102417
14 4.672220e+02 2.711848e+01
* time: 0.9826500415802002
15 4.669432e+02 2.373879e+01
* time: 0.9842591285705566
16 4.668886e+02 2.300102e+01
* time: 0.9858760833740234
17 4.668459e+02 2.284229e+01
* time: 0.987497091293335
18 4.667377e+02 2.378758e+01
* time: 0.989112138748169
19 4.665009e+02 2.684403e+01
* time: 0.9907581806182861
20 4.659955e+02 2.835852e+01
* time: 0.9924051761627197
21 4.652873e+02 2.371497e+01
* time: 0.9939990043640137
22 4.647592e+02 3.073448e+01
* time: 0.995643138885498
23 4.645466e+02 3.812126e+01
* time: 0.9972801208496094
24 4.645013e+02 3.839585e+01
* time: 0.9988811016082764
25 4.644583e+02 3.795849e+01
* time: 1.000540018081665
26 4.643176e+02 3.599161e+01
* time: 1.002190113067627
27 4.640337e+02 3.141265e+01
* time: 1.0038111209869385
28 4.634991e+02 2.152192e+01
* time: 1.0054550170898438
29 4.629314e+02 1.579662e+01
* time: 1.0071110725402832
30 4.626793e+02 8.229231e+00
* time: 1.0089061260223389
31 4.626228e+02 4.593455e+00
* time: 1.0107340812683105
32 4.626174e+02 4.703436e+00
* time: 1.012557029724121
33 4.626166e+02 4.711371e+00
* time: 1.014397144317627
34 4.626100e+02 4.754289e+00
* time: 1.0162060260772705
35 4.625971e+02 4.818111e+00
* time: 1.018024206161499
36 4.625584e+02 7.283490e+00
* time: 1.019834041595459
37 4.624573e+02 1.322306e+01
* time: 1.021653175354004
38 4.621496e+02 2.504876e+01
* time: 1.0234980583190918
39 4.606593e+02 6.055247e+01
* time: 1.0253350734710693
40 4.574668e+02 9.046732e+01
* time: 1.0271580219268799
41 4.493736e+02 1.320107e+02
* time: 1.028982162475586
42 4.476881e+02 1.201131e+02
* time: 1.0312530994415283
43 4.399545e+02 6.343714e+01
* time: 1.0331010818481445
44 4.308404e+02 1.074644e+02
* time: 1.034924030303955
45 4.279440e+02 4.992808e+01
* time: 1.0367650985717773
46 4.269382e+02 2.089849e+01
* time: 1.038593053817749
47 4.259042e+02 2.284828e+01
* time: 1.040417194366455
48 4.250072e+02 3.457276e+01
* time: 1.0422661304473877
49 4.225016e+02 5.670587e+01
* time: 1.04410719871521
50 4.218595e+02 8.075863e+01
* time: 1.0459740161895752
51 4.156464e+02 5.623544e+01
* time: 1.047762155532837
52 4.144306e+02 1.062421e+01
* time: 1.0496301651000977
53 4.139388e+02 9.442026e+00
* time: 1.0514750480651855
54 4.133251e+02 9.168600e+00
* time: 1.0533311367034912
55 4.125579e+02 1.007895e+01
* time: 1.0551481246948242
56 4.116527e+02 1.162167e+01
* time: 1.0570452213287354
57 4.106767e+02 1.382208e+01
* time: 1.0589020252227783
58 4.092564e+02 2.392522e+01
* time: 1.0619862079620361
59 4.088745e+02 1.991206e+01
* time: 1.06386399269104
60 4.084320e+02 1.729494e+01
* time: 1.0657010078430176
61 4.082868e+02 2.139214e+01
* time: 1.067972183227539
62 4.078151e+02 2.066959e+01
* time: 1.069878101348877
63 4.075154e+02 1.547607e+01
* time: 1.071727991104126
64 4.074365e+02 2.050007e+01
* time: 1.074064016342163
65 4.073436e+02 2.087001e+01
* time: 1.076418161392212
66 4.072557e+02 2.244000e+01
* time: 1.0787930488586426
67 4.071114e+02 2.538153e+01
* time: 1.0811800956726074
68 4.069218e+02 3.482650e+01
* time: 1.0835440158843994
69 4.067389e+02 4.414308e+01
* time: 1.0859031677246094
70 4.066095e+02 4.254209e+01
* time: 1.0882132053375244
71 4.065063e+02 4.513420e+01
* time: 1.0905351638793945
72 4.063491e+02 3.571645e+01
* time: 1.0928380489349365
73 4.061858e+02 6.436632e+01
* time: 1.095163106918335
74 4.060687e+02 6.862397e+01
* time: 1.0975210666656494
75 4.056652e+02 7.310609e+01
* time: 1.0998539924621582
76 4.052419e+02 4.432978e+02
* time: 1.1017911434173584
77 4.042738e+02 7.557341e+02
* time: 1.1038072109222412
78 4.039901e+02 7.253264e+02
* time: 1.1073570251464844
79 4.036413e+02 4.586094e+02
* time: 1.1109340190887451
80 4.034751e+02 3.451488e+02
* time: 1.1138501167297363
81 4.033468e+02 2.505057e+02
* time: 1.1159040927886963
82 4.032322e+02 5.745614e+02
* time: 1.1179749965667725
83 4.031809e+02 1.338629e+02
* time: 1.1199991703033447
84 4.030491e+02 1.345342e+02
* time: 1.1220660209655762
85 4.029509e+02 5.300508e+02
* time: 1.1245272159576416
86 4.028615e+02 8.720789e+02
* time: 1.127089023590088
87 4.027428e+02 4.099451e+03
* time: 1.129134178161621
88 4.026309e+02 7.288032e+02
* time: 1.1312851905822754
89 4.023578e+02 2.467874e+02
* time: 1.1333630084991455
90 4.020910e+02 2.098614e+03
* time: 1.1354210376739502
91 4.018974e+02 4.037941e+03
* time: 1.1375060081481934
92 4.012021e+02 1.473336e+04
* time: 1.1396241188049316
93 4.008006e+02 6.046803e+03
* time: 1.1423029899597168
94 4.001476e+02 2.693180e+03
* time: 1.1444602012634277
95 4.001017e+02 1.559475e+04
* time: 1.147111177444458
96 3.997684e+02 6.068301e+03
* time: 1.1492540836334229
97 3.994981e+02 1.016658e+04
* time: 1.151447057723999
98 3.992637e+02 1.335941e+04
* time: 1.1540892124176025
99 3.990669e+02 1.098893e+04
* time: 1.1567821502685547
100 3.989017e+02 1.598516e+04
* time: 1.1593689918518066
101 3.986346e+02 1.522274e+04
* time: 1.1620709896087646
102 3.984634e+02 1.636653e+04
* time: 1.1647260189056396
103 3.982363e+02 1.281657e+04
* time: 1.167424201965332
104 3.980121e+02 3.185190e+04
* time: 1.1702210903167725
105 3.977609e+02 3.832693e+04
* time: 1.1729621887207031
106 3.975822e+02 3.343144e+04
* time: 1.2365190982818604
107 3.974201e+02 3.817699e+04
* time: 1.2393012046813965
108 3.971903e+02 2.927366e+04
* time: 1.241960048675537
109 3.969852e+02 5.174780e+04
* time: 1.2445671558380127
110 3.967408e+02 6.033199e+04
* time: 1.2472591400146484
111 3.965509e+02 5.743798e+04
* time: 1.2499220371246338
112 3.963667e+02 6.001771e+04
* time: 1.2526271343231201
113 3.961380e+02 4.918007e+04
* time: 1.2552831172943115
114 3.959048e+02 1.159521e+05
* time: 1.2579309940338135
115 3.955981e+02 1.454797e+05
* time: 1.2606890201568604
116 3.954027e+02 1.285961e+05
* time: 1.2633349895477295
117 3.952272e+02 1.437079e+05
* time: 1.2660322189331055
118 3.949712e+02 1.024221e+05
* time: 1.268801212310791
119 3.947260e+02 2.336274e+05
* time: 1.2714710235595703
120 3.945037e+02 2.810362e+05
* time: 1.2741341590881348
121 3.943037e+02 2.600384e+05
* time: 1.2768421173095703
122 3.941227e+02 2.645716e+05
* time: 1.279475212097168
123 3.939040e+02 2.308459e+05
* time: 1.2821671962738037
124 3.936793e+02 3.402413e+05
* time: 1.2849161624908447
125 3.932925e+02 3.218361e+05
* time: 1.2876801490783691
126 3.930721e+02 3.990335e+05
* time: 1.2905850410461426
127 3.926876e+02 2.690740e+05
* time: 1.2934770584106445
128 3.924099e+02 1.175507e+06
* time: 1.2963581085205078
129 3.921903e+02 1.366418e+06
* time: 1.2991790771484375
130 3.914536e+02 1.003257e+06
* time: 1.3019871711730957
131 3.912976e+02 1.257748e+06
* time: 1.3049139976501465
132 3.908486e+02 8.364755e+05
* time: 1.3077380657196045
133 3.905826e+02 2.317819e+06
* time: 1.3105671405792236
134 3.903185e+02 3.015115e+06
* time: 1.3134469985961914
135 3.900954e+02 2.867619e+06
* time: 1.3162801265716553
136 3.898965e+02 2.776502e+06
* time: 1.3190710544586182
137 3.896722e+02 2.655987e+06
* time: 1.3219921588897705
138 3.894262e+02 2.733119e+06
* time: 1.3248140811920166
139 3.891230e+02 1.971994e+06
* time: 1.3277020454406738
140 3.888723e+02 1.442705e+07
* time: 1.3306970596313477
141 3.885389e+02 1.421010e+07
* time: 1.3335599899291992
142 3.876703e+02 1.828068e+07
* time: 1.3360300064086914
143 3.872001e+02 1.936569e+07
* time: 1.3389790058135986
144 3.864437e+02 7.177840e+06
* time: 1.34194016456604
145 3.862728e+02 2.351227e+07
* time: 1.346649169921875
146 3.861345e+02 1.476714e+07
* time: 1.349013090133667
147 3.860958e+02 3.264633e+08
* time: 1.35139799118042
148 3.853745e+02 4.769181e+07
* time: 1.353844165802002
149 3.850837e+02 5.358988e+07
* time: 1.3562231063842773
150 3.846268e+02 5.942891e+07
* time: 1.3586280345916748
151 3.838195e+02 4.154020e+07
* time: 1.361037015914917
152 3.835490e+02 1.465098e+07
* time: 1.3640069961547852
153 3.832654e+02 1.633041e+08
* time: 1.3670220375061035
154 3.829528e+02 5.727909e+07
* time: 1.3694441318511963
155 3.818143e+02 4.284260e+08
* time: 1.3718721866607666
156 3.808262e+02 1.583941e+08
* time: 1.374392032623291
157 3.803226e+02 1.187691e+08
* time: 1.377432107925415
158 3.798624e+02 3.217154e+08
* time: 1.380514144897461
159 3.797492e+02 4.467329e+08
* time: 1.3835740089416504
160 3.795408e+02 4.886666e+08
* time: 1.386714220046997
161 3.792743e+02 5.037401e+08
* time: 1.389847993850708
162 3.789825e+02 5.111849e+08
* time: 1.3929150104522705
163 3.786833e+02 5.148148e+08
* time: 1.3960192203521729
164 3.783367e+02 5.171820e+08
* time: 1.3991541862487793
165 3.779772e+02 5.263783e+08
* time: 1.4023070335388184
166 3.775719e+02 5.000187e+08
* time: 1.4055352210998535
167 3.771293e+02 8.335380e+08
* time: 1.4086661338806152
168 3.759346e+02 1.046396e+09
* time: 1.411933183670044
169 3.755284e+02 1.160020e+09
* time: 1.4151639938354492
170 3.749055e+02 4.168182e+08
* time: 1.418375015258789
171 3.743422e+02 2.645442e+09
* time: 1.4229621887207031
172 3.740647e+02 5.457301e+09
* time: 1.4261691570281982
173 3.737548e+02 6.572679e+09
* time: 1.4294540882110596
174 3.734376e+02 6.309385e+09
* time: 1.4327020645141602
175 3.731891e+02 5.923826e+09
* time: 1.4359500408172607
176 3.729631e+02 5.750542e+09
* time: 1.4391820430755615
177 3.724857e+02 5.555957e+09
* time: 1.4424810409545898
178 3.722050e+02 5.222174e+09
* time: 1.4457581043243408
179 3.717245e+02 6.984869e+09
* time: 1.4490621089935303
180 3.706393e+02 1.976407e+09
* time: 1.4524121284484863
181 3.705300e+02 2.033361e+10
* time: 1.4564402103424072
182 3.702465e+02 2.437418e+10
* time: 1.459813117980957
183 3.700494e+02 2.195495e+10
* time: 1.4631080627441406
184 3.694023e+02 2.298811e+10
* time: 1.4664421081542969
185 3.692452e+02 2.149516e+10
* time: 1.4697721004486084
186 3.687094e+02 2.166279e+10
* time: 1.4730360507965088
187 3.685261e+02 2.143947e+10
* time: 1.476417064666748
188 3.673781e+02 1.883299e+10
* time: 1.4797592163085938
189 3.670650e+02 1.930317e+10
* time: 1.4850001335144043
190 3.670156e+02 1.930877e+10
* time: 1.4908361434936523
191 3.668443e+02 1.930877e+10
* time: 1.5033950805664062
192 3.668443e+02 1.930877e+10
* time: 1.5154950618743896
193 3.668404e+02 1.930877e+10
* time: 1.524463176727295
194 3.668404e+02 1.930877e+10
* time: 1.5372741222381592
195 3.668404e+02 1.930877e+10
* time: 1.549734115600586
FittedPumasModel
Successful minimization: true
Likelihood approximation: NaivePooled
Likelihood Optimizer: BFGS
Dynamical system type: Matrix exponential
Log-likelihood value: -366.84038
Number of subjects: 31
Number of parameters: Fixed Optimized
1 8
Observation records: Active Missing
conc: 239 47
Total: 239 47
-------------------------
Estimate
-------------------------
pop_CL 0.12562
pop_Vc 8.5443
pop_tabs 9.8269e-11
pop_lag 1.0
pk_Ω₁,₁ 0.01
pk_Ω₂,₂ 0.01
pk_Ω₃,₃ 0.01
σ_prop 0.24564
σ_add 2.1685e-8
-------------------------
As you can see, the constantcoef argument is used to hold the inter-individual variability parameters constant so that they are excluded from the optimization. The parameter estimates from the naive pooled fit seem implausible for some parameters, such as pop_tabs. Perhaps the initial parameter values are not close to the true values and some adjustments are needed. In Pumas, it is easy to rerun the fit with new initial parameter values. Suppose we want to run the naive pooled fit again with four new sets of initial parameter values that lie within 10% of the original values.
We can use the power of the Julia language to generate these four parameter sets.
# Generate parameter sets by scaling original values by ±10%
param_sets = [
    NamedTuple(k => v * scale for (k, v) in pairs(param_vals)) for
    scale in [0.9, 0.95, 1.05, 1.1]
]
4-element Vector{@NamedTuple{pop_CL::Float64, pop_Vc::Float64, pop_tabs::Float64, pop_lag::Float64, pk_Ω::Diagonal{Float64, Vector{Float64}}, σ_prop::Float64, σ_add::Float64}}:
(pop_CL = 0.12060000000000001, pop_Vc = 7.2989999999999995, pop_tabs = 0.4707, pop_lag = 0.09000000000000001, pk_Ω = [0.009000000000000001 0.0 0.0; 0.0 0.009000000000000001 0.0; 0.0 0.0 0.009000000000000001], σ_prop = 0.006768, σ_add = 0.05949000000000001)
(pop_CL = 0.1273, pop_Vc = 7.7044999999999995, pop_tabs = 0.49685, pop_lag = 0.095, pk_Ω = [0.0095 0.0 0.0; 0.0 0.0095 0.0; 0.0 0.0 0.0095], σ_prop = 0.007143999999999999, σ_add = 0.062795)
(pop_CL = 0.14070000000000002, pop_Vc = 8.5155, pop_tabs = 0.54915, pop_lag = 0.10500000000000001, pk_Ω = [0.0105 0.0 0.0; 0.0 0.0105 0.0; 0.0 0.0 0.0105], σ_prop = 0.007896, σ_add = 0.06940500000000001)
(pop_CL = 0.14740000000000003, pop_Vc = 8.921, pop_tabs = 0.5753, pop_lag = 0.11000000000000001, pk_Ω = [0.011000000000000001 0.0 0.0; 0.0 0.011000000000000001 0.0; 0.0 0.0 0.011000000000000001], σ_prop = 0.008272, σ_add = 0.07271000000000001)
Now, we can run the fit again with the new parameter values.
new_fits = [
    fit(warfarin_pk_model, pop, param_set, NaivePooled(), constantcoef = (:pk_Ω,)) for
    param_set in param_sets
];
You can now compare the results of the new fits, started from the perturbed initial parameter values, with the original fit.
compare_estimates(;
    p1 = new_fits[1],
    p2 = new_fits[2],
    p3 = new_fits[3],
    p4 = new_fits[4],
    original = naive_fit,
)
Row | parameter | p1 | p2 | p3 | p4 | original |
---|---|---|---|---|---|---|
String | Float64? | Float64? | Float64? | Float64? | Float64? | |
1 | pop_CL | 0.12727 | 0.119671 | 0.12727 | 0.12694 | 0.125617 |
2 | pop_Vc | 8.34567 | 7.65772 | 8.34567 | 8.49744 | 8.54432 |
3 | pop_tabs | 0.371906 | 0.737079 | 0.371904 | 2.83552e-11 | 9.82686e-11 |
4 | pop_lag | 0.764775 | 0.5 | 0.764777 | 1.0 | 1.0 |
5 | pk_Ω₁,₁ | 0.009 | 0.0095 | 0.0105 | 0.011 | 0.01 |
6 | pk_Ω₂,₂ | 0.009 | 0.0095 | 0.0105 | 0.011 | 0.01 |
7 | pk_Ω₃,₃ | 0.009 | 0.0095 | 0.0105 | 0.011 | 0.01 |
8 | σ_prop | 0.228485 | 0.318661 | 0.228485 | 0.252298 | 0.245643 |
9 | σ_add | 0.380457 | 4.0701e-24 | 0.38046 | 8.61276e-9 | 2.16855e-8 |
The compare_estimates function is a convenience function from the PumasUtilities package that compares the estimates of the new fits with the original fit. The p3 results seem the most reasonable among the fits, and these estimates can be taken forward to the FOCE() fit.
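Before moving on, it can be reassuring to confirm that the perturbed fits ended up at comparable objective values and to extract the chosen estimates programmatically. The snippet below is a sketch; it assumes loglikelihood and coef work on fitted Pumas models as in other Pumas workflows:
# Final log-likelihoods of the four perturbed naive pooled fits
[loglikelihood(f) for f in new_fits]
# The selected estimates as a NamedTuple, which could also serve as starting values,
# e.g. fit(warfarin_pk_model, pop, coef(new_fits[3]), FOCE())
coef(new_fits[3])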
2.6 Estimating Parameters via Maximum Likelihood (FOCE)
After obtaining initial estimates, the next step is fitting the model to data using a maximum likelihood approach. One of the most commonly used methods in Pumas is the First-Order Conditional Estimation (FOCE) method. This is invoked by specifying FOCE()
as the estimation algorithm using the initial parameter values specified in the model definition:
foce_fit = fit(warfarin_pk_model, pop, init_params(warfarin_pk_model), FOCE())
The init_params
function is a convenience function that extracts the initial parameter values from the model definition.
By default, the fit function parallelizes the computation across the available Julia threads (EnsembleThreads()). You can control this using the ensemblealg argument, as sketched below. For more details, see the fit options section in the Pumas Documentation.
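As an illustration (a sketch, not part of the original tutorial), the same fit can be forced onto a single thread by passing EnsembleSerial(), one of the options listed in the fit docstring above:
foce_fit_serial = fit(
    warfarin_pk_model,
    pop,
    init_params(warfarin_pk_model),
    FOCE();
    ensemblealg = EnsembleSerial(), # disable multi-threading; EnsembleThreads() is the default
)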
2.7 Fixing Parameters with constantcoef
Sometimes it is desirable to hold certain parameters constant during estimation. For instance, suppose one wants to fix pop_tabs at a particular value:
foce_fit_fixedtabs = fit(
    warfarin_pk_model,
    pop,
    (; param_vals..., pop_tabs = 0.5),
    FOCE();
    constantcoef = (:pop_tabs,), # This fixes the pop_tabs parameter
);
foce_fit_fixedtabs
FittedPumasModel
Successful minimization: true
Likelihood approximation: FOCE
Likelihood Optimizer: BFGS
Dynamical system type: Matrix exponential
Log-likelihood value: -320.22372
Number of subjects: 31
Number of parameters: Fixed Optimized
1 8
Observation records: Active Missing
conc: 239 47
Total: 239 47
-----------------------
Estimate
-----------------------
pop_CL 0.13091
pop_Vc 8.0799
pop_tabs 0.5
pop_lag 0.91828
pk_Ω₁,₁ 0.051096
pk_Ω₂,₂ 0.019074
pk_Ω₃,₃ 1.0492
σ_prop 0.090995
σ_add 0.33185
-----------------------
This approach is useful for sensitivity analysis, exploring different values, or simplifying the model.
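Because constantcoef accepts a tuple of parameter names, several parameters can be held constant at once. The call below is a sketch with hypothetical fixed values for pop_tabs and pop_lag:
foce_fit_fixed2 = fit(
    warfarin_pk_model,
    pop,
    (; param_vals..., pop_tabs = 0.5, pop_lag = 0.9), # hypothetical fixed values
    FOCE();
    constantcoef = (:pop_tabs, :pop_lag),
)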
2.8 Changing Tolerance During Fitting
The fit
function accepts additional keyword arguments to tweak the fitting process, such as tolerances for convergence:
foce_fit_fixedtabs = fit(
    warfarin_pk_model,
    pop,
    (; param_vals..., pop_tabs = 0.5),
    FOCE();
    constantcoef = (:pop_tabs,),
    optim_options = (; x_reltol = 1e-6, x_abstol = 1e-8, iterations = 200),
);
These arguments (x_reltol, x_abstol, iterations) help fine-tune the optimizer's stopping criteria. For more details, see the Optimization Options during Fitting.
2.9 Interpreting the Printed Results
Once the model fit is complete, Pumas will print a summary of the fit to the console. This typically includes:
- Objective Function Value (OFV): The minimized negative log-likelihood (or similar objective). Lower is better.
- Convergence Status: An indication of whether the solver found a local optimum or if it ran into issues (e.g., maximum iterations reached).
- Parameter Estimates: A table listing final estimates of fixed effects, random effects (variances or standard deviations), and residual variability components.
- Standard Errors (if computed): By default, these might not be computed until using infer (discussed in the next section).
The printed output might look like:
julia> foce_fit_fixedtabs
FittedPumasModel
Successful minimization: true
Likelihood approximation: FOCE
Likelihood Optimizer: BFGS
Dynamical system type: Matrix exponential
Log-likelihood value: -320.2235
Number of subjects: 31
Number of parameters: Fixed Optimized
1 8
Observation records: Active Missing
conc: 239 47
Total: 239 47
-----------------------
Estimate
-----------------------
pop_CL 0.13091
pop_Vc 8.08
pop_tabs 0.5
pop_lag 0.91842
pk_Ω₁,₁ 0.050929
pk_Ω₂,₂ 0.019161
pk_Ω₃,₃ 1.056
σ_prop 0.090974
σ_add 0.33189
-----------------------
We can break down the output of the fit
function into the following components:
2.9.1 FittedPumasModel
The FittedPumasModel object contains the following fields:
model: The original model definition
foce_fit_fixedtabs.model
PumasModel
Parameters: pop_CL, pop_Vc, pop_tabs, pop_lag, pk_Ω, σ_prop, σ_add
Random effects: pk_η
Covariates: FSZCL, FSZV
Dynamical system variables: Depot, Central
Dynamical system type: Matrix exponential
Derived: conc
Observed: conc
data: The population data
foce_fit_fixedtabs.data
Population
Subjects: 31
Covariates: SEX, WEIGHT, FSZV, FSZCL
Observations: conc
optim: The optimization results
foce_fit_fixedtabs.optim
* Status: success
* Candidate solution
Final objective value: 3.202237e+02
* Found with
Algorithm: BFGS
* Convergence measures
|x - x'| = 6.79e-08 ≰ 1.0e-08
|x - x'|/|x'| = 1.72e-08 ≤ 1.0e-06
|f(x) - f(x')| = 6.12e-07 ≰ 0.0e+00
|f(x) - f(x')|/|f(x')| = 1.91e-09 ≰ 0.0e+00
|g(x)| = 4.41e-02 ≰ 1.0e-03
* Work counters
Seconds run: 3 (vs limit Inf)
Iterations: 70
f(x) calls: 79
∇f(x) calls: 72
approx: The likelihood approximation method
foce_fit_fixedtabs.approx
FOCE
kwargs: Additional keyword arguments
foce_fit_fixedtabs.kwargs
fixedparamset: The fixed parameter set
foce_fit_fixedtabs.fixedparamset
optim_state: The optimization state
foce_fit_fixedtabs.optim_state
2.9.2 Minimization Status
Successful minimization: true
The Successful minimization
field indicates whether the minimization was successful. It depends on the convergence of the optimizer and rules set in the optim_options
argument, for example x_reltol
and x_abstol
.
2.9.3 Likelihood Approximation
Likelihood approximation: FOCE
The Likelihood approximation
field indicates the likelihood approximation method used.
2.9.4 Likelihood Optimizer
Likelihood Optimizer: BFGS
The Likelihood Optimizer field indicates the optimizer used, which by default is BFGS from the Optim package. A user can change the optimizer by setting the optim_alg argument in the fit function, as sketched below.
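The sketch below follows the fit docstring quoted earlier and switches to L-BFGS with the optimization trace disabled; it assumes the Optim module is accessible (load it with using Optim if it is not already in scope):
foce_fit_lbfgs = fit(
    warfarin_pk_model,
    pop,
    init_params(warfarin_pk_model),
    FOCE();
    optim_alg = Optim.LBFGS(),              # any Optim.jl algorithm that avoids second-order derivatives
    optim_options = (; show_trace = false), # passed through to Optim.Options
)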
2.9.5 Dynamical system type
Dynamical system type: Matrix exponential
The Dynamical system type field indicates the type of dynamical system used. In this example, even though the model is written as a system of differential equations, Matrix exponential indicates that it is solved with a matrix exponential solver because the system has been detected as linear. The user can verify this by inspecting foce_fit.model.prob, which in this case returns Pumas.LinearODE(). If the automatic check for linearity is not wanted, it can be turned off in the @options block of the model definition.
@model begin
@param begin
...
end
@random begin
...
end
@dynamics begin
...
end
@pre begin
...
end
@options begin
checklinear = false
end
end
The checklinear
option is a boolean that determines whether the solver should check if the system defined in the @dynamics
block is linear. If the system is linear, setting this option to true
(default) enables calculating the solution through matrix exponentials. If it is set to false
or time (t) appears in the @pre
block, this optimization is disabled. This option can be useful when the matrix exponential solver is not superior to general numerical integrators or for debugging purposes.
2.9.6 Log-likelihood value
Log-likelihood value: -320.2235
The Log-likelihood value
field indicates the value of the log-likelihood function at the maximum likelihood estimate.
2.9.7 Number of subjects
Number of subjects: 31
The Number of subjects
field indicates the number of subjects from the population data used in the fit.
2.9.8 Number of parameters
Number of parameters: Fixed Optimized
1 8
The Number of parameters
field indicates the number of fixed and optimized parameters. The Fixed
column indicates the number of parameters that were fixed using the constantcoef
argument during the fit and the Optimized
column indicates the number of parameters that were optimized.
2.9.9 Observation records
Observation records: Active Missing
conc: 239 47
Total: 239 47
The Observation records field indicates the number of active and missing observations in the population data. The Active column gives the number of observations that were used in the fit, and the Missing column gives the number of observations that were recorded as missing and therefore not used. Here, missingness refers to missing values of conc, counted across all subjects. A quick way to tally this from the wide data is sketched below.
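As a rough cross-check (a sketch using the wide data frame built earlier), the observation rows with and without a recorded conc value can be tallied directly; the counts should correspond to the Active and Missing columns above:
n_active  = count(r -> r.EVID == 0 && !ismissing(r.conc), eachrow(warfarin_data_wide))
n_missing = count(r -> r.EVID == 0 && ismissing(r.conc), eachrow(warfarin_data_wide))
(n_active, n_missing)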
2.9.10 Parameter estimates
-----------------------
Estimate
-----------------------
pop_CL 0.13091
pop_Vc 8.08
pop_tabs 0.5
pop_lag 0.91842
pk_Ω₁,₁ 0.050929
pk_Ω₂,₂ 0.019161
pk_Ω₃,₃ 1.056
σ_prop 0.090974
σ_add 0.33189
-----------------------
The Parameter estimates field lists the final parameter estimates. The Estimate column contains the maximum likelihood estimate of each parameter.
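The estimates can also be extracted programmatically; the sketch below assumes the usual Pumas accessors coef (a NamedTuple of estimates) and coeftable (a tabular summary) behave as in other Pumas tutorials:
est = coef(foce_fit_fixedtabs)         # NamedTuple of estimates, e.g. est.pop_CL
est_df = coeftable(foce_fit_fixedtabs) # tabular form, convenient for reporting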
2.10 Computing Parameter Precision with infer
The infer
function in Pumas estimates the uncertainty (precision) of parameter estimates. Depending on the chosen method, infer
can provide standard errors, confidence intervals, and correlation matrices.
The signature for infer
often looks like:
infer(
    fpm::FittedPumasModel;
    level = 0.95,
    rethrow_error::Bool = false,
    sandwich_estimator::Bool = true,
)
where:
- fpm::FittedPumasModel: The result of fit (e.g., foce_fit).
- level: The confidence interval level (e.g., 0.95). The confidence intervals are calculated as the (1-level)/2 and (1+level)/2 quantiles of the estimated parameters.
- rethrow_error: If rethrow_error is false (the default value), no error will be thrown if the variance-covariance matrix estimator fails. If it is true, an error will be thrown if the estimator fails.
- sandwich_estimator: Whether to use the sandwich estimator. If set to true (the default value), the sandwich estimator will be used. If set to false, the standard error will be calculated using the inverse of the Hessian matrix.
An example usage:
inference_results = infer(foce_fit_fixedtabs; level = 0.95)
[ Info: Calculating: variance-covariance matrix.
[ Info: Done.
Asymptotic inference results using sandwich estimator
Successful minimization: true
Likelihood approximation: FOCE
Likelihood Optimizer: BFGS
Dynamical system type: Matrix exponential
Log-likelihood value: -320.22372
Number of subjects: 31
Number of parameters: Fixed Optimized
1 8
Observation records: Active Missing
conc: 239 47
Total: 239 47
-----------------------------------------------------------------------
Estimate SE 95.0% C.I.
-----------------------------------------------------------------------
pop_CL 0.13091 0.0055387 [ 0.12006 ; 0.14177 ]
pop_Vc 8.0799 0.23677 [ 7.6158 ; 8.5439 ]
pop_tabs 0.5 NaN [ NaN ; NaN ]
pop_lag 0.91828 0.042229 [ 0.83551 ; 1.001 ]
pk_Ω₁,₁ 0.051096 0.017609 [ 0.016584 ; 0.085608]
pk_Ω₂,₂ 0.019074 0.0056721 [ 0.0079565; 0.030191]
pk_Ω₃,₃ 1.0492 0.56299 [-0.054208 ; 2.1527 ]
σ_prop 0.090995 0.014419 [ 0.062735 ; 0.11926 ]
σ_add 0.33185 0.087824 [ 0.15972 ; 0.50398 ]
-----------------------------------------------------------------------
We can set the sandwich_estimator argument to false to instead base the standard errors on the inverse of the Hessian matrix rather than the (more robust) sandwich estimator.
inference_results = infer(foce_fit_fixedtabs; level = 0.95, sandwich_estimator = false)
[ Info: Calculating: variance-covariance matrix.
[ Info: Done.
Asymptotic inference results using negative inverse Hessian
Successful minimization: true
Likelihood approximation: FOCE
Likelihood Optimizer: BFGS
Dynamical system type: Matrix exponential
Log-likelihood value: -320.22372
Number of subjects: 31
Number of parameters: Fixed Optimized
1 8
Observation records: Active Missing
conc: 239 47
Total: 239 47
---------------------------------------------------------------------
Estimate SE 95.0% C.I.
---------------------------------------------------------------------
pop_CL 0.13091 0.0054659 [ 0.1202 ; 0.14162]
pop_Vc 8.0799 0.23392 [ 7.6214 ; 8.5384 ]
pop_tabs 0.5 NaN [ NaN ; NaN ]
pop_lag 0.91828 0.029387 [ 0.86068 ; 0.97588]
pk_Ω₁,₁ 0.051096 0.013885 [ 0.023883 ; 0.07831]
pk_Ω₂,₂ 0.019074 0.006447 [ 0.0064378; 0.03171]
pk_Ω₃,₃ 1.0492 0.55326 [-0.035156 ; 2.1336 ]
σ_prop 0.090995 0.0086472 [ 0.074047 ; 0.10794]
σ_add 0.33185 0.048238 [ 0.2373 ; 0.42639]
---------------------------------------------------------------------
The results above show that the inference step succeeded both with the sandwich estimator and with the inverse of the Hessian matrix; the fixed parameter pop_tabs has no standard error (reported as NaN) because it was held constant and not estimated. We will pursue other methods to obtain parameter precision, such as bootstrap and SIR, in later tutorials.
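For downstream reporting, the inference results can be queried programmatically. The sketch below assumes the usual Pumas accessors stderror, vcov, and coeftable are available for inference results:
se  = stderror(inference_results)  # standard errors per parameter
vc  = vcov(inference_results)      # variance-covariance matrix of the estimates
tbl = coeftable(inference_results) # tabular summary with estimates, SEs, and confidence intervals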
3 Concluding Remarks
This tutorial showcased a typical Pumas workflow for PK model fitting using a Warfarin dataset:
- Model Definition and Data Preparation.
- Checking Compatibility via loglikelihood and findinfluential.
- Initial Parameter Estimation using Naive Pooled.
- Fitting with FOCE, demonstrating how to fix parameters or change solver tolerances.
- Interpreting Fit Results, exploring the components of a FittedPumasModel.
- Computing Precision of estimates with infer using different methods.
Readers are encouraged to refine their understanding by:
- Performing thorough Exploratory Data Analysis prior to modeling.
- Exploring Bootstrap or SIR methods for deeper uncertainty quantification (covered in later tutorials).
- Validating the final model through visual diagnostics (e.g., VPCs, GOF plots).
More advanced topics can be found throughout the Pumas Documentation and Pumas Tutorials.
Real-world workflows are iterative. Analysts often revisit model assumptions, re-check data, and explore alternative parameterizations before finalizing any one model as “best.”