using Pumas
using Pumas.Experimental: VEM
using PumasUtilities
using PharmaDatasets
using DataFramesMeta
using CairoMakie
using AlgebraOfGraphics
using SummaryTables
using Random: default_rng, seed!
Variational Expectation Maximization (VEM)
VEM is an experimental estimation method introduced in Pumas 2.8. Its API may change in future releases without deprecation.
In this tutorial, we introduce Variational Expectation Maximization (VEM), a new estimation method available in Pumas.Experimental. We will cover the conceptual background, fit a warfarin PK model with VEM, compare the results to FOCE, and provide a comprehensive reference of VEM options.
1 Conceptual Background
1.1 The Marginal Likelihood Challenge
In nonlinear mixed-effects (NLME) modeling, we want to estimate population parameters while accounting for between-subject variability through random effects. The marginal likelihood requires integrating over all possible random effect values for every subject, and these integrals are generally intractable.
Different estimation methods handle this challenge in different ways:
- FOCE / LaplaceI approximate the integral using a second-order Taylor expansion around the mode of the random effects.
- SAEM uses Markov Chain Monte Carlo (MCMC) sampling within an Expectation-Maximization framework.
- VEM maximizes a lower bound on the log-likelihood using variational inference.
1.2 The Evidence Lower Bound (ELBO)
VEM works by maximizing the Evidence Lower Bound (ELBO), which is a lower bound on the log marginal likelihood. The ELBO equals the log-likelihood minus a KL divergence term that measures how far the approximate posterior is from the true posterior. A higher ELBO typically means a better fit.
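In symbols, writing q(η) for the approximate posterior over a subject's random effects η and θ for the population parameters, the relationship described above is the standard variational-inference identity (generic notation, not Pumas-specific):

```latex
\log p(y \mid \theta)
  = \underbrace{\mathbb{E}_{q(\eta)}\!\left[\log \frac{p(y, \eta \mid \theta)}{q(\eta)}\right]}_{\mathrm{ELBO}(q,\,\theta)}
  \;+\; \mathrm{KL}\!\left(q(\eta) \,\middle\|\, p(\eta \mid y, \theta)\right)
```

Because the KL divergence is nonnegative, the ELBO never exceeds the log marginal likelihood, with equality exactly when q(η) matches the true posterior.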
Intuitively, VEM simultaneously finds:
- Population parameters (fixed effects, variance components, residual error), and
- An approximation to each subject’s random effect posterior distribution.
This joint optimization is performed at a single level, unlike FOCE, which alternates between an inner optimization over the random effects and an outer optimization over the population parameters.
1.3 Presampling vs. Resampling
VEM has two optimization variants controlled by the presample keyword:
| | Presampling (presample=true, default) | Resampling (presample=false) |
|---|---|---|
| Optimizer | L-BFGS (deterministic) | Adam (stochastic) |
| Phases | 1 (default) | 10 (default) |
| Samples | Pre-drawn at the start of each phase, reused across iterations | Re-drawn at every iteration |
In the resampling variant, unless overridden, the number of samples increases and the Adam learning rate decreases from one phase to the next.
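As a sketch, switching to the resampling variant only requires changing the presample keyword described above; any phase-count or learning-rate options are left at their defaults here, since their exact names are not shown in this tutorial:

```julia
# Stochastic (resampling) variant: Adam optimizer, samples re-drawn
# at every iteration across the (default) 10 phases.
fpm_resample = fit(
    warfarin_model,
    pop,
    init_params(warfarin_model),
    VEM(; presample = false),
)
```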
1.4 Diagonal vs. Full Posterior Covariance
VEM approximates each subject’s random effect posterior with a multivariate normal distribution. The diagonal keyword controls the structure of this approximation:
- diagonal=true (default): assumes independent random effects in the posterior. Faster, with fewer parameters to optimize.
- diagonal=false: captures correlations between random effects in the posterior. Slower, but may be more accurate when random effects are correlated.
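For example, a fit using a full posterior covariance can be requested with the diagonal keyword described above (a sketch, using the same model and population as in the next section):

```julia
# Full posterior covariance: allows the posterior approximation to capture
# correlation between the random effects (e.g. pk_η[1] and pk_η[2] below).
fpm_full = fit(
    warfarin_model,
    pop,
    init_params(warfarin_model),
    VEM(; diagonal = false),
)
```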
2 Warfarin PK Model with VEM
We will fit a one-compartment oral PK model with allometric scaling to the warfarin dataset, first using VEM and then comparing with FOCE.
2.1 Model Definition
warfarin_model = @model begin
@metadata begin
desc = "Warfarin 1-compartment PK model"
timeu = u"hr"
end
@param begin
pop_CL ∈ RealDomain(lower = 0.0, init = 0.134)
pop_Vc ∈ RealDomain(lower = 0.0, init = 8.11)
pop_tabs ∈ RealDomain(lower = 0.0, init = 0.523)
pk_Ω ∈ PDiagDomain([0.01, 0.01])
σ_prop ∈ RealDomain(lower = 0.0, init = 0.00752)
end
@random begin
pk_η ~ MvNormal(pk_Ω)
end
@covariates begin
FSZCL
FSZV
end
@pre begin
CL = FSZCL * pop_CL * exp(pk_η[1])
Vc = FSZV * pop_Vc * exp(pk_η[2])
Ka = log(2) / pop_tabs
end
@vars begin
cp := Central / Vc
end
@dynamics begin
Depot' = -Ka * Depot
Central' = Ka * Depot - CL * cp
end
@derived begin
conc ~ @. ProportionalNormal(cp, σ_prop)
end
end
PumasModel
Parameters: pop_CL, pop_Vc, pop_tabs, pk_Ω, σ_prop
Random effects: pk_η
Covariates: FSZCL, FSZV
Dynamical system variables: Depot, Central
Dynamical system type: Matrix exponential
Derived: conc
Observed: conc
The @model definition is identical regardless of whether you fit with VEM or FOCE. VEM works with standard @model definitions – no special model macro is required.
2.2 Data Loading
warfarin_data = @chain dataset("pumas/warfarin_pumas") begin
@rtransform begin
:FSZV = :wtbl / 70
:FSZCL = (:wtbl / 70)^0.75
end
end
first(warfarin_data, 5)

| Row | id | time | evid | amt | cmt | conc | pca | wtbl | age | sex | FSZV | FSZCL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Int64 | Float64 | Int64 | Float64? | Int64? | Float64? | Float64? | Float64 | Int64 | String1 | Float64 | Float64 |
| 1 | 1 | 0.0 | 1 | 100.0 | 1 | missing | missing | 66.7 | 50 | M | 0.952857 | 0.96443 |
| 2 | 1 | 0.5 | 0 | missing | missing | 0.0 | missing | 66.7 | 50 | M | 0.952857 | 0.96443 |
| 3 | 1 | 1.0 | 0 | missing | missing | 1.9 | missing | 66.7 | 50 | M | 0.952857 | 0.96443 |
| 4 | 1 | 2.0 | 0 | missing | missing | 3.3 | missing | 66.7 | 50 | M | 0.952857 | 0.96443 |
| 5 | 1 | 3.0 | 0 | missing | missing | 6.6 | missing | 66.7 | 50 | M | 0.952857 | 0.96443 |
pop = read_pumas(
warfarin_data;
id = :id,
time = :time,
amt = :amt,
cmt = :cmt,
evid = :evid,
covariates = [:sex, :wtbl, :FSZV, :FSZCL],
observations = [:conc],
)
Population
Subjects: 32
Covariates: sex, wtbl, FSZV, FSZCL
Observations: conc
2.3 Fitting with Default VEM
Fitting with VEM follows the same fit interface as all other estimation methods. Simply pass VEM() as the estimation algorithm:
fpm_vem = fit(
warfarin_model,
pop,
init_params(warfarin_model),
VEM(; optim_options = (; show_every = 20)),
)

[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.

Iter     Function value   Gradient norm
     0     2.279321e+05     6.758375e+05
 * time: 0.03426718711853027
    20     5.367890e+02     1.175991e+02
 * time: 3.8234550952911377
    40     4.281096e+02     3.850182e+01
 * time: 4.5615010261535645
    60     3.813786e+02     1.697343e+01
 * time: 5.391502141952515
    80     3.552118e+02     1.053567e+01
 * time: 6.185890197753906
   100     3.423673e+02     1.080688e+00
 * time: 6.896291017532349
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Estimation method: VEM (EXPERIMENTAL)
--------------------
Estimate
--------------------
pop_CL 0.13476
pop_Vc 8.4563
pop_tabs 1.0661
pk_Ω₁,₁ 0.055891
pk_Ω₂,₂ 0.011683
σ_prop 0.23646
--------------------
During fitting, VEM displays a convergence trace showing the negative ELBO (the objective being minimized). The objective value is the negative of the lower bound on the log marginal likelihood, so lower values indicate a better fit.
2.4 Extracting Results
Population parameter estimates can be extracted with coef:
simple_table(coeftable(fpm_vem))

| parameter | constant | estimate |
|---|---|---|
| pop_CL | false | 0.135 |
| pop_Vc | false | 8.46 |
| pop_tabs | false | 1.07 |
| pk_Ω₁,₁ | false | 0.0559 |
| pk_Ω₂,₂ | false | 0.0117 |
| σ_prop | false | 0.236 |
The posterior approximation modes of the random effects for each subject are available via empirical_bayes:
ebes = empirical_bayes(fpm_vem)
first(ebes, 3)

3-element Vector{@NamedTuple{pk_η::Vector{Float64}}}:
(pk_η = [0.6985324139089205, 0.08666328983991838],)
(pk_η = [-0.028685938174535502, 0.04840779814323112],)
(pk_η = [-0.23938148313246616, -0.06180040475348905],)
Note that these generally may not correspond to modes of the true posterior.
2.5 Log-Likelihood Computation
VEM does not compute the log-likelihood during fitting. To evaluate the log-likelihood after fitting, you must specify an approximation method:
ll_foce = loglikelihood(fpm_vem, FOCE())
-460.2132981272954

ll_laplace = loglikelihood(fpm_vem, LaplaceI())
-460.5090054481343
When using LaplaceI() or FOCE() for log-likelihood evaluation, the empirical Bayes estimates (EBEs) are computed for each subject starting from the mode of the prior (i.e., zero random effects in the Gaussian case) rather than the VEM posterior approximation’s mode.
For well-behaved models, this is unlikely to cause a significant difference in the log-likelihood approximation. However, this behavior will likely change in a future release to use the VEM posterior approximation’s mode as the starting point for the Laplace approximation.
2.6 Diagnostics
Model diagnostics are available through inspect. For models with Gaussian error models (like our proportional normal error), you must explicitly specify how to compute weighted residuals using wres_approx:
res = inspect(fpm_vem; wres_approx = FOCE())

[ Info: Calculating predictions.
[ Info: Calculating weighted residuals.
[ Info: Calculating empirical bayes.
[ Info: Evaluating dose control parameters.
[ Info: Evaluating individual parameters.
[ Info: Done.
FittedPumasModelInspection
Likelihood approximation used for weighted residuals: FOCE
When computing the weights of the weighted residuals, the linearization of the mean and variance are performed at the mode of the approximate posterior for each subject, learned by VEM, rather than the mode of the true posterior.
The predictions used for the residuals are also computed using the mode of the VEM posterior approximation, which may differ from the EBEs or the modes of the true posterior.
Goodness-of-fit plots work the same way as with other estimation methods in Pumas:

goodness_of_fit(res)

wres_approx Required for Gaussian Errors
When using VEM with Gaussian-like error models (Normal, ProportionalNormal, etc.), you must pass wres_approx = FOCE() or wres_approx = LaplaceI() to inspect. Omitting it will raise an error. For other error models (Beta, Gamma, Poisson, Bernoulli, TimeToEvent), this is not needed.
2.7 Predictions and VPCs
Predictions and visual predictive checks work the same way as with other estimation methods in Pumas:
preds = predict(fpm_vem)
first(preds, 3)

Prediction
Subjects: 3
Predictions: conc
Covariates: sex, wtbl, FSZV, FSZCL
The individual predictions are computed using the mode of the VEM posterior approximation for each subject, which may differ from the EBEs or the modes of the true posterior.
vpc_res = vpc(fpm_vem);
vpc_plot(vpc_res)

2.8 Inference Workaround
VEM does not currently support infer for computing standard errors and confidence intervals:
infer(fpm_vem)

ArgumentError: Asymptotic inference is not supported for the estimation method VEM (EXPERIMENTAL).
typeof(OrdinaryDiffEqCore.trivial_limiter!)}}, OrdinaryDiffEqCore.AutoSwitch{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}, Rational{Int64}, Int64}}, continuity::Symbol}, Vector{Array{Float64, 3}}, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::TransformVariables.Constant{Nothing}, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, DifferentiationInterfaceForwardDiffExt.ForwardDiffGradientPrep{Tuple{Pumas._L_param{false, (:pop_CL, :pop_Vc, :pop_tabs, :pk_Ω, :σ_prop), 
TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, ADTypes.AutoForwardDiff{nothing, Nothing}, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{}}, ForwardDiff.GradientConfig{ForwardDiff.Tag{Pumas._L_param{false, (:pop_CL, :pop_Vc, :pop_tabs, :pk_Ω, :σ_prop), TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, 
pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, Float64}, Float64, 6, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Pumas._L_param{false, (:pop_CL, :pop_Vc, :pop_tabs, :pk_Ω, :σ_prop), TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, Float64}, Float64, 6}}}, Tuple{}}, Vector{DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{Tuple{typeof(Pumas.Experimental.G1_prior), ADTypes.AutoReverseDiff{false}, Vector{Float64}, Tuple{DifferentiationInterface.Constant{Pumas.Experimental._L_rfx2{true, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, 
σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::TransformVariables.Constant{Nothing}, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, PumasModel{(pop_CL = 1, pop_Vc = 1, pop_tabs = 1, pk_Ω = 2, σ_prop = 1), 2, (:Depot, :Central), (Symbol("##A##"), Symbol("##b##")), ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}, ParamSet{@NamedTuple{pk_η::Pumas.RandomEffect{false, var"#1#8"}}}, Pumas.PreObj{Pumas.TimeDispatcher{var"#2#9", var"#3#10"}}, Pumas.DCPObj{Returns{Returns{@NamedTuple{}}}}, var"#5#12", Pumas.LinearODE, Pumas.DerivedObj{(:conc,), false, var"#6#13"}, var"#7#14", ModelingToolkit.ODESystem, Pumas.PumasModelOptions}, Subject{@NamedTuple{conc::Vector{Union{Missing, Float64}}}, Pumas.ConstantCovar{@NamedTuple{sex::InlineStrings.String1, wtbl::Float64, FSZV::Float64, FSZCL::Float64}}, DosageRegimen{Float64, Union{Missing, Int64}, Float64, Float64, Float64, Float64}, Vector{Float64}}, @NamedTuple{ss_abstol::Float64, ss_reltol::Float64, ss_maxiters::Int64, abstol::Float64, reltol::Float64, alg::CompositeAlgorithm{0, Tuple{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), 
typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}}, OrdinaryDiffEqCore.AutoSwitch{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}, Rational{Int64}, Int64}}, continuity::Symbol}, Matrix{Float64}, @NamedTuple{pop_CL::Float64, pop_Vc::Float64, pop_tabs::Float64, pk_Ω::PDMats.PDiagMat{Float64, Vector{Float64}}, σ_prop::Float64}}}}}, ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}, Vector{DifferentiationInterfaceForwardDiffExt.ForwardDiffGradientPrep{Tuple{typeof(Pumas.Experimental.G1_likelihood), ADTypes.AutoForwardDiff{nothing, Nothing}, Vector{Float64}, Tuple{DifferentiationInterface.Constant{Pumas.Experimental._L_rfx2{true, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, 
TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::TransformVariables.Constant{Nothing}, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, PumasModel{(pop_CL = 1, pop_Vc = 1, pop_tabs = 1, pk_Ω = 2, σ_prop = 1), 2, (:Depot, :Central), (Symbol("##A##"), Symbol("##b##")), ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}, ParamSet{@NamedTuple{pk_η::Pumas.RandomEffect{false, var"#1#8"}}}, Pumas.PreObj{Pumas.TimeDispatcher{var"#2#9", var"#3#10"}}, Pumas.DCPObj{Returns{Returns{@NamedTuple{}}}}, var"#5#12", Pumas.LinearODE, Pumas.DerivedObj{(:conc,), false, var"#6#13"}, var"#7#14", ModelingToolkit.ODESystem, Pumas.PumasModelOptions}, Subject{@NamedTuple{conc::Vector{Union{Missing, Float64}}}, Pumas.ConstantCovar{@NamedTuple{sex::InlineStrings.String1, wtbl::Float64, FSZV::Float64, FSZCL::Float64}}, DosageRegimen{Float64, Union{Missing, Int64}, Float64, Float64, Float64, Float64}, Vector{Float64}}, @NamedTuple{ss_abstol::Float64, ss_reltol::Float64, ss_maxiters::Int64, abstol::Float64, reltol::Float64, alg::CompositeAlgorithm{0, Tuple{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, 
LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}}, OrdinaryDiffEqCore.AutoSwitch{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}, Rational{Int64}, Int64}}, continuity::Symbol}, Matrix{Float64}, @NamedTuple{pop_CL::Float64, pop_Vc::Float64, pop_tabs::Float64, pk_Ω::PDMats.PDiagMat{Float64, Vector{Float64}}, σ_prop::Float64}}}}}, ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(Pumas.Experimental.G1_likelihood), Float64}, Float64, 6, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(Pumas.Experimental.G1_likelihood), Float64}, Float64, 6}}}, Tuple{Nothing}}}, Vector{DifferentiationInterfaceReverseDiffExt.ReverseDiffTwoArgJacobianPrep{Tuple{typeof(Pumas.Experimental.get_randeffs!), SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, ADTypes.AutoReverseDiff{false}, Vector{Float64}, Tuple{DifferentiationInterface.Constant{Pumas.Experimental._L_rfx2{true, TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::Pumas.PDiagTransform, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, 
TransformVariables.TransformTuple{@NamedTuple{pop_CL::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_Vc::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pop_tabs::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}, pk_Ω::TransformVariables.Constant{Nothing}, σ_prop::TransformVariables.CompositeScalarTransform{Tuple{TransformVariables.TVShift{Float64}, TransformVariables.TVExp}}}}, PumasModel{(pop_CL = 1, pop_Vc = 1, pop_tabs = 1, pk_Ω = 2, σ_prop = 1), 2, (:Depot, :Central), (Symbol("##A##"), Symbol("##b##")), ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}, ParamSet{@NamedTuple{pk_η::Pumas.RandomEffect{false, var"#1#8"}}}, Pumas.PreObj{Pumas.TimeDispatcher{var"#2#9", var"#3#10"}}, Pumas.DCPObj{Returns{Returns{@NamedTuple{}}}}, var"#5#12", Pumas.LinearODE, Pumas.DerivedObj{(:conc,), false, var"#6#13"}, var"#7#14", ModelingToolkit.ODESystem, Pumas.PumasModelOptions}, Subject{@NamedTuple{conc::Vector{Union{Missing, Float64}}}, Pumas.ConstantCovar{@NamedTuple{sex::InlineStrings.String1, wtbl::Float64, FSZV::Float64, FSZCL::Float64}}, DosageRegimen{Float64, Union{Missing, Int64}, Float64, Float64, Float64, Float64}, Vector{Float64}}, @NamedTuple{ss_abstol::Float64, ss_reltol::Float64, ss_maxiters::Int64, abstol::Float64, reltol::Float64, alg::CompositeAlgorithm{0, Tuple{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, 
LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}}, OrdinaryDiffEqCore.AutoSwitch{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}, Rational{Int64}, Int64}}, continuity::Symbol}, Matrix{Float64}, @NamedTuple{pop_CL::Float64, pop_Vc::Float64, pop_tabs::Float64, pk_Ω::PDMats.PDiagMat{Float64, Vector{Float64}}, σ_prop::Float64}}}}}, ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{ReverseDiff.TrackedReal{Float64, Float64, Nothing}}}, Nothing}}, ADTypes.AutoForwardDiff{nothing, Nothing}, ADTypes.AutoReverseDiff{false}, ADTypes.AutoForwardDiff{nothing, Nothing}, Vector{Matrix{Float64}}}, Optim.MultivariateOptimizationResults{Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Returns{Nothing}}, Vector{Float64}, Float64, Float64, Vector{Optim.OptimizationState{Float64, Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Returns{Nothing}}}}, @NamedTuple{x_converged::Bool, f_converged::Bool, g_converged::Bool, f_limit_reached::Bool, g_limit_reached::Bool, h_limit_reached::Bool, time_limit::Bool, callback::Bool, f_increased::Bool, ls_failed::Bool, iterations::Bool, small_trustregion_radius::Bool}}, Vector{Float64}}, VEM{Optim.LBFGS{Nothing, LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Returns{Nothing}}, @NamedTuple{show_every::Int64}, @NamedTuple{}, Random.TaskLocalRNG, 
EnsembleThreads, ADTypes.AutoForwardDiff{nothing, Nothing}}, Vector{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, @NamedTuple{diffeq_options::@NamedTuple{ss_abstol::Float64, ss_reltol::Float64, ss_maxiters::Int64, abstol::Float64, reltol::Float64, alg::CompositeAlgorithm{0, Tuple{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}}, OrdinaryDiffEqCore.AutoSwitch{Vern7{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Rodas5P{1, ADTypes.AutoForwardDiff{1, Nothing}, LinearSolve.GenericLUFactorization{LinearAlgebra.RowMaximum}, typeof(OrdinaryDiffEqCore.DEFAULT_PRECS), Val{:forward}(), true, nothing, typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!)}, Rational{Int64}, Int64}}, continuity::Symbol}, ensemblealg::EnsembleThreads, constantcoef::Tuple{}}, ParamSet{@NamedTuple{pop_CL::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_Vc::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pop_tabs::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}, pk_Ω::PDiagDomain{PDMats.PDiagMat{Float64, Vector{Float64}}}, σ_prop::RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}, Pumas.OptimState{NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Float64, Vector{Float64}}, Optim.LBFGSState{Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Float64, Vector{Float64}}}}) @ Pumas.Experimental ~/run/_work/PumasTutorials.jl/PumasTutorials.jl/custom_julia_depot/packages/Pumas/lK187/src/estimation/vem.jl:2587 [3] top-level scope @ 
~/run/_work/PumasTutorials.jl/PumasTutorials.jl/tutorials/estimation/01-vem-introduction.qmd:346
The recommended workaround is to use the VEM estimates as a starting point for a FOCE fit, then run inference on the FOCE result. This typically converges very quickly since VEM provides excellent initial values:
fpm_foce_from_vem = fit(
warfarin_model,
pop,
coef(fpm_vem),
FOCE();
init_randeffs = empirical_bayes(fpm_vem),
optim_options = (; show_every = 20),
)
[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.
Iter     Function value   Gradient norm
     0     4.602133e+02     1.168354e+01
 * time: 1.9788742065429688e-5
    20     4.600183e+02     3.890707e-03
 * time: 1.9674479961395264
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Likelihood approximation: FOCE
Likelihood optimizer: BFGS
Termination Reason: NoObjectiveChange
Log-likelihood value: -460.01832
--------------------
Estimate
--------------------
pop_CL 0.13782
pop_Vc 8.4196
pop_tabs 1.0985
pk_Ω₁,₁ 0.056592
pk_Ω₂,₂ 0.013577
σ_prop 0.23567
--------------------
infer(fpm_foce_from_vem)
[ Info: Calculating: variance-covariance matrix.
[ Info: Done.
Asymptotic inference results using sandwich estimator
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Likelihood approximation: FOCE
Likelihood optimizer: BFGS
Termination Reason: NoObjectiveChange
Log-likelihood value: -460.01832
---------------------------------------------------------
Estimate SE 95.0% C.I.
---------------------------------------------------------
pop_CL 0.13782 0.0073552 [ 0.1234 ; 0.15223 ]
pop_Vc 8.4196 0.25694 [ 7.916 ; 8.9232 ]
pop_tabs 1.0985 0.24395 [ 0.62038 ; 1.5766 ]
pk_Ω₁,₁ 0.056592 0.022095 [ 0.013286 ; 0.099898]
pk_Ω₂,₂ 0.013577 0.0057871 [ 0.0022345; 0.024919]
σ_prop 0.23567 0.032137 [ 0.17268 ; 0.29866 ]
---------------------------------------------------------
Initializing an FOCE fit with the VEM estimates and running it for zero or more iterations is also a way to obtain a fitted Pumas model with standard EBEs (the mode of the true posterior, rather than the mode of the VEM posterior approximation) that can be used with downstream functions such as predict, inspect, and loglikelihood.
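The zero-iteration variant can be sketched as follows. Note that passing `iterations = 0` through `optim_options` (an Optim.jl option) is an assumption here, not something shown in this tutorial:

```julia
# Sketch (assumes Optim.jl's `iterations` option is accepted via `optim_options`):
# re-run FOCE for zero outer iterations, only to compute standard EBEs at the
# VEM population estimates.
fpm_ebe = fit(
    warfarin_model,
    pop,
    coef(fpm_vem),                            # population parameters from the VEM fit
    FOCE();
    init_randeffs = empirical_bayes(fpm_vem), # warm-start the inner optimization
    optim_options = (; iterations = 0),       # keep the population parameters fixed
)
# fpm_ebe can now be passed to predict, inspect, loglikelihood, etc.
```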
Using VEM to initialize FOCE is a powerful workflow. VEM provides good population parameter estimates and per-subject random effect posteriors, which together allow FOCE to converge quickly and reliably.
3 Comparing VEM and FOCE
3.1 Direct FOCE Fit
Let us also fit the model directly with FOCE from default initial values for comparison:
fpm_foce_direct = fit(
warfarin_model,
pop,
init_params(warfarin_model),
FOCE();
optim_options = (; show_every = 20),
)
[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.
Iter     Function value   Gradient norm
     0     9.744619e+04     1.950953e+05
 * time: 1.5020370483398438e-5
    20     4.770711e+02     1.365789e+01
 * time: 0.8785018920898438
    40     4.677644e+02     9.868356e+00
 * time: 1.5284929275512695
    60     4.600183e+02     3.347663e-03
 * time: 2.2408230304718018
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Likelihood approximation: FOCE
Likelihood optimizer: BFGS
Termination Reason: NoObjectiveChange
Log-likelihood value: -460.01832
--------------------
Estimate
--------------------
pop_CL 0.13782
pop_Vc 8.4196
pop_tabs 1.0986
pk_Ω₁,₁ 0.056591
pk_Ω₂,₂ 0.013582
σ_prop 0.23567
--------------------
3.2 Compare Estimates
compare_estimates(; VEM = fpm_vem, FOCE = fpm_foce_direct)
| Row | parameter | VEM | FOCE |
|---|---|---|---|
| | String | Float64? | Float64? |
| 1 | pop_CL | 0.134757 | 0.137816 |
| 2 | pop_Vc | 8.45632 | 8.41964 |
| 3 | pop_tabs | 1.06611 | 1.09855 |
| 4 | pk_Ω₁,₁ | 0.0558908 | 0.0565912 |
| 5 | pk_Ω₂,₂ | 0.0116834 | 0.0135816 |
| 6 | σ_prop | 0.236456 | 0.235668 |
3.3 Compare Log-Likelihoods
DataFrame(
Method = ["VEM (via FOCE approx)", "VEM (via LaplaceI approx)", "FOCE direct"],
LogLikelihood = [
loglikelihood(fpm_vem, FOCE()),
loglikelihood(fpm_vem, LaplaceI()),
loglikelihood(fpm_foce_direct),
],
) |> simple_table
| Method | LogLikelihood |
|---|---|
| VEM (via FOCE approx) | -460.0 |
| VEM (via LaplaceI approx) | -461.0 |
| FOCE direct | -460.0 |
4 VEM Options Reference
This section is a comprehensive reference for VEM's configuration. Unless otherwise noted, all options below are keyword arguments to VEM().
4.1 Presampling vs. Resampling (presample)
The presample keyword controls which VEM variant is used. Many other defaults change depending on this choice:
| Option | Keyword | Presampling (presample=true, default) | Resampling (presample=false) |
|---|---|---|---|
| Optimizer | optim_alg | L-BFGS (deterministic) | Adam (stochastic) |
| Phases | nphases | 1 | 10 |
| Initial samples | init_nsamples | 15 | 3 |
| Sample increment | nsamples_increment | half the initial samples (7 by default) | the initial samples (3 by default) |
| Max samples | max_nsamples | Unlimited | 25 |
Presampling draws samples once per phase and reuses them across iterations, which allows a deterministic optimizer (L-BFGS) to be used. Resampling draws fresh samples at every iteration, which requires a stochastic optimizer (Adam) but can be more appropriate for DeepPumas models with neural networks. However, the stochastic optimizer generally needs many more iterations to converge than the deterministic one, so presampling is a good default for most models.
Note that all of the above options can be overridden regardless of the presample setting.
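As a sketch of such an override, the keyword names below are those listed in the table above, while the specific values are illustrative choices rather than recommendations:

```julia
# Sketch: presampling VEM with some sampling defaults overridden.
# Keyword names (nphases, init_nsamples, max_nsamples) come from the table
# above; the values here are purely illustrative.
alg = VEM(;
    presample = true,   # keep the deterministic (L-BFGS) variant
    nphases = 3,        # run more than the presampling default of 1 phase
    init_nsamples = 30, # start with more samples than the default of 15
    max_nsamples = 100, # cap sample growth instead of leaving it unlimited
)
fpm_custom = fit(warfarin_model, pop, init_params(warfarin_model), alg)
```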
# Resampling VEM
fpm_resample = fit(
warfarin_model,
pop,
init_params(warfarin_model),
VEM(; presample = false, optim_options = (; show_every = 20)),
)
[ Info: phase  iter       obj      ∇obj     s∇obj
[ Info: 1 / 10   1 / 20000   2.3e+05   8.3e+05   1.0e+00
[ Info: 1 / 10  20 / 20000   7.8e+03   2.4e+04   2.8e-02
[ Info: 1 / 10  40 / 20000   2.7e+03   5.3e+03   6.4e-03
[ Info: 1 / 10  50 / 20000   2.2e+03   4.2e+03   5.0e-03
[ Info: Terminating phase due to sgtol.
[ Info: 2 / 10   1 / 20000   2.2e+03   4.2e+03   1.0e+00
[ Info: 2 / 10  20 / 20000   6.2e+02   8.1e+02   1.9e-01
[ Info: 2 / 10  40 / 20000   4.1e+02   2.3e+02   5.4e-02
[ Info: 2 / 10  50 / 20000   3.8e+02   1.4e+02   3.4e-02
[ Info: Terminating phase due to sgtol.
[ Info: 3 / 10   1 / 20000   3.9e+02   1.5e+02   1.0e+00
⋮ (phase 3 runs for 2423 iterations with the objective plateauing around 3.5e+02, terminating due to ftol; the remaining per-iteration log is truncated)
[ Info: 4 / 10   1 / 20000   3.4e+02   3.0e+01   1.0e+00
3.5e+02 2.7e+01 9.0e-01 [ Info: 4 / 10 440 / 20000 3.5e+02 3.1e+01 1.0e+00 [ Info: 4 / 10 460 / 20000 3.5e+02 7.0e+01 2.4e+00 [ Info: 4 / 10 480 / 20000 3.5e+02 3.3e+01 1.1e+00 [ Info: 4 / 10 500 / 20000 3.5e+02 5.4e+01 1.8e+00 [ Info: 4 / 10 520 / 20000 3.5e+02 1.7e+01 5.9e-01 [ Info: 4 / 10 540 / 20000 3.5e+02 1.5e+01 5.2e-01 [ Info: 4 / 10 560 / 20000 3.5e+02 7.7e+01 2.6e+00 [ Info: 4 / 10 580 / 20000 3.5e+02 8.3e+01 2.8e+00 [ Info: 4 / 10 600 / 20000 3.5e+02 1.3e+01 4.4e-01 [ Info: 4 / 10 620 / 20000 3.5e+02 2.3e+01 7.7e-01 [ Info: 4 / 10 640 / 20000 3.5e+02 7.2e+01 2.4e+00 [ Info: 4 / 10 660 / 20000 3.5e+02 3.9e+01 1.3e+00 [ Info: 4 / 10 680 / 20000 3.4e+02 7.0e+01 2.3e+00 [ Info: 4 / 10 700 / 20000 3.5e+02 1.5e+01 5.0e-01 [ Info: 4 / 10 720 / 20000 3.5e+02 4.7e+01 1.6e+00 [ Info: 4 / 10 740 / 20000 3.5e+02 3.3e+01 1.1e+00 [ Info: 4 / 10 760 / 20000 3.5e+02 3.6e+01 1.2e+00 [ Info: 4 / 10 780 / 20000 3.5e+02 1.4e+01 4.6e-01 [ Info: 4 / 10 800 / 20000 3.5e+02 4.8e+01 1.6e+00 [ Info: 4 / 10 820 / 20000 3.5e+02 3.0e+01 1.0e+00 [ Info: 4 / 10 840 / 20000 3.5e+02 1.7e+01 5.7e-01 [ Info: 4 / 10 860 / 20000 3.4e+02 3.4e+01 1.2e+00 [ Info: 4 / 10 880 / 20000 3.5e+02 1.2e+01 4.0e-01 [ Info: 4 / 10 900 / 20000 3.5e+02 1.3e+01 4.2e-01 [ Info: 4 / 10 920 / 20000 3.5e+02 2.3e+01 7.8e-01 [ Info: 4 / 10 940 / 20000 3.5e+02 3.2e+01 1.1e+00 [ Info: 4 / 10 960 / 20000 3.5e+02 6.4e+01 2.2e+00 [ Info: 4 / 10 980 / 20000 3.5e+02 1.7e+01 5.8e-01 [ Info: 4 / 10 1000 / 20000 3.4e+02 1.8e+01 6.1e-01 [ Info: 4 / 10 1020 / 20000 3.5e+02 6.6e+01 2.2e+00 [ Info: 4 / 10 1040 / 20000 3.5e+02 4.2e+01 1.4e+00 [ Info: 4 / 10 1060 / 20000 3.5e+02 1.0e+01 3.4e-01 [ Info: 4 / 10 1080 / 20000 3.5e+02 8.2e+00 2.8e-01 [ Info: 4 / 10 1100 / 20000 3.5e+02 3.5e+01 1.2e+00 [ Info: 4 / 10 1120 / 20000 3.5e+02 3.1e+01 1.0e+00 [ Info: 4 / 10 1140 / 20000 3.4e+02 8.9e+01 3.0e+00 [ Info: 4 / 10 1160 / 20000 3.5e+02 1.5e+01 5.0e-01 [ Info: 4 / 10 1180 / 20000 3.4e+02 2.0e+01 6.6e-01 [ Info: 4 / 10 1200 / 20000 
3.5e+02 6.4e+01 2.2e+00 [ Info: 4 / 10 1220 / 20000 3.5e+02 1.9e+01 6.3e-01 [ Info: 4 / 10 1240 / 20000 3.4e+02 1.7e+01 5.7e-01 [ Info: 4 / 10 1260 / 20000 3.5e+02 3.1e+01 1.0e+00 [ Info: 4 / 10 1280 / 20000 3.5e+02 3.1e+01 1.0e+00 [ Info: 4 / 10 1300 / 20000 3.5e+02 2.2e+01 7.3e-01 [ Info: 4 / 10 1320 / 20000 3.5e+02 5.0e+01 1.7e+00 [ Info: 4 / 10 1340 / 20000 3.5e+02 2.3e+01 7.9e-01 [ Info: 4 / 10 1360 / 20000 3.5e+02 3.5e+01 1.2e+00 [ Info: 4 / 10 1380 / 20000 3.4e+02 4.0e+01 1.3e+00 [ Info: 4 / 10 1400 / 20000 3.5e+02 6.4e+01 2.2e+00 [ Info: 4 / 10 1420 / 20000 3.5e+02 2.1e+01 7.2e-01 [ Info: 4 / 10 1440 / 20000 3.5e+02 2.3e+01 7.8e-01 [ Info: 4 / 10 1460 / 20000 3.5e+02 3.0e+01 1.0e+00 [ Info: 4 / 10 1480 / 20000 3.5e+02 2.3e+01 7.7e-01 [ Info: 4 / 10 1500 / 20000 3.5e+02 5.6e+01 1.9e+00 [ Info: 4 / 10 1520 / 20000 3.5e+02 2.3e+01 7.7e-01 [ Info: 4 / 10 1520 / 20000 3.5e+02 1.4e+01 4.8e-01 [ Info: Terminating phase due to ftol. [ Info: 5 / 10 1 / 20000 3.5e+02 1.9e+01 1.0e+00 [ Info: 5 / 10 20 / 20000 3.5e+02 2.2e+01 1.1e+00 [ Info: 5 / 10 40 / 20000 3.5e+02 1.1e+01 5.5e-01 [ Info: 5 / 10 58 / 20000 3.5e+02 4.8e+01 2.5e+00 [ Info: Terminating phase due to ftol. [ Info: 6 / 10 1 / 20000 3.5e+02 2.8e+01 1.0e+00 [ Info: 6 / 10 20 / 20000 3.5e+02 3.1e+01 1.1e+00 [ Info: 6 / 10 40 / 20000 3.5e+02 1.6e+01 5.6e-01 [ Info: 6 / 10 60 / 20000 3.5e+02 1.9e+01 6.6e-01 [ Info: 6 / 10 80 / 20000 3.5e+02 1.1e+01 3.7e-01 [ Info: 6 / 10 100 / 20000 3.5e+02 9.6e+00 3.4e-01 [ Info: 6 / 10 120 / 20000 3.5e+02 2.4e+01 8.5e-01 [ Info: 6 / 10 140 / 20000 3.5e+02 2.5e+01 8.7e-01 [ Info: 6 / 10 160 / 20000 3.5e+02 1.2e+01 4.3e-01 [ Info: 6 / 10 180 / 20000 3.5e+02 2.6e+01 9.2e-01 [ Info: 6 / 10 200 / 20000 3.5e+02 3.1e+01 1.1e+00 [ Info: 6 / 10 206 / 20000 3.5e+02 6.8e+01 2.4e+00 [ Info: Terminating phase due to ftol. 
[ Info: 7 / 10 1 / 20000 3.5e+02 1.7e+01 1.0e+00 [ Info: 7 / 10 20 / 20000 3.5e+02 3.6e+01 2.1e+00 [ Info: 7 / 10 40 / 20000 3.4e+02 5.4e+01 3.1e+00 [ Info: 7 / 10 59 / 20000 3.5e+02 1.2e+01 7.1e-01 [ Info: Terminating phase due to ftol. [ Info: 8 / 10 1 / 20000 3.5e+02 1.7e+01 1.0e+00 [ Info: 8 / 10 20 / 20000 3.5e+02 1.7e+01 9.9e-01 [ Info: 8 / 10 40 / 20000 3.5e+02 1.1e+01 6.5e-01 [ Info: 8 / 10 60 / 20000 3.5e+02 7.1e+00 4.1e-01 [ Info: 8 / 10 80 / 20000 3.4e+02 1.5e+01 8.7e-01 [ Info: 8 / 10 100 / 20000 3.5e+02 2.0e+01 1.1e+00 [ Info: 8 / 10 120 / 20000 3.5e+02 2.4e+01 1.4e+00 [ Info: 8 / 10 140 / 20000 3.5e+02 1.1e+01 6.4e-01 [ Info: 8 / 10 160 / 20000 3.5e+02 3.1e+01 1.8e+00 [ Info: 8 / 10 180 / 20000 3.5e+02 9.1e+00 5.2e-01 [ Info: 8 / 10 200 / 20000 3.5e+02 1.8e+01 1.0e+00 [ Info: 8 / 10 220 / 20000 3.5e+02 1.8e+01 1.0e+00 [ Info: 8 / 10 240 / 20000 3.5e+02 6.8e+00 3.9e-01 [ Info: 8 / 10 260 / 20000 3.4e+02 1.5e+01 8.5e-01 [ Info: 8 / 10 280 / 20000 3.5e+02 1.2e+01 7.0e-01 [ Info: 8 / 10 300 / 20000 3.5e+02 1.5e+01 8.9e-01 [ Info: 8 / 10 320 / 20000 3.5e+02 2.0e+01 1.2e+00 [ Info: 8 / 10 340 / 20000 3.5e+02 2.1e+01 1.2e+00 [ Info: 8 / 10 360 / 20000 3.5e+02 1.4e+01 8.3e-01 [ Info: 8 / 10 380 / 20000 3.5e+02 1.7e+01 1.0e+00 [ Info: 8 / 10 400 / 20000 3.5e+02 1.3e+01 7.6e-01 [ Info: 8 / 10 420 / 20000 3.5e+02 1.3e+01 7.3e-01 [ Info: 8 / 10 440 / 20000 3.5e+02 2.4e+01 1.4e+00 [ Info: 8 / 10 460 / 20000 3.5e+02 3.7e+01 2.1e+00 [ Info: 8 / 10 480 / 20000 3.5e+02 7.1e+00 4.1e-01 [ Info: 8 / 10 500 / 20000 3.5e+02 9.6e+00 5.5e-01 [ Info: 8 / 10 520 / 20000 3.5e+02 1.5e+01 8.6e-01 [ Info: 8 / 10 540 / 20000 3.5e+02 2.5e+01 1.5e+00 [ Info: 8 / 10 560 / 20000 3.5e+02 1.1e+01 6.4e-01 [ Info: 8 / 10 580 / 20000 3.5e+02 1.5e+01 8.9e-01 [ Info: 8 / 10 600 / 20000 3.5e+02 3.4e+01 2.0e+00 [ Info: 8 / 10 620 / 20000 3.5e+02 1.9e+01 1.1e+00 [ Info: 8 / 10 640 / 20000 3.5e+02 2.6e+01 1.5e+00 [ Info: 8 / 10 660 / 20000 3.5e+02 1.5e+01 8.5e-01 [ Info: 8 / 10 680 / 20000 
3.5e+02 1.2e+01 7.0e-01 [ Info: 8 / 10 700 / 20000 3.5e+02 2.4e+01 1.4e+00 [ Info: 8 / 10 720 / 20000 3.5e+02 1.5e+01 8.4e-01 [ Info: 8 / 10 740 / 20000 3.5e+02 2.4e+01 1.4e+00 [ Info: 8 / 10 760 / 20000 3.5e+02 1.9e+01 1.1e+00 [ Info: 8 / 10 780 / 20000 3.5e+02 1.9e+01 1.1e+00 [ Info: 8 / 10 800 / 20000 3.5e+02 1.3e+01 7.7e-01 [ Info: 8 / 10 820 / 20000 3.5e+02 1.3e+01 7.3e-01 [ Info: 8 / 10 840 / 20000 3.5e+02 9.8e+00 5.7e-01 [ Info: 8 / 10 860 / 20000 3.5e+02 1.8e+01 1.0e+00 [ Info: 8 / 10 870 / 20000 3.5e+02 1.6e+01 9.2e-01 [ Info: Terminating phase due to ftol. [ Info: 9 / 10 1 / 20000 3.5e+02 3.3e+01 1.0e+00 [ Info: 9 / 10 20 / 20000 3.5e+02 1.8e+01 5.4e-01 [ Info: 9 / 10 40 / 20000 3.5e+02 7.4e+00 2.3e-01 [ Info: 9 / 10 60 / 20000 3.5e+02 2.9e+01 8.9e-01 [ Info: 9 / 10 66 / 20000 3.5e+02 1.7e+01 5.1e-01 [ Info: Terminating phase due to ftol. [ Info: 10 / 10 1 / 20000 3.4e+02 4.9e+01 1.0e+00 [ Info: 10 / 10 20 / 20000 3.5e+02 1.0e+01 2.1e-01 [ Info: 10 / 10 40 / 20000 3.5e+02 1.5e+01 3.1e-01 [ Info: 10 / 10 57 / 20000 3.5e+02 8.2e+00 1.6e-01 [ Info: Terminating phase due to ftol.
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Estimation method: VEM (EXPERIMENTAL)
--------------------
Estimate
--------------------
pop_CL 0.13482
pop_Vc 8.4599
pop_tabs 1.0559
pk_Ω₁,₁ 0.053984
pk_Ω₂,₂ 0.010195
σ_prop 0.23707
--------------------
When presampling is enabled, the trace is deterministic and should show steady convergence. The trace printed during fitting has three columns:
| Column | Meaning |
|---|---|
| Iter | Current iteration within the phase. |
| Function value | The negative ELBO — the objective being minimized. Lower values indicate a better fit. |
| Gradient norm | The gradient norm — how steep the objective landscape is. Decreasing values indicate the optimizer is converging. |
When resampling is enabled, the trace is stochastic and may show more fluctuation. In the stochastic case, the trace printed during fitting has five columns:
| Column | Meaning |
|---|---|
| phase | Current phase / total phases. Each phase increases the sample count. |
| iter | Current iteration / max iterations within this phase. |
| obj | The negative ELBO — the objective being minimized. Lower values indicate a better fit. |
| ∇obj | The gradient norm — how steep the objective landscape is. Decreasing values indicate the optimizer is converging. |
| s∇obj | The scaled gradient norm — gradient norm relative to the first iteration of the current phase (normalized to 1.0). Shows relative convergence progress within each phase. |
A phase may terminate early with “Terminating phase due to sgtol” when the scaled gradient norm drops below the stopping tolerance (0.05), triggering the next phase, typically with an increase in the sample size and a decrease in the learning rate. For stochastic optimizers like Adam (resampling VEM), some fluctuation in ∇obj is normal because the gradients are estimated from random samples.
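The sgtol rule can be illustrated with the phase-1 numbers from the trace above. This is a sketch of how the s∇obj column appears to be computed, inferred from the trace output, not a Pumas API:

```julia
# Scaled-gradient stopping rule, using phase-1 values from the trace above.
sgtol = 0.05          # stopping tolerance on the scaled gradient norm
g_first = 8.3e5       # ∇obj at the first iteration of the phase
g_now = 4.2e3         # ∇obj at the current iteration
s = g_now / g_first   # s∇obj, the scaled gradient norm (≈ 5.1e-3)
s < sgtol             # true -> "Terminating phase due to sgtol."
```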
compare_estimates(;
VEM_presample = fpm_vem,
VEM_resample = fpm_resample,
FOCE = fpm_foce_direct,
)
| Row | parameter | VEM_presample | VEM_resample | FOCE |
|---|---|---|---|---|
| | String | Float64? | Float64? | Float64? |
| 1 | pop_CL | 0.134757 | 0.134818 | 0.137816 |
| 2 | pop_Vc | 8.45632 | 8.45993 | 8.41964 |
| 3 | pop_tabs | 1.06611 | 1.05595 | 1.09855 |
| 4 | pk_Ω₁,₁ | 0.0558908 | 0.0539843 | 0.0565912 |
| 5 | pk_Ω₂,₂ | 0.0116834 | 0.0101946 | 0.0135816 |
| 6 | σ_prop | 0.236456 | 0.237072 | 0.235668 |
4.2 Sample Size Control
VEM draws samples from a base distribution to estimate the ELBO gradient. More samples improve accuracy but increase computation time.
When nsamples is set to a nonzero value, it overrides init_nsamples and sets nsamples_increment to 0, so the sample count stays fixed across all phases:
# Fixed 30 samples per subject across all phases
fpm_nsamples = fit(
warfarin_model,
pop,
init_params(warfarin_model),
VEM(; nsamples = 30, optim_options = (; show_every = 20)),
)
[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.
Iter     Function value   Gradient norm
   0     2.276373e+05     6.739686e+05   * time: 2.5033950805664062e-5
  20     5.371608e+02     1.175628e+02   * time: 1.6425399780273438
  40     4.422550e+02     1.173416e+02   * time: 3.2418899536132812
  60     3.962220e+02     9.008979e+00   * time: 4.8291871547698975
  80     3.681120e+02     4.579177e+00   * time: 6.266824007034302
 100     3.442238e+02     4.467160e+00   * time: 7.735270023345947
 120     3.435837e+02     4.128496e-01   * time: 9.26999807357788
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Estimation method: VEM (EXPERIMENTAL)
--------------------
Estimate
--------------------
pop_CL 0.13504
pop_Vc 8.4591
pop_tabs 1.0588
pk_Ω₁,₁ 0.053868
pk_Ω₂,₂ 0.010429
σ_prop 0.23725
--------------------
For multi-phase runs, the number of samples in phase p is: min(init_nsamples + (p - 1) * nsamples_increment, max_nsamples). More samples in later phases refine the estimate as the parameters approach their final values.
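As a worked example, the per-phase sample counts implied by this formula can be computed directly in plain Julia (phase_nsamples is an illustrative helper, not a Pumas function):

```julia
# Per-phase sample count: min(init_nsamples + (p - 1) * nsamples_increment, max_nsamples)
phase_nsamples(p; init_nsamples, nsamples_increment, max_nsamples) =
    min(init_nsamples + (p - 1) * nsamples_increment, max_nsamples)

# With init_nsamples = 5, nsamples_increment = 5, max_nsamples = 50 over 15 phases:
[phase_nsamples(p; init_nsamples = 5, nsamples_increment = 5, max_nsamples = 50) for p in 1:15]
# -> [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 50, 50, 50, 50, 50]
```

The count grows linearly until it hits the max_nsamples cap in phase 10 and stays there for the remaining phases.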
# Resampling with more phases and custom sample progression
fit(
model,
pop,
params,
VEM(;
presample = false,
nphases = 15,
init_nsamples = 5,
nsamples_increment = 5,
max_nsamples = 50,
),
)
4.3 Optimizer Options (optim_options)
Fine-grained control over the optimizer is passed through the optim_options named tuple. The available options depend on which variant is used.
For presampling (L-BFGS):
| Option | Default | Description |
|---|---|---|
| iterations | 3000 | Maximum iterations per phase |
| show_trace | true | Print convergence information |
| show_every | 5 | Print every N iterations |
| f_reltol | 1e-5 | Relative tolerance on the objective |
| g_tol | 1e-3 | Gradient norm tolerance |
fit(model, pop, params, VEM(; optim_options = (; iterations = 5000, show_every = 10)))
For resampling (Adam):
| Option | Default | Description |
|---|---|---|
| iterations | 20000 | Maximum iterations per phase |
| show_trace | true | Print convergence information |
| show_every | 10 | Print every N iterations |
| subject_ratio | 1.0 | Fraction of subjects per iteration (mini-batching) |
Setting subject_ratio < 1.0 enables mini-batching, where only a random subset of subjects is used per iteration. This can speed up each iteration for large datasets at the cost of noisier gradients:
fit(
model,
pop,
params,
VEM(; presample = false, optim_options = (; iterations = 30000, subject_ratio = 0.5)),
)
4.4 Prior on Population Parameters (prior)
By default (prior=true), VEM adds the log prior on the population parameters, if defined in the @param block, to the ELBO. This enables maximum a posteriori (MAP) estimation of the population parameters after marginalizing out the random effects. Setting prior=false ignores the prior contribution, so the ELBO depends only on the likelihood:
fit(model, pop, params, VEM(; prior = false))
If the @param block has no prior distributions and only includes domains, this option has no effect.
4.5 Base Distribution Standard Deviation (base_dist_std)
VEM’s variational family uses a reparameterization trick: samples are drawn from a base distribution and transformed to approximate the posterior. The base_dist_std parameter controls the standard deviation of this base distribution (default: 0.1):
fit(model, pop, params, VEM(; base_dist_std = 0.05))
Smaller values start the optimization with a tighter initial approximation to the posterior, which can help stability for sensitive models. Larger values allow broader initial exploration. The default of 0.1 works well for most models.
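Conceptually, the reparameterization trick can be sketched in plain Julia for a single subject with two random effects. This is an illustrative sketch with made-up variational parameters mu and L, not the Pumas internals:

```julia
using Random, LinearAlgebra

base_dist_std = 0.1                   # std of the fixed base distribution
rng = Random.MersenneTwister(123)

mu = [0.0, 0.0]                       # variational mean for one subject
L = Diagonal([0.2, 0.1])              # variational scale (diagonal case)

eps = base_dist_std .* randn(rng, 2)  # draw from the base distribution
eta = mu .+ L * eps                   # transformed draw ≈ posterior sample
```

Because the randomness lives only in eps, gradients of the ELBO flow through mu and L, which is what makes the variational parameters optimizable by gradient methods.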
4.6 Posterior Covariance Structure (diagonal)
VEM approximates each subject’s random effect posterior with a multivariate normal distribution. The diagonal keyword controls the covariance structure:
- diagonal=true (default): assumes independent random effects in the posterior. Faster and requires fewer variational parameters.
- diagonal=false: captures correlations between random effects in the posterior. Slower but may be more accurate when random effects are correlated.
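The "fewer variational parameters" point can be made concrete with a back-of-the-envelope count, assuming each subject's posterior is parameterized by a mean vector plus a diagonal or lower-triangular scale. The parameterization is an assumption for illustration, not documented Pumas internals:

```julia
# Variational parameters per subject for k random effects:
k = 2                          # e.g. two etas, as in the warfarin model
n_diagonal = k + k             # mean + diagonal scale          -> 4
n_full = k + k * (k + 1) ÷ 2   # mean + lower-triangular scale  -> 5
```

The gap is small for k = 2 but grows quadratically with the number of random effects.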
fpm_full_cov = fit(
warfarin_model,
pop,
init_params(warfarin_model),
VEM(; diagonal = false, optim_options = (; show_every = 20)),
)
[ Info: Checking the initial parameter values.
[ Info: The initial negative log likelihood and its gradient are finite. Check passed.
Iter     Function value   Gradient norm
   0     2.271403e+05     6.695666e+05   * time: 1.6927719116210938e-5
  20     5.119652e+02     1.224773e+02   * time: 0.7859439849853516
  40     3.906853e+02     3.567349e+01   * time: 1.6442999839782715
  60     3.460485e+02     2.566981e+00   * time: 2.4627139568328857
  80     3.411049e+02     6.556773e-01   * time: 3.1744160652160645
 100     3.394609e+02     2.085613e+00   * time: 3.948915958404541
FittedPumasModel
Dynamical system type: Matrix exponential
Number of subjects: 32
Observation records: Active Missing
conc: 251 47
Total: 251 47
Number of parameters: Constant Optimized
0 6
Estimation method: VEM (EXPERIMENTAL)
--------------------
Estimate
--------------------
pop_CL 0.13508
pop_Vc 8.4155
pop_tabs 1.0811
pk_Ω₁,₁ 0.055936
pk_Ω₂,₂ 0.012995
σ_prop 0.23541
--------------------
compare_estimates(;
VEM_diagonal = fpm_vem,
VEM_full_cov = fpm_full_cov,
FOCE = fpm_foce_direct,
)
| Row | parameter | VEM_diagonal | VEM_full_cov | FOCE |
|---|---|---|---|---|
| | String | Float64? | Float64? | Float64? |
| 1 | pop_CL | 0.134757 | 0.13508 | 0.137816 |
| 2 | pop_Vc | 8.45632 | 8.41546 | 8.41964 |
| 3 | pop_tabs | 1.06611 | 1.08115 | 1.09855 |
| 4 | pk_Ω₁,₁ | 0.0558908 | 0.0559361 | 0.0565912 |
| 5 | pk_Ω₂,₂ | 0.0116834 | 0.0129952 | 0.0135816 |
| 6 | σ_prop | 0.236456 | 0.23541 | 0.235668 |
4.7 Holding Parameters Constant (constantcoef)
You can hold specific parameters constant during estimation using the constantcoef keyword:
fit(model, pop, params, VEM(); constantcoef = (:σ_prop,))
Note that constantcoef is a keyword argument to fit, not to VEM(). This is consistent with how it works for FOCE and other algorithms.
4.8 Reproducibility (rng)
VEM uses random sampling internally, so results vary across runs by default. Set a specific random number generator for reproducible results:
rng = default_rng()
seed!(rng, 123)
fit(model, pop, params, VEM(; rng))
Even with presampling, different seeds produce different results because the pre-drawn samples differ. For exact reproducibility across runs, always set the rng.
4.9 Threading (ensemblealg)
Control parallelism across subjects:
- EnsembleThreads() (default): uses multiple threads for gradient computation.
- EnsembleSerial(): single-threaded, useful for debugging.
fit(model, pop, params, VEM(; ensemblealg = EnsembleSerial()))
4.10 ODE Solver Options (diffeq_options)
Pass differential equation solver options through diffeq_options:
fit(model, pop, params, VEM(; diffeq_options = (; abstol = 1e-8, reltol = 1e-8)))
This is useful when the default ODE tolerances are insufficient for a stiff or sensitive model.
5 When to Use VEM vs FOCE
Choose VEM when:
- You have a discrete response model (Poisson, Bernoulli, TTE) with random effects and want to use the standard @model macro.
- You have truncated or censored data and LaplaceI is too slow.
- You want a fast initial fit to use as a starting point for FOCE, especially for complex models and large populations, where mini-batching and stochastic optimization can help.
- FOCE struggles to converge or is too slow per iteration and you need an alternative optimization approach.
Choose FOCE when:
- You need a well-established (non-experimental) fitting method with mature diagnostics and inference tools.
6 Conclusion
In this tutorial, we introduced VEM as a flexible estimation method in Pumas. We covered its conceptual foundations (ELBO maximization, presampling vs. resampling, and posterior covariance structure), demonstrated its use with a warfarin PK model, showed how to compare results with FOCE, and provided a reference for the main VEM options.