```julia
using Pumas, GlobalSensitivity, CairoMakie, PumasPlots
```
In this tutorial, we will cover running global sensitivity analysis (GSA) on the HCV model.
```julia
peg_inf_model = @model begin
    @param begin
        logKa ∈ RealDomain()
        logKe ∈ RealDomain()
        logVd ∈ RealDomain()
        logn ∈ RealDomain()
        logd ∈ RealDomain()
        logc ∈ RealDomain()
        logEC50 ∈ RealDomain()
        ω²Ka ∈ RealDomain(lower = 0.0)
        ω²Ke ∈ RealDomain(lower = 0.0)
        ω²Vd ∈ RealDomain(lower = 0.0)
        ω²n ∈ RealDomain(lower = 0.0)
        ω²δ ∈ RealDomain(lower = 0.0)
        ω²c ∈ RealDomain(lower = 0.0)
        ω²EC50 ∈ RealDomain(lower = 0.0)
        σ²PK ∈ RealDomain(lower = 0.0)
        σ²PD ∈ RealDomain(lower = 0.0)
    end
    @random begin
        ηKa ~ Normal(0.0, sqrt(ω²Ka))
        ηKe ~ Normal(0.0, sqrt(ω²Ke))
        ηVd ~ Normal(0.0, sqrt(ω²Vd))
        ηn ~ Normal(0.0, sqrt(ω²n))
        ηδ ~ Normal(0.0, sqrt(ω²δ))
        ηc ~ Normal(0.0, sqrt(ω²c))
        ηEC50 ~ Normal(0.0, sqrt(ω²EC50))
    end
    @pre begin
        # constants
        p = 100.0
        d = 0.001
        e = 1e-7
        s = 20000.0

        log_Ka = logKa + ηKa
        log_Ke = logKe + ηKe
        log_Vd = logVd + ηVd
        log_n = logn + ηn
        log_d = logd + ηδ
        log_c = logc + ηc
        log_EC50 = logEC50 + ηEC50
    end
    @init begin
        T = exp(log_c + log_d) / (p * e)
        I = (s * e * p - d * exp(log_c + log_d)) / (p * exp(log_d) * e)
        W = (s * e * p - d * exp(log_c + log_d)) / (exp(log_c + log_d) * e)
    end
    @dynamics begin
        X' = -exp(log_Ka) * X
        A' = exp(log_Ka) * X - exp(log_Ke) * A
        T' = s - T * (e * W + d)
        I' = e * W * T - exp(log_d) * I
        W' = p / ((A / exp(log_Vd) / exp(log_EC50) + 1e-100)^exp(log_n) + 1) * I - exp(log_c) * W
    end
    @derived begin
        conc = @. A / exp(log_Vd)
        log10W = @. log10(max.(W, 1e-5))
        yPK ~ @. Normal(A / exp(log_Vd), sqrt(σ²PK))
        yPD ~ @. Normal(log10W, sqrt(σ²PD))
        nca1 := @nca conc
        cmax_pk = NCA.cmax(nca1)
        nca2 := @nca log10W
        cmax_pd = NCA.cmax(nca2)
    end
end
```
```
PumasModel
  Parameters: logKa, logKe, logVd, logn, logd, logc, logEC50, ω²Ka, ω²Ke, ω²Vd, ω²n, ω²δ, ω²c, ω²EC50, σ²PK, σ²PD
  Random effects: ηKa, ηKe, ηVd, ηn, ηδ, ηc, ηEC50
  Covariates:
  Dynamical variables: X, A, T, I, W
  Derived: conc, log10W, yPK, yPD, cmax_pk, cmax_pd
  Observed: conc, log10W, yPK, yPD, cmax_pk, cmax_pd
```
We generate a population to simulate with.
```julia
peg_inf_dr = DosageRegimen(180.0)
t = collect(0.0:1.0:28.0)
_pop = map(
    i -> Subject(id = i, observations = (yPK = [], yPD = []), events = peg_inf_dr, time = t),
    1:3,
)
```
```
Population
  Subjects: 3
  Observations: yPK, yPD
```
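A `Population` behaves like an ordinary vector of `Subject`s, so standard Julia indexing can be used to inspect what we just built (a quick check, not part of the original workflow):

```julia
length(_pop)  # 3 subjects
_pop[1]       # the first Subject: its id, events, and sampling times
```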
Let's define the parameters now. We'll fix the ω² parameters to zero to neutralize the effect of the random effects, and run the GSA only on the population parameters `logKa`, `logKe`, `logVd`, `logn`, `logd`, `logc`, and `logEC50`.
```julia
param_PKPD = (
    logKa = log(0.80),
    logKe = log(0.15),
    logVd = log(100.0),
    logn = log(2.0),
    logd = log(0.20),
    logc = log(7.0),
    logEC50 = log(0.12),
    # random effects variance parameters, set to 0 to neutralize effect of random effects
    ω²Ka = 0.0,
    ω²Ke = 0.0,
    ω²Vd = 0.0,
    ω²n = 0.0,
    ω²δ = 0.0,
    ω²c = 0.0,
    ω²EC50 = 0.0,
    # variance parameters of the additive error models
    σ²PK = 0.04,
    σ²PD = 0.04,
)
```
```
(logKa = -0.2231435513142097, logKe = -1.8971199848858813, logVd = 4.605170185988092, logn = 0.6931471805599453, logd = -1.6094379124341003, logc = 1.9459101490553132, logEC50 = -2.120263536200091, ω²Ka = 0.0, ω²Ke = 0.0, ω²Vd = 0.0, ω²n = 0.0, ω²δ = 0.0, ω²c = 0.0, ω²EC50 = 0.0, σ²PK = 0.04, σ²PD = 0.04)
```
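Since all structural parameters enter the model on the log scale, it can be worth sanity-checking the natural-scale values by exponentiating. This is a quick plain-Julia sketch (not part of the original tutorial); `map` over a NamedTuple returns a NamedTuple:

```julia
# Back-transform the log-scale population parameters to the natural scale.
logs = (Ka = param_PKPD.logKa, Ke = param_PKPD.logKe, Vd = param_PKPD.logVd,
        n = param_PKPD.logn, d = param_PKPD.logd, c = param_PKPD.logc,
        EC50 = param_PKPD.logEC50)
map(exp, logs)
# ≈ (Ka = 0.8, Ke = 0.15, Vd = 100.0, n = 2.0, d = 0.2, c = 7.0, EC50 = 0.12)
```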
We'll run both the Morris method and the Sobol method. The derived variables `cmax_pk` and `cmax_pd`, which are the Cmax outputs computed with NCA, will be analyzed.
Let's take a look at a simulation of the model to ensure everything is working as expected. We'll use threading across subjects for the simulation, and we'll utilize the same parallelism in the GSA later on as well.
```julia
simdata = simobs(peg_inf_model, _pop, param_PKPD, ensemblealg = EnsembleThreads())
sim_plot(peg_inf_model, simdata, observations = [:yPD, :yPK], all = true)
```
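Note that `EnsembleThreads` only buys parallelism if Julia itself was started with more than one thread (for example with `julia --threads=auto`); this is easy to verify:

```julia
Threads.nthreads()  # should be > 1 for the threaded ensemble to actually run in parallel
```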
To run the GSA we'll define ranges for our parameters of interest. Let's set the range for each parameter between half and twice its initial (log-scale) value.
```julia
p_range_low = (
    logKa = log(0.80) / 2,
    logKe = log(0.15) / 2,
    logVd = log(100.0) / 2,
    logn = log(2.0) / 2,
    logd = log(0.20) / 2,
    logc = log(7.0) / 2,
    logEC50 = log(0.12) / 2,
)
p_range_high = (
    logKa = log(0.80) * 2,
    logKe = log(0.15) * 2,
    logVd = log(100.0) * 2,
    logn = log(2.0) * 2,
    logd = log(0.20) * 2,
    logc = log(7.0) * 2,
    logEC50 = log(0.12) * 2,
)
```
```
(logKa = -0.4462871026284194, logKe = -3.7942399697717626, logVd = 9.210340371976184, logn = 1.3862943611198906, logd = -3.2188758248682006, logc = 3.8918202981106265, logEC50 = -4.240527072400182)
```
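Keep in mind that these bounds live on the log scale: halving a log value corresponds to a square root on the natural scale, and doubling it to squaring. Also, for parameters whose natural value is below 1 the log is negative, so halving and doubling swap which expression gives the larger bound — worth keeping in mind when interpreting the ranges. A quick plain-Julia check (parameter values copied from above) makes this concrete:

```julia
# What the log-scale GSA bounds mean on the natural scale.
for (name, θ) in pairs((Ka = 0.80, Ke = 0.15, Vd = 100.0, EC50 = 0.12))
    lo, hi = log(θ) / 2, log(θ) * 2
    println(name, ": log bounds (", round(lo, digits = 3), ", ", round(hi, digits = 3),
            "), natural scale (", round(exp(lo), sigdigits = 3), ", ",
            round(exp(hi), sigdigits = 3), ")")
end
# For Ka (< 1): log bounds (-0.112, -0.446) — the "/ 2" value is the larger one.
```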
Now, we are ready to run GSA on our model.
```julia
morris_ = Pumas.gsa(
    peg_inf_model,
    _pop,
    param_PKPD,
    GlobalSensitivity.Morris(),
    [:cmax_pk, :cmax_pd],
    p_range_low,
    p_range_high,
    ensemblealg = EnsembleThreads(),
)
```
```
Morris Sensitivity Analysis

Means (μ)
2×8 DataFrame
 Row │ dv_name  logKa        logKe       logVd       logn        logd         ⋯
     │ Any      Float64      Float64     Float64     Float64     Float64      ⋯
─────┼──────────────────────────────────────────────────────────────────────────
   1 │ cmax_pk  0.215144     -0.658552   -2.16182    1.43616e-8  -3.46422e-9  ⋯
   2 │ cmax_pd  0.000326863  0.00135321  -0.0205303  -0.011866   -0.428752    ⋯
                                                                2 columns omitted

Means star (μ*)
2×8 DataFrame
 Row │ dv_name  logKa        logKe       logVd      logn        logd        ⋯
     │ Any      Float64      Float64     Float64    Float64     Float64     ⋯
─────┼─────────────────────────────────────────────────────────────────────────
   1 │ cmax_pk  0.215144     0.658552    2.16182    5.42518e-8  3.86123e-9  ⋯
   2 │ cmax_pd  0.000326863  0.00181066  0.0205303  0.0156105   0.428752    ⋯
                                                               2 columns omitted

Variances
2×8 DataFrame
 Row │ dv_name  logKa       logKe      logVd       logn         logd         ⋯
     │ Any      Float64     Float64    Float64     Float64      Float64      ⋯
─────┼──────────────────────────────────────────────────────────────────────────
   1 │ cmax_pk  0.0591453   0.416196   6.01715     8.91614e-15  3.50687e-17  ⋯
   2 │ cmax_pd  1.42465e-7  1.8728e-5  0.00131828  0.00110238   0.00148458   ⋯
                                                                2 columns omitted
```
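For intuition on what these columns mean: the Morris method averages "elementary effects" — finite-difference slopes obtained by perturbing one parameter at a time along randomized trajectories. Here is a minimal, self-contained sketch of that idea on a toy function (plain Julia, not the Pumas or GlobalSensitivity implementation):

```julia
using Statistics

# Elementary effects of x₁ for a toy function f: perturb x₁ by Δ at random
# base points and record the scaled change in the output.
f(x) = x[1]^2 + 0.5 * x[2]   # toy model, for illustration only
Δ = 0.1
ee = [(f([x1 + Δ, x2]) - f([x1, x2])) / Δ for (x1, x2) in zip(rand(50), rand(50))]

(μ = mean(ee), μstar = mean(abs.(ee)), σ² = var(ee))
# μ keeps the sign (direction of the effect), μ* guards against cancellation,
# and σ² flags nonlinearity and interactions.
```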
Let's create bar plots of the Morris means (μ*) to visualize the result.
```julia
keys_ = [keys(p_range_low)...]
cmax_pk_meansstar = [morris_.means_star[1, :][key] for key in keys_]
cmax_pd_meansstar = [morris_.means_star[2, :][key] for key in keys_]

fig = Figure(resolution = (1200, 400))
plot_pk = barplot(
    fig[1, 1],
    string.(keys_),
    cmax_pk_meansstar,
    axis = (xlabel = "Parameters", title = "Cmax PK Morris means"),
)
plot_pd = barplot(
    fig[1, 2],
    string.(keys_),
    cmax_pd_meansstar,
    axis = (xlabel = "Parameters", title = "Cmax PD Morris means"),
)
display(fig)
```
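If you want to keep the figure, CairoMakie's standard `save` function writes it to disk (the filename here is just an example):

```julia
save("morris_means.png", fig)  # hypothetical filename; any writable path works
```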
We observe that the PK Cmax is most sensitive to the `logVd`, `logKa`, and `logKe` parameters, whereas the PD Cmax is most sensitive to `logc`, `logd`, and `logKe`.
```julia
sobol_ = Pumas.gsa(
    peg_inf_model,
    _pop,
    param_PKPD,
    GlobalSensitivity.Sobol(),
    [:cmax_pk, :cmax_pd],
    p_range_low,
    p_range_high,
    N = 4000,
    ensemblealg = EnsembleThreads(),
)
```
```
Sobol Sensitivity Analysis

First Order Indices
2×8 DataFrame
 Row │ dv_name  logKa        logKe      logVd      logn        logd         ⋯
     │ Any      Float64      Float64    Float64    Float64     Float64      ⋯
─────┼─────────────────────────────────────────────────────────────────────────
   1 │ cmax_pk  0.00019134   0.0103614  0.961884   7.19654e-9  -1.33176e-7  ⋯
   2 │ cmax_pd  -0.00111712  -0.095498  0.0176666  0.00241971  0.368883     ⋯
                                                               2 columns omitted

Total Order Indices
2×8 DataFrame
 Row │ dv_name  logKa        logKe     logVd      logn         logd        ⋯
     │ Any      Float64      Float64   Float64    Float64      Float64     ⋯
─────┼────────────────────────────────────────────────────────────────────────
   1 │ cmax_pk  0.000469732  0.037356  0.988439   -1.34007e-7  1.98202e-6  ⋯
   2 │ cmax_pd  0.000810145  0.127653  0.0121693  -0.00256567  0.318561    ⋯
                                                              2 columns omitted
```
We use heatmaps to visualize the Sobol indices.
```julia
cmax_pk_s1 = [sobol_.first_order[1, :][key] for key in keys_]
cmax_pd_s1 = [sobol_.first_order[2, :][key] for key in keys_]
# heatmap(x, y, z) expects z of size (length(x), length(y)), so transpose the
# 7×2 matrix of indices into a 2×7 one.
cmax_s1 = permutedims(hcat(cmax_pk_s1, cmax_pd_s1))

fig = Figure(resolution = (1200, 400))
plot_s1 = heatmap(
    fig[1, 1],
    1:2,
    1:7,
    cmax_s1,
    axis = (xticks = (1:2, ["Cmax_PK", "Cmax_PD"]), yticks = (1:7, string.(keys_)),
            title = "First Order Indices"),
    colormap = "darkrainbow",
)

cmax_pk_st = [sobol_.total_order[1, :][key] for key in keys_]
cmax_pd_st = [sobol_.total_order[2, :][key] for key in keys_]
cmax_st = permutedims(hcat(cmax_pk_st, cmax_pd_st))
plot_st = heatmap(
    fig[1, 2],
    1:2,
    1:7,
    cmax_st,
    axis = (xticks = (1:2, ["Cmax_PK", "Cmax_PD"]), yticks = (1:7, string.(keys_)),
            title = "Total Order Indices"),
    colormap = "darkrainbow",
)
display(fig)
```
For the PK Cmax, `logVd` comes out as the most dominant parameter by a large margin, as is evident from both the first- and total-order Sobol indices. The PD Cmax is most sensitive to `logc` and `logd`, and the total-order indices show that `logKe` is also important; hence we can deduce that `logKe` has a greater influence through its interactions than through its individual contribution.
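That gap between first- and total-order indices is exactly the signature of interaction effects. A self-contained toy illustration of the same phenomenon (plain Julia, not the HCV model): for Y = X₁ + X₁X₂ with independent uniform inputs, X₂ has a small first-order index but a larger total-order index because it acts only through the X₁X₂ interaction.

```julia
using Statistics

N = 100_000
x1, x2 = rand(N), rand(N)
y = x1 .+ x1 .* x2                 # toy model with an interaction term
V = var(y)

# First-order indices: variance of the analytic conditional means E[Y | Xᵢ].
S1_x1 = var(1.5 .* x1) / V         # E[Y | X₁] = X₁(1 + E[X₂]), with E[X₂] = 1/2
S1_x2 = var(0.5 .* (1 .+ x2)) / V  # E[Y | X₂] = E[X₁](1 + X₂), with E[X₁] = 1/2

# Total-order indices (two inputs): everything not explained by the other input alone.
ST_x1 = 1 - S1_x2
ST_x2 = 1 - S1_x1

(S1_x2 = S1_x2, ST_x2 = ST_x2)     # ST_x2 > S1_x2 reveals the interaction
```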