Population-level parameters: \(\theta\), \(\Omega\), \(c\), \(\sigma\)
Covariates: \(\text{Age}_i\), \(\text{Weight}_i\),
Random effects: \(\eta_i \sim \mathcal{N}(0, \Omega)\)
Individual derived parameters: \[ \begin{aligned} \text{Ka}_i & = \theta_1 \cdot e^{\eta_{i,1}} + c_1 \cdot \text{Age}_i \\ \text{CL}_i & = \theta_2 \cdot e^{\eta_{i,2}} \\ \text{V}_i & = \theta_3 \cdot e^{\eta_{i,3}} + c_2 \cdot \text{Weight}_i^{c_3} \end{aligned} \]Dynamics: \[ \begin{aligned} \frac{d[\text{Depot}_i]}{dt} & = - \text{Ka}_i \cdot [\text{Depot}_i] \\ \frac{d[\text{Central}_i]}{dt} & = \text{Ka}_i \cdot [\text{Depot}_i] - \frac{\text{CL}_i}{\text{V}_i} \cdot [\text{Central}_i] \end{aligned} \]
Error model: \[ \text{Outcome}_i \sim \mathcal{N}(\text{Central}_i, \text{Central}_i \cdot \sigma) \]
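As a concrete illustration, the model above can be simulated for one subject. The sketch below (plain Python with Euler integration; all parameter values are invented for illustration, not fitted estimates) builds the individual parameters from the covariate model and integrates the depot/central dynamics:

```python
import math

# Hypothetical population parameters, covariate coefficients, and one
# subject's random effects -- illustrative values only.
theta = [1.0, 3.0, 10.0]        # typical Ka, CL, V
c = [0.01, 0.5, 0.75]           # covariate coefficients c1, c2, c3
eta = [0.1, -0.2, 0.05]         # this subject's random effects
age, weight = 40.0, 70.0

# Individual parameters, following the slide's covariate model.
Ka = theta[0] * math.exp(eta[0]) + c[0] * age
CL = theta[1] * math.exp(eta[1])
V = theta[2] * math.exp(eta[2]) + c[1] * weight ** c[2]

# Euler integration of the depot/central dynamics after a 100-unit oral dose.
depot, central, dt = 100.0, 0.0, 0.001
for _ in range(24_000):         # 24 time units at dt = 0.001
    d_depot = -Ka * depot
    d_central = Ka * depot - (CL / V) * central
    depot += dt * d_depot
    central += dt * d_central

print(depot, central)           # depot is essentially absorbed by t = 24
```

A proper fit would use an adaptive ODE solver and maximize the marginal likelihood over all subjects; this sketch only shows that the model is an ordinary initial-value problem once the individual parameters are fixed.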
2018 - “Neural Ordinary Differential Equations”, Chen et al.
2020 - “Universal Differential Equations for Scientific Machine Learning”, Rackauckas et al.
\[ \frac{dx}{dt} = \text{NN}(x(t), t) \]
Use a differential-equation solver as a scaffold for continuous-time, continuous-depth neural networks.
Similar in spirit to recurrent neural networks and ResNets, which also apply a sequence of transformations to a hidden state.
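A neural ODE can be sketched in a few lines. The toy network below uses fixed, hand-picked weights purely to show the mechanics; in practice the weights are learned by differentiating through the solver:

```python
import math

# Minimal neural-ODE sketch: dx/dt = NN(x, t), integrated with Euler steps.
# One hidden layer of width 2 with fixed (untrained) weights.
W1 = [[0.5, -0.3], [-0.2, 0.1]]   # hidden-layer weights for inputs (x, t)
b1 = [0.0, 0.1]
W2 = [0.4, -0.6]                  # output-layer weights
b2 = 0.0

def nn(x, t):
    h = [math.tanh(W1[i][0] * x + W1[i][1] * t + b1[i]) for i in range(2)]
    return W2[0] * h[0] + W2[1] * h[1] + b2

# Euler integration from t = 0 to t = 5.
x, dt = 1.0, 0.01
for step in range(500):
    t = step * dt
    x += dt * nn(x, t)

print(x)
```

Because tanh is bounded, the state stays well-behaved here; a real neural ODE replaces the Euler loop with an adaptive solver and backpropagates through it (or through the adjoint system) to train the weights.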
\[ \begin{aligned} \frac{dx}{dt} & = x \cdot y - \text{NN}(x) \\ \frac{dy}{dt} & = p - x \cdot y \end{aligned} \]
Insert universal approximators (like NNs) to capture unknown or poorly understood terms in dynamical systems.
An abstract concept of mixing scientific modeling with machine learning.
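The UDE system shown above can be simulated directly. In this sketch (illustrative values; a tiny fixed-weight network stands in for the unknown term, which in a real fit would be trained against data):

```python
import math

# UDE sketch matching the slide's system:
#   dx/dt = x * y - NN(x)
#   dy/dt = p - x * y
# NN replaces a mechanistic term we cannot write down; here it has fixed,
# hand-picked weights purely for illustration.
def nn(x, w=(1.2, -0.4, 0.8)):
    return w[2] * math.tanh(w[0] * x + w[1])

p = 1.0
x, y, dt = 1.0, 1.0, 0.001
for _ in range(10_000):           # integrate from t = 0 to t = 10
    dx = x * y - nn(x)
    dy = p - x * y
    x, y = x + dt * dx, y + dt * dy

print(x, y)
```

The key point is structural: the known physics (`x * y`, `p`) and the learned component (`nn`) sit side by side in the same right-hand side, so the solver and the fitting machinery treat them identically.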
\[ \begin{aligned} \frac{d\text{Depot}}{dt} & = \text{NN}(\text{Depot}, \text{Central}, R)[1] \\ \frac{d\text{Central}}{dt} & = \text{NN}(\text{Depot}, \text{Central}, R)[2] \\ \frac{dR}{dt} & = \text{NN}(\text{Depot}, \text{Central}, R)[3] \end{aligned} \]
\[ \begin{aligned} \frac{d\text{Depot}}{dt} & = -\text{NN}_1(\text{Depot}) \\ \frac{d\text{Central}}{dt} & = \text{NN}_1(\text{Depot}) - \text{NN}_2(\text{Central}) \\ \frac{dR}{dt} & = \text{NN}_3(\text{Central}, R) \end{aligned} \]
\[ \begin{aligned} \frac{d\text{Depot}}{dt} & = -K_a \cdot \text{Depot} \\ \frac{d\text{Central}}{dt} & = K_a \cdot \text{Depot} - \frac{\text{CL}}{V_c} \cdot \text{Central} \\ \frac{dR}{dt} & = \text{NN} \Bigg( \frac{\text{Central}}{V_c}, R \Bigg) \end{aligned} \]
\[ \begin{aligned} \frac{d\text{Depot}}{dt} & = -K_a \cdot \text{Depot} \\ \frac{d\text{Central}}{dt} & = K_a \cdot \text{Depot} - \frac{\text{CL}}{V_c} \cdot \text{Central} \\ \frac{dR}{dt} & = k_\text{in} \cdot \Bigg( 1 + \text{NN}\Bigg( \frac{\text{Central}}{V_c} \Bigg) \Bigg) - k_\text{out} \cdot R \end{aligned} \]
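The most mechanistic variant above, an indirect-response model whose stimulation term is a neural network, can be sketched as follows (plain Python, Euler integration; all parameter values and NN weights are invented for illustration):

```python
import math

# Mechanistic PK plus an NN-driven indirect-response equation:
#   dR/dt = k_in * (1 + NN(Central / Vc)) - k_out * R
# Illustrative parameters only.
Ka, CL, Vc = 1.0, 3.0, 20.0
k_in, k_out = 2.0, 0.5

def nn(conc, w=(0.9, 1.5)):
    # Bounded in (-0.9, 0.9), so k_in * (1 + NN) stays positive here.
    return w[0] * math.tanh(w[1] * conc)

depot, central = 100.0, 0.0
R = k_in / k_out                  # start the response at its baseline
dt = 0.001
for _ in range(48_000):           # 48 time units at dt = 0.001
    conc = central / Vc
    d_depot = -Ka * depot
    d_central = Ka * depot - (CL / Vc) * central
    d_R = k_in * (1 + nn(conc)) - k_out * R
    depot += dt * d_depot
    central += dt * d_central
    R += dt * d_R

print(R)                          # drifts back toward baseline as drug clears
```

This is the sweet spot the slides build toward: the absorption and elimination are fully mechanistic, and the NN only models the drug-effect relationship we do not know how to write down.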
Mathematically: just a function!
NNs are usable anywhere you’d use a function!
The only hard part is building the software for fitting, but with DeepPumas that’s not your problem!