using BioequivalencePower
Pumas Bioequivalence Power for users of R’s PowerTOST - a comparison with PowerTOST’s vignettes
1 Overview
The BioequivalencePower package in Pumas enables power and sample size calculations for planning of bioequivalence trials. The package provides all the functionality available in R’s PowerTOST package.
The BioequivalencePower package provides complete feature parity with R’s PowerTOST version 1.5.4. Every aspect of the functionality has been tested on dozens of numerical inputs for comparison. Pumas is written in Julia, and in the case of BioequivalencePower all code was independently developed to yield outputs similar to those of PowerTOST.
This tutorial mirrors R’s PowerTOST vignettes using BioequivalencePower. The purpose here is to support users that have previously used R’s PowerTOST. By comparing the Pumas Julia commands with PowerTOST’s R code commands, users can easily port from one package to the other.
We do not repeat the contextual descriptions in the PowerTOST vignettes. For such a description in the context of Pumas, see the Pumas Bioequivalence documentation. Our purpose here is simply to carry out the numerical examples with settings identical to those of the PowerTOST vignettes.
PowerTOST (1.5.4) has 6 vignettes. Here, functionality from all 6 vignettes is condensed into a single tutorial. Each main heading matches a vignette.
2 The Main Description (first vignette)
The main vignette of PowerTOST outlines key definitions and installation instructions of that package.
2.1 Bioequivalence Power Basics
An introduction to power analysis in the context of bioequivalence and Pumas may be found in this tutorial. To use BioequivalencePower, simply run the following in your Pumas environment:
using BioequivalencePower
We also use Julia’s DataFrames and DataFramesMeta packages:
using DataFrames, DataFramesMeta
A key BioequivalencePower function is supported_designs_and_studies. It yields an overview of the types and capabilities of the package. This is somewhat similar to PowerTOST’s known.designs function.
The function returns several tables, namely :designs, :studies, :legend, and :methods. See the documentation for a full description.
typeof(supported_designs_and_studies())
@NamedTuple{designs::DataFrame, studies::DataFrame, legend::DataFrame, methods::DataFrame}
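To inspect one of these tables, access the corresponding field of the returned named tuple. A minimal sketch (the exact columns displayed depend on the package version):
overview = supported_designs_and_studies()
overview.designs #DataFrame listing the supported experimental designs
overview.studies #DataFrame listing the supported study (hypothesis test) types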
In general, PowerTOST supports two main types of actions: power computation and sample size determination. These are often carried out via functions that include power or sampleN in their name, followed by a suffix. Different functions are then used for different settings.
With BioequivalencePower we use Julia’s multiple dispatch capability to have just two functions for these actions, namely power and samplesize. We then invoke these functions with different arguments, where some of the arguments are typically types themselves. Examples follow.
We note that Pumas BioequivalencePower has two main modes of operation. One mode, called the easy-API, involves function calls where all needed arguments are supplied directly to the function. The other mode, the complete-API, involves constructing objects that are then passed to the functions. The easy-API is much closer in behaviour to PowerTOST, and in this tutorial we only use the easy-API. See the BioequivalencePower documentation for reference on the complete-API.
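As a first illustration of the easy-API, the calls below are the BioequivalencePower analogues of PowerTOST’s power.TOST. This is only a sketch based on the calls used throughout this tutorial: the 2x2 crossover is the default design, and other designs are supplied as an additional type argument.
power(RatioTOST; CV = 0.30, peθ₀ = 0.95, n = 40) #analogous to power.TOST(CV = 0.30, theta0 = 0.95, n = 40)
power(RatioTOST, ED2x2x4ReplicateCrossover; CV = 0.30, peθ₀ = 0.95, n = 40) #explicitly supplying a replicate design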
3 Further comparisons (remaining vignettes)
The sections below are structured as closely as possible to the PowerTOST vignettes and have links to the appropriate location in the vignettes. You may then compare Pumas BioequivalencePower Julia code and output with PowerTOST R code and output. In each case click the Pumas or PowerTOST tabs to alternate between the two.
Also note that all code comments stating "PowerTOST code" introduce R code attributed to the PowerTOST vignettes. This code is pasted here simply for convenience; we acknowledge that it is part of the PowerTOST package and covered by the license of that package.
3.1 Average Bioequivalence
We now parallel the “Average Bioequivalence” examples from the PowerTOST Vignette.
Sample Size Example 1
This example parallels this PowerTOST example.
samplesize(RatioTOST; CV = 0.30)Determined sample size: n = 40 achieved power = 81.585%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.3 θ₀ = 0.95 [θ₁, θ₂] = [0.8, 1.25]
sampleN.TOST(CV = 0.30)
n = 40 power = 0.815845
Sample Size Example 2
This example parallels this PowerTOST example.
samplesize(FiducialRatioTOST; SE = 0.20, SEb = 0.40, num_sims = 1e6)Determined sample size: n = 28 achieved power = 80.881%
HypothesisTestMetaParams Settings: α = 0.025 target_power = 0.8
Study: BioequivalencePower.FiducialRatioTOST{BioequivalencePower.ED2x2Crossover} SE = 0.2 SEb = 0.4 θ₀ = 0.95 [θ₁, θ₂] = [0.8, 1.25]
sampleN.RatioF(CV = 0.20, CVb = 0.40)n = 28 power = 0.807774
Sample Size Example 3
This example parallels this PowerTOST example.
samplesize(RatioTOST; CV = 0.20, peθ₀ = 0.92)Determined sample size: n = 28 achieved power = 82.274%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
sampleN.TOST(CV = 0.20, theta0 = 0.92)n = 28 power = 0.822742
We can now compute a confidence interval for CV, assuming a 2x2x2 pilot study with 16 subjects (and hence 16 - 2 degrees of freedom):
round(confint_cv(0.20, 16 - 2; side = :upper, α = 0.2)[2]; digits = 6)0.244363
CVCL(CV = 0.2,df=16-2,side="upper",alpha = 0.20)[["upper CL"]]0.244363
Now this CV can be used for sample size determination.
CL_upper = confint_cv(0.20, 16 - 2; side = :upper, α = 0.2)[2] #2 is upper limit
samplesize(RatioTOST, CV = CL_upper, peθ₀ = 0.92)Determined sample size: n = 40 achieved power = 81.692%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2443630748030675 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
CL.upper<-CVCL(CV=0.2,df=16-2,side="upper",alpha=0.20)[["upper CL"]]
sampleN.TOST(CV = CL.upper, theta0 = 0.92, print = FALSE)n = 40 power = 0.816919
And if we don’t use the new sample size of 40, but rather stay with the original sample size of 28, then this new CV yields lower power.
power(RatioTOST; CV = CL_upper, peθ₀ = 0.92, n = 28)Computed achieved power: 67.925%
HypothesisTestMetaParams Settings: n = 28 α = 0.05
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2443630748030675 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
power.TOST(CV = CL.upper, theta0 = 0.92, n = 28)power = 0.679253
And here is one more scenario, with a slightly better CV but a slightly worse T/R ratio, staying with the higher number of n = 40 subjects.
power(RatioTOST; CV = 0.22, peθ₀ = 0.90, n = 40)Computed achieved power: 76.868%
HypothesisTestMetaParams Settings: n = 40 α = 0.05
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.22 θ₀ = 0.9 [θ₁, θ₂] = [0.8, 1.25]
power.TOST(CV = 0.22, theta0 = 0.90, n = 40)power = 0.7686761
Using expected power
This continuation of the example parallels this PowerTOST "advanced" discussion.
samplesize(
RatioTOST,
ED2x2Crossover, #Note that when using priors we have to specify the design
CVSEPrior, #What the prior is placed on
DegreesOfFreedomPriorParameters; #How we parameterize the prior
CV = 0.2,
peθ₀ = 0.92, #Assumptions
n_prior = 16,
design_prior = ED2x2Crossover, #Prior Assumptions
)Determined sample size: n = 30 achieved power = 80.607%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study:
CV Bayesian wrapper on: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
Prior parameters: , BioequivalencePower.DegreesOfFreedomPriorParameters
df_prior: Int64 14
expsampleN.TOST(CV = 0.20, theta0 = 0.92, prior.type = "CV",
prior.parm = list(m = 16, design = "2x2x2"))
n = 30 power = 0.806069
Note that to get behaviour similar to PowerTOST we set used_SE_prior to the SE associated with the CV. For this we use the cv2se function.
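The cv2se and se2cv helpers convert between a coefficient of variation and the corresponding standard deviation on the log scale, via se = √(log(1 + CV²)) and its inverse. A quick sketch (values rounded):
cv2se(0.25) #≈ 0.246221
se2cv(0.25) #≈ 0.253958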
samplesize(
RatioTOST,
ED2x2Crossover,
θ0Prior, #Now prior on θ₀
StandardErrorPriorParameters; #Parameter type for prior on θ₀
CV = 0.2,
peθ₀ = 0.92,
n_prior = 16,
design_prior = ED2x2Crossover, #Prior Assumptions
used_SE_prior = cv2se(0.2),
)Determined sample size: n = 46 achieved power = 80.524%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study:
CV Bayesian wrapper on: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
Prior parameters: , BioequivalencePower.StandardErrorPriorParameters
SE_prior: Float64 0.07001849144447606
expsampleN.TOST(CV = 0.20, theta0 = 0.92, prior.type = "theta0",
prior.parm = list(m = 16, design = "2x2x2"))
n = 46 power = 0.805236
samplesize(
RatioTOST,
ED2x2Crossover,
BothCVSEandθ0Prior, #Now prior both on CV and θ₀
DegreesOfFreedomAndStandardErrorPriorParameters; #Joint parameters
CV = 0.2,
peθ₀ = 0.92,
n_prior = 16,
design_prior = ED2x2Crossover,
used_SE_prior = cv2se(0.2),
)Determined sample size: n = 54 achieved power = 80.243%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study:
CV Bayesian wrapper on: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
Prior parameters: , BioequivalencePower.DegreesOfFreedomAndStandardErrorPriorParameters
df_prior: Int64 14
SE_prior: Float64 0.07001849144447606
expsampleN.TOST(CV = 0.20, theta0 = 0.92, prior.type = "both",
prior.parm = list(m = 16, design = "2x2x2"))
n = 54 power = 0.802440
Sample Size Example 4
This example parallels this PowerTOST example.
CV = 0.214
results = DataFrame(
target = vcat(fill(0.8, 5), fill(0.9, 5)),
θ₀ = vcat(fill(vcat(1, collect(0.95:-0.01:0.92)), 2)...),
n_1 = fill(-1, 10),
power = fill(NaN, 10),
sigma_u = vcat(fill(vcat(0.0005, collect(0.05:0.01:0.08)), 2)...),
n_2 = fill(-1, 10),
assurance = fill(NaN, 10),
)
for i = 1:nrow(results)
ss_res = samplesize(
RatioTOST;
CV = CV,
peθ₀ = results.θ₀[i],
target_power = results.target[i],
)
results[i, :n_1] = ss_res.n
results[i, :power] = round(ss_res.achieved_power, digits = 3)
ess_res = samplesize(
RatioTOST,
ED2x2Crossover,
θ0Prior,
StandardErrorPriorParameters;
CV = CV,
peθ₀ = 1,
SE_prior = results.sigma_u[i],
target_power = results.target[i],
)
results[i, :n_2] = ess_res.n
results[i, :assurance] = round(ess_res.achieved_power, digits = 3)
end
results
| Row | target | θ₀ | n_1 | power | sigma_u | n_2 | assurance |
|---|---|---|---|---|---|---|---|
| Float64 | Float64 | Int64 | Float64 | Float64 | Int64 | Float64 | |
| 1 | 0.8 | 1.0 | 18 | 0.833 | 0.0005 | 18 | 0.833 |
| 2 | 0.8 | 0.95 | 22 | 0.824 | 0.05 | 22 | 0.833 |
| 3 | 0.8 | 0.94 | 24 | 0.817 | 0.06 | 22 | 0.8 |
| 4 | 0.8 | 0.93 | 26 | 0.801 | 0.07 | 26 | 0.819 |
| 5 | 0.8 | 0.92 | 30 | 0.802 | 0.08 | 28 | 0.803 |
| 6 | 0.9 | 1.0 | 22 | 0.916 | 0.0005 | 22 | 0.916 |
| 7 | 0.9 | 0.95 | 28 | 0.904 | 0.05 | 28 | 0.904 |
| 8 | 0.9 | 0.94 | 32 | 0.909 | 0.06 | 32 | 0.903 |
| 9 | 0.9 | 0.93 | 36 | 0.905 | 0.07 | 38 | 0.902 |
| 10 | 0.9 | 0.92 | 42 | 0.908 | 0.08 | 48 | 0.902 |
CV <- 0.214
res <- data.frame(target = c(rep(0.8, 5), rep(0.9, 5)),
theta0 = rep(c(1, seq(0.95, 0.92, -0.01)), 2),
n.1 = NA_integer_, power = NA_real_,
sigma.u = rep(c(0.0005, seq(0.05, 0.08, 0.01)), 2),
n.2 = NA_integer_, assurance = NA_real_)
for (i in 1:nrow(res)) {
res[i, 3:4] <- sampleN.TOST(CV = CV, targetpower = res$target[i],
theta0 = res$theta0[i], print = FALSE)[7:8]
res[i, 6:7] <- expsampleN.TOST(CV = CV, targetpower = res$target[i],
theta0 = 1, # mandatory!
prior.type = "theta0",
prior.parm = list(sem = res$sigma.u[i]),
print = FALSE)[9:10]
}
res <- signif(res, 3)
res[, 5] <- sprintf("%.2f", res[, 5])
names(res)[c(3, 6)] <- "n"
print(res, row.names = FALSE) target theta0 n power sigma.u n assurance
0.8 1.00 18 0.833 0.00 18 0.833
0.8 0.95 22 0.824 0.05 22 0.833
0.8 0.94 24 0.817 0.06 22 0.800
0.8 0.93 26 0.801 0.07 26 0.819
0.8 0.92 30 0.802 0.08 28 0.803
0.9 1.00 22 0.916 0.00 22 0.916
0.9 0.95 28 0.904 0.05 28 0.904
0.9 0.94 32 0.909 0.06 32 0.903
0.9 0.93 36 0.905 0.07 38 0.902
0.9 0.92 42 0.908 0.08 48 0.902
Sample Size Example 5
This example parallels this PowerTOST example.
samplesize(DifferenceTOST; peθ₀ = -5, θ₁ = -15, θ₂ = 15, SE = 20)Determined sample size: n = 52 achieved power = 80.747%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DifferenceTOST{BioequivalencePower.ED2x2Crossover} SE = 20 θ₀ = -5 [θ₁, θ₂] = [-15, 15]
sampleN.TOST(CV = 20, theta0 = -5, theta1 = -15,
theta2 = 15, logscale = FALSE, design = "2x2x2")n = 52 power = 0.807468
Higher-Order Designs
This discussion parallels this PowerTOST discussion.
results = DataFrame(
design = [ED3x6x3Crossover, ED2x2Crossover],
n = zeros(Int, 2),
power = zeros(Float64, 2),
)
for i = 1:nrow(results)
ss_result = samplesize(RatioTOST, results[i, :design], CV = 0.2)
results[i, [:n, :power]] = (ss_result.n, ss_result.achieved_power)
end
results
| Row | design | n | power |
|---|---|---|---|
| DataType | Int64 | Float64 | |
| 1 | ED3x6x3Crossover | 18 | 0.808949 |
| 2 | ED2x2Crossover | 20 | 0.83468 |
CV <- 0.20
res <- data.frame(design = c("3x6x3", "2x2x2"), n = NA_integer_,
power = NA_real_, stringsAsFactors = FALSE)
for (i in 1:2) {
res[i, 2:3]<-sampleN.TOST(CV=CV,design = res$design[i], print = FALSE)[7:8]
}
print(res, row.names = FALSE)design n power
3x6x3 18 0.8089486
2x2x2 20 0.8346802
results = DataFrame(
design = [ED4x4Crossover, ED2x2Crossover],
n = zeros(Int, 2),
power = zeros(Float64, 2),
)
for i = 1:nrow(results)
ss_result = samplesize(RatioTOST, results[i, :design], CV = 0.2)
results[i, [:n, :power]] = (ss_result.n, ss_result.achieved_power)
end
results
| Row | design | n | power |
|---|---|---|---|
| DataType | Int64 | Float64 | |
| 1 | ED4x4Crossover | 20 | 0.852797 |
| 2 | ED2x2Crossover | 20 | 0.83468 |
CV <- 0.20
res <- data.frame(design = c("4x4", "2x2x2"), n = NA_integer_,
power = NA_real_, stringsAsFactors = FALSE)
for (i in 1:2) {
res[i, 2:3] <- sampleN.TOST(CV = CV, design = res$design[i],
print = FALSE)[7:8]
}
print(res, row.names = FALSE) design n power
4x4 20 0.8527970
2x2x2 20 0.8346802
Power Discussion
This discussion parallels this PowerTOST discussion.
round.(100 .* confint(RatioTOST; peθ₀ = 0.9, CV = 0.25, n = [14 - 1, 14 - 2]), digits = 2)(79.87, 101.42)
n <- c(14 - 1, 14 - 2) # 14 dosed in each sequence
round(100*CI.BE(pe = 0.90, CV = 0.25, n = n), 2)lower upper
79.87 101.42
power(RatioTOST; peθ₀ = 0.9, CV = 0.25, n = [13, 12])Computed achieved power: 49.632%
HypothesisTestMetaParams Settings: n = [13, 12] α = 0.05
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.25 θ₀ = 0.9 [θ₁, θ₂] = [0.8, 1.25]
power.TOST(CV = 0.25, theta0 = 0.90, n = c(13, 12))power = 0.4963175
power(RatioTOST; peθ₀ = 0.92, CV = 0.20, n = 28)Computed achieved power: 82.274%
HypothesisTestMetaParams Settings: n = 28 α = 0.05
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.2 θ₀ = 0.92 [θ₁, θ₂] = [0.8, 1.25]
power.TOST(CV = 0.20, theta0 = 0.92, n = 28)
power = 0.822742
power(RatioTOST; peθ₀ = 1, CV = 0.25, n = [13, 12])Computed achieved power: 85.583%
HypothesisTestMetaParams Settings: n = [13, 12] α = 0.05
Study: BioequivalencePower.RatioTOST{BioequivalencePower.ED2x2Crossover} CV = 0.25 θ₀ = 1 [θ₁, θ₂] = [0.8, 1.25]
power.TOST(CV = 0.25, theta0 = 1, n = c(13, 12))
power = 0.8558252
Pooling Discussion
This discussion parallels this PowerTOST discussion.
previous_studies = DataFrame(
cvs = [0.2, 0.25],
n = [16, 25],
design = [ED2x2Crossover, ED2x2Crossover],
study_description = ["pilot", "pivotal"],
)
| Row | cvs | n | design | study_description |
|---|---|---|---|---|
| Float64 | Int64 | DataType | String | |
| 1 | 0.2 | 16 | ED2x2Crossover | pilot |
| 2 | 0.25 | 25 | ED2x2Crossover | pivotal |
pool_cv(previous_studies)(cv_pooled = 0.23222790775066557, cv_pooled_upper = 0.26033976692161354, α = 0.2, df = 37)
CVs <- (" CV | n | design | study
0.20 | 16 | 2x2x2 | pilot
0.25 | 25 | 2x2x2 | pivotal")
txtcon <- textConnection(CVs)
data <- read.table(txtcon, header = TRUE, sep = "|",
strip.white = TRUE, as.is = TRUE)
close(txtcon)
print(CVpooled(data, alpha = 0.20), digits = 4, verbose = TRUE)
Pooled CV = 0.2322 with 37 degrees of freedom
Upper 80% confidence limit of CV = 0.2603
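As with the CV confidence limit earlier, the pooled CV, or more conservatively its upper confidence limit, can then be used for sample size determination. A minimal sketch using the values returned above:
pooled = pool_cv(previous_studies)
samplesize(RatioTOST; CV = pooled.cv_pooled_upper) #conservative: uses the upper 80% confidence limit of the pooled CV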
Power Analysis Discussion
This discussion parallels this PowerTOST discussion.
power_analysis(RatioTOST; CV = 0.20, peθ₀ = 0.92)
pa.ABE(CV = 0.20, theta0 = 0.92)
Sample size plan ABE
Design alpha CV theta0 theta1 theta2 Sample size Achieved power
2x2 0.05 0.2 0.92 0.8 1.25 28 0.822742
Power analysis
CV, theta0 and number of subjects leading to min. acceptable power of ~0.7:
CV= 0.2377, theta0= 0.9001
n = 21 (power= 0.7104)
Dropouts Discussion
This discussion parallels this PowerTOST discussion. Note that the example below does not actually contain any BioequivalencePower or PowerTOST code. Still, we keep it here to allow R users to see how the same can be done in Julia, and vice versa.
balance(n, seq) = Int(seq * (n ÷ seq + (n % seq > 0))) #use \div + [TAB] for ÷
balance (generic function with 1 method)
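For instance, balance rounds a sample size up to the next multiple of the number of sequences:
balance(22, 3) #24: 22 rounded up to the next multiple of 3
balance(24, 3) #24: already a multiple of 3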
dropout_rate = 0.15
seqs = 3
nvals = 12:12:96
num = length(nvals)
res = DataFrame(
n = nvals,
adj1 = balance.(nvals / (1 - dropout_rate), seqs),
elig1 = zeros(Int, num),
diff1 = fill("", num),
adj2 = balance.(nvals * (1 + dropout_rate), seqs),
elig2 = zeros(Int, num),
diff2 = zeros(Int, num),
)
res.elig1 = floor.(Int, res.adj1 * (1 - dropout_rate))
res.diff1 = map((d) -> d == 0 ? "±0" : "+$d", res.elig1 - res.n) #\pm + [TAB]
res.elig2 = floor.(Int, res.adj2 * (1 - dropout_rate))
res.diff2 = map((d) -> (d == 0 ? "±0" : (d < 0 ? "$d" : "+$d")), res.elig2 - res.n)
res.optim = (res.elig1 - res.n .>= 0) .* res.elig1 + (res.elig1 - res.n .< 0) .* res.elig2
res.diff = map((d) -> d == 0 ? "±0" : "+$d", res.optim - res.n)
rename!(res, [:adj1 => "n'1", :adj2 => "n'2"])
| Row | n | n'1 | elig1 | diff1 | n'2 | elig2 | diff2 | optim | diff |
|---|---|---|---|---|---|---|---|---|---|
| Int64 | Int64 | Int64 | String | Int64 | Int64 | String | Int64 | String | |
| 1 | 12 | 15 | 12 | ±0 | 15 | 12 | ±0 | 12 | ±0 |
| 2 | 24 | 30 | 25 | +1 | 30 | 25 | +1 | 25 | +1 |
| 3 | 36 | 45 | 38 | +2 | 42 | 35 | -1 | 38 | +2 |
| 4 | 48 | 57 | 48 | ±0 | 57 | 48 | ±0 | 48 | ±0 |
| 5 | 60 | 72 | 61 | +1 | 69 | 58 | -2 | 61 | +1 |
| 6 | 72 | 87 | 73 | +1 | 84 | 71 | -1 | 73 | +1 |
| 7 | 84 | 99 | 84 | ±0 | 99 | 84 | ±0 | 84 | ±0 |
| 8 | 96 | 114 | 96 | ±0 | 111 | 94 | -2 | 96 | ±0 |
balance <- function(x, y) {
return(y * (x %/% y + as.logical(x %% y)))
}
do <- 0.15 # anticipated dropout-rate 15%
seqs <- 3
n <- seq(12L, 96L, 12L)
res <- data.frame(n = n,
adj1 = balance(n / (1 - do), seqs), # correct
elig1 = NA_integer_, diff1 = NA_integer_,
adj2 = balance(n * (1 + do), seqs), # wrong
elig2 = NA_integer_, diff2 = NA_integer_)
res$elig1 <- floor(res$adj1 * (1 - do))
res$diff1 <- sprintf("%+i", res$elig1 - n)
res$elig2 <- floor(res$adj2 * (1 - do))
res$diff2 <- sprintf("%+i", res$elig2 - n)
invisible(
ifelse(res$elig1 - n >=0,
res$optim <- res$elig1,
res$optim <- res$elig2))
res$diff <- sprintf("%+i", res$optim - n)
names(res)[c(2, 5)] <- c("n'1", "n'2")
res$diff1[which(res$diff1 == "+0")] <- "\u00B10"
res$diff2[which(res$diff2 == "+0")] <- "\u00B10"
res$diff[which(res$diff == "+0")] <- "\u00B10"
print(res, row.names = FALSE) n n'1 elig1 diff1 n'2 elig2 diff2 optim diff
12 15 12 ±0 15 12 ±0 12 ±0
24 30 25 +1 30 25 +1 25 +1
36 45 38 +2 42 35 -1 38 +2
48 57 48 ±0 57 48 ±0 48 ±0
60 72 61 +1 69 58 -2 61 +1
72 87 73 +1 84 71 -1 73 +1
84 99 84 ±0 99 84 ±0 84 ±0
96 114 96 ±0 111 94 -2 96 ±0
Literature Data Discussion
This discussion parallels this PowerTOST discussion.
ci2cv(ED2x2x4ReplicateCrossover; lower = 0.8323, upper = 1.0392, n = 26)0.3498608083413456
CVfromCI(lower = 0.8323, upper = 1.0392, design = "2x2x4", n = 26)cv = 0.3498608
n = 26
CV_est = ci2cv(
HypothesisTestMetaParams(n = n),
ED2x2x4ReplicateCrossover;
lower = 0.8323,
upper = 1.0392,
)
n_est = samplesize(RatioTOST, ED2x2x4ReplicateCrossover; CV = CV_est).n
n1 = balance.(26:-1:18, 2) .÷ 2 #balance defined in example above
n2 = n .- n1
nseqs = unique(DataFrame(n1 = n1, n2 = n2, n = n))
res = DataFrame(
n1 = nseqs.n1,
n2 = nseqs.n2,
CV_true = NaN,
CV_est = CV_est,
n_true = -1,
n_est = n_est,
)
for i = 1:nrow(res)
res.CV_true[i] = ci2cv(
HypothesisTestMetaParams(n = [res.n1[i], res.n2[i]]),
ED2x2x4ReplicateCrossover,
lower = 0.8323,
upper = 1.0392,
)
res.n_true[i] = samplesize(RatioTOST, ED2x2x4ReplicateCrossover; CV = res.CV_true[i]).n
res.n_est[i] = samplesize(RatioTOST, ED2x2x4ReplicateCrossover; CV = CV_est).n
end
res
| Row | n1 | n2 | CV_true | CV_est | n_true | n_est |
|---|---|---|---|---|---|---|
| Int64 | Int64 | Float64 | Float64 | Int64 | Int64 | |
| 1 | 13 | 13 | 0.349861 | 0.349861 | 26 | 26 |
| 2 | 12 | 14 | 0.348763 | 0.349861 | 26 | 26 |
| 3 | 11 | 15 | 0.345455 | 0.349861 | 26 | 26 |
| 4 | 10 | 16 | 0.339885 | 0.349861 | 24 | 26 |
| 5 | 9 | 17 | 0.331962 | 0.349861 | 24 | 26 |
n <- 26
CV.est <- CVfromCI(lower = 0.8323, upper = 1.0392,
design = "2x2x4", n = 26)
n.est <- sampleN.TOST(CV = CV.est, design = "2x2x4",
print = FALSE)[["Sample size"]]
n1 <- balance(seq(n, 18, -1), 2) / 2
n2 <- n - n1
nseqs <- unique(data.frame(n1 = n1, n2 = n2, n = n))
res <- data.frame(n1 = nseqs$n1, n2 = nseqs$n2,
CV.true = NA_real_,
CV.est = CV.est, n.true = NA_integer_,
n.est = n.est)
for (i in 1:nrow(res)) {
res$CV.true[i] <- CVfromCI(lower = 0.8323, upper = 1.0392,
design = "2x2x4",
n = c(res$n1[i], res$n2[i]))
res$n.true[i] <- sampleN.TOST(CV = res$CV.true[i], design = "2x2x4",
print = FALSE)[["Sample size"]]
res$n.est[i] <- sampleN.TOST(CV = CV.est, design = "2x2x4",
print = FALSE)[["Sample size"]]
}
print(signif(res, 5), row.names = FALSE) n1 n2 CV.true CV.est n.true n.est
13 13 0.34986 0.34986 26 26
12 14 0.34876 0.34986 26 26
11 15 0.34546 0.34986 26 26
10 16 0.33988 0.34986 24 26
9 17 0.33196 0.34986 24 26
Direction of Deviation Discussion
This discussion parallels this PowerTOST discussion.
CV = 0.21
d = 0.05
n = samplesize(RatioTOST; CV = CV, peθ₀ = 1 - d).n
res1 = DataFrame(CV = CV, θ0 = [1 - d, 1 / (1 - d)], n = n, power = NaN)
for i = 1:nrow(res1)
res1.power[i] = power(RatioTOST; CV = CV, peθ₀ = res1.θ0[i], n = n).achieved_power
end
n = samplesize(RatioTOST; CV = CV, peθ₀ = 1 + d).n
res2 = DataFrame(CV = CV, θ0 = [1 + d, 1 / (1 + d)], n = n, power = NaN)
for i = 1:nrow(res2)
res2.power[i] = power(RatioTOST; CV = CV, peθ₀ = res2.θ0[i], n = n).achieved_power
end
sort([res1; res2], [:n, :θ0])
| Row | CV | θ0 | n | power |
|---|---|---|---|---|
| Float64 | Float64 | Int64 | Float64 | |
| 1 | 0.21 | 0.952381 | 20 | 0.808066 |
| 2 | 0.21 | 1.05 | 20 | 0.808066 |
| 3 | 0.21 | 0.95 | 22 | 0.837437 |
| 4 | 0.21 | 1.05263 | 22 | 0.837437 |
CV <- 0.21
d <- 0.05 # delta 5%, direction unknown
n <- sampleN.TOST(CV = CV, theta0 = 1 - d, print = FALSE,
details = FALSE)[["Sample size"]]
res1 <- data.frame(CV = CV, theta0 = c(1 - d, 1 / (1 - d)),
n = n, power = NA_real_)
for (i in 1:nrow(res1)) {
res1$power[i] <- power.TOST(CV = CV, theta0 = res1$theta0[i], n = n)
}
n <- sampleN.TOST(CV = CV, theta0 = 1 + d,
print = FALSE)[["Sample size"]]
res2 <- data.frame(CV = CV, theta0 = c(1 + d, 1 / (1 + d)),
n = n, power = NA_real_)
for (i in 1:nrow(res1)) {
res2$power[i] <- power.TOST(CV = CV, theta0 = res2$theta0[i], n = n)
}
res <- rbind(res1, res2)
print(signif(res[order(res$n, res$theta0), ], 4), row.names = FALSE)
CV theta0 n power
0.21 0.9524 20 0.8081
0.21 1.0500 20 0.8081
0.21 0.9500 22 0.8374
0.21 1.0530 22 0.8374
3.2 Reference-Scaled Average Bioequivalence
We now parallel the “Reference-Scaled Average Bioequivalence” examples from the PowerTOST Vignette.
Sample Size - EMA
This example parallels this PowerTOST example.
samplesize(ExpandingLimitsRatioTOST; CV = 0.55, num_sims = 1e6)Determined sample size: n = 42 achieved power = 80.884%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.ExpandingLimitsRatioTOST{BioequivalencePower.ED2x3x3PartialReplicate} CV = 0.55 θ₀ = 0.9 [θ₁, θ₂] = [0.8, 1.25] regulator = :ema
sampleN.scABEL(CV = 0.55)
n=42 power=0.8085
cv_T, cv_R = round.(unpool_cv(0.55, 1.5), digits = 4)(0.6109, 0.4852)
CV <- signif(CVp2CV(CV = 0.55, ratio = 1.5), 4) 0.6109 0.4852
samplesize(ExpandingLimitsRatioTOST; CV = [cv_T, cv_R], num_sims = 1e6)Determined sample size: n = 45 achieved power = 81.008%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.ExpandingLimitsRatioTOST{BioequivalencePower.ED2x3x3PartialReplicate} CV = [0.6109, 0.4852] θ₀ = 0.9 [θ₁, θ₂] = [0.8, 1.25] regulator = :ema
sampleN.scABEL(CV = CV, details = FALSE)n = 45 power = 0.8114
samplesize(
ExpandingLimitsRatioTOST;
CV = [cv_T, cv_R],
subject_level = true,
num_sims = 1e5,
)Determined sample size: n = 48 achieved power = 81.857%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.ExpandingLimitsRatioTOST{BioequivalencePower.ED2x3x3PartialReplicate} CV = [0.6109, 0.4852] θ₀ = 0.9 [θ₁, θ₂] = [0.8, 1.25] regulator = :ema
sampleN.scABEL.sdsims(CV = CV, details = FALSE)n = 48 power = 0.8161
Note that in the comparison below a few sample sizes differ by 2 or 3 subjects. However, if you set num_sims = 1e6 and run the same comparison with PowerTOST, there is no difference (one such re-run is sketched after the table below).
num_sims = 1e4
CVp = 0.4:0.05:0.7
CV = unpool_cv.(CVp, 2.5)
CV = [round.(cv, digits = 4) for cv in CV] #apply rounding on each tuple
res = DataFrame(
CVp = CVp,
CVwT = first.(CV),
CVwR = last.(CV),
f4_key = -1,
f4_ss = -1,
f3_key = -1,
f3_ss = -1,
p3_key = -1,
p3_ss = -1,
)
for i = 1:nrow(res)
res.f4_key[i] =
samplesize(
ExpandingLimitsRatioTOST,
ED2x2x4ReplicateCrossover;
CV = collect(CV[i]),
).n
res.f4_ss[i] =
samplesize(
ExpandingLimitsRatioTOST,
ED2x2x4ReplicateCrossover;
CV = collect(CV[i]),
subject_level = true,
num_sims = num_sims,
).n
res.f3_key[i] =
samplesize(
ExpandingLimitsRatioTOST,
ED2x2x3ReplicateCrossover;
CV = collect(CV[i]),
).n
res.f3_ss[i] =
samplesize(
ExpandingLimitsRatioTOST,
ED2x2x3ReplicateCrossover;
CV = collect(CV[i]),
subject_level = true,
num_sims = num_sims,
).n
res.p3_key[i] =
samplesize(ExpandingLimitsRatioTOST, ED2x3x3PartialReplicate; CV = collect(CV[i])).n
res.p3_ss[i] =
samplesize(
ExpandingLimitsRatioTOST,
ED2x3x3PartialReplicate;
CV = collect(CV[i]),
subject_level = true,
num_sims = num_sims,
).n
end
res
| Row | CVp | CVwT | CVwR | f4_key | f4_ss | f3_key | f3_ss | p3_key | p3_ss |
|---|---|---|---|---|---|---|---|---|---|
| Float64 | Float64 | Float64 | Int64 | Int64 | Int64 | Int64 | Int64 | Int64 | |
| 1 | 0.4 | 0.486 | 0.2975 | 62 | 62 | 90 | 92 | 81 | 87 |
| 2 | 0.45 | 0.549 | 0.3334 | 60 | 60 | 90 | 90 | 78 | 84 |
| 3 | 0.5 | 0.6127 | 0.3688 | 54 | 54 | 82 | 82 | 69 | 75 |
| 4 | 0.55 | 0.6773 | 0.4038 | 50 | 50 | 76 | 76 | 63 | 69 |
| 5 | 0.6 | 0.7427 | 0.4383 | 46 | 48 | 72 | 72 | 60 | 63 |
| 6 | 0.65 | 0.809 | 0.4723 | 46 | 46 | 68 | 68 | 57 | 60 |
| 7 | 0.7 | 0.8762 | 0.5059 | 46 | 46 | 70 | 70 | 57 | 63 |
CVp <- seq(0.40, 0.70, 0.05)
CV <- signif(CVp2CV(CV = CVp, ratio = 2.5), 4)
res <- data.frame(CVp = CVp, CVwT = CV[, 1], CVwR = CV[, 2],
f4.key = NA, f4.ss = NA, # 4-period full replicate
f3.key = NA, f3.ss = NA, # 3-period full replicate
p3.key = NA, p3.ss = NA) # 3-period partial replicate
for (i in seq_along(CVp)) {
res$f4.key[i] <- sampleN.scABEL(CV = CV[i, ], design = "2x2x4",
print = FALSE,
details = FALSE)[["Sample size"]]
res$f4.ss[i] <- sampleN.scABEL.sdsims(CV = CV[i, ], design = "2x2x4",
print = FALSE,
details = FALSE,
progress = FALSE)[["Sample size"]]
res$f3.key[i] <- sampleN.scABEL(CV = CV[i, ], design = "2x2x3",
print = FALSE,
details = FALSE)[["Sample size"]]
res$f3.ss[i] <- sampleN.scABEL.sdsims(CV = CV[i, ], design = "2x2x3",
print = FALSE,
details = FALSE,
progress = FALSE)[["Sample size"]]
res$p3.key[i] <- sampleN.scABEL(CV = CV[i, ], design = "2x3x3",
print = FALSE,
details = FALSE)[["Sample size"]]
res$p3.ss[i] <- sampleN.scABEL.sdsims(CV = CV[i, ], design = "2x3x3",
print = FALSE,
details = FALSE,
progress = FALSE)[["Sample size"]]
}
print(res, row.names = FALSE) CVp CVwT CVwR f4.key f4.ss f3.key f3.ss p3.key p3.ss
0.40 0.4860 0.2975 62 62 90 90 81 87
0.45 0.5490 0.3334 60 60 90 90 78 84
0.50 0.6127 0.3688 54 54 82 82 69 75
0.55 0.6773 0.4038 50 50 76 76 63 69
0.60 0.7427 0.4383 46 46 72 72 60 66
0.65 0.8090 0.4723 46 46 68 68 57 63
0.70 0.8762 0.5059 46 46 70 70 57 63
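As noted before the table, re-running one of the differing cells with num_sims = 1e6 (slow) is expected to remove the discrepancy with PowerTOST. A sketch for the CVp = 0.60 partial replicate case:
samplesize(
    ExpandingLimitsRatioTOST,
    ED2x3x3PartialReplicate;
    CV = [0.7427, 0.4383], #CVwT and CVwR for CVp = 0.60
    subject_level = true,
    num_sims = 1e6,
)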
Sample Size - Health Canada
This example parallels this PowerTOST example.
samplesize(ExpandingLimitsRatioTOST; reg_symbol = :hc, CV = 0.55, num_sims = 1e6)
sampleN.scABEL(CV = 0.55, regulator = "HC")n = 39 power = 0.8142
Sample Size - Gulf Cooperation Council
This example parallels this PowerTOST example.
samplesize(ExpandingLimitsRatioTOST; reg_symbol = :gcc, CV = 0.55, num_sims = 1e6)
sampleN.scABEL(CV = 0.55, regulator = "GCC")n = 75 power = 0.8021
Sample Size - FDA
This example parallels this PowerTOST example.
samplesize( ReferenceScaledRatioTOST; CV = 0.55, num_sims = 1e6)
sampleN.RSABE(CV = 0.55)n=30 power=0.80034
Sample Size - Narrow Therapeutic Index Drugs (FDA, CDE)
This example parallels this PowerTOST example.
cv_R1, cv_T1 = round.(unpool_cv(0.125, 2.5), digits = 4)
res_ntid = samplesize(NarrowTherapeuticIndexDrugRatioTOST; CV = [cv_R1, cv_T1])
power(NarrowTherapeuticIndexDrugRatioTOST; CV = [cv_R1, cv_T1], n = res_ntid.n, details = true)
CV <- signif(CVp2CV(CV = 0.125, ratio = 2.5), 4)
n <- sampleN.NTID(CV = CV, details = FALSE)[["Sample size"]]
suppressMessages(power.NTID(CV = CV, n = n, details = TRUE))CVw(T) = 0.1497, CVw(R) = 0.09433
n = 38 power = 0.816080
p(BE) p(BE-sABEc) p(BE-ABE) p(BE-sratio)
0.81608 0.93848 1.00000 0.85794
samplesize(NarrowTherapeuticIndexDrugRatioTOST; CV = 0.125)
sampleN.NTID(CV = 0.125, details = FALSE)[["Sample size"]]n = 16 power = 0.822780
Sample Size - Highly Variable Narrow Therapeutic Index Drugs (FDA, CDE)
This example parallels this PowerTOST example.
samplesize(NarrowTherapeuticIndexDrugRatioTOST; highly_variable = true, CV = 0.30, num_sims = 1e6)
sampleN.HVNTID(CV = 0.30, details = FALSE)n = 22 power = 0.829700
CV_R1, CV_T1 = round.(unpool_cv(0.125, 2.5), digits = 4)
samplesize(NarrowTherapeuticIndexDrugRatioTOST; highly_variable = true, CV = [CV_R1, CV_T1], num_sims = 1e6)
CV <- signif(CVp2CV(CV = 0.125, ratio = 2.5), 4)
sampleN.HVNTID(CV = CV, details = FALSE)n = 34 power =0.818800
Type I Error
This discussion parallels this PowerTOST discussion.
CV = 0.35
res = DataFrame(n = [-1], CV = [CV], TIE = [-1])
res.n = [samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV).n]
U = scaled_abel(CV, RegulatoryConstants(:ema)).upper
res.TIE = [
power(
ExpandingLimitsRatioTOST,
ED2x2x4ReplicateCrossover;
CV = CV,
n = res.n[1],
peθ₀ = U,
).achieved_power,
]
res
| Row | n | CV | TIE |
|---|---|---|---|
| Int64 | Float64 | Float64 | |
| 1 | 34 | 0.35 | 0.065701 |
CV <- 0.35
res <- data.frame(n = NA, CV = CV, TIE = NA)
res$n <- sampleN.scABEL(CV = CV, design = "2x2x4", print = FALSE,
details = FALSE)[["Sample size"]]
U <- scABEL(CV = CV)[["upper"]]
res$TIE <- power.scABEL(CV = CV, n = res$n, theta0 = U, design = "2x2x4")
print(res, row.names = FALSE) n CV TIE
34 0.35 0.065566
res = DataFrame(
CV = sort([0.25:0.01:0.32; se2cv(0.25)]),
impl_L = NaN,
impl_U = NaN,
impl_TIE = NaN,
des_L = NaN,
des_U = NaN,
des_TIE = NaN,
)
for i = 1:nrow(res)
res[i, [:impl_L, :impl_U]] = values(scaled_abel(res.CV[i], RegulatoryConstants(:fda)))
if cv2se(res.CV[i]) ≤ 0.25 #\le + [TAB]
res[i, [:des_L, :des_U]] = [0.8, 1.25]
else
res[i, [:des_L, :des_U]] = exp.([-1, +1] * log(1.25) / 0.25 * cv2se(res.CV[i]))
end
res[i, :impl_TIE] =
power(
ReferenceScaledRatioTOST,
ED2x2x4ReplicateCrossover;
CV = res.CV[i],
peθ₀ = res.impl_U[i],
n = 32,
num_sims = 1e6,
).achieved_power
res[i, :des_TIE] =
power(
ReferenceScaledRatioTOST,
ED2x2x4ReplicateCrossover;
CV = res.CV[i],
peθ₀ = res.des_L[i],
n = 32,
num_sims = 1e6,
).achieved_power
end
res
| Row | CV | impl_L | impl_U | impl_TIE | des_L | des_U | des_TIE |
|---|---|---|---|---|---|---|---|
| Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | |
| 1 | 0.25 | 0.8 | 1.25 | 0.060927 | 0.8 | 1.25 | 0.060557 |
| 2 | 0.253958 | 0.8 | 1.25 | 0.064035 | 0.8 | 1.25 | 0.063811 |
| 3 | 0.26 | 0.8 | 1.25 | 0.070174 | 0.795898 | 1.25644 | 0.057141 |
| 4 | 0.27 | 0.8 | 1.25 | 0.083506 | 0.789174 | 1.26715 | 0.05065 |
| 5 | 0.28 | 0.8 | 1.25 | 0.101172 | 0.782532 | 1.2779 | 0.047906 |
| 6 | 0.29 | 0.8 | 1.25 | 0.122629 | 0.775972 | 1.28871 | 0.046751 |
| 7 | 0.3 | 0.8 | 1.25 | 0.146519 | 0.769492 | 1.29956 | 0.045924 |
| 8 | 0.31 | 0.763093 | 1.31046 | 0.044711 | 0.763093 | 1.31046 | 0.04491 |
| 9 | 0.32 | 0.756773 | 1.3214 | 0.04345 | 0.756773 | 1.3214 | 0.043604 |
res <- data.frame(CV = sort(c(seq(0.25, 0.32, 0.01), se2CV(0.25))),
impl.L = NA, impl.U = NA, impl.TIE = NA,
des.L = NA, des.U = NA, des.TIE = NA)
for (i in 1:nrow(res)) {
res[i, 2:3] <- scABEL(CV = res$CV[i], regulator = "FDA")
if (CV2se(res$CV[i]) <= 0.25) {
res[i, 5:6] <- c(0.80, 1.25)
} else {
res[i, 5:6] <- exp(c(-1, +1)*(log(1.25)/0.25)*CV2se(res$CV[i]))
}
res[i, 4] <- power.RSABE(CV = res$CV[i], theta0 = res[i, 3],
design = "2x2x4", n = 32, nsims = 1e6)
res[i, 7] <- power.RSABE(CV = res$CV[i], theta0 = res[i, 5],
design = "2x2x4", n = 32, nsims = 1e6)
}
print(signif(res, 4), row.names = FALSE) CV impl.L impl.U impl.TIE des.L des.U des.TIE
0.250 0.8000 1.250 0.06068 0.8000 1.250 0.06036
0.254 0.8000 1.250 0.06396 0.8000 1.250 0.06357
0.260 0.8000 1.250 0.07008 0.7959 1.256 0.05692
0.270 0.8000 1.250 0.08352 0.7892 1.267 0.05047
0.280 0.8000 1.250 0.10130 0.7825 1.278 0.04770
0.290 0.8000 1.250 0.12290 0.7760 1.289 0.04644
0.300 0.8000 1.250 0.14710 0.7695 1.300 0.04562
0.310 0.7631 1.310 0.04515 0.7631 1.310 0.04466
0.320 0.7568 1.321 0.04373 0.7568 1.321 0.04325
Iteratively adjusted α
This discussion parallels this PowerTOST discussion.
CV = 0.45
n = samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV).n
adjusted_α(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV, n = n)[ Info: Type I error at 0.04955999999999975 is not greater than α at 0.05
(before_adjustment = (α = 0.05, type_I_error = 0.04955999999999975, achieved_power = 0.8101399999999993, n = [14, 14]), after_adjustment = :no_adjustment_made)
CV <- 0.45
n <- sampleN.scABEL(CV = CV, design = "2x2x4", print = FALSE,
details = FALSE)[["Sample size"]]
scABEL.ad(CV = CV, design = "2x2x4", n = n)Empiric TIE for alpha 0.0500 : 0.04889
TIE = nominal alpha; no adjustment of alpha is required.
CV = 0.35
n = samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV).n
adjusted_α(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV, n = n)(before_adjustment = (α = 0.05, type_I_error = 0.06588000000000076, achieved_power = 0.8107499999999942, n = [17, 17]), after_adjustment = (α = 0.03659540622824148, type_I_error = 0.04999999999999981, achieved_power = 0.7710300000000015, n = [17, 17]))
CV <- 0.35
n <- sampleN.scABEL(CV = CV, design = "2x2x4", print = FALSE,
details = FALSE)[["Sample size"]]
scABEL.ad(CV = CV, design = "2x2x4", n = n)Empiric TIE for alpha 0.0500 : 0.06557
Iteratively adjusted alpha : 0.03630
Power for theta0 0.9000 : 0.773
CV = 0.35
adjusted_α_with_samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV)
[ Info: Basic adjusted α has n_vector = [17, 17] with power = 0.7710300000000015 and adjusted α = 0.03659540622824148
[ Info: new n_vector = [18, 18] with power = 0.7910500000000025 and adjusted α = 0.036500314210828826
[ Info: new n_vector = [19, 19] with power = 0.8095199999999959 and adjusted α = 0.03653824186515225
(before_adjustment = (α = 0.05, type_I_error = 0.06559000000000083, achieved_power = 0.8454500000000067, n = [19, 19]), after_adjustment = (α = 0.03653824186515225, type_I_error = 0.04999999999999997, achieved_power = 0.8095199999999959, n = [19, 19]))
CV <- 0.35
sampleN.scABEL.ad(CV = CV, design = "2x2x4")n 38, adj. alpha: 0.03610 (power 0.8100), TIE: 0.05000
Exact Procedure
This discussion parallels this PowerTOST discussion.
CV = 0.35
n = samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = CV).n
U = scaled_abel(CV, RegulatoryConstants(:ema)).upper
power(
ReferenceScaledRatioTOST,
ED2x2x4ReplicateCrossover;
CV = CV,
n = n,
reg_symbol = :ema,
peθ₀ = U,
subject_level = true,
num_sims = 1e6,
)Computed achieved power: 4.782%
HypothesisTestMetaParams Settings: n = 34 α = 0.05
Study: BioequivalencePower.ReferenceScaledRatioTOST{BioequivalencePower.ED2x2x4ReplicateCrossover} CV = 0.35 θ₀ = 1.2947963645522924 [θ₁, θ₂] = [0.8, 1.25] regulator = :ema
CV <- 0.35
n <- sampleN.scABEL(CV = CV, design = "2x2x4", print = FALSE,
details = FALSE)[["Sample size"]]
U <- scABEL(CV = CV)[["upper"]]
# subject simulations and therefore, relatively slow
power.RSABE2L.sds(CV = CV, design = "2x2x4", theta0 = U,
n = n, SABE_test = "exact", nsims = 1e6,
progress = FALSE)[1] 0.048177
Helpers
BE Limits
This discussion parallels this PowerTOST discussion.
Expanded Limits (EMA, Health Canada), widened limits (GCC)
CV = [0.30, 0.40898, 0.50, 0.57382]
res = DataFrame(
CV = CV,
EMA_L = NaN,
EMA_U = NaN,
EMA_cap = "",
HC_L = NaN,
HC_U = NaN,
HC_cap = "",
GCC_L = NaN,
GCC_U = NaN,
GCC_cap = "",
)
for i = 1:nrow(res)
res[i, 2:3] = values(scaled_abel(res.CV[i], RegulatoryConstants(:ema)))
res[i, 5:6] = values(scaled_abel(res.CV[i], RegulatoryConstants(:hc)))
res[i, 8:9] = values(scaled_abel(res.CV[i], RegulatoryConstants(:gcc)))
end
@eachrow! res begin
if :CV <= 0.3
:EMA_cap = "lower"
:HC_cap = "lower"
:GCC_cap = "lower"
end
if :CV >= 0.50
:EMA_cap = "upper"
end
if :CV >= 0.57382
:HC_cap = "upper"
end
end
res
| Row | CV | EMA_L | EMA_U | EMA_cap | HC_L | HC_U | HC_cap | GCC_L | GCC_U | GCC_cap |
|---|---|---|---|---|---|---|---|---|---|---|
| Float64 | Float64 | Float64 | String | Float64 | Float64 | String | Float64 | Float64 | String | |
| 1 | 0.3 | 0.8 | 1.25 | lower | 0.8 | 1.25 | lower | 0.8 | 1.25 | lower |
| 2 | 0.40898 | 0.741643 | 1.34836 |  | 0.741643 | 1.34836 |  | 0.75 | 1.33333 |  |
| 3 | 0.5 | 0.698368 | 1.43191 | upper | 0.698368 | 1.43191 |  | 0.75 | 1.33333 |  |
| 4 | 0.57382 | 0.698368 | 1.43191 | upper | 0.666667 | 1.5 | upper | 0.75 | 1.33333 |  |
CV <- c(0.30, 0.40898, 0.50, 0.57382)
res <- data.frame(CV = CV, EMA.L = NA, EMA.U = NA, EMA.cap = "",
HC.L = NA, HC.U = NA, HC.cap = "",
GCC.L = NA, GCC.U = NA, GCC.cap = "",
stringsAsFactors = FALSE) # this line for R <4.0.0
for (i in seq_along(CV)) {
res[i, 2:3] <- sprintf("%.4f", scABEL(CV[i], regulator = "EMA"))
res[i, 5:6] <- sprintf("%.3f", scABEL(CV[i], regulator = "HC"))
res[i, 8:9] <- sprintf("%.3f", scABEL(CV[i], regulator = "GCC"))
}
res$EMA.cap[res$CV <= 0.30] <- res$HC.cap[res$CV <= 0.30] <- "lower"
res$EMA.cap[res$CV >= 0.50] <- "upper"
res$HC.cap[res$CV >= 0.57382] <- "upper"
res$GCC.cap[res$CV <= 0.30] <- res$GCC.cap[res$CV <= 0.30] <- "lower"
print(res, row.names = FALSE) CV EMA.L EMA.U EMA.cap HC.L HC.U HC.cap GCC.L GCC.U GCC.cap
0.30000 0.8000 1.2500 lower 0.800 1.250 lower 0.800 1.250 lower
0.40898 0.7416 1.3484 0.742 1.348 0.750 1.333
0.50000 0.6984 1.4319 upper 0.698 1.432 0.750 1.333
0.57382 0.6984 1.4319 upper 0.667 1.500 upper 0.750 1.333
FDA
‘Implied’ Limits
res = DataFrame(
CV = [0.25, se2cv(0.25), 0.275, 0.3, 0.5, 1.0],
impl_L = NaN,
impl_U = NaN,
cap = "",
)
for i = 1:nrow(res)
res[i, 2:3] = values(scaled_abel(res.CV[i], RegulatoryConstants(:fda)))
end
@eachrow! res begin
if :CV <= 0.3
:cap = "lower"
end
:CV = round(:CV, digits = 3)
end
res
| Row | CV | impl_L | impl_U | cap |
|---|---|---|---|---|
| Float64 | Float64 | Float64 | String | |
| 1 | 0.25 | 0.8 | 1.25 | lower |
| 2 | 0.254 | 0.8 | 1.25 | lower |
| 3 | 0.275 | 0.8 | 1.25 | lower |
| 4 | 0.3 | 0.8 | 1.25 | lower |
| 5 | 0.5 | 0.655974 | 1.52445 | |
| 6 | 1.0 | 0.475629 | 2.10248 |
res <- data.frame(CV = c(0.25, se2CV(0.25), 0.275, 0.3, 0.5, 1.0),
impl.L = NA, impl.U = NA, cap = "",
stringsAsFactors = FALSE) # this line for R <4.0.0
for (i in 1:nrow(res)) {
res[i, 2:3] <- sprintf("%.4f", scABEL(CV = res$CV[i],
regulator = "FDA"))
}
res$cap[res$CV <= 0.30] <- "lower"
res$CV <- sprintf("%.3f", res$CV)
print(res, row.names = FALSE) CV impl.L impl.U cap
0.250 0.8000 1.2500 lower
0.254 0.8000 1.2500 lower
0.275 0.8000 1.2500 lower
0.300 0.8000 1.2500 lower
0.500 0.6560 1.5245
1.000 0.4756 2.1025
Limits of the ‘desired consumer risk model’
res = DataFrame(
CV = [0.25, se2cv(0.25), 0.275, 0.3, 0.5, 1.0],
des_L = NaN,
des_U = NaN,
cap = "",
)
@eachrow! res begin
if cv2se(:CV) <= 0.25
:des_L, :des_U = 0.8, 1.25
else
:des_L, :des_U = exp.([-1, +1] * log(1.25) / 0.25 .* cv2se(:CV))
end
if :CV <= 0.30
:cap = "lower"
end
end
res
| Row | CV | des_L | des_U | cap |
|---|---|---|---|---|
| Float64 | Float64 | Float64 | String | |
| 1 | 0.25 | 0.8 | 1.25 | lower |
| 2 | 0.253958 | 0.8 | 1.25 | lower |
| 3 | 0.275 | 0.785843 | 1.27252 | lower |
| 4 | 0.3 | 0.769492 | 1.29956 | lower |
| 5 | 0.5 | 0.655974 | 1.52445 | |
| 6 | 1.0 | 0.475629 | 2.10248 |
res <- data.frame(CV = c(0.25, se2CV(0.25), 0.275, 0.3, 0.5, 1.0),
des.L = NA, des.U = NA, cap = "",
stringsAsFactors = FALSE) # this line for R <4.0.0
for (i in 1:nrow(res)) {
if (CV2se(res$CV[i]) <= 0.25) {
res[i, 2:3] <- sprintf("%.4f", c(0.80, 1.25))
} else {
res[i, 2:3] <- sprintf("%.4f",
exp(c(-1, +1)*(log(1.25)/0.25)*CV2se(res$CV[i])))
}
}
res$cap[res$CV <= 0.30] <- "lower"
res$CV <- sprintf("%.3f", res$CV)
print(res, row.names = FALSE) CV des.L des.U cap
0.250 0.8000 1.2500 lower
0.254 0.8000 1.2500 lower
0.275 0.7858 1.2725 lower
0.300 0.7695 1.2996 lower
0.500 0.6560 1.5245
1.000 0.4756 2.1025
Regulatory Settings
This discussion parallels this PowerTOST discussion.
reg = [:ema, :hc, :gcc, :fda]
for r in reg
println(RegulatoryConstants(r))
end
RegulatoryConstants
agency: Symbol ema
r_const: Float64 0.76
cv_switch: Float64 0.3
cv_cap: Float64 0.5
apply_pe_constraints: Bool true
estimation_method: Symbol anova
RegulatoryConstants
agency: Symbol hc
r_const: Float64 0.76
cv_switch: Float64 0.3
cv_cap: Float64 0.57382
apply_pe_constraints: Bool true
estimation_method: Symbol isc
RegulatoryConstants
agency: Symbol gcc
r_const: Float64 0.979975816993452
cv_switch: Float64 0.3
cv_cap: Float64 0.3
apply_pe_constraints: Bool true
estimation_method: Symbol anova
RegulatoryConstants
agency: Symbol fda
r_const: Float64 0.8925742052568391
cv_switch: Float64 0.3
cv_cap: Float64 Inf
apply_pe_constraints: Bool true
estimation_method: Symbol isc
reg <- c("EMA", "HC", "GCC", "FDA")
for (i in 1:4) {
print(reg_const(regulator = reg[i]))
cat("\n")
}
EMA regulatory settings
- CVswitch = 0.3
- cap on scABEL if CVw(R) > 0.5
- regulatory constant = 0.76
- pe constraint applied
HC regulatory settings
- CVswitch = 0.3
- cap on scABEL if CVw(R) > 0.57382
- regulatory constant = 0.76
- pe constraint applied
GCC regulatory settings
- CVswitch = 0.3
- cap on scABEL if CVw(R) > 0.3
- regulatory constant = 0.9799758
- pe constraint applied
FDA regulatory settings
- CVswitch = 0.3
- no cap on scABEL
- regulatory constant = 0.8925742
- pe constraint applied
3.3 Non-Inferiority
We now parallel the “Non-Inferiority” examples from the PowerTOST Vignette.
Non-Inferiority Example 1
This example parallels this PowerTOST example.
samplesize(NonInferiorityRatioOST, ED2x2Crossover; CV = 0.25)Determined sample size: n = 36 achieved power = 82.033%
HypothesisTestMetaParams Settings: α = 0.025 target_power = 0.8
Study: BioequivalencePower.NonInferiorityRatioOST{BioequivalencePower.ED2x2Crossover} CV = 0.25 θ₀ = 0.95 θmargin = 0.8
sampleN.noninf(CV = 0.25)n = 36 power = 0.820330
power(NonInferiorityRatioOST; CV = 0.25, n = 35)[ Info: Note: There is a non-uniform split: [18, 17]
Computed achieved power: 80.859%
HypothesisTestMetaParams Settings: n = 35 α = 0.025
Study: BioequivalencePower.NonInferiorityRatioOST{BioequivalencePower.ED2x2Crossover} CV = 0.25 θ₀ = 0.95 θmargin = 0.8
power.noninf(CV = 0.25, n = 35)Unbalanced design. n(i)=18/17 assumed.
[1] 0.8085908
Non-Superiority Example 2
This example parallels this PowerTOST example.
samplesize(
NonInferiorityRatioOST,
ED2x2Crossover;
θmargin = 1.25,
peθ₀ = 1 / 0.95,
CV = 0.25,
)Determined sample size: n = 36 achieved power = 82.033%
HypothesisTestMetaParams Settings: α = 0.025 target_power = 0.8
Study: BioequivalencePower.NonInferiorityRatioOST{BioequivalencePower.ED2x2Crossover} CV = 0.25 θ₀ = 1.0526315789473684 θmargin = 1.25
sampleN.noninf(CV = 0.25, margin = 1.25, theta0 = 1/0.95)n = 36 power = 0.820330
Bracketing Approach
This discussion parallels this PowerTOST discussion.
res = DataFrame(
design = ED2x2x4ReplicateCrossover,
metric = ["Cmin", "Cmax"],
margin = [0.8, 1.25],
CV = [0.35, 0.2],
θ0 = [0.95, 1.05],
n = -1,
power = NaN,
)
@eachrow! res begin
res_ss =
samplesize(NonInferiorityRatioOST, :design; θmargin = :margin, peθ₀ = :θ0, CV = :CV)
:n = res_ss.n
:power = res_ss.achieved_power
end
res
| Row | design | metric | margin | CV | θ0 | n | power |
|---|---|---|---|---|---|---|---|
| DataType | String | Float64 | Float64 | Float64 | Int64 | Float64 | |
| 1 | ED2x2x4ReplicateCrossover | Cmin | 0.8 | 0.35 | 0.95 | 32 | 0.807793 |
| 2 | ED2x2x4ReplicateCrossover | Cmax | 1.25 | 0.2 | 1.05 | 12 | 0.840641 |
res <- data.frame(design = "2x2x4", metric = c("Cmin", "Cmax"),
margin = c(0.80, 1.25), CV = c(0.35, 0.20),
theta0 = c(0.95, 1.05), n = NA, power = NA,
stringsAsFactors = FALSE) # this line for R <4.0.0)
for (i in 1:2) {
res[i, 6:7] <- sampleN.noninf(design = res$design[i],
margin = res$margin[i],
theta0 = res$theta0[i],
CV = res$CV[i],
details = FALSE,
print = FALSE)[6:7]
}
print(res, row.names = FALSE) design metric margin CV theta0 n power
2x2x4 Cmin 0.80 0.35 0.95 32 0.8077926
2x2x4 Cmax 1.25 0.20 1.05 12 0.8406410
power(
NonInferiorityRatioOST,
ED2x2x4ReplicateCrossover;
θmargin = 1.25,
CV = 0.2,
peθ₀ = 1.05,
n = 32,
)Computed achieved power: 99.85%
HypothesisTestMetaParams Settings: n = 32 α = 0.025
Study: BioequivalencePower.NonInferiorityRatioOST{BioequivalencePower.ED2x2x4ReplicateCrossover} CV = 0.2 θ₀ = 1.05 θmargin = 1.25
power.noninf(design = "2x2x4", margin = 1.25, CV = 0.2,
theta0 = 1.05, n = 32)result: 0.9984996
power(
NonInferiorityRatioOST,
ED2x2x4ReplicateCrossover;
θmargin = 1.25,
CV = 0.25,
peθ₀ = 1.10,
n = 32,
)Computed achieved power: 82.797%
HypothesisTestMetaParams Settings: n = 32 α = 0.025
Study: BioequivalencePower.NonInferiorityRatioOST{BioequivalencePower.ED2x2x4ReplicateCrossover} CV = 0.25 θ₀ = 1.1 θmargin = 1.25
power.noninf(design = "2x2x4", margin = 1.25, CV = 0.25,
theta0 = 1.10, n = 32)[1] 0.8279726
res = DataFrame(
design = ED2x2x4ReplicateCrossover,
intended = [ExpandingLimitsRatioTOST, RatioTOST],
metric = ["Cmin", "Cmax"],
CV = [0.35, 0.2],
θ0 = [0.9, 1.05],
n = -1,
power = NaN,
)
@eachrow! res begin
res_ss = samplesize(:intended, :design; CV = :CV, peθ₀ = :θ0)
:n, :power = res_ss.n, res_ss.achieved_power
end
res
| Row | design | intended | metric | CV | θ0 | n | power |
|---|---|---|---|---|---|---|---|
| DataType | UnionAll | String | Float64 | Float64 | Int64 | Float64 | |
| 1 | ED2x2x4ReplicateCrossover | ExpandingLimitsRatioTOST | Cmin | 0.35 | 0.9 | 34 | 0.811889 |
| 2 | ED2x2x4ReplicateCrossover | RatioTOST | Cmax | 0.2 | 1.05 | 10 | 0.85176 |
res <- data.frame(design = "2x2x4", intended = c("ABEL", "ABE"),
metric = c("Cmin", "Cmax"), CV = c(0.35, 0.20),
theta0 = c(0.90, 1.05), n = NA, power = NA,
stringsAsFactors = FALSE) # this line for R <4.0.0
res[1, 6:7] <- sampleN.scABEL(CV = res$CV[1], theta0 = res$theta0[1],
design = res$design[1], print = FALSE,
details = FALSE)[8:9]
res[2, 6:7] <- sampleN.TOST(CV = res$CV[2], theta0 = res$theta0[2],
design = res$design[2], print = FALSE,
details = FALSE)[7:8]
print(res, row.names = FALSE) design intended metric CV theta0 n power
2x2x4 ABEL Cmin 0.35 0.90 34 0.8118400
2x2x4 ABE Cmax 0.20 1.05 10 0.8517596
n = samplesize(ExpandingLimitsRatioTOST, ED2x2x4ReplicateCrossover; CV = 0.35, peθ₀ = 0.9).n
res = DataFrame(
design = ED2x2x4ReplicateCrossover,
intended = [ExpandingLimitsRatioTOST, RatioTOST],
metric = ["Cmin", "Cmax"],
CV = [0.5, 0.25],
θ0 = [0.88, 1.12],
n = n,
power = NaN,
)
@eachrow! res begin
res_pwr = power(:intended, :design; CV = :CV, peθ₀ = :θ0, n = :n)
:power = res_pwr.achieved_power
end
res
| Row | design | intended | metric | CV | θ0 | n | power |
|---|---|---|---|---|---|---|---|
| DataType | UnionAll | String | Float64 | Float64 | Int64 | Float64 | |
| 1 | ED2x2x4ReplicateCrossover | ExpandingLimitsRatioTOST | Cmin | 0.5 | 0.88 | 34 | 0.818247 |
| 2 | ED2x2x4ReplicateCrossover | RatioTOST | Cmax | 0.25 | 1.12 | 34 | 0.825811 |
n <- sampleN.scABEL(CV = 0.35, theta0 = 0.90, design = "2x2x4",
print = FALSE, details = FALSE)[["Sample size"]]
# CV and theta0 of both metrics worse than assumed
res <- data.frame(design = "2x2x4", intended = c("ABEL", "ABE"),
metric = c("Cmin", "Cmax"), CV = c(0.50, 0.25),
theta0 = c(0.88, 1.12), n = n, power = NA,
stringsAsFactors = FALSE) # this line for R <4.0.0
res[1, 7] <- power.scABEL(CV = res$CV[1], theta0 = res$theta0[1],
design = res$design[1], n = n)
res[2, 7] <- power.TOST(CV = res$CV[2], theta0 = res$theta0[2],
design = res$design[2], n = n)
print(res, row.names = FALSE) design intended metric CV theta0 n power
2x2x4 ABEL Cmin 0.50 0.88 34 0.8183300
2x2x4 ABE Cmax 0.25 1.12 34 0.8258111
3.4 Dose-Proportionality
We now parallel the “Dose-Proportionality” examples from the PowerTOST Vignette.
Dose-Proportionality Crossover Examples
These examples parallel these PowerTOST examples.
function mod_fibo(lowest, levels)
fib = [2, 1 + 2 / 3, 3 / 2, 1 + 1 / 3]
doses = zeros(levels)
doses[1] = lowest
level = 2
while level <= levels
if level <= 4
doses[level] = doses[level-1] * fib[level-1]
else
doses[level] = doses[level-1] * fib[4]
end
level += 1
end
return round.(doses, digits = 3)
end
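For example, the first five modified Fibonacci dose levels starting at 10 are (matching the doses printed in the outputs below):
mod_fibo(10, 5) #[10.0, 20.0, 33.333, 50.0, 66.667]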
lowest = 10
levels1 = 3
doses1 = mod_fibo(lowest, levels1)
samplesize(DoseProportionalityStudy; CV = 0.2, doses = doses1, peβ₀ = 1.02)Determined sample size: n = 15 achieved power = 80.805%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DoseProportionalityStudy{BioequivalencePower.EDGeneralCrossover} doses = [10.0, 20.0, 33.333] CV = 0.2 CVb = 0.4 β₀ = 1.02 [θ₁, θ₂] = [0.8, 1.25]
mod.fibo <- function(lowest, levels) {
# modified Fibonacci dose-escalation
fib <- c(2, 1 + 2/3, 1.5, 1 + 1/3)
doses <- numeric(levels)
doses[1] <- lowest
level <- 2
repeat {
if (level <= 4) {
doses[level] <- doses[level-1] * fib[level-1]
} else { # ratio 1.33 for all higher doses
doses[level] <- doses[level-1] * fib[4]
}
level <- level + 1
if (level > levels) {
break
}
}
return(signif(doses, 3))
}
lowest <- 10
levels <- 3
doses <- mod.fibo(lowest, levels)
sampleN.dp(CV = 0.20, doses = doses, beta0 = 1.02) n power
15 0.808127
levels2 = 4
doses2 = mod_fibo(lowest, levels2)
x = samplesize(DoseProportionalityStudy; CV = 0.2, doses = doses2, peβ₀ = 1.02)Determined sample size: n = 16 achieved power = 86.76%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DoseProportionalityStudy{BioequivalencePower.EDGeneralCrossover} doses = [10.0, 20.0, 33.333, 50.0] CV = 0.2 CVb = 0.4 β₀ = 1.02 [θ₁, θ₂] = [0.8, 1.25]
levels <- 4
doses <- mod.fibo(lowest, levels)
x <- sampleN.dp(CV = 0.20, doses = doses, beta0 = 1.02) n power
16 0.867441
res = DataFrame(n = x.n:-1:13, power = NaN)
@eachrow! res begin
:power = round(
power(
DoseProportionalityStudy;
CV = 0.2,
doses = doses2,
peβ₀ = 1.02,
n = :n,
verbose = false,
).achieved_power,
digits = 5,
)
end
| Row | n | power |
|---|---|---|
| Int64 | Float64 | |
| 1 | 16 | 0.8676 |
| 2 | 15 | 0.82937 |
| 3 | 14 | 0.81956 |
| 4 | 13 | 0.81536 |
res <- data.frame(n = seq(x[["Sample size"]], 12, -1), power = NA)
for (i in 1:nrow(res)) {
res$power[i] <- signif(suppressMessages(
power.dp(CV = 0.20, doses = doses,
beta0 = 1.02, n = res$n[i])), 5)
}
res <- res[res$power >= 0.80, ]
print(res, row.names = FALSE) n power
16 0.86744
15 0.82914
14 0.81935
13 0.81516
Dose-Proportionality General Design Examples
levels3 = 5
doses3 = mod_fibo(lowest, levels3)
#This matrix, dm, is what one would get from crossdes::balmin.RMD (in R)
dm = [
1 5 3
2 1 4
3 2 5
4 3 1
5 4 2
3 5 1
4 1 2
5 2 3
1 3 4
2 4 5
]
x2 = samplesize(
DoseProportionalityStudy,
EDGeneral;
CV = 0.20,
doses = doses3,
peβ₀ = 1.02,
design_matrix = dm,
)Determined sample size: n = 30 achieved power = 89.887%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DoseProportionalityStudy{BioequivalencePower.EDGeneral} doses = [10.0, 20.0, 33.333, 50.0, 66.667] CV = 0.2 CVb = 0.4 β₀ = 1.02 [θ₁, θ₂] = [0.8, 1.25]
design matrix: [1 5 3; 2 1 4; 3 2 5; 4 3 1; 5 4 2; 3 5 1; 4 1 2; 5 2 3; 1 3 4; 2 4 5]
levels <- 5
doses <- mod.fibo(lowest, levels)
per <- 3
block <- levels*(levels-1)/(per-1)
dm <- crossdes::balmin.RMD(levels, block, per)
x <- sampleN.dp(CV = 0.20, doses = doses, beta0 = 1.02,
design = "IBD", dm = dm)++++ Dose proportionality study, power model ++++
Sample size estimation
-------------------------------------------------
Study design: IBD (5x10x3)
alpha = 0.05, target power = 0.8
Equivalence margins of R(dnm) = 0.8 ... 1.25
Doses = 10 20 33.3 50 66.7
True slope = 1.02, CV = 0.2, CVb = 0.4
Slope acceptance range = 0.88241 ... 1.1176
Sample size (total)
n power
30 0.898758
res = DataFrame(n = x2.n:-1:(size(dm)[1]), power = NaN)
@eachrow! res begin
:power = round(
power(
DoseProportionalityStudy,
EDGeneral;
CV = 0.2,
doses = doses3,
peβ₀ = 1.02,
n = :n,
design_matrix = dm,
verbose = false,
).achieved_power,
digits = 5,
)
end
@subset! res :power .≥ 0.8
res
| Row | n | power |
|---|---|---|
| Int64 | Float64 | |
| 1 | 30 | 0.89887 |
| 2 | 29 | 0.89208 |
| 3 | 28 | 0.87951 |
| 4 | 27 | 0.87215 |
| 5 | 26 | 0.85807 |
| 6 | 25 | 0.83602 |
| 7 | 24 | 0.82486 |
| 8 | 23 | 0.80421 |
res <- data.frame(n = seq(x[["Sample size"]], nrow(dm), -1),
power = NA)
for (i in 1:nrow(res)) {
res$power[i] <- signif(suppressMessages(
power.dp(CV = 0.20, doses = doses,
beta0 = 1.02, design = "IBD",
dm = dm, n = res$n[i])), 5)
}
res <- res[res$power >= 0.80, ]
print(res, row.names = FALSE) n power
30 0.89876
29 0.89196
28 0.87939
27 0.87201
26 0.85793
25 0.83587
24 0.82470
23 0.80405
Dose-Proportionality Expanded Crossover Examples
doses4 = 2 .^ (0:2:8)
levels4 = length(doses4)
samplesize(DoseProportionalityStudy; CV = 0.3, doses = doses4, peβ₀ = 1.02)Determined sample size: n = 70 achieved power = 80.999%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DoseProportionalityStudy{BioequivalencePower.EDGeneralCrossover} doses = [1.0, 4.0, 16.0, 64.0, 256.0] CV = 0.3 CVb = 0.6 β₀ = 1.02 [θ₁, θ₂] = [0.8, 1.25]
doses <- 2^(seq(0, 8, 2))
levels <- length(doses)
sampleN.dp(CV = 0.30, doses = doses, beta0 = 1.02,
design = "crossover")++++ Dose proportionality study, power model ++++
Sample size estimation
-------------------------------------------------
Study design: crossover (5x5 Latin square)
alpha = 0.05, target power = 0.8
Equivalence margins of R(dnm) = 0.8 ... 1.25
Doses = 1 4 16 64 256
True slope = 1.02, CV = 0.3
Slope acceptance range = 0.95976 ... 1.0402
Sample size (total)
n power
70 0.809991
samplesize(DoseProportionalityStudy; CV = 0.3, doses = doses4, peβ₀ = 1.02, θ₁ = 0.75)Determined sample size: n = 30 achieved power = 82.825%
HypothesisTestMetaParams Settings: α = 0.05 target_power = 0.8
Study: BioequivalencePower.DoseProportionalityStudy{BioequivalencePower.EDGeneralCrossover} doses = [1.0, 4.0, 16.0, 64.0, 256.0] CV = 0.3 CVb = 0.6 β₀ = 1.02 [θ₁, θ₂] = [0.75, 1.3333333333333333]
sampleN.dp(CV = 0.30, doses = doses, beta0 = 1.02,
design = "crossover", theta1 = 0.75)++++ Dose proportionality study, power model ++++
Sample size estimation
-------------------------------------------------
Study design: crossover (5x5 Latin square)
alpha = 0.05, target power = 0.8
Equivalence margins of R(dnm) = 0.75 ... 1.333333
Doses = 1 4 16 64 256
True slope = 1.02, CV = 0.3
Slope acceptance range = 0.94812 ... 1.0519
Sample size (total)
n power
30 0.828246
4 Conclusion
This highly technical tutorial serves as a comparison between BioequivalencePower and PowerTOST. It may help R/PowerTOST users see how things are done in Julia/Pumas, and it may help Julia/Pumas users see the associated R/PowerTOST code.