Power/sample-size analysis for dose-response binomial GLMMs

A few tools have become available recently that make power analysis of more complicated models easier. Here I am going to recreate Johnson et al. (2015), 'Power analysis for generalized linear mixed models in ecology and evolution', for typical survival dose-response data. You can also find their examples attached to their paper.

How it works

More complex models with random effects require simulation to work out their power. Simulations aren't nearly as complicated as you might think, and they can tell you masses about your data. The approach Johnson et al. take is to: i) simulate some data with known trends, ii) analyse the data for evidence that the null hypothesis can be rejected (i.e. p-values < 0.05), and iii) repeat the process many times.

The overall percentage of correct rejections is the power. Generally you want at least 80% power (i.e. no more than 20% false negatives). Let's start.
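To make the logic concrete before tackling the GLMM, here is a minimal toy sketch of power-by-simulation for a plain two-group binomial comparison (my addition, not from the paper; the sample sizes and survival proportions are arbitrary, and it ignores tank effects):

# Toy power-by-simulation: 5 tanks of 30 larvae per group, true survival 0.7 vs 0.5
set.seed(1)
pvals <- sapply(1:1000, function(i) {
  a <- rbinom(5, 30, 0.7)  # successes in 5 replicate tanks, group a
  b <- rbinom(5, 30, 0.5)  # successes in 5 replicate tanks, group b
  prop.test(c(sum(a), sum(b)), c(150, 150))$p.value  # pooled two-proportion test
})
mean(pvals < 0.05)  # proportion of correct rejections = power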

First we need to design our experiment. A typical dose-response design has some continuous predictor that we want to find thresholds for, and we may run it at two levels of another treatment; an example could be a toxic metal tested at two temperatures. This design will create two 'curves'. For each combination of the treatments, let's trial five replicate tanks, each containing 30 larvae.

#Simulate data
x <- c(0.1, 0.3, 1, 3, 10, 30, 100)  # proposed treatment levels
vec.x <- rep(x, 5)                   # five tank replicates per treatment level
vec.x <- sort(vec.x)
data1 <- expand.grid(raw.x = vec.x, factor = c("a", "b"))  # two factor levels 'a' and 'b'
data1$obs <- factor(formatC(1:nrow(data1), flag = "0", width = 3))  # unique tank ID for later on

data1$n <- 30  # total number of organisms per tank
str(data1)  # raw.x is my continuous predictor, 'factor' is my categorical factor

We now have a data frame set up, ready for our response data. Now this is the part that requires a little attention. We need to simulate the minimum trends that we think are realistic to detect. In ecotoxicology, a 10% effect relative to the control is considered reasonable. Obviously there is some pragmatism here: detecting an effect smaller than 10% may require an unrealistically large sample size (you can test this later).

For the continuous factor 'raw.x', I will choose a very slight slope of -0.01, which I can adjust later if there is not a 10% effect between my 'control dose' (0.1) and my highest dose (100). For the categorical factor, I will work out the odds ratio for a 10% effect of the factor 'b' intercept relative to factor 'a'. To do this you need to define a likely 'control' response. In this experiment, I consider a healthy control to have a minimum of 70% survivorship, and a 10% effect from this is 63%. The input parameters are:

raw.x.slope <- -0.01
fac.a.control <- 0.7
odds.ratio <- (63 / 37) / (70 / 30)  # factor b odds (success/fail) / factor a odds (success/fail)
odds.ratio
obs <- 0.1  # random tank error

> odds.ratio
[1] 0.7297297

If you aren't confident with these numbers, don't worry: you will have plenty of opportunity to iteratively refine them when we start plotting. Now we will use the package attached to the Johnson paper to simulate all this together.
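If you don't already have the package, sim.glmm() ships with the paper's supplementary material, and as far as I know it is also bundled in Paul Johnson's GLMMmisc package on GitHub (the repository path below is an assumption, so check before installing):

# install.packages("remotes")                     # if needed
# remotes::install_github("pcdjohnson/GLMMmisc")  # assumed GitHub location of GLMMmisc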

library(GLMMmisc)
data1 <-
  sim.glmm(design.data = data1,
           fixed.eff = list(
             intercept = qlogis(fac.a.control),       # control survival on the logit scale
             factor = log(c(a = 1, b = odds.ratio)),  # factor effects as log odds ratios
             raw.x = raw.x.slope),                    # slope (already negative)
           rand.V = c(obs = obs),                     # random tank effect
           distribution = "binomial")
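As a quick sanity check of the parameterisation (my addition): the intercept is the control survival on the logit scale, and adding the log odds ratio for factor 'b' should recover the 63% target:

plogis(qlogis(0.7))                    # 0.70, factor 'a' control survival
plogis(qlogis(0.7) + log(odds.ratio))  # 0.63, factor 'b' control survival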

You can see that there is now a 'response' column in your data frame. Let's plot it.

library(ggplot2)
p0 <- ggplot() +
  geom_point(data = data1, mapping = aes(x = raw.x, y = response / n),
             position = position_jitter(width = .02), alpha = 0.5, size = 2) +
  theme_bw() +
  facet_wrap(~factor) +
  scale_x_log10(name = "raw.x")
p0 <- p0 + scale_y_continuous(limits = c(0, 1))
p0

My data will look different to yours because the simulation is random and we have not set a seed yet.

Can you see the weak trends we simulated in the data? There are (apparently) slight negative trends running from left to right across the x-axes, and data set 'b' is slightly lower than data set 'a'. But because we have binomial error (chance) and a random tank effect, it is hard to visually separate the signal from the noise. Let's analyse the data and see if any significant trends emerge.

library(lme4)
fit <- glmer(cbind(response, n - response) ~ raw.x + factor + (1 | obs),
             family = binomial, data = data1)
summary(fit)  # to show how well it estimates the simulation inputs
coef(summary(fit))["raw.x", "Pr(>|z|)"]    # p-value for raw.x
coef(summary(fit))["factorb", "Pr(>|z|)"]  # p-value for factor b

My analyses were actually significant for each factor. Were yours?

Ok, now this time let’s wrap everything into a function, set the ‘seed’ so it returns the same response each time, and plot the analysis.

sim.data1 <- function(...){
  sim.glmm(design.data = data1,
           fixed.eff = list(
             intercept = qlogis(fac.a.control),
             factor = log(c(a = 1, b = odds.ratio)),
             raw.x = raw.x.slope),
           rand.V = c(obs = obs),
           distribution = "binomial")
}
set.seed(978675)
sim.data1()

library(ggplot2)
p0 <- ggplot() +
  geom_point(data = sim.data1(), mapping = aes(x = raw.x, y = response / n),
             position = position_jitter(width = .02), alpha = 0.5, size = 2) +
  theme_bw() +
  facet_wrap(~factor) +
  scale_x_log10(name = "raw.x")
p0 <- p0 + scale_y_continuous(limits = c(0, 1))
p0

# And check we can still analyse our data with the function
fit <- glmer(cbind(response, n - response) ~ raw.x + factor + (1 | obs),
             family = binomial, data = sim.data1())
summary(fit)

# Predictions and 95% Wald confidence bands, back-transformed to the response scale
vec.x <- seq(min(data1$raw.x), max(data1$raw.x), length = 100)
df1 <- expand.grid(raw.x = vec.x, factor = levels(data1$factor))
mm <- model.matrix(~raw.x + factor, df1)  # build model matrix
eta <- mm %*% fixef(fit)                  # linear predictor on the logit scale
df1$prediction <- as.vector(exp(eta) / (1 + exp(eta)))  # inverse logit
se <- sqrt(diag(mm %*% vcov(fit) %*% t(mm)))
df1$upper <- as.vector(exp(eta + 1.96 * se) / (1 + exp(eta + 1.96 * se)))
df1$lower <- as.vector(exp(eta - 1.96 * se) / (1 + exp(eta - 1.96 * se)))
p0 <- p0 + geom_line(data = df1, aes(x = raw.x, y = prediction), color = "grey30", size = 1)
p0 <- p0 + geom_ribbon(data = df1, aes(x = raw.x, ymin = lower, ymax = upper),
                       fill = "grey", alpha = 0.2)
p0

Unsurprisingly, this looks quite similar to coral settlement data that I collect (the input parameters have been built around real assay outputs). At this point, you may want to run this a few times and refine some of the input parameters. Change the value of the random effect obs to see how it influences the noise, and how much of the error is from (binomial) chance. You might be quite surprised how much of the noise really is just chance (welcome to binomial data!). You can confirm by running this a few times for the control in factor 'a'.

rbinom(5, 30, 0.7) / 30  # binomial error around 0.7, without random tank error

# [1] 0.7666667 0.7666667 0.6000000 0.6666667 0.5666667   # yep, lots of noise
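For context, this chance-only spread follows directly from the binomial variance (a back-of-envelope addition of mine, not in the original workflow):

sqrt(0.7 * 0.3 / 30)  # ~0.084: roughly +/- 8% per tank before any tank effect is added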

For me, I want the option of detecting slightly smaller effects of the continuous predictor, so I will change the slope to -0.008 and keep the random effect at 0.1.

raw.x.slope = -0.008

obs = 0.1 #random tank error

Finally, we will wrap the analysis into two functions, one per coefficient, each returning the p-value for that coefficient. Note: I have scaled the predictor variable, which can be important for multifactor experiments.

sim.rawx.pval <- function(...){
  fit <- glmer(cbind(response, n - response) ~ scale(raw.x) + factor + (1 | obs),
               family = binomial, data = sim.data1())
  coef(summary(fit))["scale(raw.x)", "Pr(>|z|)"]
}  # if this is significant, we have a true positive (which we want to occur 80% of the time - see below)

sim.factorb.pval <- function(...){
  fit <- glmer(cbind(response, n - response) ~ scale(raw.x) + factor + (1 | obs),
               family = binomial, data = sim.data1())
  coef(summary(fit))["factorb", "Pr(>|z|)"]
}
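As an aside (my addition, not part of the original workflow), you could fit the model once and return both p-values, halving the computation if you want power for both coefficients; sim.both.pvals is a hypothetical helper name:

sim.both.pvals <- function(...){  # hypothetical helper
  fit <- glmer(cbind(response, n - response) ~ scale(raw.x) + factor + (1 | obs),
               family = binomial, data = sim.data1())
  coef(summary(fit))[c("scale(raw.x)", "factorb"), "Pr(>|z|)"]
}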

Now we can run everything together. Just to recap: we will simulate the response data, analyse it, extract the p-value, and repeat again and again. We will then calculate the proportion of significant p-values, i.e. the correct rejections. I have only run 20 below for speed, but you should run many more once you have locked in your input parameters.

set.seed(978675)  # set seed for random number generator to give repeatable results
sim.factorb.pval()
sim.pvals <- sapply(1:20, sim.factorb.pval)  # runs 20 simulations
mean(sim.pvals < 0.05)  # proportion of true positives; needs to be > 0.8
binom.test(table(factor(sim.pvals < 0.05, c(T, F))))$conf.int  # confidence interval around this estimate
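Getting a stable estimate is then just a matter of more iterations; each one refits the GLMM, so 1000 runs will take a while (this snippet is my addition):

sim.pvals <- sapply(1:1000, sim.factorb.pval)  # slow: 1000 model fits
mean(sim.pvals < 0.05)
binom.test(table(factor(sim.pvals < 0.05, c(T, F))))$conf.int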

For the 'factorb' function, my power is 70% (95% CI: 46 to 88%). Johnson et al. recommend increasing the number of simulations until the 95% CI is within ~2.5% of the mean. If I run it 1000 times I get 75.8% (95% CI: 73.0 to 78%). So I have run enough simulations, but my power is just short. That leaves me a few options:

  1. Increase my tank replicates by one (may be overkill since I am so close).
  2. Increase the number of larvae in each tank (sounds doable; see the sketch after this list).
  3. Lower my expectations, i.e. acknowledge that I may not pick up subtle effects and be happy with detecting only a 15% decrease from my control.
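Option 2, for example, can be tested directly by bumping up n in the design and re-running the power loop (a sketch of mine; 60 larvae per tank is just an illustrative guess):

data1$n <- 60  # was 30; sim.data1() picks this up through design.data
set.seed(978675)
sim.pvals <- sapply(1:20, sim.factorb.pval)  # use many more runs for a stable answer
mean(sim.pvals < 0.05)
# remember to reset data1$n <- 30 afterwards if comparing designs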


My thoughts

What I like about this analysis is that random effects are quite easily incorporated into the simulation. This is great for field data, which inherently has lots of random effects, and where we are using null-hypothesis significance testing (NHST) to tease apart many predictor variables.

However, in lab studies we have the benefit of control, and a properly designed experiment will aim to reduce random effects as much as possible. Further, we can move away from NHST towards something much more meaningful: effect sizes (sensu Gelman, http://www.stat.columbia.edu/~gelman/research/published/retropower20.pdf).

You can see above that it was a bit clunky to estimate a 10% effect of raw.x. In a previous post, I showed how you could extract standardised effect sizes (EC10s) from similar binomial GLMMs. Using a similar methodology to the above, we could i) simulate the data, ii) extract EC10s, iii) repeat many times, and iv) compare EC10s until 80% were within 10% of the real value. This is something I need to code up…

But this has been done for nonlinear models, where the ECx value is a coefficient in the model and can therefore be easily compared. See this excellent post (http://edild.github.io/lc50_bias_sim/) that describes the process.

1. Johnson PC, Barry SJ, Ferguson HM, Müller P. Power analysis for generalized linear mixed models in ecology and evolution. Methods in Ecology and Evolution. 2015;6(2):133-42.
