Isolating cause and effect with sensitive mice

10 December 2014
By Natasha Karp


A study assessing the sensitivity of different experiment designs. B1: One Batch, B2: Two Batch, B3: Three Batch, MG: MultiBatch and R: Random.
The power of the experiment is significantly increased when the experiment is organised into multiple batches.
Credit: DOI: 10.1371/journal.pone.0111239

It’s tricky to design experiments that isolate the particular thing you are testing, so that you draw the right conclusion. Working with mice adds another dimension. Mice are responsive creatures; their survival requires that they constantly adapt to their environment. Often it’s difficult to know whether the differences you are seeing in your experiment are interesting results or due to subtle variations in the environment the mice live in.

You could try the classic design, where you standardise everything and allow only the therapy you are testing to vary. However, this creates an artificial situation: the variation is so low that you can see a treatment effect, but that effect can be very specific to that environment [see references 1 and 2]. You could instead measure everything that varies and include it all in the mathematical model, but this becomes too complex and there isn’t enough data to fit it. You need a design that encompasses variation, with an associated mathematical model that works well enough to reliably detect the effects.

Studies involving mice are critical to the research effort to understand and treat human disease. With animals we have an ethical responsibility to make the experiments reliable, so that the use of the animals is appropriate and future use of animals is reduced.

At the Wellcome Trust Sanger Institute, we are working to understand how genes function in the body by systematically recording the characteristics (phenotypes) of knockout animals where a gene has been switched off. The mice are characterised via a phenotyping pipeline and this includes things such as bone density or cholesterol levels.

This is a large-scale project with an international consortium that has ambitions to knock out all known genes. This means we have a lot of control data, but due to operational constraints we don’t have controls collected on every day that mice are phenotyped. Even with our highly standardised environment, whether in a behavioural or physiological screen, we can see the environment leading to fluctuations in the measurements.

My research focuses on how to design the experiments and analyse the data to answer the question being asked. We have developed a new approach to analysing the data [see reference 3] and then investigated when the method is reliable [see reference 4]. We have found that we can improve the accuracy of our experiments by phenotyping mice for a knockout line in multiple batches (where a batch is the data collected on a single day). By phenotyping smaller groups of mice on different days, we can separate fluctuations arising from environmental variation from the treatment effect, and we can have higher confidence that a difference will be reproducible.
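The gain from splitting an experiment across days can be illustrated with a small simulation. This is a hedged sketch, not the consortium's actual analysis pipeline: the effect size, variance values, and mouse counts below are invented for illustration, and the control pool stands in for control data accumulated across many other phenotyping days.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 1.0   # hypothetical knockout effect (invented value)
BATCH_SD = 2.0      # day-to-day environmental shift (invented value)
MOUSE_SD = 0.5      # mouse-to-mouse noise (invented value)
N_KO = 12           # total knockout mice in the experiment

def estimate_effect(n_batches):
    """Estimate the treatment effect when the knockout mice are
    phenotyped across n_batches days, each day carrying its own
    environmental shift."""
    ko = []
    for _ in range(n_batches):
        shift = random.gauss(0, BATCH_SD)
        ko += [random.gauss(shift + TRUE_EFFECT, MOUSE_SD)
               for _ in range(N_KO // n_batches)]
    # Large pool of control measurements collected on many other days.
    ctrl = [random.gauss(random.gauss(0, BATCH_SD), MOUSE_SD)
            for _ in range(200)]
    return statistics.mean(ko) - statistics.mean(ctrl)

def spread(n_batches, reps=500):
    """Standard deviation of the estimate over repeated experiments:
    a smaller spread means a more reliable, higher-powered design."""
    return statistics.stdev(estimate_effect(n_batches) for _ in range(reps))

print("one batch   :", round(spread(1), 2))
print("four batches:", round(spread(4), 2))
```

With all twelve knockouts concentrated on a single day, that one day's environmental shift moves every measurement together, so the estimate is only as precise as the day-to-day variation allows. Splitting the same twelve mice over four days lets the day effects average out, roughly halving the spread of the estimate for no extra animals.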

It is critical for these projects to unravel how to efficiently design the experiment and analyse the data. The findings from these large scale studies can also inform all animal experiments by reminding us of the critical things to manage and think about to ensure the experiments deliver.

Natasha Karp is a senior biostatistician who supports the International Mouse Phenotyping Consortium.
