When you first learn experimental design, you learn all the best ways to control your experiments and prevent bias from infiltrating your data and analysis. That being said, I think when experiments using animal models are actually performed, a lot of the controls and procedures that should be used fall by the wayside due to budgetary, time or personnel constraints.
When you perform behavior experiments, it is especially important to be rigorous about your experimental design and aware of biases, because collecting and analyzing the data can be subjective. Make sure that you have the proper genetic, environmental and behavior controls when you design your experiments (more on that in ‘What controls should I use for behavior experiments?’).
You should also consider performing and/or analyzing your experiments blind. Performing your experiments blind means that you have someone relabel your experimental and control groups and save the ‘code’ for their relabeling somewhere safe. You then perform the experiment, and only after you have finished performing and analyzing it do you look up the code and assign your groups their true labels. For example, say you have two strains: a mutant line and its genetic background control. Someone in the lab labels the lines ‘George’ and ‘Martha’ and doesn’t tell you which is the control and which is the experimental line. You perform all the behavior experiments and analyze the data for ‘George’ and ‘Martha’, then look up the code and learn that ‘Martha’ was the mutant line and ‘George’ was the control line.
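If it helps to make the coding step concrete, here is a minimal sketch (in Python, with hypothetical strain names, code names and file name) of how the labmate doing the relabeling might assign the blinded names and stash the key for later:

```python
# A minimal sketch of blind-coding two strains; the strain names, code names
# and file name are hypothetical. The person doing the coding runs this and
# keeps the key file away from the experimenter until the analysis is done.
import json
import random

strains = ["TH-mutant", "background-control"]   # the real genotypes
code_names = ["George", "Martha"]               # the blinded labels

random.shuffle(code_names)                      # randomize which label goes with which strain
key = dict(zip(code_names, strains))            # e.g. {"Martha": "TH-mutant", ...}

# Save the key somewhere the experimenter won't stumble across it.
with open("blinding_key.json", "w") as f:
    json.dump(key, f, indent=2)

print("Label the vials:", " and ".join(code_names))
```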
This doesn’t always work perfectly, since occasionally while fly pushing and collecting your strains, you notice that ‘Martha’ flies look a bit different from ‘George’ flies. If your mutant was in the tyrosine hydroxylase (TH) gene, for example, there’s a good chance that the cuticle color would be a bit pale (the gene name for TH is pale for this very reason - when the mutant was first isolated, the mutant’s cuticle was noticeably lighter than wild-type). In cases like this, you more or less know which is the mutant and which is the control strain, so you do your best to ignore these differences when you run your experiments. Obviously your biases can still accidentally influence you here, so it’s best to have someone a bit less invested in the outcome of your experiments do the data analysis.
Tip: when someone hands you the piece of paper with the code for your experiment, put it somewhere you will remember and where you can’t accidentally see the coding. There is nothing more frustrating than forgetting where that code is once the experiment is completed and analyzed. I like to write these codes on small sticky notes, fold them so I can’t see the writing, and tape them to the inside back cover of my lab notebook.
Another type of blinding is to perform the experiments knowing the strains, but to be blinded to the strains for the analysis. Although this means you don’t know which strain is which during the analysis, it isn’t ideal, since experimenter bias can still influence the outcome while the experiment is being performed. To continue the example above, say you are a grad student in the lab: you rear and collect the mutant and control strains for your experiment, and you perform the experiment. Then you code the strains as ‘George’ and ‘Martha’ and send the coded data to an undergraduate working with you for analysis. The undergraduate doesn’t know whether ‘George’ or ‘Martha’ is the mutant strain, and chances are they know less about the details or motivation of the experiment and thus may not be as invested in the outcome as you are. They perform the analysis and plot the data, hand it back to you, and you reveal that ‘Martha’ is the mutant and ‘George’ is the control.
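For this analysis-only version of blinding, the coding can happen on the data file itself rather than on the vials. Here is a minimal sketch (in Python, assuming a hypothetical CSV with a ‘genotype’ column and only two genotypes) of how you might code the data before sending it off:

```python
# A minimal sketch of blinding a finished dataset before handing it off for
# analysis; the column and file names are hypothetical.
import json
import random

import pandas as pd

df = pd.read_csv("raw_behavior_data.csv")        # assumed to have a 'genotype' column

genotypes = sorted(df["genotype"].unique())      # e.g. ['TH-mutant', 'background-control']
codes = ["George", "Martha"]
random.shuffle(codes)
key = dict(zip(genotypes, codes))                # e.g. {'TH-mutant': 'Martha', ...}

# Replace the real genotypes with the blinded labels and save the coded file.
df["genotype"] = df["genotype"].map(key)
df.to_csv("coded_behavior_data.csv", index=False)

# Keep the key to yourself until the plots come back.
with open("blinding_key.json", "w") as f:
    json.dump(key, f, indent=2)
```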
So what about when you use a more objective type of analysis, like an automated behavioral tracking system? Does this allow you to forgo blinding your experiments? Theoretically it should add a layer of objectivity to your analysis, since computers don’t care what the outcome is. That being said, these types of analyses often produce a lot of high-content data, and whenever you are dealing with large sample sizes and many behavioral metrics, it is possible to find effects that, if you look at the data more holistically, are so subtle they are likely negligible in the big picture.
Being blinded for the analysis but not while you perform the experiment also doesn’t prevent your bias from leaking into the experiment. For example, maybe you accidentally blow harder on the mutant strain than the control strain when you aspirate your flies into the behavior arena, which results in them getting a bit banged around and affects their activity. Thus, it’s important to be blinded both for performing the experiment and for the analysis.
What about running large sample sizes? Doesn’t this prevent you from accidentally biasing a group, since you are less likely to be able to bias each fly in a sample size of, say, 100 individual runs per strain? Yes and no. Running large sample sizes is great for statistical rigor, but you can still accidentally bias the performance and analysis of the experiment. You also have to be careful when dealing with high sample sizes, since very subtle effects can have p < 0.0001 and you might think this is great because it’s ‘super significant’, but in reality the effect size is so small (like a difference between 10.5 and 11.0 on a scale of 0-1000) that its contribution to the behavior as a whole is practically negligible.
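To make that last point concrete, here is a small simulation (in Python, with made-up numbers chosen only to mirror the 10.5 vs 11.0 example, not real fly data) showing how a negligible shift can still come out ‘super significant’ once the sample size gets big:

```python
# Simulated illustration: a 0.5-unit shift on a 0-1000 scale is easily
# 'super significant' with enough runs, even though the effect is tiny.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000                                              # runs per strain
control = rng.normal(loc=10.5, scale=2.0, size=n)     # hypothetical behavioral metric
mutant = rng.normal(loc=11.0, scale=2.0, size=n)

t_stat, p_value = stats.ttest_ind(control, mutant)
cohens_d = (mutant.mean() - control.mean()) / np.sqrt(
    (control.var(ddof=1) + mutant.var(ddof=1)) / 2
)

print(f"p = {p_value:.1e}")           # well below 0.0001 at this sample size
print(f"Cohen's d = {cohens_d:.2f}")  # a small effect: a ~0.5-unit shift on a 0-1000 scale
```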
It’s important to realize that no matter how rigorous or objective you think you are being, there are a hundred different ways that bias can leak into your experiments. This is typically not a conscious process - I think it’s a very rare person who goes into an experiment actually wanting to influence their data. It’s just part of being human - it’s difficult to control our subjective biases, which is why it’s so important to be aware of them, to rigorously design controls, and to perform our experiments blind to prevent them from influencing our data.