Monday, 22 February 2016

Flaws in Designing Experiments

I've been lacking motivation for my February project (Experimental Design), so I decided to turn this bit of it into a blog post. Multitasking! Here is a selection of flaws in the design of experiments that I've learned about. 

Incidentally, while researching this aspect of the project I came upon the page that inspired me to do it in the first place and that I hadn't been able to find since January. You can find it at the third link in the References at the bottom of this post.

These are mostly related to clinical trials ... Let's go!

False Precision

This one is from my Physics teacher. While we were measuring the wavelength of light with a spectrometer, he pointed out that you shouldn't pretend to be more precise than your measurements actually were: if you read every angle to the nearest degree, you shouldn't start working to the nearest tenth of a degree in your calculations, because those extra digits are (my words) mathematical tricks and not based on your experiment. This really annoyed me until he explained why he used significant figures that seemed really vague - in reality, what I was doing with my calculator-derived decimals was pretending to know more than the experiment had actually told me. 
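To make this a bit more concrete, here's a rough Python sketch of the spectrometer example (the grating spacing and angle are made up, not my actual lab readings):

```python
import math

# Diffraction grating relation: d * sin(theta) = n * lambda
lines_per_mm = 600                  # assumed grating, 600 lines per mm
d = 1e-3 / lines_per_mm             # grating spacing in metres
theta_deg = 21                      # angle read off the spectrometer to the nearest degree
n = 1                               # first-order maximum

wavelength = d * math.sin(math.radians(theta_deg)) / n

# Calculator answer: far more digits than a reading to the nearest degree can justify
print(wavelength)                   # something like 5.97...e-07 m

# Honest answer: quote roughly as many significant figures as you actually measured
print(f"{wavelength:.2g} m")        # ~6e-07 m
```

The extra decimals in the first print aren't wrong arithmetic - they're just precision the measurement never gave you.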

Failure to use a control, or use of the wrong control

This one is pretty obvious, but it's important so I'm keeping it. 

The basic idea is that you have your experimental group (in which you change the explanatory variable) and your control group (in which you don't change anything), and you measure both of them according to whatever protocol you're using (e.g. with or without a pretest). You want the control group to be identical in every way except the explanatory variable, so that no confounding variables confuse your results - the whole goal is to isolate that one variable. Without a relevant control you can't trust your results, and you might even need multiple controls to deal with different confounders.
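Here's a toy simulation of why the control needs to match (everything in it is invented - the "recovery score" model, the ages, all of it): if the control group differs from the treated group in something other than the treatment, the effect you measure gets mixed up with that difference.

```python
import random

random.seed(0)

def recovery_score(treated, age):
    # Toy model: the score depends on the treatment AND on age (a confounder)
    return 5.0 + (2.0 if treated else 0.0) - 0.05 * age + random.gauss(0, 1)

# Bad design: treated group happens to be young, "control" group happens to be old
bad_treated = [recovery_score(True, age) for age in range(20, 40)]
bad_control = [recovery_score(False, age) for age in range(60, 80)]

# Better design: both groups drawn from the same population
good_treated = [recovery_score(True, random.randint(20, 80)) for _ in range(200)]
good_control = [recovery_score(False, random.randint(20, 80)) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print("Mismatched control:", mean(bad_treated) - mean(bad_control))    # ~4, inflated by the age gap
print("Matched control:   ", mean(good_treated) - mean(good_control))  # ~2, the true treatment effect
```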

Bad Randomization

I remember reading about this in Bad Science by Ben Goldacre. To make sure your control group and your experimental group are comparable, you want to assign subjects to groups randomly. There are many ways to do this, some better than others. If you met the participants before dividing them up, you might unconsciously place the healthier patients in the experimental group through the power of wishful thinking. Alternating assignment is also a bad idea, because the researcher knows who will end up in each group and could theoretically manipulate that (just like students rearrange themselves to be with their friends once they figure out the pattern the PE teacher is using to split them up). Good ways: a random number generator (even means intervention, odd means control) or a coin flip (heads means intervention, tails means control). By chance you could still end up with unequal numbers of patients in each group; there are ways to fix that which I won't get into, but you can find them in the References. Simple randomisation can also be problematic if, just by chance, it puts too many of one demographic (e.g. young people) into one group, so stratified randomisation could be used instead.
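For my own reference, here's a little Python sketch of how I understand a couple of these schemes - simple randomisation by coin flip, and stratified randomisation with shuffled blocks inside each stratum (this is just my interpretation, not code from any of the references):

```python
import random

random.seed(42)

def simple_randomisation(n_subjects):
    # Coin flip per subject: heads -> intervention, tails -> control.
    # Group sizes (and demographics) can drift apart just by chance.
    return ["intervention" if random.random() < 0.5 else "control"
            for _ in range(n_subjects)]

def stratified_block_randomisation(subjects, block_size=4):
    # Randomise separately within each stratum (here: young vs old),
    # using shuffled blocks so the two arms stay balanced in every stratum.
    assignments = {}
    strata = {}
    for name, age in subjects:
        strata.setdefault("young" if age < 50 else "old", []).append(name)
    for members in strata.values():
        for start in range(0, len(members), block_size):
            block = ["intervention", "control"] * (block_size // 2)
            random.shuffle(block)
            for name, arm in zip(members[start:start + block_size], block):
                assignments[name] = arm
    return assignments

print(simple_randomisation(10))
print(stratified_block_randomisation(
    [("P1", 25), ("P2", 31), ("P3", 47), ("P4", 38),
     ("P5", 62), ("P6", 70), ("P7", 55), ("P8", 68)]))
```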

As I was researching this, I became a bit concerned about the unfairness of clinical trials in that some patients could be receiving a much better treatment just due to the way they fell in the randomisation, whereas those in the control group might stay sick or even die. I know it's uncommon to use just a basic placebo in these trials, but even the disparity between more and less effective treatments seems unfair. Anyway, for that reason, this piqued my interest: 

Play-the-Winner Design

The first subject is given either Treatment A or Treatment B based on a coin toss. If that treatment is a success, the second subject gets the same treatment; if it's a failure, the second subject gets the other one. You keep going like this, staying with the current treatment after each success and switching to the other after each failure. This is good in that more patients end up on the better treatment, but it makes the statistics harder and you can end up with different numbers in each group.
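Here's roughly how I understand the rule, as a quick Python sketch (the success probabilities are completely made up):

```python
import random

random.seed(1)

# Invented 'true' success probabilities for the two treatments
SUCCESS_PROB = {"A": 0.7, "B": 0.4}

def play_the_winner(n_patients):
    # Start with a coin toss, then stay on a treatment after a success
    # and switch to the other treatment after a failure.
    counts = {"A": 0, "B": 0}
    current = random.choice(["A", "B"])
    for _ in range(n_patients):
        counts[current] += 1
        if random.random() >= SUCCESS_PROB[current]:   # failure -> switch
            current = "B" if current == "A" else "A"
    return counts

print(play_the_winner(100))   # most patients end up on the better treatment (A), but the split is uneven
```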

Investigator Bias

This is when the researcher treats members of the experimental and control groups differently in some way other than the actual treatment being tested. It can be reduced by using a double-blind study, where neither the subjects nor the investigators know who is in which group.

References:
1. http://www.vicc.org/biostatistics/download/SJTU/ClinicalTrials/RandomizationAndBlinding.pdf
2. https://www.sealedenvelope.com/randomisation/protocols/
3. http://norvig.com/experiment-design.html
___________________________________________________________

I'll leave it there. Note: I am not a clinical researcher, I'm just trying to learn a little about some interesting things. 
