Why are so many studies poorly executed?

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

Justforfun000
Sith Devotee
Posts: 2503
Joined: 2002-08-19 01:44pm
Location: Toronto

Why are so many studies poorly executed?

Post by Justforfun000 »

I won't bother copying any specific ones; I'm sure you've seen many. I know I have. Why is it that there are so many studies that end up drawing conclusions like "due to the size of the study sample, the statistical difference between the two groups was not significant... better-designed studies may verify the positive trend shown..."?

I've seen this countless times. Why fucking bother spending the money and the time doing a half-assed study? Why not do it right or not at all? It's ridiculous. It's because of this that many very promising and clinically tantalizing products like milk thistle are thrown into the "unverified" no man's land. Even studies on vitamins like zinc or vitamin C have the same thing happen. There seems to be a glut of sub-par clinical studies out there.
You have to realize that most Christian "moral values" behaviour is not really about "protecting" anyone; it's about their desire to send a continual stream of messages of condemnation towards people whose existence offends them. - Darth Wong alias Mike Wong

"There is nothing wrong with being ignorant. However, there is something very wrong with not choosing to exchange ignorance for knowledge when the opportunity presents itself."
Bedlam
Jedi Council Member
Posts: 1509
Joined: 2006-09-23 11:12am
Location: Edinburgh, UK

Re: Why are so many studies poorly executed?

Post by Bedlam »

There are several reasons; the ones I can think of at the moment are:
1) You don't know what results you're going to get before the study starts, so you don't know what sample size you'll need to get significant results.

2) No one wants to publish a negative result, so generally even if the result looks negative you put a bit on it saying a larger study might show a better result, to try to explain why you wasted your time and money on the study (you didn't really, because a negative result can be just as useful as a positive one; it just looks worse).

3) You're not going to get money for a big study without some smaller (cheaper) studies showing that it's probably going to give positive results, so the uncertain studies can lead up to a study big enough to answer the question once and for all.

4) You might not be able to afford as many samples as you want, and have to make do with what you can get.
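Point 1 is easy to demonstrate with a simulation (a Python sketch with made-up event rates, not data from any real trial): when a real but modest effect exists, a small trial usually misses it, which is exactly when you get the "better designed studies may verify the trend" wording.

```python
import math
import random

def trial_significant(n, p_control, p_treat, rng):
    """Simulate one two-arm trial with n patients per arm and test the
    event-rate difference with a pooled two-proportion z-test (5% level)."""
    c = sum(rng.random() < p_control for _ in range(n))
    t = sum(rng.random() < p_treat for _ in range(n))
    p_pool = (c + t) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return False
    return abs(c - t) / n / se > 1.96

rng = random.Random(42)
sims = 500
# Hypothetical true effect: treatment cuts the event rate from 30% to 25%.
power_small = sum(trial_significant(50, 0.30, 0.25, rng) for _ in range(sims)) / sims
power_large = sum(trial_significant(2000, 0.30, 0.25, rng) for _ in range(sims)) / sims
print(f"n=50 per arm:   real effect detected in {power_small:.0%} of simulated trials")
print(f"n=2000 per arm: real effect detected in {power_large:.0%} of simulated trials")
```

With these made-up rates the small trial detects the effect only a small fraction of the time, while the large one almost always does.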
mr friendly guy
The Doctor
Posts: 11235
Joined: 2004-12-12 10:55pm
Location: In a 1960s police telephone box somewhere in Australia

Re: Why are so many studies poorly executed?

Post by mr friendly guy »

Imagine a roulette wheel. My friend bets on a particular box (which I know covers the first 12 numbers). My other friend bets on a different box (which covers the single number 30). They repeat the same bets for every spin. As chance would have it, they each win at some point. Without an understanding of roulette, one may be tempted to conclude that betting on either box gives an equal chance of winning.

Now fast forward to 37 spins: the first person has won around 12 times, while the second has won only the once. Now you can see that betting on the first 12 numbers gives roughly 12 times the chance of winning that betting on a single number does.

Now suppose I tried this again with odds that are much closer, for example if the first friend bets on half the numbers to win (odds of 18/37 on European roulette, 18/38 on American), while the second friend places several bets that together cover 17 different numbers (so his odds are 17/37). Obviously you would then need many more spins before the difference becomes obvious.
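That intuition is easy to check with a simulation (a Python sketch; I've made the second friend's 17 numbers disjoint from the first friend's 18 so the comparison is a fair race between the two bets):

```python
import random

def win_counts(n_spins, rng):
    """One session on a European wheel (0-36): friend A covers numbers
    1-18 (18/37 odds), friend B covers the 17 numbers 19-35 (17/37 odds)."""
    a = b = 0
    for _ in range(n_spins):
        n = rng.randrange(37)
        a += 1 <= n <= 18
        b += 19 <= n <= 35
    return a, b

rng = random.Random(1)
sessions = 300
results = {}
for n_spins in (37, 3_700):
    a_ahead = 0
    for _ in range(sessions):
        a, b = win_counts(n_spins, rng)
        a_ahead += a > b
    results[n_spins] = a_ahead / sessions
    print(f"{n_spins} spins: A ahead in {results[n_spins]:.0%} of sessions")
```

Over 37 spins the truly better bet comes out ahead only slightly more than half the time; over a hundred times as many spins it wins essentially every session.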

The analogy carries over to medical studies. You need a larger sample size (analogous to the number of roulette spins) for the difference to become noticeable. The problem is that, unlike the roulette wheel, where I know the probabilities beforehand and hence can suggest the number of spins needed for the difference to show, I don't know how much better one treatment is than the other, so it's hard to guesstimate how big a sample I will need. Note that there are mathematical ways of doing this, but since I am not a statistician I won't try.
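Those "mathematical ways" are power calculations. A minimal sketch of the standard normal-approximation formula for comparing two proportions (two-sided 5% significance, 80% power; the event rates below are invented examples, not from any real study):

```python
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_power=0.8416):
    """Required patients per arm to compare two event rates p1 and p2
    (two-proportion z-test, normal approximation,
    two-sided alpha = 0.05, power = 0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_arm(0.50, 0.30))   # big effect: a modest trial suffices
print(n_per_arm(0.50, 0.45))   # small effect: an order of magnitude more patients
```

Halving the effect you are looking for roughly quadruples the sample you need, which is why small benefits demand very large trials.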

For example, if my new drug A can reduce the incidence of a disease by 99%, then I only need a small sample size to show this, as the effect should be obvious. However, if drug A only reduces the incidence of the disease by 1%, then I would need a large sample. Large samples are obviously going to be harder to get, even if the disease is relatively common, as the study may not be done in a big enough population. You can try to get around this by combining various studies in what is called a meta-analysis.
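A toy illustration of the meta-analysis point (all counts invented; real meta-analyses weight studies by inverse variance rather than naively pooling raw counts, so treat this only as the intuition): two trials that each just miss significance can cross the threshold when combined.

```python
import math

def z_two_prop(events_a, n_a, events_b, n_b):
    """Pooled two-proportion z statistic (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Two identical hypothetical trials: 30% vs 22% event rates, 200 per arm.
z_single = z_two_prop(60, 200, 44, 200)
z_pooled = z_two_prop(120, 400, 88, 400)
print(round(z_single, 2))  # below the 1.96 cutoff: "not significant"
print(round(z_pooled, 2))  # combined, the same effect clears it
```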

Therein lies the problem. And this is with a randomised double-blind trial. With retrospective studies, I would be very cautious.
Never apologise for being a geek, because they won't apologise to you for being an arsehole. John Barrowman - 22 June 2014 Perth Supernova.

Countries I have been to - 14.
Australia, Canada, China, Colombia, Denmark, Ecuador, Finland, Germany, Malaysia, Netherlands, Norway, Singapore, Sweden, USA.
Always on the lookout for more nice places to visit.
Twoyboy
Jedi Knight
Posts: 536
Joined: 2007-03-30 08:44am
Location: Perth, Australia

Re: Why are so many studies poorly executed?

Post by Twoyboy »

Another problem is variation. If the result you are looking for is not "yes/no" or "win/lose" like the examples above, then you are looking for a quantitative change in a measured variable. If the variable is almost perfectly static, even a tiny test may show a statistically significant change. If there is massive variation, the change can be hard to spot unless you have a huge amount of data.

Built into all statistical significance tests is the standard deviation, a measure of spread. However, you need data to estimate the standard deviation, which in turn determines how many data points you need. Rather than run a separate study just for this, you make a guess, do the test, and measure the standard deviation at the same time. If your guess was wrong and the SD is higher than you expected, you get no significance, possibly just because you didn't have enough samples.
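That effect is easy to show in a quick simulation (a Python sketch with invented numbers): the same mean shift and the same sample size, detected easily when the spread is small and usually missed when it is large.

```python
import math
import random
import statistics

def detects_shift(n, sd, rng, shift=1.0):
    """Two-sample z-test on simulated data: is a fixed mean shift
    significant at the 5% level given noise with this standard deviation?"""
    a = [rng.gauss(0.0, sd) for _ in range(n)]
    b = [rng.gauss(shift, sd) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    return abs(statistics.mean(b) - statistics.mean(a)) / se > 1.96

rng = random.Random(7)
sims = 400
quiet = sum(detects_shift(30, 1.0, rng) for _ in range(sims)) / sims
noisy = sum(detects_shift(30, 5.0, rng) for _ in range(sims)) / sims
print(f"SD = 1: shift detected in {quiet:.0%} of runs")
print(f"SD = 5: same shift detected in {noisy:.0%} of runs")
```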

At least, I hope that's why they say it. But, in my personal experience, it's more likely that they didn't see a result when expecting one and don't want to admit they're wrong. And by personal experience, I mean I've done this in a previous job.
I like pigs. Dogs look up to us. Cats look down on us. Pigs treat us as equals.
-Winston Churchill

I think a part of my sanity has been lost throughout this whole experience. And some of my foreskin - My cheating work colleague at it again