Most Science Studies Appear to Be Tainted

SLAM: debunk creationism, pseudoscience, and superstitions. Discuss logic and morality.

Moderator: Alyrium Denryle

User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Most Science Studies Appear to Be Tainted

Post by Ace Pace »

WSJ.
We all make mistakes and, if you believe medical scholar John Ioannidis, scientists make more than their fair share. By his calculations, most published research findings are wrong.

Dr. Ioannidis is an epidemiologist who studies research methods at the University of Ioannina School of Medicine in Greece and Tufts University in Medford, Mass. In a series of influential analytical reports, he has documented how, in thousands of peer-reviewed research papers published every year, there may be much less than meets the eye.
These flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis. "There is an increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims," Dr. Ioannidis said. "A new claim about a research finding is more likely to be false than true."

The hotter the field of research, the more skeptically its published findings should be viewed, he determined.

Take the discovery that the risk of disease may vary between men and women, depending on their genes. Studies have prominently reported such sex differences for hypertension, schizophrenia and multiple sclerosis, as well as lung cancer and heart attacks. In research published last month in the Journal of the American Medical Association, Dr. Ioannidis and his colleagues analyzed 432 published research claims concerning gender and genes.

Upon closer scrutiny, almost none of them held up. Only one was replicated.

Statistically speaking, science suffers from an excess of significance. Overeager researchers often tinker too much with the statistical variables of their analysis to coax any meaningful insight from their data sets. "People are messing around with the data to find anything that seems significant, to show they have found something that is new and unusual," Dr. Ioannidis said.

In the U.S., research is a $55-billion-a-year enterprise that stakes its credibility on the reliability of evidence, and Dr. Ioannidis's work strikes a raw nerve. In fact, his 2005 essay "Why Most Published Research Findings Are False" remains the most downloaded technical paper the journal PLoS Medicine has ever published.

"He has done systematic looks at the published literature and empirically shown us what we know deep inside our hearts," said Muin Khoury, director of the National Office of Public Health Genomics at the U.S. Centers for Disease Control and Prevention. "We need to pay more attention to the replication of published scientific results."

Every new fact discovered through experiment represents a foothold in the unknown. In a wilderness of knowledge, it can be difficult to distinguish error from fraud, sloppiness from deception, eagerness from greed or, increasingly, scientific conviction from partisan passion. As scientific findings become fodder for political policy wars over matters from stem-cell research to global warming, even trivial errors and corrections can have larger consequences.

Still, other researchers warn not to fear all mistakes. Error is as much a part of science as discovery. It is the inevitable byproduct of a search for truth that must proceed by trial and error. "Where you have new areas of knowledge developing, then the science is going to be disputed, subject to errors arising from inadequate data or the failure to recognize new matters," said Yale University science historian Daniel Kevles. Conflicting data and differences of interpretation are common.

To root out mistakes, scientists rely on each other to be vigilant. Even so, findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors.

Stung by frauds in physics, biology and medicine, research journals recently adopted more stringent safeguards to protect at least against deliberate fabrication of data. But it is hard to admit even honest error. Last month, the Chinese government proposed a new law to allow its scientists to admit failures without penalty. Next week, the first world conference on research integrity convenes in Lisbon.

Overall, technical reviewers are hard-pressed to detect every anomaly. On average, researchers submit about 12,000 papers annually just to the weekly peer-reviewed journal Science. Last year, four papers in Science were retracted. A dozen others were corrected.

No one actually knows how many incorrect research reports remain unchallenged.

Earlier this year, informatics expert Murat Cokol and his colleagues at Columbia University sorted through 9.4 million research papers at the U.S. National Library of Medicine published from 1950 through 2004 in 4,000 journals. By raw count, just 596 had been formally retracted, Dr. Cokol reported.

"The correction isn't the ultimate truth either," Prof. Kevles said.
Posted without comment.
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
User avatar
Spin Echo
Jedi Master
Posts: 1490
Joined: 2006-05-16 05:00am
Location: Land of the Midnight Sun

Post by Spin Echo »

My partner, who does statistical analysis of medical data, has mentioned that this is a big problem. He said that around 60% of the statistical analyses in papers published in Nature Medicine were performed incorrectly.

They were trying to combat this by hiring informatics people to repeat the analysis of submitted papers, but they're having a hard time finding takers. Apparently, redoing other people's data analysis isn't anyone's idea of a fun and rewarding job.

As for mistakes in publications, this is something I worry about a fair bit. Dumb mistakes can be really easy to make: forgetting, say, to convert an axis from milliseconds to seconds. In the latest data set I've been analysing, I've made a couple of boneheaded mistakes. I think everything is sorted out properly now, but ...
Doom dOom doOM DOom doomity DooM doom Dooooom Doom DOOM!
User avatar
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

That's why I've never been particularly impressed when people try to equate "medical research peer review" with peer review in the natural sciences. It's all based on poorly controlled studies with way too many variables that they either don't control or don't even examine, because of the randomized nature of the test subjects. It's all too easy to come up with the result that you want, and when you factor in the fact that so much medical research is bought and paid for by pharmaceutical companies with a vested interest in getting a certain result ...
Image
"It's not evil for God to do it. Or for someone to do it at God's command."- Jonathan Boyd on baby-killing

"you guys are fascinated with the use of those "rules of logic" to the extent that you don't really want to discussus anything."- GC

"I do not believe Russian Roulette is a stupid act" - Embracer of Darkness

"Viagra commercials appear to save lives" - tharkûn on US health care.

http://www.stardestroyer.net/Mike/RantMode/Blurbs.html
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Serious dose of Bayes required, stat. The tarpit of orthodox stats is bad enough when you're honestly trying to use it to discover something; inject just a bit of wishful thinking and it becomes a toolkit for making up impressive-sounding bullshit. You can abuse Bayesian methods by making non-obvious assumptions and hoping the readers don't notice, but it's nowhere near as bad.
User avatar
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

In a normal scientific experiment, you try to control all of the possible variables. In medical research, you use highly variable test subjects and you try to compensate for the resulting irregularity with statistical chicanery and assumptions. That leaves a hell of a lot of room for creative outcomes.
User avatar
Starglider
Miles Dyson
Posts: 8709
Joined: 2007-04-05 09:44pm
Location: Isle of Dogs
Contact:

Post by Starglider »

Darth Wong wrote:In medical research, you use highly variable test subjects and you try to compensate for the resulting irregularity with statistical chicanery and assumptions. That leaves a hell of a lot of room for creative outcomes.
A proper probabilistic analysis is relatively straightforward. If the domain is horribly noisy, then small trials will only nudge the probability of a hypothesis being correct by a percent or two. Achieving high probability of a weak effect existing as described will take a very large sample. Unlike the assorted essentially arbitrary 'confidence' measures in orthodox stats, there is no way to bullshit this without deliberately getting the maths wrong (i.e. outright lying). It takes some effort to get decent priors, but trying to do an analysis without good information on base rates is insanity anyway.
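The odds-form update being described can be sketched in a few lines. The prior and likelihood ratios below are purely illustrative numbers, not taken from any real trial:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio; convert back to probability."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Base rate: assume only 10% of candidate effects in a noisy field are real.
prior = 0.10

# A small, underpowered trial in a noisy domain carries a likelihood
# ratio close to 1, so it nudges the posterior by only a point or so.
print(f"after a weak trial:   {bayes_update(prior, 1.15):.3f}")  # ~0.113
# Only a large, clean trial (high likelihood ratio) moves it substantially.
print(f"after a strong trial: {bayes_update(prior, 10.0):.3f}")  # ~0.526
```

The point is that there is no threshold to game here: a trial that cannot discriminate between the hypotheses simply has a likelihood ratio near 1 and buys you almost nothing, regardless of how it is written up.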
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

As someone who makes sure fuck-ups don't happen in medical R&D, I can only say that anyone using dodgy statistical data will get a handy visit from the GLPMA or a similar national body someday and suddenly find their facility doing no business at all. Making sure GxP guidelines are followed is probably the single most important thing for any science, but especially science where you're working with compounds, biological systems and the like that may later be used in humans. You quite simply can't afford to fuck up, because people die or, as in the Northwick Park Hospital incident last year, have their lives ruined. At least with animal research we use genetically engineered or specially bred organisms that are, more or less, of the same template, and in enough numbers to allow statistical analysis that isn't going to be horribly skewed by one or two iffy results when applying a Fisher methodology etc.

Unfortunately, biology is not always a precise science, so it's not a case of A + B = C; the systems are so inherently complex that it's not unlike trying to model turbulence in aerodynamics. The reason the study mentioned above fucked up so much was likely down to CD28-SuperMAB exposing differences in human biochemistry that weren't evident in animal studies. The "cytokine storm" that was witnessed was not apparent in any of the pre-human trials, so when the immune system essentially launched a totally unguided attack with extreme prejudice on the volunteers' bodies, bad shit went down.

On the other hand, it wouldn't be the first time science has had to deal with big quakes in the industry affecting reliability. There have always been opportunist, lazy or just ignorant people in all walks of life, and they tend to grab headlines better with a scandal than with a success (you never hear much about all the other monoclonal antibody studies done, just the one where six volunteers nearly died in under an hour).

This is why peer review is just so damn important. Any adept, impartial governing organisation will nip such practices in the bud PDQ, or at least, they should do.
User avatar
Fingolfin_Noldor
Emperor's Hand
Posts: 11834
Joined: 2006-05-15 10:36am
Location: At the Helm of the HAB Star Dreadnaught Star Fist

Post by Fingolfin_Noldor »

If there's one thing I've never understood, it's how psychologists can claim a trend on a graph when all I see is a splash of data points with no trend whatsoever.

If this is how it is done in Medical Sciences as well, then I am inclined to worry.
Image
STGOD: Byzantine Empire
Your spirit, diseased as it is, refuses to allow you to give up, no matter what threats you face... and whatever wreckage you leave behind you.
Kreia
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

Fingolfin_Noldor wrote:If there's one thing I've never understood, it's how psychologists can claim a trend on a graph when all I see is a splash of data points with no trend whatsoever.

If this is how it is done in Medical Sciences as well, then I am inclined to worry.
It's not. Even in acute studies with a dozen rodents, you can clearly see trends that no psychology study would ever show you. Range-finding and limit/definitive tests are used to go from a broad "there may be something here" to "this is the precise dose range that has an effect or makes you dead" within a study.

If you can't show a statistically significant result, then the study is either abandoned or a follow-up study is tasked to a new team. NO drug that gets anywhere near a human will reach even Phase I trials without having every possible side-effect looked at in animals first. There's a reason that, out of 5 million novel compounds, $100M and 10 years later, you may get only one viable product. If you fuck up the testing on how it will affect humans or the environment, you've just pissed away a LOT of money for nothing. And those patents only last so long, to boot.
User avatar
Darth Wong
Sith Lord
Posts: 70028
Joined: 2002-07-03 12:25am
Location: Toronto, Canada
Contact:

Post by Darth Wong »

Admiral Valdemar wrote:This is why peer review is just so damn important. Any adept, impartial governing organisation will nip such practices in the bud PDQ, or at least, they should do.
A sufficiently clever operation can get past peer review. That's why the more rigorous sciences require independent third-party verification of results: something that the medical establishment does not feel is necessary.
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

Darth Wong wrote: A sufficiently clever operation can get past peer review. That's why the more rigorous sciences require independent third-party verification of results: something that the medical establishment does not feel is necessary.
For medical procedures, it is a lot blurrier, given the variables and the smaller sample sizes you can get (for humans, at least).

However, most surgical methods are tested on animals as well, to a certain degree, although this can be far trickier to fully appreciate given that your patients can't really give you much feedback and their physiologies can be quite different to boot.

A bigger problem from a medical standpoint, at least with drugs, is that you can only fully grasp the true effects of a novel compound when it's literally on the shelves, in Phase IV trials. If a drug for treating heart conditions causes severe intestinal bleeding in 1 out of every million people, you can't really test for that accurately until it manifests in an actual person. Given the prevalence of some products, that could be a lot of people with very serious problems arising that Joe Sixpack now thinks were down to shoddy research.
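The sample-size arithmetic behind that 1-in-a-million case is easy to sketch; the incidence rate here is just the hypothetical figure from the post:

```python
import math

RATE = 1e-6  # hypothetical incidence: 1 severe event per million patients

def p_observe(n, rate=RATE):
    """Chance that a trial with n patients sees the event at least once."""
    return 1 - (1 - rate) ** n

# Even a large Phase III trial is almost certain to miss it entirely.
for n in (3_000, 30_000, 3_000_000):
    print(f"n = {n:>9,}: P(at least one case) = {p_observe(n):.3f}")

# Patients needed for a 95% chance of catching even a single case:
n_needed = math.log(0.05) / math.log(1 - RATE)
print(f"~{n_needed:,.0f} patients")  # roughly 3 million
```

A typical pre-approval trial of a few thousand patients has well under a 1% chance of ever seeing such an event, which is exactly why it only shows up once the drug is on the market.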

This is why it'd be great the day we can accurately model a whole organism, like a human, on computer. It'll take quantum computing to get anything like that detail, but once done, you could replace any live studies taking place overnight and make it far more efficient a process to boot (not to mention cheaper).
kinnison
Padawan Learner
Posts: 298
Joined: 2006-12-04 05:38am

Post by kinnison »

Hear, hear, Darth. Where was it that I read that there is no requirement to publish studies with negative results?

Why is this important? Simple. Assume you have a prospective patentable drug for a common condition, hence capable of making a shitload of money if it gets approved. You're fairly sure it has negligible effect, but you want to make said money. What do you do? Run the study as many times as it takes to get the result you want. After all, a 5% significance threshold means a drug with no effect at all will show a statistically significant effect in about 5% of studies. Run the study 10 times and you have roughly a 40% chance of at least one "positive" result; around 14 tries gets you to even odds.
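That multiple-testing math is easy to check with a quick simulation; the per-study false-positive rate of 5% is the only input:

```python
import random

random.seed(42)
ALPHA = 0.05      # chance a no-effect drug still hits "significance" in one study
N_STUDIES = 10    # how many times you're willing to re-run the trial

# Analytic: probability that at least one of the repeats comes up positive.
p_hit = 1 - (1 - ALPHA) ** N_STUDIES
print(f"analytic:  {p_hit:.3f}")   # ~0.401

# Monte Carlo check: each study independently "succeeds" with probability ALPHA.
reps = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_STUDIES))
    for _ in range(reps)
)
print(f"simulated: {hits / reps:.3f}")
```

Ten repeats gives about a 40% chance of a publishable false positive; since 1 - 0.95^14 ≈ 0.51, roughly fourteen repeats reaches even odds.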

And if you can't patent it, don't, under any circumstances whatsoever, do any studies at all. And of course spend whatever money is necessary to get the regulatory structure you want, too.
User avatar
drachefly
Jedi Master
Posts: 1323
Joined: 2004-10-13 12:24pm

Post by drachefly »

I used to work at a well-known and respected cancer research institute, doing bioinformatics.

I ran into a postdoc who really didn't understand why you can't get a reliable standard deviation without a lot of replication.

Eventually I beat it into him, but he grumbled about the cost. Well, man, at least you have a real experiment once you add the replicates...
User avatar
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert
Contact:

Post by Alyrium Denryle »

Darth Wong wrote:That's why I've never been particularly impressed when people try to equate "medical research peer review" with peer review in the natural sciences. It's all based on poorly controlled studies with way too many variables that they either don't control or don't even examine, because of the randomized nature of the test subjects. It's all too easy to come up with the result that you want, and when you factor in the fact that so much medical research is bought and paid for by pharmaceutical companies with a vested interest in getting a certain result ...
This is a problem I find with the privatization of research in general. For one thing, the private firm has control over whether its researchers can publish their findings, and I imagine it can censor those findings or creatively persuade its researchers to look at the data a certain way. This will bias the results...
GALE Force Biological Agent/
BOTM/Great Dolphin Conspiracy/
Entomology and Evolutionary Biology Subdirector:SD.net Dept. of Biological Sciences


There is Grandeur in the View of Life; it fills me with a Deep Wonder, and Intense Cynicism.

Factio republicanum delenda est
User avatar
Broomstick
Emperor's Hand
Posts: 28822
Joined: 2004-01-02 07:04pm
Location: Industrial armpit of the US Midwest

Post by Broomstick »

One factor I haven't seen mentioned yet is (at least in the US) the "publish or perish" rule in both academia and corporate science. A scientist's output is measured and evaluated by his or her employers, and without exception the benchmark is how much and how often they publish. Quantity is valued over quality.
A life is like a garden. Perfect moments can be had, but not preserved, except in memory. Leonard Nimoy.

Now I did a job. I got nothing but trouble since I did it, not to mention more than a few unkind words as regard to my character so let me make this abundantly clear. I do the job. And then I get paid.- Malcolm Reynolds, Captain of Serenity, which sums up my feelings regarding the lawsuit discussed here.

If a free society cannot help the many who are poor, it cannot save the few who are rich. - John F. Kennedy

Sam Vimes Theory of Economic Injustice
Mobiboros
Jedi Knight
Posts: 506
Joined: 2004-12-20 10:44pm
Location: Long Island, New York
Contact:

Post by Mobiboros »

Fingolfin_Noldor wrote:If there's one thing I never understood, is how Psychologists can claim a trend on a graph when all I see is a splash of data points with no trend whatsoever.
As a psych major I'll jump in to explain.

It's because the vast majority of statistics in psychology are complete BS. The only way you learn to "interpret" statistical psych data is by taking courses like "Statistics in Psychology", wherein you get an insufficient set of data to fully plot a graph and are then handed a sheet of numbers from which to select values that both fill in the missing data and fit a normal curve.

Which is why I don't bother reading psych statistics so much as read the experiment and draw my own conclusions.
User avatar
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert
Contact:

Post by Alyrium Denryle »

Broomstick wrote:One factor I haven't seen mentioned yet is (at least in the US) the "publish or perish" rule in both academia and corporate science. A scientist's output must be measured and evaluated by his/her employers, and without exception that benchmark is how much and how often published. Quantity is valued over quality.
I have to disagree here. They are also judged by how often their work is cited by other researchers, which is a measure of quality. A researcher not only has to publish, he has to publish in good journals, and the articles have to have enough impact that they get cited.
User avatar
Setesh
Jedi Master
Posts: 1113
Joined: 2002-07-16 03:27pm
Location: Maine, land of the Laidback
Contact:

Post by Setesh »

Alyrium Denryle wrote:This is a problem I find in the privatization of research in general. For one thing the private firm has control over whether their researchers can publish their findings and I imagine censor those findings or creatively persuade their researchers to look at the data a certain way. This will bias the results...
In theory they won't bias their results, because rival companies' R&D teams researching the same thing will call 'bullshit' if they fabricate or alter results. In practice it rarely works out that way.
"Nobody ever inferred from the multiple infirmities of Windows that Bill Gates was infinitely benevolent, omniscient, and able to fix everything. " Argument against god's perfection.

My Snow's art portfolio.
User avatar
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert
Contact:

Post by Alyrium Denryle »

Setesh wrote:
Alyrium Denryle wrote:This is a problem I find in the privatization of research in general. For one thing the private firm has control over whether their researchers can publish their findings and I imagine censor those findings or creatively persuade their researchers to look at the data a certain way. This will bias the results...
In theory they won't bias their results, because rival companies' R&D teams researching the same thing will call 'bullshit' if they fabricate or alter results. In practice it rarely works out that way.
They don't necessarily publish everything they find out, either. Trade secrets, etc.
User avatar
Eris
Jedi Knight
Posts: 541
Joined: 2005-11-15 01:59am

Post by Eris »

Alyrium Denryle wrote:I have to disagree here. They are also judged by how often the work is cited by other researchers, which is a measure of quality. A researcher not only has to publish, he has to publish in good journals and the articles have to have enough impact that they get cited
I hate to rely on an anecdote, but this does vary from place to place. Good departments do look at that, but then you also have places like the one which discounted my father's work because he is very rarely primary author. He has scads of publications, but he's a statistician, so he never does the primary research. This was the same place that denied a woman tenure because they wouldn't count papers that she co-wrote with her husband. (Her husband, however, did get credit for them.)

While ideally we have good standards for judging the quality of work in academia, a non-trivial amount of the time it turns into paper-counting under very odd standards. The dept. I refer to in my examples is so bad about it that it's a common joke that it's a perfect reverse test for how successful someone will be: get tenure and you're doomed to a minor, out-of-the-way position; get denied tenure and you'll end up a Dean, department head, or Vice Provost somewhere else.

I realise, of course, that one example a trend does not make, but I doubt I'd get much disagreement from any academics around here on how ossified, status-reliant, and generally ass-backwards the ivory tower can actually be. It varies from place to place, and the harder the science, the better it does tend to be - sort of - but the trend is definitely there.

In response to the original article, I hate to echo sentiments already stated, but why is this supposed to be surprising? I'm a little miffed that the title and article itself seem to equate medical research with, say, physics, chemistry, or engineering research, but with that proviso, I would be surprised if most research weren't tainted. Beyond the inherent difficulty of doing biological research, most doctors are just bad at doing research.

Remember the joke about the answers you get when asking what 2*2 is. The engineer whips out his slide rule and tells you it's 3.99. The scientist does some experiments and says it's between 3.98 and 4.02. The mathematician doesn't give you an answer but assures you one exists. The med student says four. How does he know? He memorised it.

This is funny 'cause it's true in many cases. Doctors tend to be heavy on knowing lots of particular facts, which is a very good thing considering their profession, but they're often very bad scientific thinkers. Science is bad enough at accepting new ideas at times, but at the very least we pay lip service to the idea, and in many cases we actually do live up to our ideals. I've worked as a tech in medical research labs where you'll actually go looking for ways to make your data support your preconceptions.

Of course, not all medical research is like this, and not all "hard science" research lives up to the ideal, but as a general trend this article fits very much into the "uh-huh, and did you know that water's wet?" category.
"Hey, gang, we're all part of the spleen!"
-PZ Myers
User avatar
Alyrium Denryle
Minister of Sin
Posts: 22224
Joined: 2002-07-11 08:34pm
Location: The Deep Desert
Contact:

Post by Alyrium Denryle »

I hate to rely on an anecdote, but this does vary from place to place. Good departments do look at that, but then you also have places like the one which discounted my father's work because he is very rarely primary author. He has scads of publications, but he's a statistician, so he never does the primary research. This was the same place that denied a woman tenure because they wouldn't count papers that she co-wrote with her husband. (Her husband, however, did get credit for them.)
Ok, that is just ridiculous. What university and department are these, so I can avoid them like the plague?
Remember the joke about the answers you get when asking what 2*2 is. The engineer whips out his slide rule and tells you it's 3.99. The scientist does some experiments and says it's between 3.98 and 4.02. The mathematician doesn't give you an answer but assures you one exists. The med student says four. How does he know? He memorised it.


And that, kids, is why I ruthlessly mock premed students. As a bio major who wants to go into research, I have to actually understand and retain everything I learn in the sciences as an undergrad. The pre-meds get to, and do, hit the purge button after every semester.
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

Honestly, medical research relies primarily on scientists, like yours truly, not med students. Anyone who seriously thinks Big Pharma is letting physicians dictate drug standards may as well say Boeing is letting marine engineers build their planes.

That people aren't dropping dead daily from OTC drugs is pretty good evidence that the system works. If it didn't, the Vioxx scandal would be the least of your worries, believe me.
User avatar
Eris
Jedi Knight
Posts: 541
Joined: 2005-11-15 01:59am

Post by Eris »

Alyrium Denryle wrote:Ok, that is just ridiculous. What university and department are these, so I can avoid them like the plague?
That one in particular is the University of Arizona department of sociology. Granted, it's not a hard science by any means, but the practices aren't at all uncommon in academia generally. I point to the College of Public Health for an example of a scarily similar situation in a place accredited to give out science degrees. (I can't remember if there's a BS in soc.) As I said, the practice decreases somewhat as your science gets closer to mathematics, but it's all over to some degree or another.

Nepotism, status quarrels, paper-counting, and so on is endemic to academia. You won't find a department anywhere that doesn't have it to some degree. I hate to sound cynical, but half of everything in the ivory tower is teaching and research, and the other half is politics.
Remember the joke about the answers you get when asking what 2*2 is. The engineer whips out his slide rule and tells you it's 3.99. The scientist does some experiments and says it's between 3.98 and 4.02. The mathematician doesn't give you an answer but assures you one exists. The med student says four. How does he know? He memorised it.


And that, kids, is why I ruthlessly mock premed students. As a bio major who wants to go into research, I have to actually understand and retain everything I learn in the sciences as an undergrad. The pre-meds get to, and do, hit the purge button after every semester.
I can sympathise. It took me until analytical chemistry to meet another student who wasn't in a chemistry course for pre-med or something similar. The practice of cramming has never made much sense to me, but after attending some study sessions with classmates, it now terrifies me like few other things do.

Tangentially, do you know how weird it is to be in an organic chemistry lab nominally for chem E, chem, and biochem majors and have all of your labmates be physiology, psychology, or similar majors? Pretty damn weird. The end result of all of which being that I think I may have picked up a bit of an elitist streak with regards to people like pre-meds. Don't tell anyone: I don't think they've noticed yet.
User avatar
Eris
Jedi Knight
Posts: 541
Joined: 2005-11-15 01:59am

Post by Eris »

Admiral Valdemar wrote:Honestly, medical research relies primarily on scientists, like yours truly, not med students. Anyone who seriously thinks Big Pharma is letting physicians dictate drug standards may as well say Boeing is letting marine engineers build their planes.
True, I tend to be somewhat on the harsh side of things. Although the reason I'm targeting med students is that they're the ones who turn into doctors, who do conduct a fair amount of the medical research out there. The bad practices of medical school do carry over to some degree into the wider world of medical technology.

When you actually let the scientists take over, quality tends to improve. Although you'll forgive me if the idea of Big Pharma taking over the research and standards from medical doctors disquiets me as much as it comforts me.
That people aren't dropping dead daily from OTC drugs is pretty good evidence that the system works. If it didn't, the Vioxx scandal would be the least of your worries, believe me.
We do manage to avoid or mitigate most of the worst disasters, and I'd be the last to argue that our medical institutions are a bad thing, just very far from perfect. I also seem to have gotten somewhat off topic, since my criticisms have wandered into blasting largely at MD (and related) guided research, and academia in general.
User avatar
Tsyroc
Emperor's Hand
Posts: 13748
Joined: 2002-07-29 08:35am
Location: Tucson, Arizona

Post by Tsyroc »

Eris wrote: That one in particular is the University of Arizona department of sociology. Granted, it's not a hard science by any means, but the practices aren't at all uncommon in academia generally.
Now that is funny. :D

I never took a sociology class there, but between them and the psych students I got hit up enough for questionnaires/studies. Talk about a bunch of crap. It's not all that surprising that they wouldn't give credit to someone who really knows something about statistics.

Some of the other colleges and departments at the UofA aren't much better about giving tenure.
By the pricking of my thumb,
Something wicked this way comes.
Open, locks,
Whoever knocks.