Biomedical Statistics and SHS help.

Can someone please help me with a statistical question? I am doing a personal survey of the "political" views on second-hand smoke, and by far the majority of medical organizations claim it is harmful for a variety of reasons.
However, I am not well versed in any type of statistics. I am trying to learn, but I am a n00b at it.
For relative risk I will use RR.
Are these true? If not, why not? If you have two groups, a control and an experimental group:
1. A RR of 1 means there's no difference in risk between groups
2. A RR > 1 means there's an increased risk in the experimental
3. A RR < 1 means there's a decreased risk in the experimental group
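For concreteness, here is a minimal sketch (in Python, with made-up counts; the function name and numbers are purely illustrative) of how those three cases fall out of the definition RR = risk in the exposed group divided by risk in the control group:

```python
def relative_risk(exposed_cases, exposed_total, control_cases, control_total):
    """RR = risk in the exposed (experimental) group / risk in the control group."""
    return (exposed_cases / exposed_total) / (control_cases / control_total)

# Hypothetical numbers: cases per 1,000 people in each group.
print(relative_risk(30, 1000, 15, 1000))  # 2.0  -> exposed group has twice the risk
print(relative_risk(15, 1000, 15, 1000))  # 1.0  -> no difference in risk
print(relative_risk(5, 1000, 15, 1000))   # 0.33 -> lower risk in the exposed group
```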
That is what I am being told. In the context of SHS, I am being told that children exposed to SHS have an RR of less than 1. Now, I am skeptical, because it seems like bullshit (especially since I can't find a single source that agrees with the following), but they are trying to tell me this means that SHS is good for children because it "is beneficial" by decreasing their relative risk of developing lung cancer.
That floored me. I can't see how that's possible, and I can't find any doctors or medical professionals who agree with that assessment. Alas, even if it's not true, I don't know enough about stats to say why. It just seems wrong.
Secondly, what I have noticed is a tendency of certain political groups and websites to use quotes that I think are deliberately truncated to change their meaning. For example, "smokers' rights" websites love to tout authorities as saying: "As a general rule of thumb, we are looking for a relative risk of 3 or more before accepting a paper for publication." - Marcia Angell, editor of the New England Journal of Medicine
This isn't her actual quote, though. The real quote, in full, is this: "As a general rule of thumb we are looking for a relative risk of three or more [before accepting a paper for publication], particularly if it is biologically implausible or if it's a brand-new finding" - but not if many studies show it or if there is biomedical information to support it.
Is this significant? It seems to me to be so. Why would they truncate the quote unless doing so changes its meaning? They are trying to say that anything with an RR of less than 2 or 3 is worthless; SHS has an RR of less than 2, therefore any causal conclusion drawn from the figures given by the EPA, the CDC, etc. is worthless.
I find this odd because doctors and all the major medical organizations, including the AMA and the BMA, seem to disagree with that verdict of "insignificance" and "worthlessness."
Am I misunderstanding, or is someone flinging a lot of bullshit around to discredit the science by manipulating statistics? I am very confused. If the risk isn't significant, why are they making it sound as though it is? Then again, I can't see why the journals would publish these studies if the results really were insignificant and only papers with an RR of 2-3 or more were accepted; it would contradict what they are actually doing.
mr friendly guy wrote:
Forgive my ignorance, but what is SHS?
Secondly, RR just means that if you have factor A you are x times more likely to develop condition B.
It's been a while, but AFAIK RR is not the way to determine statistical significance.
We look at things like p-values and confidence intervals to determine statistical significance.
Having a low relative risk, say 2, just means you are ONLY 2 times more likely to develop the condition. Statistical significance means there is a high probability (we usually use p-values, with the conventional cut-off giving us 95% confidence) that the link between factor A and condition B is real rather than due to chance. It DOES NOT tell you how much more likely you are to develop condition B given factor A (that's what RR is for).
In short, they are confusing a) increased risk (RR) with
b) whether that increased risk is significant (i.e. there is a link) or due to chance (i.e. we just happened to pick a bunch of people in our study who have an increased risk).
Hope that helps.
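To make that distinction concrete, here is a hedged sketch (Python, with hypothetical counts) showing that the size of the RR and whether it is statistically significant are separate questions. It uses the standard 95% confidence interval on the log-RR scale; an RR is conventionally called significant when that interval excludes 1.

```python
import math

def rr_with_ci(exposed_cases, exposed_total, control_cases, control_total, z=1.96):
    """Relative risk plus a 95% confidence interval from a 2x2 table (log-RR method)."""
    rr = (exposed_cases / exposed_total) / (control_cases / control_total)
    # Standard error of log(RR) for a 2x2 table.
    se_log_rr = math.sqrt(
        1 / exposed_cases - 1 / exposed_total
        + 1 / control_cases - 1 / control_total
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Modest RR but a large study: the interval excludes 1, so the increase is significant.
print(rr_with_ci(260, 10000, 200, 10000))  # RR = 1.30, CI roughly (1.08, 1.56)

# Large RR but a tiny study: the interval includes 1, so it could easily be chance.
print(rr_with_ci(3, 20, 1, 20))            # RR = 3.00, CI roughly (0.34, 26.5)
```

So a small RR can still be firmly established, and a large RR can be statistically meaningless; that is exactly the confusion described above.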
Reread that and the phrasing is a bit off.
mr friendly guy wrote:
In short, they are confusing a) increased risk (RR) with
b) whether that increased risk is significant (i.e. there is a link) or due to chance (i.e. we just happened to pick a bunch of people in our study who have an increased risk).
It should read:
In short, they are confusing a) increased risk (RR) with
b) whether that increased risk is significant (i.e. there is a link) or due to chance (i.e. we just happened to pick a bunch of people in our study who ended up developing condition B).
I just realised another way I can explain this.
Use the analogy of a roulette wheel. The probability of winning by betting on red is 18/37 (18/38 in American roulette), and the probability of winning by betting on black is the same.
Let's say we have a sample of four spins, and the results are 3 blacks and 1 red. In this sample, black wins 3 times as often as red does (analogous to a relative risk of 3).
We know this is not because black really has 3 times the chance of winning, but simply due to chance. We know this because of our a priori knowledge of gambling and probabilities.
However, if you present the claim that in this sample "black won 3 times as much as red", someone without knowledge of statistics or gambling may be tempted to say that black has 3 times the chance of winning that red does.
In this case, knowing the relative risk doesn't help us determine whether the result is due to chance or because black really does have 3 times the chance of winning.
If you had some knowledge of statistics, however, you would want to know the actual sample size. With a small sample size of 4, calculating a p-value will reveal no statistical significance, hence you can conclude the result was due to chance.
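A quick sketch of that calculation (Python; for simplicity it ignores the green zero and assumes black and red are equally likely on each spin):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k, p_null = 4, 3, 0.5
# One-sided p-value: chance of seeing 3 or more blacks in 4 spins if black and red are equal.
p_value = sum(binom_pmf(i, n, p_null) for i in range(k, n + 1))
print(f"P(>= {k} blacks in {n} spins under equal odds) = {p_value:.3f}")  # about 0.31
```

A p-value of about 0.31 is nowhere near the usual 0.05 cut-off, so the 3-to-1 ratio in this tiny sample is no evidence that black is actually favoured.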
Note that in medical statistics, some studies may lack the "power" to demonstrate statistical significance, especially if they have only a small sample size. I would imagine this is especially difficult if the actual RR (which at this point we haven't worked out yet) is very low. Hence the use of meta-analyses, which combine the results of several studies and analyse the pooled data.
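And a very rough sketch of that pooling idea (Python, with made-up study results; inverse-variance weighting on the log-RR scale is the usual fixed-effect approach). Each hypothetical study is too small to be significant on its own, but the pooled estimate is:

```python
import math

# (rr, standard error of log(rr)) for three hypothetical small studies.
studies = [(1.25, 0.15), (1.30, 0.20), (1.20, 0.12)]

def ci(log_rr, se, z=1.96):
    """95% confidence interval for an RR given its log and standard error."""
    return math.exp(log_rr - z * se), math.exp(log_rr + z * se)

for rr, se in studies:
    lo, hi = ci(math.log(rr), se)
    print(f"single study: RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # each CI crosses 1

# Fixed-effect pooling: weight each study by the inverse variance of its log-RR.
weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = ci(pooled_log_rr, pooled_se)
print(f"pooled: RR = {math.exp(pooled_log_rr):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # excludes 1
```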