Statistical significance

 
 
Reply Tue 1 Apr, 2008 12:33 pm
I got into a debate with someone regarding whether gay people are more religiously inclined or not. They cited an article that supposedly proved a link between homosexuality and the size of a part of the brain called the anterior commissure.

Apart from the fact that it's unclear as to whether the anterior commissure has anything to do with religiosity, I pointed out to him that the study only used 90 postmortem brains from heterosexual men, homosexual men and heterosexual women. This was, as far as I know, statistically insignificant.

If I remember my statistics course correctly, you'd only get remotely satisfactory statistical significance if you use a minimum of 1000 test subjects. Thing is, if that's true, how on earth did that study (by Allen and Gorski) get published in PNAS?

Am I wrong about statistical significance and is it the be-all and end-all of whether a study actually proves a link between two things?
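To make the question concrete, here is a quick sketch (purely illustrative, standard-library Python, not taken from any of the studies mentioned) of why significance depends on effect size and variability rather than on any fixed subject count like 1000:

```python
# Illustrative sketch: statistical significance depends on effect size
# and variance, not on a fixed sample-size threshold such as 1000.
import random
import statistics
import math

random.seed(0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

# Two groups of 45 (roughly the scale of the study under discussion),
# simulated with a genuinely large difference in means relative to spread.
group1 = [random.gauss(10.0, 2.0) for _ in range(45)]
group2 = [random.gauss(12.0, 2.0) for _ in range(45)]

t = welch_t(group1, group2)
# |t| well beyond ~2 is significant at the conventional 5% level,
# even with fewer than 100 subjects in total.
print(abs(t) > 2)
```

The point of the sketch: with a large real difference between groups, even n = 45 per group yields a clearly significant test statistic; with a tiny difference, even thousands of subjects might not.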
Type: Discussion • Score: 1 • Views: 1,393 • Replies: 19

 
georgeob1
 
  1  
Reply Tue 1 Apr, 2008 12:56 pm
I believe it is certainly one of the most important questions to ask in interpreting the findings. Perhaps the only equally important questions would relate to the manner in which the sample was obtained and the unknown character of the larger population they are assumed to represent.

As I recall, the confidence interval problem establishes the % confidence one can have that a measured feature of the sample distribution (mean, standard deviation, etc.) falls within an arbitrary interval of the same parameter of the population distribution, based on (1) an assumption about the character of the population distribution, (2) the assumed randomness of the selection of the sample elements, and (3) the size of the sample. Usually the interval associated with 85% or 90% confidence is used as the measure of expected error.

There really isn't any good way to deal with the possibility that the unknown population distribution is itself of unusual character (double-peaked, or something like that).

Finally, lots of errors arise from flaws (non-randomness) in the manner in which the sample elements are selected. In this case the sample appears to have been defined just by availability, and not by any random process at all. That too could well be a source of error.
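A back-of-the-envelope sketch of the sample-size point above (illustrative Python, assuming a known population standard deviation): the width of a confidence interval for the mean shrinks like 1/sqrt(n), so "how big must n be?" depends on how tight an interval you need, not on a universal cutoff.

```python
# Half-width of a confidence interval for a mean with known sigma:
# z * sigma / sqrt(n). The interval narrows only as 1/sqrt(n).
import math

sigma = 1.0   # assumed population standard deviation
z = 1.96      # z-value for a 95% interval (90% would use 1.645)

for n in (30, 90, 1000):
    half_width = z * sigma / math.sqrt(n)
    print(f"n={n:5d}  95% CI half-width = {half_width:.3f}")
```

Going from 90 to 1000 subjects only narrows the interval by a factor of about 3.3, which is why "1000 subjects" is not a magic threshold.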
0 Replies
 
Wolf ODonnell
 
  1  
Reply Tue 1 Apr, 2008 01:31 pm
georgeob1 wrote:
I believe it is certainly one of the most important questions to ask in interpreting the findings. Perhaps the only equally important questions would relate to the manner in which the sample was obtained and the unknown character of the larger population they are assumed to represent.


Well, I could only access the first page and I can't paste the link here because it's got brackets in it that disrupt the HTML function. However, I will tell you that it is:

Sexual Orientation and the Size of the Anterior Commissure in the Human Brain
Laura S. Allen, Roger A. Gorski
Proceedings of the National Academy of Sciences of the United States of America, Vol. 89, No. 15 (Aug. 1, 1992), pp. 7199-7202

And on the first page, I read that the brains were obtained from three Southern California hospitals between 1983 and 1991, and had been removed within 24 hours postmortem.

They collected 256 samples of brain tissue from healthy brains. Samples were eliminated when medical records showed any signs that the subjects had diseases that could have affected the anterior commissure. Subjects were classified as heterosexual when medical records did not indicate homosexual orientation (which seemed a rather suspect phrase to me).

Apparently this generated 34 homosexual men, 84 heterosexual women and 75 heterosexual men... which raises the question of why they stated in the abstract that they examined 90 postmortem brains from homosexual men, heterosexual men and heterosexual women.

That's all I could get from what little of the Methods section I could read.

I can see the study is flawed just from looking at where they collected the brains from. What puzzles me is how it got published.
0 Replies
 
High Seas
 
  1  
Reply Tue 1 Apr, 2008 01:37 pm
Curious to find out if this link works on A2K >

http://www.jstor.org/view/00278424/di993964/99p03155/3?frame=noframe&[email protected]/01c0a8346c00501c289ed&dpi=3&config=jstor

> but the conclusions of the study aren't highly correlated with sample size.
0 Replies
 
TheCorrectResponse
 
  1  
Reply Tue 1 Apr, 2008 01:51 pm
Without getting into any of this too deeply: a normal Phase 1 clinical study contains about 20-100 patients. A normal Phase 2 study has maybe 200; 400-500 would be a very large study group. So statistically the numbers in the study you are describing are not unusual. Just using sample size alone tells you very little or nothing about the validity of the study.
0 Replies
 
Wolf ODonnell
 
  1  
Reply Wed 2 Apr, 2008 12:26 pm
Yes, but clinical trials go from Phase 1 to Phase 3, with the final phase numbering in the thousands. If drugs fail before phase 3, then they don't get through.
0 Replies
 
fishin
 
  1  
Reply Wed 2 Apr, 2008 12:39 pm
Re: Statistical significance
Wolf_ODonnell wrote:
I got into a debate with someone regarding whether gay people are more religiously inclined or not. They cited an article that supposedly proved a link between homosexuality and the size of a part of the brain called the anterior commissure.

Apart from the fact that it's unclear as to whether the anterior commissure has anything to do with religiosity, I pointed out to him that the study only used 90 postmortem brains from heterosexual men, homosexual men and heterosexual women. This was, as far as I know, statistically insignificant.

If I remember my statistics course correctly, you'd only get remotely satisfactory statistical significance if you use a minimum of 1000 test subjects. Thing is, if that's true, how on earth did that study (by Allen and Gorski) get published in PNAS?

Am I wrong about statistical significance and is it the be-all and end-all of whether a study actually proves a link between two things?


Maybe this answers your question:
http://stats.org/in_depth/faq/statistical_significance.htm
0 Replies
 
fishin
 
  1  
Reply Wed 2 Apr, 2008 12:57 pm
btw, you can read a full copy of the study report here:
http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=49673&blobtype=pdf

The 90 brains examined shows up in table 1.
0 Replies
 
TheCorrectResponse
 
  1  
Reply Wed 2 Apr, 2008 01:04 pm
Quote:

Yes, but clinical trials go from Phase 1 to Phase 3, with the final phase numbering in the thousands. If drugs fail before phase 3, then they don't get through.


A Phase 1 study (drug safety) is NOT repeated in the next phases (efficacy). While a drug could show safety issues in later studies, that is not what those studies are designed to find.

Even if it were, would it make any sense to use what you would apparently term a statistically insignificant result to determine whether the drug should move through the pipeline to a larger group?

If they needed to demonstrate a safe dosage, they would use a study population large enough to be significant, not one that obviously would not be. Subjecting a group of people to a study with obvious potential risks, knowing it couldn't reliably demonstrate those risks because the sample size was known to be too small would not be ethical.
0 Replies
 
High Seas
 
  1  
Reply Wed 2 Apr, 2008 05:34 pm
TheCorrectResponse wrote:
.............

...............those risks because the sample size was known to be too small would not be ethical.


This would actually be hilarious if it weren't quite so macabre - you do realize these people were all totally and irrevocably dead at the time someone started cutting slices from their brains?

Risks - to whom?!
0 Replies
 
TheCorrectResponse
 
  1  
Reply Thu 3 Apr, 2008 06:32 am
So the people in Phase 1 drug studies are dead? Well I have learned something on A2K today Laughing
0 Replies
 
Setanta
 
  1  
Reply Thu 3 Apr, 2008 06:36 am
Reminds me of a line from Pirates of the Caribbean . . .

"Ya see, that's the attitude that lost ya the Pearl in the first place, Jack, people are easier to search when they're dead."
0 Replies
 
High Seas
 
  1  
Reply Thu 3 Apr, 2008 12:13 pm
TheCorrectResponse wrote:
So the people in Phase 1 drug studies are dead? Well I have learned something on A2K today Laughing


LOL - so subjects in the study we're talking about here >

Quote:
......the brains were obtained from three Southern California hospitals between 1983 and 1991, and had been removed within 24 hours postmortem.

They collected 256 samples of brain tissue from healthy brains.


> had reports of their deaths exaggerated - is that what you're suggesting? Smile
0 Replies
 
TheCorrectResponse
 
  1  
Reply Thu 3 Apr, 2008 12:19 pm
If you'll take the time to read a little closer you'll see I was using clinical studies to show Wolf that generally his ideas about the size of study groups were incorrect. Nowhere in my posts did I mention the article he was referring to, since I can't access it and so would have no idea whether it was a valid study.
0 Replies
 
High Seas
 
  1  
Reply Thu 3 Apr, 2008 12:27 pm
TheCorrectResponse wrote:
If you'll take the time to read a little closer you'll see I was using clinical studies to show Wolf that generally his ideas about the size of study groups were incorrect. Nowhere in my posts did I mention the article he was referring to, since I can't access it and so would have no idea whether it was a valid study.


Surely it occurred to you that studies made on inanimate matter are free of the multicollinearity / heterodasticity among many of the variables involved in studies of living creatures? Start with the absolute essence of science: the repeatable experiment.

Sample sizes can never be mathematically comparable in the two cases.
0 Replies
 
TheCorrectResponse
 
  1  
Reply Thu 3 Apr, 2008 12:46 pm
I wasn't comparing anything; I was speaking to his statement:
Quote:

If I remember my statistics course correctly, you'd only get remotely satisfactory statistical significance if you use a minimum of 1000 test subjects.


You seem to be saying that the study he was speaking about was much less complex as fewer interrelated variables were involved. You are certainly not then concluding that because of this the study he references would require a larger sample size????

I'll stand by my statement:
Quote:

Just using sample size alone tells you very little or nothing about the validity of the study.
0 Replies
 
High Seas
 
  1  
Reply Thu 3 Apr, 2008 01:01 pm
sorry typo (unrelated to your sample size question) in my previous post on the matter of heteroscedasticity (the variance of the dependent variable varies across the data).

To address your point, Correct Response: sample size matters depending on WHAT is being studied. So I also stand by my previous statement.
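A tiny simulation (illustrative Python, not tied to the study under discussion) of what that definition means in practice: the spread of the dependent variable y grows with x, so a single pooled variance estimate misdescribes both ends of the data.

```python
# Heteroscedasticity: the variance of the dependent variable varies
# across the data. Here the noise standard deviation grows with x.
import random
import statistics

random.seed(1)

xs = [i / 100 for i in range(1, 201)]          # x from 0.01 to 2.00
ys = [2.0 * x + random.gauss(0, 0.5 * x) for x in xs]

# Residuals from the true line y = 2x, split into low-x and high-x halves.
low = [y - 2.0 * x for x, y in zip(xs, ys) if x <= 1.0]
high = [y - 2.0 * x for x, y in zip(xs, ys) if x > 1.0]

# The residual spread is visibly larger in the high-x half.
print(statistics.stdev(low) < statistics.stdev(high))
```

In a homoscedastic data set the two spreads would be roughly equal; here the high-x residuals are about three times as dispersed, which is exactly what inflates standard errors if ignored.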
0 Replies
 
Wolf ODonnell
 
  1  
Reply Sat 5 Apr, 2008 03:33 am
Well, yes, I suppose you have to take all the other factors into account. But you're wrong in saying you can't read the study as fishin actually posted a link to it, which I shall repost here...

http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=49673&blobtype=pdf

Personally, I don't think much of the study, because all the samples came from one specific area (the same three hospitals in Southern California). Combined with the small sample size, that makes their conclusions look far from conclusive.
0 Replies
 
solipsister
 
  1  
Reply Sat 5 Apr, 2008 03:56 am
Re: Statistical significance
Wolf_ODonnell wrote:
I got into a debate with someone regarding whether gay people are more religiously inclined or not. They cited an article that supposedly proved a link between homosexuality and the size of a part of the brain called the anterior commissure.

Apart from the fact that it's unclear as to whether the anterior commissure has anything to do with religiosity, I pointed out to him that the study only used 90 postmortem brains from heterosexual men, homosexual men and heterosexual women. This was, as far as I know, statistically insignificant.

If I remember my statistics course correctly, you'd only get remotely satisfactory statistical significance if you use a minimum of 1000 test subjects. Thing is, if that's true, how on earth did that study (by Allen and Gorski) get published in PNAS?

Am I wrong about statistical significance and is it the be-all and end-all of whether a study actually proves a link between two things?



ooo err such statistical inference lemma guess that the sampling error is tolerable and a statistic sufficient when the sample size is 36 which proves nothing

as for the multicollinearity/ heterodasticity i'd suggest the kurtosis of the mgf will show a large amount of weight in the tail
0 Replies
 
Chumly
 
  1  
Reply Sat 5 Apr, 2008 04:24 am
It seems to me the crux of the biscuit is twofold:

1) The need to prove a consistent, demonstrable link between sample size and the strength of any causal claim.

2) Randomization in and of itself can mean uneven representation of a given variable within a group, thus undercutting the causation claim.
0 Replies
 
 
