
# Alpha level (aka: p-value) & what it might indicate in research

Mon 28 Apr, 2014 07:36 am
This question might not have one specific answer, but I'm hoping for at least some clarification on the matter. I'm a psychology student, so my statistical knowledge is applied mainly to research, and I'm wondering whether using an alpha level (aka: p-value) of .001 (as opposed to .05) has greater meaning behind it. I've heard a few different things from a professor and my math stat friends, and I'm unsure what's the truth, or at least the middle ground between the two. My professor explained that using an alpha level of .001 is something a researcher does when they didn't have significance at the .05 level, which to me sounded like one uses an alpha level of .001 when they really didn't find anything in their research but had to try to find some significance so their entire study wasn't a waste. On the other hand, my math stat friends have told me that an alpha level of .001 is used to obtain findings that are extremely accurate and of more significance than one would find using a .05 alpha level.

So, which is it? I know that with .001 you have a greater risk of making a Type II error, which in many research contexts is a serious problem, so is that why we typically stay away from .001? I'm so confused! There must be one true answer here...

Stats fan

Thu 7 Dec, 2017 04:28 pm
@alh2790,
Hello,
The significance level alpha is the probability of rejecting the null hypothesis given that the null hypothesis is true (a Type I error). That is, it is the probability of falsely rejecting the null hypothesis. In general, this probability (alpha) is set to 0.05. In this way, we know that in the long run (over many tests of true null hypotheses), we will falsely reject the null hypothesis only 5% of the time.
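
As a toy illustration of that long-run interpretation, here is a minimal simulation sketch (plain Python, with a normal approximation to the t distribution rather than a stats library; the sample size and number of trials are arbitrary choices): when the null hypothesis is true, the fraction of tests rejected at alpha = 0.05 settles near 5%.

```python
import math
import random
import statistics

def p_value(sample, mu0=0.0):
    """Two-sided p-value of a one-sample test of mean = mu0,
    using a normal approximation (fine for largish n)."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
alpha = 0.05
trials = 5000
# The null is TRUE here: every sample is drawn from a mean-0 population.
rejections = sum(
    p_value([random.gauss(0, 1) for _ in range(50)]) < alpha
    for _ in range(trials)
)
print(rejections / trials)  # long-run Type I error rate, close to 0.05
```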

Of course, we may want this probability to be even smaller, like 0.001. Decreasing alpha to 0.001 ensures that we will falsely reject the null hypothesis only 0.1% of the time. That is, we will make a Type I error less often. However, other things being equal, this will increase the probability of a Type II error, like you said.
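
A companion sketch of that trade-off: simulate data where a real effect actually exists, and count how often each alpha level fails to detect it (a miss is a Type II error). The effect size 0.3 and n = 50 here are arbitrary illustrative choices, and the p-value again uses a normal approximation.

```python
import math
import random
import statistics

def p_value(sample, mu0=0.0):
    """Two-sided p-value via a normal approximation."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 2000
# The null is FALSE here: the true population mean is 0.3, not 0.
samples = [[random.gauss(0.3, 1) for _ in range(50)] for _ in range(trials)]

miss_rate = {}
for alpha in (0.05, 0.001):
    # A "miss" = failing to reject even though the effect is real.
    miss_rate[alpha] = sum(p_value(s) >= alpha for s in samples) / trials
    print(f"alpha={alpha}: Type II error rate ~ {miss_rate[alpha]:.2f}")
```

Shrinking alpha from 0.05 to 0.001 roughly doubles the miss rate in this setup, which is exactly the cost the original question is worried about.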

Thus far I have only talked about the situation where we perform a single test, and we have to decide upon the level of alpha we want to use. Now imagine that we want to perform a lot of tests, for whatever reason (e.g., because of the reason your professor mentioned). And suppose we use an alpha of 0.05 for each of these tests (i.e., per test, the probability of making a Type I error is 0.05). Then the probability of making a Type I error in at least one of these tests (overall alpha) will become much larger than 0.05. Hence, it may be a good idea to lower the alpha per test, in order to keep the overall alpha at an acceptable level.
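
For independent tests, that overall ("family-wise") error rate can be computed directly. A quick sketch showing why, say, 20 tests at 0.05 each are risky, and how the standard Bonferroni correction (alpha/m per test) keeps the overall rate near the nominal level:

```python
m = 20       # number of independent tests
alpha = 0.05

# Probability of at least one false rejection across all m tests.
fwer_uncorrected = 1 - (1 - alpha) ** m
# Bonferroni: test each hypothesis at alpha/m instead.
fwer_bonferroni = 1 - (1 - alpha / m) ** m

print(f"{fwer_uncorrected:.3f}")  # 0.642
print(f"{fwer_bonferroni:.3f}")   # 0.049
```

So with 20 uncorrected tests there is about a 64% chance of at least one Type I error, while the corrected per-test alpha of 0.0025 holds the overall rate just under 0.05.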
