
Reporting results p > .05 justified?

 
 
DvdR
 
Reply Thu 19 Oct, 2017 04:11 am
Hi all,

Recently I did a field study where I investigated recycling behaviour in the office. We did a baseline and a post-intervention measurement of the response rate (i.e. how much of the total amount of a certain type of waste ends up in the right bin?). For 2 x 2 weeks we collected and analyzed all the trash on a daily basis, resulting in 10 + 10 (baseline + post) data points.

On four floors we tested interventions to improve recycling, and we compared these to two control conditions. With a small N = 20 we found strong effects (Cohen's d > .80). For some effects the p-value was only marginally significant (between .05 and .10), but we still reported those effects. I was taught that if you find strong effects on a small N that are close to being significant, you can assume the effect is actually there. After all, the p-value will automatically decrease when N increases.

We also reported effects with a p-value between .10 and .15, with an explicit disclaimer that these effects should be interpreted with great caution and that a follow-up study is required.

Someone, however, heavily criticized us for reporting effects that have a p-value above .05. I personally think this person is too rigid about the p-value, considering the small N and the large effects. But maybe I'm wrong and I shouldn't have reported these results.

Can anybody tell me whether it was justified to report these results or not (also taking into account the disclaimer we used)? Thanks!

Best,
Danny

 
engineer
 
Reply Thu 19 Oct, 2017 06:29 am
@DvdR,
Quote:
After all, the p-value will automatically decrease when N increases.


This statement is not correct. If the data continues to come in the way it has, then the p-value will decrease, but if the apparent effect is just random noise, then more data will make the p-value increase. Imagine you flip a coin 20 times and get 14 heads. That's unusual, since about 94% of the time you would get fewer than 14 heads, but it's not outside the realm of possibility for a fair coin. If you flip the same fair coin forty times, it is unlikely you would end up with 28 heads in total. If you did, you would absolutely say the coin is not fair.

Your statement about p-values between .05 and .10 is equivalent to saying, "I got 14 heads out of 20 flips, therefore the coin is not honest, and I believe that if you continue to flip, you will see that." Could be, but it could also be that you are just seeing statistical variation.
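
If you want to check these numbers yourself, here is a quick script (a minimal sketch, assuming Python with scipy available):

Code:
# Tail probabilities for the coin-flip examples above.
from scipy.stats import binom

# P(at least 14 heads in 20 flips of a fair coin).
# sf(k) = P(X > k), so sf(13) = P(X >= 14).
p20 = binom.sf(13, 20, 0.5)
print(f"P(>=14 of 20 heads) = {p20:.3f}")  # ~0.058, i.e. ~94% of runs give fewer

# P(at least 28 heads in 40 flips) -- the same 70% rate with twice the data.
p40 = binom.sf(27, 40, 0.5)
print(f"P(>=28 of 40 heads) = {p40:.4f}")  # ~0.008, far harder to blame on chance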

Quote:
I was taught that if you find strong effects on a small N that are close to being significant, you can assume the effect is actually there.

I disagree strongly with this statement. If you flip a coin four times and get four heads, would you say the coin is unfair? That happens occasionally just by random chance. You need to collect more data.
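
For the record, the chance of that is easy to compute (plain Python, no libraries needed):

Code:
# Probability of four heads in four flips of a fair coin.
p = 0.5 ** 4
print(p)  # 0.0625 -- about one run in sixteen looks "unfair" purely by chance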
 
LADave
 
Reply Thu 19 Oct, 2017 05:48 pm
@DvdR,
It's much better to report your results with the marginal p-value than not to report them at all. Your estimate of the effect is probably the best that can be done with the amount of data you had. It's up to readers to use appropriate caution, or to repeat your study with a larger sample. In the latter case, you have provided useful information from which they can estimate a sample size to more definitively accept or reject the null hypothesis.
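
As a sketch of that last point: assuming the comparison boils down to a two-sample t-test and using the statsmodels library, a reader could size a follow-up study from the reported d = 0.8 like this (the numbers are illustrative, not a prescription):

Code:
# Rough sample-size estimate for a follow-up study, assuming a
# two-sample t-test on the reported effect size (Cohen's d = 0.8).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,          # Cohen's d from the original study
    alpha=0.05,               # significance level
    power=0.8,                # desired chance of detecting a true effect
    alternative='two-sided',
)
print(round(n_per_group, 1))  # ~25.5, so about 26 observations per condition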
 
 
