Wed 7 Oct, 2015 09:38 am
Hi,
My question is:
I have a very large data file with many scales that were filled in by hand and then copied into SPSS/Excel by a first rater. Every tenth case (so 10% of the cases) was entered again by a second rater to check whether the first rater did it correctly. Can I use Cohen's kappa to test the inter-rater reliability between the two raters on the 10% of re-entered cases? And can I compute Cohen's kappa on the means of the scales, or do I have to compute it for each separate question? (There are a LOT of individual questions and only about 20 scales.) For concreteness, here is roughly the per-item calculation I mean, sketched in Python with scikit-learn (see below); the file and column names are just placeholders.
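
```python
# Sketch of per-item Cohen's kappa on the 10% double-entered cases.
# Assumes two files with one row per re-entered case, identical item
# columns, and rows in the same case order -- all names are placeholders.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

rater1 = pd.read_csv("rater1_reentered.csv")  # first rater, re-entered cases only
rater2 = pd.read_csv("rater2_reentered.csv")  # second rater, same cases, same order

for item in rater1.columns:
    kappa = cohen_kappa_score(rater1[item], rater2[item])
    print(f"{item}: kappa = {kappa:.2f}")
```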
Thank you,
Hannah
@Hannah51789,
Edit: I have since realized that I should probably be using intra-class correlations (ICCs) instead of Cohen's kappa to calculate the inter-rater reliability between the raters on the 10% of re-entered cases, because the data are ordinal rather than nominal. And I believe this can be done on the averages of the scales instead of each individual item. In case it helps, the sketch below shows the kind of ICC calculation I mean, using pingouin's intraclass_corr in Python; the file and column names are invented, and you would repeat it for each scale's mean. I'd love to hear from someone whether I'm on the right path here... it would be greatly appreciated!
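
```python
# Sketch of an ICC on one scale's means between the two raters.
# Assumes a long-format file with columns: case, rater, score
# (score = that case's mean on one scale) -- names are placeholders.
import pandas as pd
import pingouin as pg

df = pd.read_csv("scale_means_long.csv")
icc = pg.intraclass_corr(data=df, targets="case", raters="rater", ratings="score")
print(icc)  # prints all ICC variants; ICC2 (two-way random, absolute
            # agreement) is the usual choice for data-entry agreement
```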