okay, I'm gonna make this short.
yesterday, Kak Fairuz and I went to meet Dr. Faezah Majid, 'The Person' for content analysis in qualitative research. Kak Fairuz had set an appointment with her to ask about this Cohen's kappa thingy.
to be frank, I seriously had no idea about Cohen's kappa, let alone Pearson's product-moment correlation. well, Dr. D said they are instruments for calculating inter-rater agreement reliability statistics.
so this is the formula: kappa = (Po - Pe) / (1 - Pe), where Po is the observed agreement between the raters and Pe is the agreement expected by chance.
and this is how we calculate the kappa value: by taking the observed agreement and adjusting it for the chance probabilities of agreeing and disagreeing.
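To make the formula concrete, here is a minimal sketch in Python of Cohen's kappa for two raters. The category labels and the helper name `cohens_kappa` are made up for illustration; they are not from Dr. Faezah or the study itself.

```python
# A minimal sketch of Cohen's kappa for two raters,
# using hypothetical coding labels (not real study data).
from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (Po - Pe) / (1 - Pe): Po is observed agreement,
    Pe is agreement expected by chance from each rater's marginals."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded the same.
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the raters' marginal proportions, summed.
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical codings for six excerpts:
r1 = ["metaphor", "simile", "metaphor", "irony", "simile", "metaphor"]
r2 = ["metaphor", "simile", "irony",    "irony", "simile", "metaphor"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.75
```

Here the raters agree on 5 of 6 excerpts (Po ≈ 0.83), chance agreement from the marginals is Pe ≈ 0.33, so kappa = (0.83 - 0.33) / (1 - 0.33) = 0.75.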
Alhamdulillah that Allah made me there with Kak Fairuz and Dr. Faezah.
Given my mathematical and anti-number problems, it's a relief that Cohen's kappa is not that complicated to apply in my study.
I just need at least 2 raters to confirm and rate my analysis so that it can be considered consistent. A layman is actually better as a rater: if he/she is clear about my coding and analysis, then my manual is good enough to be understood by a layman.
Although some researchers mention that an expert is the best person to rate, somehow Dr. Faezah's argument is reasonable.
According to Dr. Faezah, 0.65 is considered acceptable for such an in-depth content analysis study.
I must revise how I do my sampling, the population and what not, to get a comfortable figure to generalize and validate my final findings.
What I need to do:
1. Get a clear picture of my unit of analysis, categories, sub-categories and coding.
2. Develop a reader-friendly manual that includes 4 columns of:
- unit of analysis / excerpts
- coding
- agree
- disagree
3. Get the raters to tick the agree or disagree column
4. Calculate the Cohen's kappa value
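Steps 3 and 4 above can be sketched in code: once each rater has ticked agree or disagree per excerpt, kappa can be computed on those binary ticks. The rater names, tick lists, and helper `kappa_from_ticks` are all hypothetical, just to show the tallying.

```python
# A sketch of tallying two raters' agree/disagree ticks from the
# four-column manual and computing kappa on them (made-up data).

def kappa_from_ticks(ticks1, ticks2):
    """ticks are lists of 'agree'/'disagree' strings, one per excerpt."""
    n = len(ticks1)
    # Observed agreement: both raters ticked the same box.
    po = sum(a == b for a, b in zip(ticks1, ticks2)) / n
    # Chance agreement from each rater's agree/disagree proportions.
    p1 = ticks1.count("agree") / n
    p2 = ticks2.count("agree") / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)
    return (po - pe) / (1 - pe)

rater_a = ["agree", "agree", "disagree", "agree", "agree", "disagree"]
rater_b = ["agree", "agree", "disagree", "disagree", "agree", "disagree"]
print(round(kappa_from_ticks(rater_a, rater_b), 2))  # → 0.67
```

With these made-up ticks the raters match on 5 of 6 excerpts, chance agreement works out to 0.5, and kappa = (0.833 - 0.5) / (1 - 0.5) ≈ 0.67, just above Dr. Faezah's 0.65 threshold.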
note:
for the reliability value of the real study, use the pilot study's kappa value. that's enough for reliability and trustworthiness purposes.
so, this is the kappa from the pilot:
Po - Pe = 183/189 - 6/189 ≈ 0.937 (strictly, the formula also divides by 1 - Pe, which would give 177/183 ≈ 0.967; either way, well above 0.65)
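Just to double-check the arithmetic with the 183, 6 and 189 figures from the pilot (my reading of 6/189 as the chance term is an assumption):

```python
# Checking the pilot arithmetic with exact fractions.
# 183, 6 and 189 come from the post; treating 6/189 as Pe is my assumption.
from fractions import Fraction

po = Fraction(183, 189)
pe = Fraction(6, 189)
print(round(float(po - pe), 3))                # plain subtraction → 0.937
print(round(float((po - pe) / (1 - pe)), 3))   # full kappa formula → 0.967
```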