Rater training, percent agreement, & importance of context

Through a cough syrup-induced haze, I saw “23” in an Excel spreadsheet cell and panicked, thinking it was the percentage agreement value. I’d been dreading calculating the inter-rater reliability between my alternate rater’s coding and my own for the practice notes all weekend, for fear of just such a result. Such is the anxiety one can only feel when working with one’s thesis data! Fortunately, I had glanced at entirely the wrong cell, in the wrong column even. The actual percentage agreement on this first pass at practice coding was 78%, a much more acceptable figure, and one that I think we can improve on with practice.
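
For anyone curious about the arithmetic behind that 78%, here is a minimal sketch of simple percent agreement in Python. The category labels and segment codes below are hypothetical placeholders for illustration, not my actual coded data:

```python
# Minimal sketch: percent agreement between two raters over the same segments.
# The codes below are hypothetical examples, not the actual thesis data.

def percent_agreement(rater_a, rater_b):
    """Percentage of segments both raters assigned to the same category."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same number of segments")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# Example: 9 coded segments, 7 agreements -> about 78% agreement
my_codes        = ["meta", "cog", "cog",  "meta", "other", "cog", "meta", "cog", "other"]
alternate_codes = ["meta", "cog", "meta", "meta", "other", "cog", "cog",  "cog", "other"]
print(f"{percent_agreement(my_codes, alternate_codes):.0f}%")  # prints 78%
```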

The process of selecting data for my alternate rater to code, both for practice and for the actual inter-rater reliability coding, has been quite useful for promoting stability in my own coding and for clarifying some categories in the protocol. The practice coding itself has also been helpful, because it brought to light the difficulty of reliably coding segments of online discussion data to metacognitive categories. There are content analysis studies (e.g. Hara, Bonk & Angeli, 2000; McDonald, 1998) that report reliability indices for metacognitive categories, but some researchers have reported difficulty achieving reliability for metacognition (need to find the article I have on this).

What was evident, however, was the importance of the context of the discussions in coding for metacognition. A detailed description of the context in which the text, the online discourse being coded, was produced would give the alternate coder, who was not a participant in the original discussion, a historical sense of the notes that came before and after. This may enable the alternate coder to distinguish between cognition and metacognition more easily in the current context of use of this textual data. So, I’ll attempt to characterize the instances of progressive discourse for the alternate coder in the actual inter-rater reliability coding. I know these passages well, having been a participant in the original discourse; having read and re-read the Knowledge Forum notes that comprise these instances many times; and having coded and re-coded the discussion over these last few months!


Comments

One response to “Rater training, percent agreement, & importance of context”

  1. That’s a great start Nobuko, 78%! Another baby step closer…

    Wendy
