Kappa, a statistic that gauges inter-observer agreement corrected for chance, is generally regarded as superior to the percentage agreement sometimes reported. But when is kappa big enough? No one value of kappa can be regarded as universally acceptable. Any published article that claims otherwise is misinformed. Whether a given value of kappa is acceptable depends on the percentage accuracy researchers require of their observers. How accurate observers need to be to achieve a particular value of kappa depends on several factors, the most important of which is the number of codes or ratings observers are asked to assign to events.
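For readers who want to verify a reported value by hand, Cohen's kappa for two observers is (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance from the marginal totals. The following Python sketch (not part of KappaAcc, and using hypothetical counts) illustrates the calculation from a confusion matrix:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix.

    Rows index observer 1's codes, columns observer 2's codes;
    cell [i][j] counts events coded i by observer 1 and j by observer 2.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # Observed agreement: proportion of events on the diagonal.
    po = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement: product of marginal proportions, summed over codes.
    row_totals = [sum(confusion[i][j] for j in range(k)) for i in range(k)]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical example: two observers assigning one of two codes to 50 events.
m = [[20, 5],
     [10, 15]]
print(cohens_kappa(m))  # 0.4: percent agreement is 0.70, but chance agreement is 0.50
```

Note that the same 70% agreement would yield a different kappa with more codes, since chance agreement falls as the number of codes rises, which is why no single kappa threshold fits all coding schemes.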
We have prepared a computer program, KappaAcc.exe, that computes various kappa statistics and estimates observer percentage accuracy; it is available for download at no charge. Whenever a kappa is reported, estimated observer accuracy should be reported as well, as a way to judge the adequacy of the computed value.
We have also prepared a technical report, Deciding Whether Kappa is Big Enough by Estimating Observer Accuracy (DevLabTechReport28.pdf), that discusses kappa generally and the KappaAcc program and observer accuracy specifically. It is likewise available for download.
Click here to download KappaAcc.zip. The ZIP file contains the executable KappaAcc.exe and a PDF file that describes the program.
KappaAcc is written in Embarcadero® Delphi® XE2 Version 16 Pascal. It runs on a PC under Microsoft Windows, or on a Mac running Windows emulation software.