Step 3: For each pair of judges, put a 1 where they agree and a 0 where they disagree. For example, for participant 4, Judge 1/Judge 2 disagree (0), Judge 1/Judge 3 disagree (0), and Judge 2/Judge 3 agree (1). One of the problems with percentage agreement is that raters sometimes agree only by chance. Imagine, for example, that your coding scheme has only two options (e.g. "level 0" or "level 1"). With only two options, we would expect agreement of about 50% purely by chance. Imagine, for example, that each judge flips a coin for each participant and codes the answer as "level 0" when the coin lands heads and "level 1" when it lands tails. 25% of the time both coins will come up heads, and 25% of the time both will come up tails, so the judges would agree by chance alone 50% of the time. A 50% agreement is therefore not very impressive when there are only two options. In this example, the judges agreed for 3 out of 5 participants, so the percentage agreement is 3/5 = 60%, only slightly above that chance level. A serious flaw of this type of inter-rater reliability is that it does not take chance agreement into account and therefore overestimates the level of agreement.
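To make Step 3 concrete, here is a minimal R sketch that codes agreement for one pair of judges and computes the percentage agreement by hand. The ratings are hypothetical and chosen so that the two judges agree for 3 of the 5 participants, matching the 60% figure above.

# Hypothetical ratings: 5 participants coded by two judges
judge1 <- c(1, 0, 1, 0, 1)
judge2 <- c(1, 0, 0, 1, 1)

# 1 where the judges agree, 0 where they disagree
agreement <- as.numeric(judge1 == judge2)

# Percentage agreement: number of agreements divided by number of participants, times 100
mean(agreement) * 100   # 60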
This is the main reason why percentage agreement should not be used for scholarly work (i.e. doctoral theses or scientific publications). We can now use the agree command to compute a percentage agreement. The agree command is part of the irr package (short for Inter-Rater Reliability), so we must first load that package. Suppose, for example, that you gave the same answer as the other rater for four of the five participants. Then you agreed on 80% of the occasions, so your percentage agreement in this example is 80%. The number for your own pair of raters may be higher or lower. In the example above, there is therefore substantial agreement between the two raters. Multiply the quotient by 100 to obtain the percentage agreement.
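As a minimal sketch of the agree command, the following assumes hypothetical ratings for five participants by two judges, chosen so that they match on four of the five cases (80%); the column names are illustrative.

# Load the irr package (install.packages("irr") if it is not yet installed)
library(irr)

ratings <- data.frame(
  judge1 = c(1, 0, 1, 0, 1),
  judge2 = c(1, 0, 1, 1, 1)
)

# agree() reports the percentage agreement across the raters (here 80%)
agree(ratings)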
You can also move the decimal point two places to the right, which gives the same result as multiplying by 100. The most basic measure of inter-rater reliability is the percentage agreement between raters. On a scale from zero (chance agreement) to one (perfect agreement), your agreement in this example was about 0.75, which is not bad! There are a few words that psychologists sometimes use to describe the degree of agreement between raters, based on the Kappa value they obtain.
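The Kappa value itself can be obtained with the kappa2 function from the same irr package. The sketch below uses hypothetical ratings for two judges, chosen so that Cohen's Kappa comes out at 0.75, in line with the value quoted above.

library(irr)

ratings <- data.frame(
  judge1 = c(1, 0, 1, 0, 1, 0, 1, 0),
  judge2 = c(1, 0, 1, 1, 1, 0, 1, 0)
)

# kappa2() computes Cohen's Kappa for two raters,
# which corrects the raw agreement for chance agreement
kappa2(ratings)   # Kappa = 0.75 for these ratings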
