Meade, A. W., and Eby, L. T. (2007). Using indices of group agreement in multilevel construct validation. Organ. Res. Methods 10, 75-96. doi: 10.1177/1094428106289390
Harvey, R. J., and Hollander, E. (2004, April). "Benchmarking rWG interrater agreement indices: let's drop the .70 rule-of-thumb," in Paper Presented at the Meeting of the Society for Industrial and Organizational Psychology (Chicago, IL).
Dunlap, W. P., Burke, M. J., and Smith-Crowe, K. (2003). Accurate tests of statistical significance for rWG and average deviation interrater agreement indexes. J. Appl. Psychol. 88, 356-362. doi: 10.1037/0021-9010.88.2.356
Cohen, A., Doveh, E., and Eick, U. (2001). Statistical properties of the rWG(J) index of agreement. Psychol. Methods 6, 297-310. doi: 10.1037/1082-989X.6.3.297
Kozlowski, S. W. J., and Hattrup, K. (1992).
A disagreement about within-group agreement: disentangling issues of consistency versus consensus. J. Appl. Psychol. 77, 161-167. doi: 10.1037/0021-9010.77.2.161
Allen, N. J., Stanley, D. J., Williams, H. M., and Ross, S. J. (2007). Assessing the impact of nonresponse on work group diversity effects. Organ. Res. Methods 10, 262-286. doi: 10.1177/1094428106294731
Lindell, M. K., and Brandt, C. J. (1999). Assessing interrater agreement on the job relevance of a test: a comparison of the CVI, T, rWG(J), and r*WG(J) indexes. J. Appl. Psychol. 84, 640-647. doi: 10.1037/0021-9010.84.4.640
George, J. M., and James, L. R. (1993). Personality, affect, and behavior in groups revisited: comment on aggregation, levels of analysis, and a recent application of within and between analysis. J.
Appl. Psychol. 78, 798-804. doi: 10.1037/0021-9010.78.5.798

It is instructive to point out that on a five-point scale the uniform expected variance σ²EU is 2 and the maximum dissensus variance σ²MV is 4. Using the maximum dissensus variance thus modifies James et al.'s (1984) formula in such a way that all observed variances Sx² lead to r*WG values between 0 and 1.0. This index avoids the non-linearity problem, and the corresponding inflation potential, of rWG(J), and solves the problem of out-of-range values.

James, L. R., Demaree, R. G., and Wolf, G. (1984). Estimating within-group interrater reliability with and without response bias. J. Appl. Psychol. 69, 85-98. doi: 10.1037/0021-9010.69.1.85
Lüdtke, O., and Robitzsch, A. (2009). Assessing within-group agreement: a critical examination of a random-group resampling approach. Organ.
Res. Methods 12, 461-487. doi: 10.1177/1094428108317406
Pasisz, D. J., and Hurtz, G. M. (2009). Testing for between-group differences in within-group interrater agreement. Organ. Res. Methods 12, 590-613. doi: 10.1177/1094428108319128
Newman, D. A., and Sin, H.-P. (2009). How do missing data bias estimates of within-group agreement? Sensitivity of SDWG, CVWG, rWG(J), rWG(J)*, and ICC to systematic nonresponse. Organ. Res. Methods 12, 113-147. doi: 10.1177/1094428106298969
Schmidt, A. M., and DeShon, R. P. (2003, April). "Problems in the use of rWG to assess interrater agreement," in Paper Presented at the 18th Annual Conference of the Society for Industrial and Organizational Psychology (Orlando, FL).
Lindell, M. K., and Brandt, C. J. (1997). Measuring interrater agreement for ratings of a single target. Appl. Psychol. Meas. 21, 271-278. doi: 10.1177/01466216970213006
Brown, R. D., and Hauenstein, N. M.
A. (2005). Interrater agreement reconsidered: an alternative to the rWG indices. Organ. Res. Methods 8, 165-184. doi: 10.1177/1094428105275376
Lindell, M. K. (2001). Assessing and testing interrater agreement on a single target using multi-item rating scales. Appl. Psychol. Meas. 25, 89-99. doi: 10.1177/01466216010251007
Bliese, P. D., and Halverson, R. R. (2002). Using random group resampling in multilevel research.
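The note embedded among the references above, on why the maximum dissensus variance keeps the agreement index in range, can be made concrete with a short sketch. This is a minimal illustration, not code from any of the cited papers: the function names are mine, and I use the population variance so that σ²MV = (A − 1)²/4 is exactly attainable (the published formulas use the sample variance, which in small groups can slightly exceed σ²MV). On a five-point scale, σ²EU = (A² − 1)/12 = 2 and σ²MV = 4.

```python
import statistics

def rwg(ratings, a=5):
    # James et al. (1984) single-item agreement: compare the observed
    # variance with the uniform (rectangular) expected variance
    # sigma_EU^2 = (A^2 - 1) / 12, i.e. 2 on a 5-point scale.
    sigma_eu2 = (a ** 2 - 1) / 12.0
    return 1.0 - statistics.pvariance(ratings) / sigma_eu2

def rwg_star(ratings, a=5):
    # Same form, but divided by the maximum dissensus variance
    # sigma_MV^2 = (A - 1)^2 / 4, i.e. 4 on a 5-point scale, so the
    # index cannot leave the [0, 1] interval.
    sigma_mv2 = (a - 1) ** 2 / 4.0
    return 1.0 - statistics.pvariance(ratings) / sigma_mv2

# Maximum dissensus on a 5-point scale: half the raters at 1, half at 5.
split = [1, 1, 5, 5]
print(rwg(split))       # -1.0: rwg falls outside its nominal range
print(rwg_star(split))  # 0.0: r*WG bottoms out at 0
```

With perfect agreement (zero observed variance) both indices equal 1.0; the difference only shows up as disagreement grows, where rwg can go negative while r*WG stays bounded at 0.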