Oslin, J. L., Mitchell, S. A., & Griffin, L. L. (1998). The Game Performance Assessment Instrument (GPAI): Development and preliminary validation. Journal of Teaching in Physical Education, 17, 231-243.
Background: There is growing interest in using authentic assessment to evaluate student performance in physical education settings. Purposes: Thus, the purpose of this study was to determine the validity and reliability of the Game Performance Assessment Instrument (GPAI). Methods: The GPAI was developed to help researchers and teachers code behaviors that attempt to solve tactical problems. The instrument was field tested with undergraduate physical education majors (N = 18) in four sports: a) soccer, b) softball, c) volleyball, and d) basketball. These students reported that all dimensions of the GPAI (e.g., decision making, skill execution, and support in soccer) were observable using a simple tally coding system. Inter-observer agreement (IOA) between coders was also high in this phase of the study: 0.66 to 1.0 for soccer, 0.78 to 1.0 for softball, and 0.56 to 0.86 for basketball. After initial field testing, the authors established validity using four methods: face, content, construct, and ecological. Face validity was established (using the undergraduate PE majors mentioned above) through 95% of these participants responding favorably, on a modified version of a Wiggins questionnaire, to being assessed in a game situation. A panel of experts determined content validity, and terms/definitions were revised until consensus was reached. Construct validity was measured by the GPAI's ability to distinguish between individuals who had been previously classified as high or low in game performance in 6 v 6 soccer (N = 32), 3 v 3 volleyball (N = 31), and basketball (N = 25). Ecological validity was established by the instrument's ability to measure what is taught. Reliability was established using test-retest stability coefficients and observer reliability coefficients. Results: Students' responses on the Wiggins questionnaire used to assess face validity were 95% favorable.
The construct validity data showed that the GPAI was able to distinguish between high- and low-ability players, with independent t-tests showing significant differences in skill execution for all three sports and in decision making for volleyball and soccer. Stability-reliability coefficients for GPAI components ranged from .84 to .97 for soccer, .84 to .99 for basketball, and .85 to .97 for volleyball; all coefficients were above the .80 cut-off point, based on 30% of the data. Finally, observer reliability ranged from .73 to .97, with only one pre-test averaging below the .80 level, based on 15% of the data. Conclusions: Findings from this article demonstrate that the GPAI allows players to be credited with off-the-ball movement in game-play situations. In addition, these preliminary results show that the GPAI can be considered a reliable and valid authentic game-play assessment tool. However, further studies should assess its reliability and validity across different settings and ability levels.
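To illustrate the kind of statistic reported above, the following is a minimal sketch of how inter-observer agreement might be computed for a tally-based coding system, using the common smaller-count/larger-count proportion method. The article does not give its exact IOA formula, and the function name, category labels, and tallies below are hypothetical, not taken from the study.

```python
# Hedged sketch of inter-observer agreement (IOA) for tally coding.
# Assumes the smaller/larger-count method: for each coding category,
# agreement is the smaller of the two observers' tallies divided by
# the larger, pooled across categories. This is an illustrative
# assumption, not necessarily the method used in the GPAI study.

def interobserver_agreement(coder_a: dict, coder_b: dict) -> float:
    """Return proportion agreement (0.0-1.0) between two coders' tallies."""
    categories = set(coder_a) | set(coder_b)
    # Agreements: counts both coders recorded; total: the larger tally.
    agreements = sum(min(coder_a.get(c, 0), coder_b.get(c, 0)) for c in categories)
    total = sum(max(coder_a.get(c, 0), coder_b.get(c, 0)) for c in categories)
    return agreements / total if total else 1.0

# Hypothetical tallies of one player's appropriate/inappropriate decisions
a = {"decision_appropriate": 8, "decision_inappropriate": 2}
b = {"decision_appropriate": 7, "decision_inappropriate": 3}
print(round(interobserver_agreement(a, b), 2))  # prints 0.82
```

A ratio like this, computed per component and per sport, would yield the 0.56 to 1.0 ranges reported in the field-testing phase.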