Testing Modality Affects Performance on the Santa Barbara Solids Test
Presentation Type
Poster
Abstract
Spatial ability is associated with performance in science, technology, engineering, and mathematics (STEM) disciplines and has been used to predict the likelihood of success in these fields (Wai, Lubinski, & Benbow, 2009). Classically, spatial ability has been assessed with tests that measure general spatial factors. However, these factors may be limited because they were not developed with individual differences or cognitive theories in mind (Cohen & Hegarty, 2012). Although traditional measures of spatial ability give insight into a person’s general spatial processing, Cohen and Hegarty (2012) point out the need for theoretically motivated spatial ability tests that relate specifically to STEM performance. Numerous spatial ability measures are in use by researchers, yet reliable and valid measures that are directly applicable to STEM fields are still needed.
One new measure of spatial ability, developed theoretically and with individual differences in mind, is the Santa Barbara Solids Test (SBST; Cohen & Hegarty, 2012). In the SBST, participants must imagine the cross-section that results when a three-dimensional form is cut by a plane. The cutting plane can be horizontal, vertical, or oblique, and the form can be a simple or complex three-dimensional shape. The spatial skills involved in imagining a cross-section of a form have been linked with performance in STEM courses such as anatomy (Rochford, 1985), biology (Russell-Gebbett, 1985), geology (Kali & Orion, 1996), geometry (Pittalis & Christou, 2010), and engineering (Duesbury & O’Neil, 1996), and with skills such as reading X-rays and MRIs (Hegarty, Keehner, Cohen, Montello, & Lippa, 2007).
The SBST has been validated with undergraduate students spanning a range of spatial ability scores (Cohen & Hegarty, 2012), but additional studies are needed to replicate and extend the findings of this promising new measure. In particular, it is important to determine the effects of testing modality on performance, since modality is a potential confound in future spatial ability studies. Although computerized assessments are common and offer many conveniences (e.g., fast scoring, fewer resources) compared to other testing modalities (e.g., paper-based testing), participants may experience higher perceived workload in computer-based assessments (Mayes, Sims, & Koonce, 2001) or perform differently on the same test in another modality (cf. Noyes & Garland, 2008). The current study (N = 241) compares the SBST with a traditional measure of spatial ability, the Paper Folding Test (PFT; Ekstrom, French, Harman, & Dermen, 1976), in two testing modalities: (1) computer-based and (2) paper-based. The two spatial ability measures were correlated, indicating that both tap the same underlying construct. PFT performance did not differ between testing modalities. However, SBST performance did differ by modality: participants in the paper-based condition performed better than those in the computerized condition. These results imply that testing modality should be taken into account in future studies involving the SBST.