Introduction
The term metacognition was coined by John Flavell (1979), who described it as thinking about one’s own thinking. It has also been described as the awareness and understanding of one’s own thought processes, and it has garnered significant attention in fields such as education, psychology, and cognitive science. Understanding metacognition is essential because of its role in learning, problem-solving, decision-making, and self-regulation; metacognitive individuals are aware of their knowledge and learning experiences (Schraw & Moshman, 1995). This literature review explores the theoretical foundations of metacognition, the measurement instruments used to assess it, and the psychometric properties of these instruments.
Metacognitive Theories
There are several known theoretical frameworks of metacognition (e.g., Flavell, 1979; Gutierrez et al., 2016; Nelson & Narens, 1990; Schraw & Moshman, 1995). Flavell (1979) proposed a widely accepted model of metacognition, distinguishing between metacognitive knowledge (knowledge about one’s cognitive processes and strategies) and metacognitive regulation (the ability to control and adapt cognitive processes). Schraw and Moshman (1995) refined previous theoretical frameworks and integrated metacognitive knowledge and metacognitive regulation, dividing metacognition into knowledge of cognition and regulation of cognition. Specifically, knowledge of cognition describes learners’ declarative, procedural, and conditional knowledge about their learning and cognitive processes, whereas regulation of cognition describes learners’ planning, monitoring, and evaluation of their learning. Schraw and Moshman posited that knowledge of cognition and regulation of cognition share a dynamic and cyclical relation that supports learners’ metacognition and learning.
Overall, metacognition can be understood through two lenses. First, from the information processing lens, metacognition is a higher-level process that requires individuals’ deliberate and effortful examination and evaluation of their cognitive processes and of incoming and exiting information. Second, from the structural lens, metacognition is bifurcated into two key components: knowledge of cognition and regulation of cognition (Schraw & Moshman, 1995). These two components interact with each other to support the learner’s metacognition. Research adopting this lens often investigates the application of metacognition across academic domains and the teaching of metacognitive strategies (e.g., Händel et al., 2020; Jaeger & Wiley, 2014). A recent meta-analysis indicated that learning strategy interventions have a moderate enhancing effect on metacognitive monitoring accuracy (Gutierrez de Blume, 2022).
Nelson and Narens (1990) argued that metacognition can be understood as a cyclical, reciprocal interaction between monitoring and control processes that helps the learner transform information from the environment (what they dubbed the object level) into mental representations in the mind (what they dubbed the meta level), which can then be used to decide how best to act and behave. The criticisms of this model, however, are that it treated metacognition as unidimensional and that it allowed only the gamma coefficient as the metric of metacognitive monitoring. Furthermore, while metacognitive monitoring error was acknowledged to exist, the model offered no way to empirically capture that error (Schraw et al., 2014).
Finally, Gutierrez et al. (2016) developed and empirically tested a metacognitive monitoring framework, the general monitoring model, that incorporated the domain-specific and domain-general nature of metacognition. Their results supported the conclusion that, while metacognitive monitoring accuracy was domain specific, metacognitive monitoring errors (i.e., underconfidence, in which a correct performance is judged as incorrect, and overconfidence, in which an incorrect performance is judged as correct) were domain general, and that these two aspects of error are inversely related. Additionally, the second-order factors of accuracy and error were subsumed by a third-order general monitoring factor.
Metacognitive Assessments
Metacognitive assessment can be defined as the evaluation of learners’ knowledge about and regulation of cognition (Ozturk, 2017). Various approaches have been explored over time to assess individuals’ metacognition, such as self-reports, interviews, observations, and accuracy ratings. Among these, the most common approach to assessing metacognition is the self-report survey (Dinsmore et al., 2008).
Adult Learners
A variety of self-report instruments have been developed to measure adult learners’ metacognition. Among them, the MAI (Schraw & Dennison, 1994) is one of the most widely adopted metacognitive assessments and has been administered in many settings (e.g., everyday decision-making: Lee et al., 2009; teaching: Balcikanli, 2011; learning: Young & Fry, 2008). In their original study, Schraw and Dennison (1994) determined a two-factor structure of the measure, corresponding to the two components of metacognition (i.e., knowledge of cognition and regulation of cognition). Moreover, students’ scores on the MAI showed appropriate internal consistency, with Cronbach’s alpha reaching .91 for each factor and .94 for all items combined. The MAI has subsequently been translated into different languages and administered to students in various countries (e.g., Argentina: Favieri, 2013; Brazil: Lima Filho & Bruni, 2015; Colombia: Gutierrez de Blume & Montoya Londoño, 2021; Turkey: Turan et al., 2011; United States: Young & Fry, 2008).
Children and Adolescents
Research on metacognitive phenomena in children and adolescents is not as robust as that on adults, primarily due to the difficulty of creating developmentally appropriate measures, especially for young children. Nevertheless, the growing body of research on metacognition in children is devoted almost exclusively to the investigation of executive functions (e.g., attention, inhibitory control, working memory, visual-spatial reasoning), such as the work of Roebers and her colleagues (Roebers, 2017; Spiess et al., 2016). While objective in nature and useful for diagnostic purposes, this method of assessing metacognition-related skills is time-consuming because it requires individual assessment. This situation led some metacognitive researchers to contemplate faster methods of measurement at larger scales. It was not until Sperling et al. (2002) developed, piloted, and validated the MAI, Jr., adapted from the original MAI for adults, that a self-report measure of metacognition was available for these age groups. The MAI, Jr. comprised two versions: a 12-item scale developed and validated for children in grades 3-5, and an 18-item scale developed and validated for adolescents in grades 6-9. Exploratory factor analyses conducted on both versions of the MAI, Jr. suggested a two-factor model of knowledge of cognition and regulation of cognition (Sperling et al., 2002).
In sum, metacognition plays a crucial role in various cognitive processes and has significant implications for learning, problem-solving, and decision-making. Measurement instruments such as the MAI and the MAI, Jr. provide valuable tools for assessing individuals' metacognitive abilities. While these instruments have shown promise in terms of reliability and validity, further research is needed to examine whether their reliability and validity still hold today. Understanding metacognition and its measurement is essential for advancing research and practice in fields such as education, psychology, and cognitive science. Notably, although the original MAI, Jr. was shorter than the adult version, Sperling et al. (2002) never examined whether a version with fewer than 18 items would appropriately measure metacognitive awareness in children and adolescents, especially given the shorter attention span of children (Roebers, 2017). Thus, the present study sought to investigate the feasibility of a shorter version.
The Present Study
Predicated on the literature we surveyed, the present investigation sought to address the following research objectives and their associated hypotheses.
1. Examine whether the Metacognitive Awareness Inventory, Jr. version (MAI, Jr.), can be reduced in length from 18 items to fewer items to mitigate survey fatigue, especially among its intended population of children and adolescents.
Hypothesis: We expected that the original MAI, Jr. could be significantly reduced in its number of items.
2. Investigate the internal consistency reliability coefficients and the construct validity of the MAI, Jr.-S (shortened version).
Hypothesis: We hypothesized that our proposed shortened version, the MAI, Jr.-S, would have not only adequate internal consistency reliability, but also sound construct validity, when compared to its original 18-item counterpart.
Method
This study represents an empirical investigation that employs quantitative construct validation procedures.
Participants and Sampling
District. The present study, a psychometric validation study designed to confirm the results of the original validation study (Sperling et al., 2002), was conducted over two years and employed a convenience sampling procedure. Participants in both years were drawn from a large suburban school district located on the West Coast. District enrollment was approximately 18,000 students per year spread over approximately 30 schools. Students within the district identified as female (48%), male (52%), White (43.6%), Hispanic or Latino (37.6%), Black (6.4%), Asian (3.7%), and two or more races (2.6%) (Ed-data, 2022). Participants’ ages ranged from 11 to 13 years (M = 11.90; SD = 0.63). It is worth noting that the number of students who identified as White is inclusive of students identifying as Middle Eastern. Because the district contained a large population of students identifying as Middle Eastern, students’ self-reported demographics are also included below.
Schools. Two middle schools within the district were identified by district personnel for participation in the study in year 1. These schools are referred to herein as Beach View Middle School and Ocean Side Middle School. Two additional schools, Pacific Middle School and Coral Middle School, joined the study in year 2. Demographic data for each of these schools are reported in Table 1 below (Ed-data, 2022).
Table 1
School Demographic Data

School | Approximate Enrollment | Female | Male | White | Hispanic or Latino | Black | Asian | Two or More Races |
---|---|---|---|---|---|---|---|---|
Beach View | 950 | 47.3% | 52.7% | 37% | 40% | 13% | 4.5% | 1.1% |
Ocean Side | 700 | 49% | 51% | 31% | 50% | 10% | 2.2% | 2.5% |
Pacific | 500 | 44.3% | 55.7% | 38.7% | 38.7% | 12.8% | 2.9% | 2.5% |
Coral | 800 | 45.6% | 54.4% | 50.4% | 35.1% | 6.2% | 2.7% | 1.5% |
Year 1 Demographic Data. In year 1 of the study, 361 students had complete data on the MAI, Jr. All of the students were drawn from Beach View or Ocean Side Middle Schools. Of these students, 194 were in 6th grade, 144 were in 7th grade, and 23 were in 8th grade. These students identified as female (46%), male (50%), Other/Non-Binary (3.2%), and less than 1% preferred not to specify a gender. These students also identified as Hispanic/Latinx (29.8%), White (11.3%), African American/Black (11.3%), Two or More Races (11.3%), Middle Eastern (7.3%), Asian (5.6%), Other (8.9%), and 13.7% of students preferred not to say.
Year 2 Demographic Data. In year 2 of the study, 240 total students across the four participating schools had complete data when administered the shortened 9-item version of the MAI, Jr. scale that resulted from analyses of year 1 data. Of these students, 111 were in 6th grade, 102 were in 7th grade, and 27 were in 8th grade. These students identified as female (54.2%), male (40.4%), Other/Non-Binary (3%), and 2.5% preferred not to specify a gender. These students also identified as Middle Eastern (30.7%), Hispanic, Latinx or Mexican (28%), Two or More Races (16.3%), Asian or Pacific Islander (6.1%), Black or African American (3.9%), White (2.5%), and 12.5% of students preferred not to say.
Instruments
Metacognitive Awareness Inventory, Jr. The original 18-item MAI, Jr. was utilized in year 1 of the study. To collect continuous data, the original 5-point Likert scale was replaced with a continuous 0-100 scale, with 0 representing “never true of me” and 100 representing “always true of me.” Sample items included “I know when I understand something” (KoC) and “I can make myself learn when I need to” (RoC). Scores were calculated by averaging the items within each of the two dimensions, producing two composite scores per individual. Table 2 displays the internal consistency reliability coefficients of the original MAI, Jr.
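For readers who wish to reproduce this scoring rule, a minimal sketch in Python follows; the column names and item-to-dimension groupings are placeholders for illustration only, not the actual MAI, Jr. scoring key.

```python
import pandas as pd

# Placeholder item groupings; the real assignments follow Sperling et al. (2002).
KOC_ITEMS = ["mai_q1", "mai_q2", "mai_q3"]   # hypothetical knowledge-of-cognition columns
ROC_ITEMS = ["mai_q4", "mai_q5", "mai_q6"]   # hypothetical regulation-of-cognition columns


def score_mai_jr(responses: pd.DataFrame) -> pd.DataFrame:
    """Average 0-100 item responses into one KoC and one RoC composite per respondent."""
    scores = pd.DataFrame(index=responses.index)
    scores["KoC"] = responses[KOC_ITEMS].mean(axis=1)
    scores["RoC"] = responses[ROC_ITEMS].mean(axis=1)
    return scores
```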
Procedure
All ethical considerations were followed during the conduct of this study. The university’s IRB approved the present study (Approval # H240551). In year 1, the full MAI, Jr. (Sperling et al., 2002) was administered to all study participants across grades 6-8, resulting in a final sample of 361 students with complete data, as noted in the demographics section above. Data from year 1 were screened and analyzed, as described below, resulting in a shortened, 9-item measure. In year 2, this 9-item measure was administered to students in grades 6-8, yielding the sample of 240 students with complete data described above. These data were screened and analyzed in the same way as in year 1, and the results indicated that a final 7-item scale can be employed for students in grades 6-8 in lieu of the original 18-item scale. The final 7-item scale is referred to herein as the MAI, Jr.-S. The items of the 9-item shortened version of the MAI, Jr. are found in Appendix 1, and those of the 7-item MAI, Jr.-S are found in Appendix 2.
Data Analysis
All data were tested for requisite statistical assumptions prior to analysis, including univariate and multivariate normality, collinearity, reproducibility of the correlation matrix, univariate and multivariate outliers, and the Kaiser-Meyer-Olkin (KMO) Test of Sampling Adequacy (Tabachnick & Fidell, 2013). Data were normally distributed at the univariate level (all skewness and kurtosis values were less than 2 in absolute value; George & Mallery, 2019) and the multivariate level (all standardized residuals were within 2 standard deviations of their respective means), with no collinearity present in the data (all zero-order correlations were ≤ .74). Further, outlier analyses revealed no extreme outliers at the univariate level (via box-and-whisker plots) or the multivariate level (via Mahalanobis distance).
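A rough sketch of this screening procedure is shown below for readers working outside SPSS; the .90 collinearity cutoff and the chi-square criterion for Mahalanobis distance are common rules of thumb assumed here, not values taken from this study.

```python
import numpy as np
import pandas as pd
from scipy import stats


def screen_assumptions(items: pd.DataFrame, outlier_alpha: float = 0.001) -> dict:
    """Approximate re-creation of the screening steps described above."""
    # Univariate normality: flag items with |skewness| or |kurtosis| >= 2
    skew = items.apply(stats.skew)
    kurt = items.apply(stats.kurtosis)
    nonnormal = [c for c in items.columns if abs(skew[c]) >= 2 or abs(kurt[c]) >= 2]

    # Collinearity: flag any zero-order correlation above .90 (assumed rule of thumb)
    corr = items.corr().to_numpy()
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    collinear = bool((np.abs(off_diag) > 0.90).any())

    # Multivariate outliers: squared Mahalanobis distance vs. a chi-square cutoff
    centered = (items - items.mean()).to_numpy()
    inv_cov = np.linalg.pinv(items.cov().to_numpy())
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    cutoff = stats.chi2.ppf(1 - outlier_alpha, df=items.shape[1])

    return {
        "nonnormal_items": nonnormal,
        "collinearity_flag": collinear,
        "n_multivariate_outliers": int((d2 > cutoff).sum()),
    }
```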
Descriptive statistics were computed for all measures using IBM SPSS 27. Exploratory factor analyses (EFAs) with a common factor extraction method (principal axis factoring [PAF]) and an oblique rotation (promax) were conducted for both the original and shortened MAI, Jr. We chose this approach for two reasons. First, our analyses were grounded in theoretical assumptions regarding the relations among these indicators of metacognitive awareness, justifying EFA rather than principal components analysis (PCA), which is atheoretical and purely statistical. Second, we selected PAF as our extraction method because, unlike PCA, which assumes all communalities to be 1, PAF employs the squared multiple correlation coefficient, R², to estimate communalities after extraction. Also, unlike maximum likelihood extraction, which attempts to maximize the variance of the solution and may overestimate the explained variance, PAF is a more conservative method. Finally, we employed an oblique rotation because we assumed, based on theoretical considerations, that the factors would be correlated (Schraw & Dennison, 1994). The overall model fit, the standardized factor loadings, and the variance each factor explained in its indicators were analyzed for the original MAI, Jr., the nine-item shortened MAI, Jr., and the MAI, Jr.-S.
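Because the analyses reported here were conducted in SPSS, the following open-source sketch is offered only as an approximation; it assumes the third-party Python factor_analyzer package, whose 'principal' extraction with squared-multiple-correlation starting communalities approximates, but is not identical to, SPSS's principal axis factoring.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo


def run_efa(items: pd.DataFrame, n_factors: int = 2) -> dict:
    """Two-factor EFA with principal-style extraction and promax (oblique) rotation."""
    # Factorability checks: KMO sampling adequacy and Bartlett's test of sphericity
    _, kmo_total = calculate_kmo(items)
    bartlett_chi2, bartlett_p = calculate_bartlett_sphericity(items)

    # 'principal' extraction approximates PAF; promax is the oblique rotation used in the text
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
    fa.fit(items)

    loadings = pd.DataFrame(
        fa.loadings_,
        index=items.columns,
        columns=[f"Factor{i + 1}" for i in range(n_factors)],
    )
    communalities = pd.Series(fa.get_communalities(), index=items.columns)
    return {
        "kmo": kmo_total,
        "bartlett_chi2": bartlett_chi2,
        "bartlett_p": bartlett_p,
        "loadings": loadings.round(2),
        "communalities": communalities.round(2),
    }
```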
Our modeling procedure began by including all 18 of the original items. Subsequently, the model was trimmed and the shortened measure was administered in Year 2. Finally, we used data from the end of Year 2 to make additional adjustments to the model. We retained items with standardized factor loadings ≥ 0.35 because, as a measure of effect, a loading of this size indicates that roughly 12% of the item’s variability is attributable to the latent variable (Tabachnick & Fidell, 2019).
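The variance interpretation of this cutoff follows directly from squaring the standardized loading:

$$\lambda = 0.35 \quad\Rightarrow\quad \lambda^{2} = 0.1225 \approx 12\% \text{ of item variance explained by the factor.}$$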
Data were de-identified to protect the anonymity of the participants. All participants assented to participate, and their parents granted permission for them to do so.
Results
This study was part of a multiyear research project intended to evaluate the feasibility of the CueThinkEF+ artificial intelligence (AI) platform. This AI platform was developed to enhance adolescents’ self-regulation of learning, metacognition, and executive functions. The present study employed data from the first two years of the project.
Year 1 and Year 2 Results
Descriptive and Internal Consistency Reliability. Descriptive statistics, internal consistency reliability coefficients (Cronbach’s alpha), and the zero-order correlation matrix for the original MAI, Jr. are presented in Table 2. Tables 3 and 4 display the same information for the shortened nine-item scale and for the MAI, Jr.-S in Year 2, respectively. Interestingly, participants reported lower regulation of cognition (comprising planning, information management, comprehension monitoring, debugging, and evaluation of learning) than knowledge of cognition (comprising declarative, procedural, and conditional knowledge) on both the original and shortened MAI, Jr. Further, the correlation between the two dimensions of metacognitive awareness was slightly higher for the original MAI, Jr. than for the shortened form. Finally, whereas the internal consistency reliability coefficients for RoC remained similar across both versions of the instrument, the coefficient for KoC was lower for the shorter version, which was expected given the reduction in items. Nevertheless, the reliability of both dimensions of metacognitive awareness was adequate, meeting the minimally acceptable threshold of .70 (Tabachnick & Fidell, 2019).
Table 2
Descriptive Statistics, Internal Consistency Reliability, and Zero-Order Correlations for the Original MAI, Jr.

Variable | M1 | M2 | SD1 | SD2 | α1 | α2 | 1 | 2 |
---|---|---|---|---|---|---|---|---|
1. Knowledge of Cognition (9 items) | 71.22 | 74.44 | 12.21 | 16.23 | .87 | .85 | - | .61* |
2. Regulation of Cognition (9 items) | 61.54 | 57.99 | 13.56 | 17.36 | .76 | .74 | .55* | - |
* p < .01
Note. Subscript “1” represents Year 1 statistics and subscript “2” represents Year 2 statistics. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.
N = 361
Table 3
Descriptive Statistics, Internal Consistency Reliability, and Zero-Order Correlations for the Shortened Nine-Item MAI, Jr.

Variable | M | SD | α | 1 | 2 |
---|---|---|---|---|---|
1. Knowledge of Cognition (4 items) | 75.69 | 13.18 | .74 | - | .55* |
2. Regulation of Cognition (5 items) | 58.71 | 16.57 | .76 | .63* | - |
* p < .01
Note. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.
N = 240
Table 4
Descriptive Statistics, Internal Consistency Reliability, and Zero-Order Correlations for the Seven-Item MAI, Jr.-S

Variable | M | SD | α | 1 | 2 |
---|---|---|---|---|---|
1. Knowledge of Cognition (3 items) | 69.98 | 15.52 | .74 | - | .66* |
2. Regulation of Cognition (4 items) | 68.01 | 16.54 | .75 | .62* | - |
* p < .01
Note. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.
N = 240
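The alpha coefficients reported in Tables 2-4 follow the standard variance-decomposition formula; a minimal sketch for computing them from item-level data, reusing the hypothetical item groupings from the scoring example above, is shown below.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# Example usage with the placeholder item lists defined in the scoring sketch:
# alpha_koc = cronbach_alpha(responses[KOC_ITEMS])
# alpha_roc = cronbach_alpha(responses[ROC_ITEMS])
```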
The EFA results with common factor extraction (PAF) and an oblique rotation (promax) were interpreted next. Inspection of preliminary analyses revealed no difficulties in reproducing the correlation matrix. Finally, sampling adequacy and factorability were appropriate for both the original MAI, Jr. (KMO = .88; Bartlett’s χ²(153) = 1,366.91, p < .001) and the 9-item shortened version (KMO = .80; Bartlett’s χ²(36) = 543.28, p < .001), thereby permitting the factor analyses to be conducted.
Factor Analyses
Rather than accepting the default solution of retaining factors with eigenvalues greater than 1, we hypothesized a two-factor solution, per the original validation study (Sperling et al., 2002), for both EFAs. We first report the findings for the original MAI, Jr., followed by those for the shortened versions.
Original MAI, Jr. The EFA with PAF extraction and a promax oblique rotation for the original 18-item MAI, Jr. yielded a two-factor solution that explained 34.99% of the cumulative variance. The correlation between factors was r = .67. Descriptive statistics, communalities after extraction, and standardized factor loadings for this solution are presented in Table 5.
Table 5
Descriptive Statistics, Communalities, and Standardized Factor Loadings for the Original 18-Item MAI, Jr.

Item | M | SD | Com. | RoC | KoC |
---|---|---|---|---|---|
MAI12 | 87.93 | 19.79 | .35 | .72 | |
MAI18 | 69.75 | 28.46 | .52 | .65 | |
MAI5 | 83.56 | 20.61 | .26 | .62 | |
MAI11 | 77.29 | 24.00 | .47 | .60 | |
MAI4 | 74.92 | 24.22 | .38 | .60 | |
MAI2 | 69.98 | 25.82 | .34 | .57 | |
MAI1 | 78.31 | 21.05 | .32 | .57 | |
MAI3 | 70.88 | 25.10 | .35 | .54 | |
MAI15 | 68.58 | 28.34 | .43 | .50 | |
MAI13 | 63.07 | 27.96 | .47 | | .71 |
MAI7 | 38.10 | 31.27 | .38 | | .66 |
MAI9 | 57.33 | 30.93 | .49 | | .62 |
MAI17 | 64.83 | 30.30 | .27 | | .57 |
MAI8 | 65.81 | 28.42 | .36 | | .56 |
MAI10 | 56.01 | 29.78 | .37 | | .50 |
MAI6 | 48.73 | 30.20 | .10 | | .36 |
MAI14 | 68.59 | 24.62 | .35 | | .35 |
MAI16 | 63.93 | 27.44 | .08 | | .35 |
Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.
Shortened Nine-Item MAI, Jr. The EFA with PAF extraction and a promax oblique rotation for the shortened nine-item MAI, Jr. in Year 1 yielded a two-factor solution that explained 48.73% of the cumulative variance. The correlation between factors was r = .55. Descriptive statistics, communalities after extraction, and standardized factor loadings for this solution are presented in Table 6.
Table 6
Descriptive Statistics, Communalities, and Standardized Factor Loadings for the Shortened Nine-Item MAI, Jr.

Item | M | SD | Com. | KoC | RoC |
---|---|---|---|---|---|
MAI9 | 57.33 | 30.93 | .50 | .89 | |
MAI7 | 38.10 | 31.27 | .38 | .56 | |
MAI10 | 56.01 | 29.78 | .38 | .47 | |
MAI8 | 65.81 | 28.42 | .33 | .44 | |
MAI12 | 87.93 | 19.79 | .54 | | .70 |
MAI17 | 68.58 | 28.34 | .63 | | .69 |
MAI5 | 83.56 | 20.61 | .58 | | .59 |
MAI18 | 69.75 | 28.46 | .44 | | .56 |
MAI11 | 77.29 | 24.00 | .41 | | .50 |
Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.
Results for the final seven-item version of the MAI, Jr.-S are found in Table 7. This solution also produced two factors, which explained 52.30% of the variability in the seven items. The correlation between the KoC and RoC factors was r = .63.
Table 7
Descriptive Statistics, Communalities, and Standardized Factor Loadings for the Seven-Item MAI, Jr.-S

Item | M | SD | Com. | KoC | RoC |
---|---|---|---|---|---|
MAI9 | 56.56 | 31.83 | .41 | .83 | |
MAI7 | 66.72 | 26.72 | .55 | .68 | |
MAI10 | 69.67 | 24.35 | .50 | .63 | |
MAI8 | 86.85 | 22.03 | .63 | .59 | |
MAI5 | 64.33 | 31.30 | .49 | | .86 |
MAI18 | 67.81 | 30.43 | .57 | | .64 |
MAI11 | 77.49 | 24.68 | .53 | | .56 |
Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.
Comparison of the Factor Solutions of the Original and Shortened MAI, Jr.
Inspection of both final solutions yields some interesting findings. For the sample of 361 participants recruited in Year 1, the original 18-item MAI, Jr. is not only more than twice as long as our proposed shortened version, but it also yields a degraded solution with appreciably lower explained variance. Whereas our proposed shortened nine-item version explains nearly 49% of the variability in the items, the original 18-item version devised by Sperling et al. (2002) explains only approximately 35% in the present sample. The advantage is even clearer for the seven-item version of the measure, which accounts for over 52% of the variance in the items. Comparison of the standardized factor loadings across solutions leads us to conclude that loadings are higher for our proposed shortened MAI, Jr.-S; some items in the original, longer version manifested not only lower factor loadings but also lower communalities after extraction. Further, Schraw (2009), in his chapter on the measurement of metacognitive concepts, urges researchers to adopt more parsimonious measures with fewer items to avoid survey fatigue, provided the internal consistency and factor loadings of the shorter measures are adequate. Indeed, it was Schraw and Dennison (1994) who developed the original MAI for adults, the gold standard across the world for measuring metacognitive awareness in adult populations. Thus, even though our proposed shortened MAI, Jr.-S entails a reduction in the reliability of the KoC dimension, both dimensions remain at or above the minimally acceptable level of reliability. This, along with the greater parsimony relative to the original, supports our conclusion that the proposed shortened version, the MAI, Jr.-S, is the better option, especially when combined with other measures in a longer survey.
Discussion
Our aims in the present study were to investigate whether it was feasible to reduce the original MAI, Jr., developed by Sperling and colleagues (2002), from 18 items to fewer items, and to examine the psychometric properties of the resulting shortened version, particularly within a diverse sample of middle school students. Results supported both of our hypotheses. While the original 18-item MAI, Jr. had higher internal consistency reliability coefficients (Cronbach’s alpha), especially for the KoC dimension, the shortened seven-item MAI, Jr.-S we propose not only evinced adequate internal consistency but also yielded a final factor solution that explained more of the variability in its seven items. Comparing the solutions of the longer and shorter versions, standardized factor loadings were higher for our proposed shorter version than for the original 18-item version. Moreover, the original, longer version explained substantially less variability in the items (approximately 35% vs. 52% for the seven-item version). This evidence suggests that the shorter, more parsimonious MAI, Jr.-S can be employed in lieu of the longer version, especially when researchers combine this metacognitive awareness measure with measures of other related constructs, as typically occurs in educational research. This conclusion is supported by Schraw (2009), who recommends shorter measures over longer ones to avoid survey fatigue when the shorter measures are shown to have adequate psychometric properties. In the present study, we have demonstrated exactly that: our proposed MAI, Jr.-S, with fewer than half the items of the original, can be employed in lieu of the longer original version.
Implications for Theory and Educational Practice
The primary aim of educational research is to provide accurate and precise information so that more informed decisions can be made based on accurate and precise construct measurement. The MAI, Jr.-S is not only significantly shorter than its original counterpart, but factor analytic results also showed that the shorter version fit the observed data better than the longer version. The MAI, Jr.-S not only explained more variability in its items, but its standardized factor loadings were higher, with adequate internal consistency reliability coefficients (Cronbach’s alphas). Thus, classroom teachers and researchers can administer the shorter version in a matter of 5-10 minutes, making the measurement of metacognitive awareness in children and adolescents more efficient without sacrificing construct validity or scale reliability. From a theoretical perspective, this study demonstrates the need to continually revisit, revise, and refine self-report measures to ensure they align with theoretical guidelines and expectations.
Avenues of Future Research
Future research should replicate these findings with different, robust samples of children and adolescents to ensure they are stable and generalizable. In particular, given the small sample of 8th grade students included in the present study, it is critical to ensure that the present findings generalize to larger samples of 8th grade students. Future studies should also correlate metacognitive awareness with other constructs subsumed under the theory of self-regulated learning, such as motivation and affect, to better understand how well the MAI, Jr.-S performs. Finally, the MAI, intended for adult samples, should also undergo a theoretical and practical revision. Although it is the gold standard for the measurement of metacognitive awareness across various languages and cultures (e.g., Argentina: Favieri, 2013; Brazil: Lima Filho & Bruni, 2015; Colombia: Gutierrez de Blume & Montoya Londoño, 2021; Turkey: Turan et al., 2011; United States: Young & Fry, 2008), it remains 52 items long. Future research should investigate its alignment with current theory and research and make the adjustments necessary for more precise measurement of metacognitive awareness in adult samples.
Methodological Reflections and Limitations
It is important to note the limitations of the present study. First is the reliance on purely self-report data to assess metacognitive awareness (knowledge of cognition and regulation of cognition). As with any self-report measure, there is the omnipresent threat of social desirability bias, whereby participants may overestimate their metacognitive skills. Second, given that most of the students surveyed in both years were in 6th or 7th grade, the findings are limited in their generalizability to 8th and 9th grade students.
Despite these limitations, we believe that the present validation study of a shorter MAI, Jr. provides a significant contribution to the literature on self-report measures of metacognition and self-regulated learning skills. Given how survey fatigue undermines participants’ motivation and engagement when completing self-report measures, more parsimonious measures should be privileged over longer versions whenever their psychometric properties are adequate.
The purpose of the present investigation was to examine the feasibility of piloting and validating a shortened version of the MAI, Jr. Our rigorous validation approach revealed that the 7-item MAI, Jr.-S exhibits not only appropriate construct validity but also adequate internal consistency reliability. In sum, the validation of a self-report measure of metacognition for adolescents is an important step in understanding the development of metacognitive skills during this critical period of cognitive and emotional growth. The study provides evidence of the validity and reliability of the measure and suggests that it can be a useful tool for researchers and clinicians working with adolescents. The findings highlight the importance of assessing metacognition in this population, as it has implications for academic achievement, mental health, and overall well-being. With further research and refinement, this measure may ultimately contribute to a better understanding and support of adolescent development.