Psychologia. Avances de la Disciplina

On-line version ISSN 1900-2386

Psychol. av. discip. vol.18 no.2 Bogotá July/Dec. 2024  Epub Oct 08, 2024

https://doi.org/10.21500/19002386.7034 

Research article

Metacognitive Awareness among Middle School Adolescents: Development and Validation of a Shortened Version of the MAI, Jr.

1 Georgia Southern University, Georgia, USA

2 Landmark College, Vermont, USA


Abstract

Precise and accurate measurement of metacognitive phenomena has never been more necessary than in today’s fast-paced world, in which vast quantities of information are readily available to the learner. The MAI, Jr. (see Sperling et al., 2002) is a widely used, 18-question, self-report measure of metacognitive awareness. However, this measure has not been re-examined for construct validity and internal consistency since its inception in 2002. In this manuscript we report the findings of a two-year study in which we worked to validate a shortened version of the MAI, Jr. Over the course of two years, 601 students in grades 6-8 participated in our study. In each year, data were examined using exploratory factor analysis with common factor extraction (principal axis factoring [PAF]) and oblique rotation (promax). The results of this study support the validation of a shortened, 7-item scale. We discuss why shorter measures with appropriate construct validity and internal consistency are preferred.

Key words: Metacognitive awareness; MAI, Jr.-S; Metacognition; Measurement; Factor analysis.

Introduction

The term metacognition was coined by John Flavell (1979), who described it as thinking about one’s own thinking. It has also been described as the awareness and understanding of one's own thought processes, and it has garnered significant attention in various fields such as education, psychology, and cognitive science. Understanding metacognition is essential because of its role in learning, problem-solving, decision-making, and self-regulation. Thus, metacognitive individuals are aware of their knowledge and learning experiences (Schraw & Moshman, 1995). This literature review aims to explore the theoretical foundations of metacognition, the different measurement instruments used to assess it, and the psychometric properties of these instruments.

Metacognitive Theories

There are several known theoretical frameworks of metacognition (e.g., Flavell, 1979; Gutierrez et al., 2016; Nelson & Narens, 1990; Schraw & Moshman, 1995). Flavell (1979) proposed a widely accepted model of metacognition, distinguishing between metacognitive knowledge (knowledge about one’s cognitive processes and strategies) and metacognitive regulation (the ability to control and adapt cognitive processes). Schraw and Moshman (1995) refined the previous theoretical frameworks of metacognition and integrated metacognitive knowledge and metacognitive regulation in their framework. They divided metacognition into knowledge of cognition and regulation of cognition. Specifically, knowledge of cognition describes learners’ declarative, procedural, and conditional knowledge about their learning and cognitive processes, and regulation of cognition describes learners’ planning, monitoring, and evaluation of their learning. Schraw and Moshman posited that knowledge of cognition and regulation of cognition share a dynamic and cyclical relation to assist learners’ metacognition and learning.

Overall, metacognition can be understood through two lenses. First, from the information processing lens, metacognition is a higher-level process that requires individuals’ deliberate and effortful examination and evaluation of their cognitive processes and incoming and exiting information. Second, from the structural lens, metacognition is bifurcated into two key components: knowledge of cognition and regulation of cognition (Schraw & Moshman, 1995). Knowledge of cognition and regulation of cognition interact with each other to support the learner’s metacognition. Research focusing on this lens often investigates the application of metacognition across various academic domains and teaching of metacognitive strategies (e.g., Händel et al., 2020; Jaeger & Wiley, 2014). A recent meta-analysis on the effect of learning strategy interventions on metacognitive monitoring accuracy indicated a moderate enhancing effect of learning strategy interventions on metacognitive monitoring accuracy (Gutierrez de Blume, 2022).

Nelson and Narens (1990) argued that metacognition can be understood as a cyclical, reciprocal interaction between monitoring and control processes that assist the learner in transforming information from the environment (what they dubbed the object level) into mental representations in their minds (what they dubbed the meta level), which they can then use to make decisions about how best to act and behave. The criticisms of this model, however, are that it treated metacognition as unidimensional and that it allowed only the gamma coefficient as the metric of metacognitive monitoring. Furthermore, while metacognitive monitoring error was acknowledged as existing, this model did not have a way to empirically capture said error (Schraw et al., 2014).

Finally, Gutierrez et al. (2016) developed and empirically tested a metacognitive monitoring framework, the general monitoring model, which incorporates both the domain-specific and domain-general nature of metacognition. Their results supported the conclusion that, while metacognitive monitoring accuracy was domain specific, metacognitive monitoring errors (i.e., underconfidence, expressed as correct performance judged as incorrect, and overconfidence, expressed as incorrect performance judged as correct) were domain general, and that these two aspects of error are inversely related. Additionally, the second-order factors of accuracy and error were subsumed by a third-order general monitoring factor.

Metacognitive Assessments

Metacognitive assessments can be defined as the evaluation of learners’ knowledge about and regulation of cognition (Ozturk, 2017). Various approaches have been explored over time to assess individuals’ metacognition, such as self-reports, interviews, observation, and accuracy ratings. Among these, the most common approach to assessing metacognition is self-report surveys (Dinsmore et al., 2008).

Adult Learners

A variety of self-report instruments have been developed to measure adult learners’ metacognition, such as the MAI (Schraw & Dennison, 1994), which is one of the most widely adopted metacognitive assessments and has been administered in a range of settings (e.g., everyday decision-making: Lee et al., 2009; teaching: Balcikanli, 2011; learning: Young & Fry, 2008). In their original study, Schraw and Dennison (1994) determined a two-factor structure of the measure, corresponding to the two components of metacognition (i.e., knowledge of cognition and regulation of cognition). Moreover, students’ scores on the MAI showed appropriate internal consistency, with Cronbach’s alpha reaching .91 for each factor and .94 for all items included. The MAI has subsequently been translated into different languages and administered to students in various countries (e.g., Argentina: Favieri, 2013; Brazil: Lima Filho & Bruni, 2015; Colombia: Gutierrez de Blume & Montoya Londoño, 2021; Turkey: Turan et al., 2009; United States: Young & Fry, 2008).

Children and Adolescents

Research on metacognitive phenomena in children and adolescents is not as robust as that of adults, primarily due to difficulty in creating developmentally appropriate measurements, especially among young children. Nevertheless, the growing body of research on metacognition in children is almost exclusively devoted to the investigation of executive functions (e.g., attention, inhibitory control, working memory, visual-spatial reasoning, etc.), such as the work of Roebers and her colleagues (Roebers, 2017; Spiess et al., 2016). While objective in nature and useful for diagnostic purposes, this method of assessing metacognitive-related skills is time consuming because it requires individual assessment. This situation led some metacognitive researchers to contemplate the possibility of developing faster methods of measurement at larger scales. It was not until Sperling et al. (2002) developed, piloted, and validated the MAI, Jr., adapted from the original MAI for adults, that a self-report measure of metacognition was available. The MAI, Jr. contained two versions: a 12-item scale developed and validated for children in grades 3-5, and an 18-item scale developed and validated for adolescents in grades 6-9. Exploratory factor analyses conducted on both versions of the MAI, Jr. suggested a two-factor model of knowledge of cognition and regulation of cognition (Sperling et al., 2002).

In sum, metacognition plays a crucial role in various cognitive processes and has significant implications for learning, problem-solving, and decision-making. Measurement instruments such as the MAI and MAI, Jr., provide valuable tools for assessing individuals' metacognitive abilities. While these instruments have shown promise in terms of reliability and validity, further research is needed to examine whether their reliability and validity still hold today. Understanding metacognition and its measurement is essential for advancing research and practice in fields such as education, psychology, and cognitive science. Notably, although the original MAI, Jr., was shorter than the adult version, Sperling et al. (2002) never examined whether a version with fewer than 18 items would appropriately measure metacognitive awareness in children and adolescents, especially given children's shorter attention spans (Roebers, 2017). Thus, the present study sought to investigate the feasibility of a shorter version.

The Present Study

Predicated on the literature we surveyed, the present investigation sought to address the following research objectives and their associated hypotheses.

1. Examine whether the Metacognitive Awareness Inventory, Jr. version (MAI, Jr.), can be reduced in length from 18 items to fewer items to mitigate survey fatigue, especially among its intended population of children and adolescents.

Hypothesis: We expected that the original MAI, Jr. could, in fact, be significantly reduced regarding number of items.

2. Investigate the internal consistency reliability coefficients and the construct validity of the MAI, Jr.-S (shortened version).

Hypothesis: We hypothesized that our proposed shortened version, the MAI, Jr.-S, would have not only adequate internal consistency reliability, but also sound construct validity, when compared to its original 18-item counterpart.

Method

This study represents an empirical investigation that employs quantitative construct validation procedures.

Participants and Sampling

District. The present study was conducted over two years and employed a convenience sampling procedure as a psychometric validation study to confirm the results of the original validation study (Sperling et al., 2002). Participants in both years were drawn from a large suburban school district located on the West Coast. District enrollment was around 18,000 students per year spread over approximately 30 schools. Students within the district identified as female (48%), male (52%), White (43.6%), Hispanic or Latino (37.6%), Black (6.4%), Asian (3.7%), and two or more races (2.6%) (Ed-data, 2022). Participants’ ages ranged from 11 to 13 years (M = 11.90; SD = 0.63). It is worth noting that the number of students who identified as White includes students identifying as Middle Eastern. Given that the district contained a large population of students identifying as Middle Eastern, students’ self-reported demographics are also included below.

Schools. Two middle schools within the district were identified by district personnel for participation in the study in year 1. These schools are referred to herein as Beach View Middle School and Ocean Side Middle School. Two additional schools, Pacific Middle School and Coral Middle School, joined the study in year 2. Demographic data for each of these schools are reported in Table 1 below (Ed-data, 2022).

Table 1 Demographic Characteristics of the Sample of Participants 

School Approximate Enrollment Female Male White Hispanic or Latino Black Asian Two or More Races
Beach View 950 47.3% 52.7% 37% 40% 13% 4.5% 1.1%
Ocean Side 700 49% 51% 31% 50% 10% 2.2% 2.5%
Pacific 500 44.3% 55.7% 38.7% 38.7% 12.8% 2.9% 2.5%
Coral 800 45.6% 54.4% 50.4% 35.1% 6.2% 2.7% 1.5%

Year 1 Demographic Data. In year 1 of the study, 361 students had complete data on the MAI, Jr. All of the students were drawn from Beach View or Ocean Side Middle Schools. Of these students, 194 were in 6th grade, 144 were in 7th grade, and 23 were in 8th grade. These students identified as female (46%), male (50%), Other/Non-Binary (3.2%), and less than 1% preferred not to specify a gender. These students also identified as Hispanic/Latinx (29.8%), White (11.3%), African American/Black (11.3%), Two or More Races (11.3%), Middle Eastern (7.3%), Asian (5.6%), Other (8.9%), and 13.7% of students preferred not to say.

Year 2 Demographic Data. In year 2 of the study, 240 total students across the four participating schools had complete data when administered the shortened 9-item version of the MAI, Jr. scale that resulted from analyses of year 1 data. Of these students, 111 were in 6th grade, 102 were in 7th grade, and 27 were in 8th grade. These students identified as female (54.2%), male (40.4%), Other/Non-Binary (3%), and 2.5% preferred not to specify a gender. These students also identified as Middle Eastern (30.7%), Hispanic, Latinx or Mexican (28%), Two or More Races (16.3%), Asian or Pacific Islander (6.1%), Black or African American (3.9%), White (2.5%), and 12.5% of students preferred not to say.

Instruments

Metacognitive Awareness Inventory, Jr. The original 18-item MAI, Jr. was utilized in year 1 of the study. To collect more continuous data, the original 5-point Likert scale was replaced with a 0-100 scale, with 0 representing “never true of me” and 100 representing “always true of me”. Sample items included “I know when I understand something” (KoC) and “I can make myself learn when I need to” (RoC). Scores were calculated by taking the average of the items of each of the two dimensions, thus producing two composite scores per individual. Table 2 displays the internal consistency reliability coefficients of the original MAI, Jr.
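To make the scoring concrete, the following is a minimal sketch in Python (not the authors’ materials) of how the two composite scores could be computed from item responses on the 0-100 scale; the column names and the item-to-dimension mapping shown are placeholders for illustration only.

```python
# Minimal scoring sketch (illustrative only; not the authors' code).
# Each dimension score is the mean of that dimension's items on the 0-100 scale.
import pandas as pd

# Hypothetical item-to-dimension mapping; the actual assignment of the 18 items
# to KoC and RoC follows Sperling et al. (2002).
KOC_ITEMS = ["MAI1", "MAI2", "MAI5"]    # placeholder subset of KoC items
ROC_ITEMS = ["MAI7", "MAI9", "MAI10"]   # placeholder subset of RoC items

def composite_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one KoC and one RoC composite (0-100) per respondent."""
    return pd.DataFrame({
        "KoC": responses[KOC_ITEMS].mean(axis=1),
        "RoC": responses[ROC_ITEMS].mean(axis=1),
    })
```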

Procedure

All ethical considerations were followed during the conduct of this study. The university’s IRB approved the present study (Approval # H240551). In year 1, the full MAI, Jr. (Sperling et al., 2002) was administered to all study participants across grades 6-8, resulting in a final sample of 361 students, as noted in the demographics section above. Data from year 1 were screened and analyzed, as described below, resulting in a shortened, 9-item measure. In year 2, this 9-item measure was administered to 240 students in grades 6-8. These data were screened and analyzed in the same way as in year 1, with the data suggesting that the final 7-item scale can be employed for students in grades 6-8 in lieu of the original 18-item scale. The final 7-item scale is referred to herein as the MAI, Jr.-S. The items of the 9-item shortened version of the MAI, Jr. are found in Appendix 1, and the 7-item MAI, Jr.-S is found in Appendix 2.

Data Analysis

All data were tested for requisite statistical assumptions prior to data analysis, including univariate and multivariate normality, collinearity, reproducibility of the correlation matrix, univariate and multivariate outliers, and the Kaiser-Meyer-Olkin (KMO) Test of Sampling Adequacy (Tabachnick & Fidell, 2019). Data were normally distributed at the univariate (all skewness and kurtosis values were less than the absolute value of 2; George & Mallery, 2019) and multivariate levels (all standardized residuals were within 2 standard deviations of their respective means), with no collinearity present in the data (all zero-order correlations were ≤ 0.74). Further, outlier analyses revealed no extreme outliers at the univariate (via box-and-whisker plots) or multivariate level (via Mahalanobis distance).
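As an illustration of the screening steps described above, the following Python sketch (not the authors’ SPSS syntax) checks univariate skewness and kurtosis, the size of the zero-order correlations, and multivariate outliers via Mahalanobis distance; the data frame of item responses and its column names are assumptions for illustration.

```python
# Illustrative assumption-screening sketch (the study itself used IBM SPSS 27).
import numpy as np
import pandas as pd
from scipy import stats

def screen_assumptions(items: pd.DataFrame) -> None:
    # Univariate normality: skewness and kurtosis should fall within |2|.
    print("skewness:\n", items.skew().round(2))
    print("excess kurtosis:\n", items.kurtosis().round(2))

    # Collinearity: flag the largest off-diagonal zero-order correlation.
    corr = items.corr().abs().values
    np.fill_diagonal(corr, 0)
    print("largest zero-order correlation:", round(corr.max(), 2))

    # Multivariate outliers: squared Mahalanobis distance vs. a chi-square cutoff.
    centered = items - items.mean()
    inv_cov = np.linalg.inv(np.cov(items.values.T))
    d2 = np.einsum("ij,jk,ik->i", centered.values, inv_cov, centered.values)
    cutoff = stats.chi2.ppf(0.999, df=items.shape[1])
    print("potential multivariate outliers:", int((d2 > cutoff).sum()))
```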

Descriptive statistics were computed for all measures using IBM SPSS 27 software. Exploratory factor analyses (EFAs) with common factor extraction (principal axis factoring [PAF]) and oblique rotation (promax) were conducted for both the original and shortened MAI, Jr. We chose this approach for two reasons. First, our analyses were grounded in theoretical assumptions regarding the relations among these indicators of metacognitive awareness, justifying EFA rather than principal components analysis (PCA), which is atheoretical and purely statistical. Second, we selected PAF as our extraction method because, unlike PCA, which assumes all communalities to be 1, PAF employs the squared multiple correlation, R², to estimate communalities after extraction. Also, unlike maximum likelihood extraction, which attempts to maximize the variance of the solution and may overestimate the explained variance, PAF is a more conservative solution. Finally, we employed an oblique rotation because we assumed, based on theoretical considerations, that the factors would, in fact, be correlated (Schraw & Dennison, 1994). The overall model fit, the standardized factor loadings, and the explained variance each factor contributed to its indicators were analyzed for the original MAI, Jr., the nine-item shortened MAI, Jr., and the MAI, Jr.-S.
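For readers working outside SPSS, the analysis can be approximated as follows; this is a minimal sketch using the Python factor_analyzer package, where method="principal" stands in for principal axis factoring and rotation="promax" supplies the oblique rotation. The data frame of item responses is an assumption for illustration.

```python
# Illustrative EFA sketch approximating the reported analysis (SPSS was used in the study).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(items: pd.DataFrame, n_factors: int = 2) -> pd.DataFrame:
    chi2, p = calculate_bartlett_sphericity(items)   # Bartlett's test of sphericity
    _, kmo_total = calculate_kmo(items)              # overall sampling adequacy
    print(f"Bartlett chi2 = {chi2:.2f}, p = {p:.4f}; KMO = {kmo_total:.2f}")

    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
    fa.fit(items)

    # Standardized pattern loadings and cumulative variance explained.
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"Factor{i + 1}" for i in range(n_factors)])
    _, _, cumulative = fa.get_factor_variance()
    print("cumulative proportion of variance:", cumulative.round(3))
    return loadings
```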

Our modeling procedure began by including all 18 of the original items. Subsequently, the model was trimmed and administered in Year 2. Finally, we used data from the end of Year 2 to make additional adjustments to the model. We retained items with standardized factor loadings ≥ 0.35 because, as a measure of effect, this indicates that roughly 12% of the item’s variability is attributable to the latent variable (Tabachnick & Fidell, 2019).
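As a quick check of that cutoff (our own arithmetic, not a quotation), squaring a standardized loading gives the approximate proportion of item variance attributable to the factor:

$$\lambda^{2} = 0.35^{2} = 0.1225 \approx 12\%$$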

Data were de-identified to protect participants’ anonymity, and all participants provided assent, along with parental permission, to participate.

Results

This study was part of a multiyear research project intended to evaluate the feasibility of the CueThinkEF+ artificial intelligence (AI) platform. This AI platform was developed to enhance adolescents’ self-regulation of learning, metacognition, and executive functions. The present study employed data from the first two years of the project.

Year 1 and Year 2 Results

Descriptive and Internal Consistency Reliability. Descriptive statistics, internal consistency reliability coefficients (Cronbach’s alpha), and the zero-order correlation matrix for the original MAI, Jr. are presented in Table 2. Table 3 and Table 4 display the same information for the shortened nine-item scale and for the MAI, Jr.-S in Year 2, respectively. Interestingly, the sample of participants reported lower regulation of cognition (comprised of planning, information management, comprehension monitoring, debugging, and evaluation of learning) than knowledge of cognition (comprised of declarative, procedural, and conditional knowledge) for both the original and shortened MAI, Jr. Further, the correlation between the two dimensions of metacognitive awareness was slightly higher for the original MAI, Jr. than for the shortened form. Finally, whereas the internal consistency reliability coefficients remained similar for RoC across both versions of the instrument, the coefficient was lower for KoC in the shorter version, which was expected due to the reduction in items. Nevertheless, as expected, the reliability of both dimensions of metacognitive awareness was adequate, meeting the minimally acceptable threshold of .70 (Tabachnick & Fidell, 2019).
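For transparency about the reliability index, the following is a minimal Python sketch (not the authors’ SPSS procedure) of the Cronbach’s alpha computation used to judge each dimension against the .70 threshold; the data frame of item responses and its column names are assumptions for illustration.

```python
# Illustrative Cronbach's alpha sketch:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage with placeholder column names:
# alpha_roc = cronbach_alpha(responses[["MAI7", "MAI8", "MAI9", "MAI10"]])
```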

Table 2 Descriptive Statistics and Zero-Order Correlation Matrix for the Two Dimensions of Metacognitive Awareness for the Original 18-Item MAI, Jr. 

Variable M1 M2 SD1 SD2 α1 α2 1 2
1. Knowledge of Cognition (9 items) 71.22 74.44 12.21 16.23 .87 .85 - .61*
2. Regulation of Cognition (9 items) 61.54 57.99 13.56 17.36 .76 .74 .55* -

* p < .01

Note. Subscript “1” represents Year 1 statistics and subscript “2” represents Year 2 statistics. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.

N = 361

Table 3 Descriptive Statistics and Zero-Order Correlation Matrix for the Two Dimensions of Metacognitive Awareness for the Shortened 9-Item MAI, Jr.-S 

Variable M SD α 1 2
1. Knowledge of Cognition (4 items) 75.69 13.18 .74 - .55*
2. Regulation of Cognition (5 items) 58.71 16.57 .76 .63* -

* p < .01

Note. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.

N = 240

Table 4 Descriptive Statistics and Zero-Order Correlation Matrix for the Two Dimensions of Metacognitive Awareness for the Shortened 7-Item MAI, Jr.-S 

Variable M SD α 1 2
1. Knowledge of Cognition (3 items) 69.98 15.52 .74 - .66*
2. Regulation of Cognition (4 items) 68.01 16.54 .75 .62* -

* p < .01

Note. The correlation above the diagonal is for Year 1 and that below the diagonal is for Year 2.

N = 240

The EFA results with common factor extraction (PAF) and an oblique rotation (promax) were interpreted next. Inspection of preliminary analyses revealed no difficulties in reproducing the correlation matrix. Finally, the KMO Test of Sampling Adequacy was appropriate for both the original MAI, Jr. (KMO = .88, χ2 (153) = 1,366.91, p < .001) and the 9-item shortened version (KMO = .80, χ2 (36) = 543.28, p < .001), thereby permitting the factor analyses to be conducted.

Factor Analyses

Rather than allowing the default solution of factors with eigenvalues greater than 1, we instead hypothesized a two-factor solution, per the original validation study (Sperling et al., 2002), for both EFAs. We first report the findings of the original MAI, Jr., followed by the shortened MAI, Jr.-S.

Original MAI, Jr. The EFA with PAF common extraction and promax oblique rotation for the original 18-item MAI, Jr. yielded a two-factor solution with 18 items that explained 34.99% of the cumulative variance. The correlation between factors was r = .67. Descriptive statistics, communalities after extraction, and standardized factor loadings for this solution are presented in Table 5.

Table 5 Descriptive Statistics, Communalities, and Standardized Factor Loadings of the Final Model for the Original 18-Item MAI, Jr. 

Item M SD Com. RoC KoC
MAI12 87.93 19.79 .35 .72
MAI18 69.75 28.46 .52 .65
MAI5 83.56 20.61 .26 .62
MAI11 77.29 24.00 .47 .60
MAI4 74.92 24.22 .38 .60
MAI2 69.98 25.82 .34 .57
MAI1 78.31 21.05 .32 .57
MAI3 70.88 25.10 .35 .54
MAI15 68.58 28.34 .43 .50
MAI13 63.07 27.96 .47 .71
MAI7 38.10 31.27 .38 .66
MAI9 57.33 30.93 .49 .62
MAI17 64.83 30.30 .27 .57
MAI8 65.81 28.42 .36 .56
MAI10 56.01 29.78 .37 .50
MAI6 48.73 30.20 .10 .36
MAI14 68.59 24.62 .35 .35
MAI16 63.93 27.44 .08 .35

Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.

MAI, Jr.-S. The EFA with a PAF common extraction and a promax oblique rotation for the shortened nine-item MAI, Jr.-S for Year 1 yielded a two-factor solution with nine items which explained 48.73% of cumulative variance. The correlation among factors was r = .55. Descriptive statistics, communalities after extraction, and standardized factor loadings for this solution are presented in Table 6.

Table 6 Descriptive Statistics, Communalities, and Standardized Factor Loadings of the Final Model for the Shortened Nine-Item MAI, Jr. for the end of Year 1 

Item M SD Com. KoC RoC
MAI9 57.33 30.93 .50 .89
MAI7 38.10 31.27 .38 .56
MAI10 56.01 29.78 .38 .47
MAI8 65.81 28.42 .33 .44
MAI12 87.93 19.79 .54 .70
MAI17 68.58 28.34 .63 .69
MAI5 83.56 20.61 .58 .59
MAI18 69.75 28.46 .44 .56
MAI11 77.29 24.00 .41 .50

Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.

Results of the shorter, seven-item final version of the MAI, Jr.-S can be found in Table 7. This solution also produced two factors that explained 52.30% of the variability in the seven items. The correlation, Pearson’s r, between the KoC and RoC factors was r = .63.

Table 7 Descriptive Statistics, Communalities, and Standardized Factor Loadings of the Final Model for the Shortened Seven-Item MAI, Jr.-S for the End of Year 2 and Beyond 

Item M SD Com. KoC RoC
MAI9 56.56 31.83 .41 .83
MAI7 66.72 26.72 .55 .68
MAI10 69.67 24.35 .50 .63
MAI8 86.85 22.03 .63 .59
MAI5 64.33 31.30 .49 .86
MAI18 67.81 30.43 .57 .64
MAI11 77.49 24.68 .53 .56

Key: Com. = Communality after extraction; KoC = Knowledge of cognition factor, subsuming declarative, procedural, and conditional metacognitive knowledge; RoC = Regulation of cognition factor, subsuming planning, information management, comprehension monitoring, debugging, and evaluation of learning.

Comparison of the Factor Solutions of the Original and Shortened MAI, Jr.

Inspection of both final solutions yields some interesting findings. For the sample of 361 participants recruited for Year 1 of the present study, the original 18-item MAI, Jr. is not only more than twice as long as our proposed shortened version, but it also leads to a degraded solution with appreciably lower explained variance. Whereas our proposed shortened nine-item version explains nearly 49% of the variability in the items, the original 18-item version devised by Sperling et al. (2002) explains only approximately 35% in the present sample. The advantage is even more pronounced for the shorter seven-item version of the measure, which accounts for over 52% of the variance in the items. Comparison of the standardized factor loadings for the original and shortened versions leads us to conclude that factor loadings are higher for our proposed shortened MAI, Jr.-S, as some items in the original, longer version manifested not only lower factor loadings but also lower communalities after extraction. Further, Schraw (2009), in his chapter on measurement of metacognitive concepts, urges researchers to adopt more parsimonious measures with fewer items to avoid survey fatigue if the internal consistency and factor loadings of the shorter measures are adequate. Indeed, it was Schraw and Dennison (1994) who developed the original MAI for adults, the gold standard across the world for measuring metacognitive awareness in adult populations. Thus, even though our proposed shortened MAI, Jr.-S leads to a reduction in the reliability of the KoC dimension, both dimensions remain at or above the minimally acceptable level of reliability. This, along with the greater parsimony relative to its original counterpart, supports our conclusion that our proposed shortened version, the MAI, Jr.-S, is the better option, especially when combined with other measures in a longer survey.

Discussion

Our aims in the present study were to investigate whether the number of items in the original MAI, Jr., developed by Sperling and colleagues (2002), could feasibly be reduced from 18 to fewer items. Additionally, we sought to examine the psychometric properties of this shortened version of the MAI, Jr., particularly within a diverse sample of middle school students. Results supported both of our hypotheses: while the original 18-item MAI, Jr. had higher internal consistency reliability coefficients (Cronbach’s alpha), especially for the KoC dimension, the shortened seven-item MAI, Jr.-S we propose not only evinced adequate internal consistency, but its final factor solution also explained more of the variability in the seven items. Comparing the two solutions, standardized factor loadings were higher for our proposed shorter version than for the original 18-item version. Moreover, the original, longer version explained a little over 13% less variability in the items compared to the shorter seven-item version. This evidence suggests that the shorter, more parsimonious version, the MAI, Jr.-S, can be employed in lieu of the longer version, especially when researchers are combining this metacognitive awareness measure with measures of other related constructs, as typically occurs in educational research. This conclusion is supported by the suggestions of Schraw (2009), who recommends shorter measures over longer ones to avoid survey fatigue in cases where the shorter measures are shown to have adequate psychometric properties. In the present study, we have demonstrated exactly that: our proposed MAI, Jr.-S, with fewer than half the items of the original, can be employed in lieu of the longer original version.

Implications for Theory and Educational Practice

The primary aim of educational research is to provide accurate and precise information so that more informed decisions can be made based on accurate and precise construct measurement. The MAI, Jr.-S is not only significantly shorter than its original counterpart, but factor analytic results also showed that the shorter version fit the observed data better than the longer version. The MAI, Jr.-S not only explained more variability in its items, but the standardized factor loadings were higher, with adequate internal consistency reliability coefficients (Cronbach’s alphas). Thus, classroom teachers and researchers can easily administer the shorter version in a matter of 5-10 minutes, making the measurement of metacognitive awareness in children and adolescents more efficient without sacrificing construct validity or scale reliability. From a theoretical perspective, this study demonstrates the need to continually revisit, revise, and refine self-report measures to ensure they align with theoretical guidelines and expectations.

Avenues of Future Research

Future research should replicate these findings with different, robust samples of children and adolescents to ensure they are stable and generalizable. In particular, given the small sample of 8th grade students included in the present study, it is critical to ensure that the present findings generalize to larger samples of 8th grade students. Future studies should also correlate metacognitive awareness constructs with other constructs subsumed under the theory of self-regulated learning, such as motivation and affect, to better understand how well the MAI, Jr.-S performs. Finally, the MAI, intended for adult samples, should also undergo a theoretical and practical revision. Although it is the gold standard for the measurement of metacognitive awareness in various languages and cultures (e.g., Argentina: Favieri, 2013; Brazil: Lima Filho & Bruni, 2015; Colombia: Gutierrez de Blume & Montoya Londoño, 2021; Turkey: Turan et al., 2009; United States: Young & Fry, 2008), it remains 52 items in length. Future research should investigate its alignment with current theory and research and make the adjustments necessary for more precise measurement of metacognitive awareness in adult samples.

Methodological Reflections and Limitations

It is important to note the limitations of the present study. First is the reliance on purely self-report data to assess metacognitive awareness, that is, knowledge of cognition and regulation of cognition. As with any self-report measure, there is the omnipresent threat of social desirability bias, in which participants may overestimate their metacognitive skills. Second, given that most of the students surveyed in both years were in 6th or 7th grade, the findings are limited in their generalizability to 8th and 9th grade students.

Despite these limitations, we believe that the present validation study of a shorter MAI, Jr. provides a significant contribution to the literature on self-report measures of metacognitive and self-regulated learning skills. Given how survey fatigue influences participants’ motivation and engagement when completing self-report measures, more parsimonious measures should always be privileged over longer versions.

The purpose of the present investigation was to examine the feasibility of piloting and validating a shortened version of the MAI, Jr. Our rigorous validation approach revealed that the 7-item MAI, Jr.-S exhibits not only appropriate construct validity but also adequate internal consistency reliability. In sum, the validation of a self-report measure of metacognition in adolescents is an important step in understanding the development of metacognitive skills during this critical period of cognitive and emotional growth. The study provides evidence of the validity and reliability of the measure and suggests that it can be a useful tool for researchers and clinicians working with adolescents. The findings highlight the importance of assessing metacognition in this population, as it has implications for academic achievement, mental health, and overall well-being. With further research and refinement, this measure may ultimately contribute to a better understanding and support of adolescent development.

References

Balcikanli, C. (2011). Metacognitive awareness inventory for teachers (MAIT). Retrieved from http://repositorio.ual.es/bitstream/handle/10835/733/Art_25_563.pdf?sequence=1

Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20(4), 391-409. https://doi.org/10.1007/s10648-008-9083-6

Favieri, A. G. (2013). General metacognitive strategies inventory (GMSI) and the metacognitive integrals strategies inventory (MISI). Electronic Journal of Research in Educational Psychology, 11(3), 831-850. http://doi.org/10.14204/ejrep.31.13067

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906-911. https://doi.org/10.1037/0003-066X.34.10.906

George, D., & Mallery, P. (2019). IBM SPSS statistics 26 step by step. Taylor & Francis. https://doi.org/10.4324/9780429056765

Gutierrez, A. P., Schraw, G., Kuch, F., & Richmond, A. S. (2016). A two-process model of metacognitive monitoring: Evidence for distinct accuracy and error factors. Learning and Instruction, 44, 1-10. https://doi.org/10.1016/j.learninstruc.2016.02.006

Gutierrez de Blume, A. P. (2022). Calibrating calibration: A meta-analysis of learning strategy instruction interventions to improve metacognitive monitoring accuracy. Journal of Educational Psychology, 114(4), 681-700. https://doi.org/10.1037/edu0000674

Gutierrez de Blume, A. P., & Montoya Londoño, D. M. (2021). Validation and examination of the factor structure of the Metacognitive Awareness Inventory (MAI) in Colombian university students. Psicogente, 24(46), 1-29. https://doi.org/10.17081/psico.24.46.4881

Händel, M., Harder, B., & Dresel, M. (2020). Enhanced monitoring accuracy and test performance: Incremental effects of judgment training over and above repeated testing. Learning and Instruction, 65, 1-9. https://doi.org/10.1016/j.learninstruc.2019.101245

Jaeger, A. J., & Wiley, J. (2014). Do illustrations help or harm metacomprehension accuracy? Learning and Instruction, 34, 58-73. https://doi.org/10.1016/j.learninstruc.2014.08.002

Lee, C. B., Teo, T., & Bergin, D. (2009). Children’s use of metacognition in solving everyday problems: An initial study from an Asian context. The Australian Educational Researcher, 36(3), 89-102. https://link.springer.com/content/pdf/10.1007/BF03216907.pdf

Lima Filho, R. N., & Bruni, A. L. (2015). Metacognitive awareness inventory: Translation and validation from a confirmatory analysis. Psicologia: Ciência e Profissão, 35(4), 1275-1293. https://doi.org/10.1590/1982-3703002292013

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and some new findings. In G. H. Bower (Ed.), The psychology of learning and motivation, Vol. 26 (pp. 125-173). Academic Press.

Ozturk, N. (2017). Assessing metacognition: Theory and practices. International Journal of Assessment Tools in Education, 4(2), 134-148. https://doi.org/10.21449/ijate.298299

Roebers, C. M. (2017). Executive function and metacognition: Towards a unifying framework of cognitive self-regulation. Developmental Review, 45, 31-51. https://doi.org/10.1016/j.dr.2017.04.001

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475. https://doi.org/10.1006/ceps.1994.1033

Schraw, G., Kuch, F., Gutierrez, A. P., & Richmond, A. (2014). Exploring a three-level model of calibration accuracy. Journal of Educational Psychology, 106, 1192-1202. https://doi.org/10.1037/a0036653

Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351-371. https://doi.org/10.1007/BF02212307

Spiess, M. A., Meier, B., & Roebers, C. M. (2016). Development and longitudinal relationships between children’s executive functions, prospective memory, and metacognition. Cognitive Development, 38, 99-113. https://doi.org/10.1016/j.cogdev.2016.02.003

Sperling, R. A., Howard, B. C., Miller, L. A., & Murphy, C. (2002). Measures of children's knowledge and regulation of cognition. Contemporary Educational Psychology, 27(1), 51-79. https://doi.org/10.1006/ceps.2001.1091

Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson. https://www.pearson.com/us/higher-education/program/Tabachnick-Using-Multivariate-Statistics-7th-Edition/PGM2458367.html

Turan, S., Demirel, O., & Sayek, I. (2009). Metacognitive awareness and self-regulated learning skills of medical students in different medical curricula. Medical Teacher, 31(10), e477-e483. https://doi.org/10.3109/01421590903193521

Young, A., & Fry, J. D. (2008). Metacognitive awareness and academic achievement in college students. Journal of the Scholarship of Teaching and Learning, 8(2), 1-10. https://files.eric.ed.gov/fulltext/EJ854832.pdf

To cite this article: Gutiérrez de Blume, A. P., Rodes, S., & Bryck, R. L. (2024). Metacognitive Awareness among Middle School Adolescents: Development and Validation of a Shortened Version of the MAI, Jr. Psychologia. Avances de la Disciplina, 18(2), 55-66. https://doi.org/10.21500/19002386.7034

Appendix 1

9 Items of the MAI, Jr. at the end of Year 1

5. I learn best when I already know something about the topic.

7. When I am done with my schoolwork, I ask myself if I learned what I wanted to learn.

8. I think of several ways to solve a problem and then choose the best one.

9. I think about what I need to learn before I start working.

10. I ask myself how well I am doing while I am learning something new.

11. I really pay attention to important information.

12. I learn more when I am interested in the topic.

17. I ask myself if there was an easier way to do things after I finish a task.

18. I decide what I need to get done before I start a task.

Appendix 2

Final Items of the MAI, Jr.-S, of Year 2 and Beyond

5. I learn best when I already know something about the topic.

7. When I am done with my schoolwork, I ask myself if I learned what I wanted to learn.

8. I think of several ways to solve a problem and then choose the best one.

9. I think about what I need to learn before I start working.

10. I ask myself how well I am doing while I am learning something new.

11. I really pay attention to important information.

18. I decide what I need to get done before I start a task.

Received: April 18, 2024; Accepted: May 17, 2024

This is an open-access article distributed under the terms of the Creative Commons Attribution License.