0.a. Goal

Goal 4: Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all

0.b. Target

Target 4.1: By 2030, ensure that all girls and boys complete free, equitable and quality primary and secondary education leading to relevant and effective learning outcomes

0.c. Indicator

Indicator 4.1.1: Proportion of children and young people (a) in grades 2/3; (b) at the end of primary; and (c) at the end of lower secondary achieving at least a minimum proficiency level in (i) reading and (ii) mathematics, by sex

0.d. Series

Not applicable

0.e. Metadata update

2022-03-31

0.g. International organisation(s) responsible for global monitoring

UNESCO Institute for Statistics (UIS)

1.a. Organisation

UNESCO Institute for Statistics (UIS)

2.a. Definition and concepts

Definition:

Percentage of children and young people achieving at least a minimum proficiency level in (i) reading and (ii) mathematics during primary education (Grade 2 or 3), at the end of primary education, and at the end of lower secondary education. The minimum proficiency level will be measured relative to new common reading and mathematics scales currently in development.

Concepts:

Minimum proficiency level (MPL) is the benchmark of basic knowledge in a domain (mathematics, reading, etc.) measured through learning assessments. In September 2018, an agreement was reached on a verbal definition of the global minimum proficiency level of reference for each of the areas and domains of Indicator 4.1.1 as described in the Minimum Proficiency Levels (MPLs): Outcomes of the consensus building meeting.

Minimum proficiency levels defined by each learning assessment

To ensure comparability across learning assessments, a verbal definition of the MPL for each domain and level across cross-national assessments (CNAs) was established by analysing the performance level descriptors (PLDs)[1] of cross-national, regional, and community-led tests in reading and mathematics. The analysis was led and completed by the UIS, and experts reached a consensus that the proposed methodology was adequate and pragmatic.

The global MPL definitions for the domains of reading and mathematics are presented in Table 1.

Table 1. Global minimum proficiency levels for reading and mathematics

Reading

  • Grade 2: Students read and comprehend most written words, particularly familiar ones, and extract explicit information from sentences.
  • Grade 3: Students read aloud written words accurately and fluently. They understand the overall meaning of sentences and short texts. Students identify the text's topic.
  • Grades 4 & 6: Students interpret and give some explanations about the main and secondary ideas in different types of texts. They establish connections between main ideas in a text and their personal experiences as well as general knowledge.
  • Grades 8 & 9: Students establish connections between main ideas in different text types and the author's intentions. They reflect and draw conclusions based on the text.

Mathematics

  • Grades 2-3: Students demonstrate skills in number sense and computation, shape recognition and spatial orientation.
  • Grades 4-6: Students demonstrate skills in number sense and computation, basic measurement, reading, interpreting, and constructing graphs, spatial orientation, and number patterns.
  • Grades 8 & 9: Students demonstrate skills in computation, application problems, matching tables and graphs, and making use of algebraic representations.

[1] PLD: Performance level descriptors describe, by domain, the knowledge and skills required to achieve each performance level.

2.b. Unit of measure

Percent (%)

2.c. Classifications

This indicator is based on the minimum proficiency level (MPL), the benchmark of basic knowledge in a domain (mathematics, reading, etc.) measured through learning assessments. In September 2018, an agreement was reached on a verbal definition of the global minimum proficiency level of reference for each of the areas and domains of Indicator 4.1.1, as described in the Minimum Proficiency Levels (MPLs): Outcomes of the consensus building meeting.

3.a. Data sources

Type of data sources: in-school and population-based learning assessments.

Table 2. How reporting is structured

Grade 2 or 3
  • In-school based, cross-national: LLECE, PASEC, TIMSS, PIRLS
  • In-school based, national: Yes
  • Household-based surveys: MICS6, EGRA, EGMA, PAL Network
  • Grade: 2/3 plus one year when primary lasts more than 4 years according to the ISCED level of the country, except for TIMSS/PIRLS grade 4, which are mapped to the end of primary when primary lasts six or fewer years.

End of primary
  • In-school based, cross-national: LLECE, PASEC, TIMSS, PIRLS, PILNA, SEAMEO, SACMEQ
  • In-school based, national: Yes
  • Household-based surveys: PAL Network
  • Grade: plus or minus one year of the last year of primary according to the ISCED level of the country, except for TIMSS/PIRLS grade 4, which are mapped to the end of primary when primary lasts six or fewer years.

End of lower secondary
  • In-school based, cross-national: PISA, PISA-D, TIMSS
  • In-school based, national: Yes
  • Household-based surveys: Young Lives
  • Grade: plus two or minus one year of the last year of lower secondary according to the ISCED level of the country.

Definition of minimum level until the 2018 release: those defined by each assessment, by point of measurement and domain.

Definition of minimum level from 2019: according to the alignment adopted by the Global Alliance to Monitor Learning (GAML) and the Technical Cooperation Group (TCG).

Grade for end of primary and end of lower secondary: as defined by the ISCED levels in each country.

Validation: sent by the UIS for countries' approval.
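For compilers who organise these sources programmatically, the sketch below restates Table 2 as a simple data structure. It is purely illustrative (the keys and field names are assumptions, not a UIS schema), but it shows how each measurement point maps to its in-school and household-based sources and its grade rule.

```python
# Illustrative only: Table 2 expressed as a Python dictionary.
# Keys and field names are hypothetical, not part of any official UIS specification.
REPORTING_STRUCTURE = {
    "grade_2_or_3": {
        "in_school_cross_national": ["LLECE", "PASEC", "TIMSS", "PIRLS"],
        "in_school_national": True,  # national assessments accepted
        "household_based": ["MICS6", "EGRA", "EGMA", "PAL Network"],
        "grade_rule": "2/3 plus one year when primary lasts more than 4 years (ISCED); "
                      "TIMSS/PIRLS grade 4 mapped to end of primary if primary lasts six or fewer years",
    },
    "end_of_primary": {
        "in_school_cross_national": ["LLECE", "PASEC", "TIMSS", "PIRLS", "PILNA", "SEAMEO", "SACMEQ"],
        "in_school_national": True,
        "household_based": ["PAL Network"],
        "grade_rule": "last year of primary, plus or minus one year (ISCED)",
    },
    "end_of_lower_secondary": {
        "in_school_cross_national": ["PISA", "PISA-D", "TIMSS"],
        "in_school_national": True,
        "household_based": ["Young Lives"],
        "grade_rule": "last year of lower secondary, plus two or minus one year (ISCED)",
    },
}
```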

3.b. Data collection method

The UIS compiles information from data source providers at international level and from countries at the national level.

3.c. Data collection calendar

Data collection is rolling during the year.

3.d. Data release calendar

Biannual UIS data release (March and September)

3.e. Data providers

School-Based assessments

  • International large-scale assessments are reported to the UIS by cross-national organisations (LLECE, PASEC, TIMSS and PIRLS). Typically, cross-national large-scale assessments, whether regional or international, define several performance levels and also report the mean and standard deviation. They also choose one level as the cut-off point that determines whether children/youth are below or above the minimum level.
  • Regional assessments: PASEC, SACMEQ, ERCE, PILNA, SEAMEO.
  • National large-scale assessments, either sample- or census-based: countries should report the proportion of students at each level of competency for each domain, also indicating the minimum proficiency level when it is defined by the national assessment. EGRA and EGMA are reported by USAID or by individual countries.

Household-Based surveys

  • MICS6: reported to the UIS by UNICEF
  • PAL Network: reported to the UIS by the PAL Network

3.f. Data compilers

UNESCO Institute for Statistics (UIS)

3.g. Institutional mandate

The UNESCO Institute for Statistics (UIS) is the statistical branch of the United Nations Educational, Scientific and Cultural Organization (UNESCO). The Institute produces internationally comparable data and methodologies in the fields of education, science, culture and communication for countries at all stages of development.

The Education 2030 Framework for Action §100 has clearly stated that: “In recognition of the importance of harmonization of monitoring and reporting, the UIS will remain the official source of cross-nationally comparable data on education. It will continue to produce international monitoring indicators based on its annual education survey and on other data sources that guarantee international comparability for more than 200 countries and territories. In addition to collecting data, the UIS will work with partners to develop new indicators, statistical approaches and monitoring tools to better assess progress across the targets related to UNESCO’s mandate, working in coordination with the SDG-Education 2030 SC”.

4.a. Rationale

The indicator aims to measure the percentage of children and young people who have achieved the minimum learning outcomes in reading and mathematics during or at the end of the relevant stages of education.

The higher the figure, the higher the proportion of children and/or young people reaching at least minimum proficiency in the respective domain (reading or mathematics), subject to the limitations indicated under the “Comments and limitations” section.

4.b. Comment and limitations

Learning outcomes from a cross-national learning assessment are directly comparable for all countries that participated in the same assessment. However, these outcomes are not comparable across different cross-national learning assessments or with national learning assessments. A degree of comparability of learning outcomes across assessments could be achieved by using different methodologies, each with varying standard errors. The UIS has implemented a mechanism for comparability based on a consensus on the definition of the skills and contents. The comparability of learning outcomes over time has additional complications, which ideally require designing and implementing a set of comparable anchor items in advance. Methodological developments are underway to address the comparability of assessment outcomes over time.

4.c. Method of computation

The number of children and/or young people at the relevant stage of education n in year t achieving or exceeding the pre-defined proficiency level in subject s, expressed as a percentage of the number of children and/or young people at stage of education n, in year t, at any proficiency level in subject s:

MPL_{t,n,s} = MP_{t,n,s} / P_{t,n}

where:

MPt,n,s = the number of children and young people at stage of education n, in year t, who have achieved or exceeded the minimum proficiency level in subject s.

Pt,n = the total number of children and young people at stage of education n, in year t.

n = the stage of education that was assessed.

s = the subject that was assessed (reading or mathematics).
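As a worked illustration of this computation, the following sketch derives the indicator from student-level records. The field names, proficiency-level coding and weighting scheme are assumptions for the example, not a UIS format.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    # Hypothetical microdata layout for one assessed student in year t.
    stage: str      # education stage n, e.g. "end_of_primary"
    subject: str    # subject s: "reading" or "mathematics"
    level: int      # performance level assigned by the assessment
    weight: float   # sampling weight

def minimum_proficiency_share(records, stage, subject, min_level):
    """MPL_{t,n,s}: weighted share of students at `stage` reaching at least
    `min_level` in `subject`, expressed as a percentage."""
    assessed = [r for r in records if r.stage == stage and r.subject == subject]
    total = sum(r.weight for r in assessed)                               # P_{t,n}
    proficient = sum(r.weight for r in assessed if r.level >= min_level)  # MP_{t,n,s}
    return 100.0 * proficient / total if total else float("nan")

# Example with made-up records: 3 of 4 weighted students reach level 3 or above.
records = [
    StudentRecord("end_of_primary", "reading", 2, 1.0),
    StudentRecord("end_of_primary", "reading", 4, 1.0),
    StudentRecord("end_of_primary", "reading", 3, 2.0),
]
print(minimum_proficiency_share(records, "end_of_primary", "reading", min_level=3))  # 75.0
```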

Harmonizing various data sources

To address the challenges posed by the limited capacity of some countries to implement cross-national, regional, and national assessments, the UIS and its partners have taken action. The resulting strategies are applied according to their level of precision, following a reporting protocol that includes national assessments under specific circumstances.

Completion status

Combining completion rates with learning outcomes improves our understanding of progress towards Target 4.1. Almost all information on learning is school-based and does not take into account whether children complete the level. The inclusion of completion in the global indicator list makes it possible to report learning according to completion status. The greatest differences between SDG Indicator 4.1.1 on learning before completion and its disaggregation by completion status are found in regions or countries with lower completion and enrolment rates, because the adjusted indicator (children completing and learning) is based on a quality-adjusted completion rate. This also explains why the largest differences occur at the lower-secondary level. Globally, 47% of lower-secondary students achieve minimum proficiency in reading according to the original SDG 4.1.1 indicator, but the value for the adjusted indicator would fall to 34% of adolescents completing lower secondary and achieving minimum proficiency.
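A minimal sketch of the arithmetic behind the completion-adjusted figure follows, assuming (hypothetically) a lower-secondary completion rate of about 72% and treating the 47% proficiency estimate as applying to completers; the numbers are illustrative only.

```python
# Illustrative arithmetic only; the completion rate below is an assumption,
# not an official figure.
proficiency_among_completers = 0.47  # share of completers reaching the MPL
completion_rate = 0.72               # hypothetical lower-secondary completion rate

# "Completing and learning": share of the whole cohort that both completes the
# level and reaches the MPL, i.e. a quality-adjusted completion rate.
adjusted = completion_rate * proficiency_among_completers
print(f"{adjusted:.0%}")  # ~34%
```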

4.d. Validation

Quality control is ensured through a Review Panel set up to discuss any problems or disagreements on implementation. The Review Panel is composed of regionally representative experts on learning.

4.e. Adjustments

As currently measured, most learning assessments use different methodologies for establishing a minimum proficiency level (MPL). The UIS and GAML establish standardization guidelines to guide the choice of the minimum thresholds based on the frameworks of each assessment program. The most critical decision is to choose, in each assessment, a level for international reporting that is consistent with the international definition of the MPL. For some assessment programs, this means choosing a different level than the one the program had been using for reporting its results.
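The sketch below illustrates this choice of reporting level; the level numbers and distribution are placeholders, not the actual cut-offs or results of any assessment program.

```python
# Hypothetical example: the level used for SDG reporting can differ from the
# level an assessment program highlights in its own reports.
own_reporting_cutoff = 3   # level the (fictional) program labels "proficient"
sdg_mpl_cutoff = 2         # level judged consistent with the global MPL definition

# Distribution of students by performance level (shares sum to 1; made-up numbers).
level_shares = {1: 0.20, 2: 0.30, 3: 0.35, 4: 0.15}

def share_at_or_above(cutoff):
    """Share of students at or above a given performance level."""
    return sum(share for level, share in level_shares.items() if level >= cutoff)

print(share_at_or_above(own_reporting_cutoff))  # 0.50, as reported by the program itself
print(share_at_or_above(sdg_mpl_cutoff))        # 0.80, as reported for indicator 4.1.1
```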

4.f. Treatment of missing values (i) at country level and (ii) at regional level

• At country level

Missing values are not imputed.

• At regional and global levels

Missing values are not imputed.

4.g. Regional aggregations

Population weighted averages.
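A minimal sketch of the aggregation, assuming the weights are the population of the relevant age group in each country (hypothetical inputs):

```python
# (indicator value in %, population of the relevant age group) per country; made-up numbers.
country_data = [
    (62.0, 1_200_000),
    (48.0, 3_500_000),
    (81.0,   800_000),
]

weighted_sum = sum(value * pop for value, pop in country_data)
total_population = sum(pop for _, pop in country_data)
regional_value = weighted_sum / total_population
print(round(regional_value, 1))  # population-weighted regional average, ~55.9%
```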

4.h. Methods and guidance available to countries for the compilation of the data at the national level

The UIS has developed guidance for countries regarding the contents, the procedures and the reporting, available on the Global Alliance to Monitor Learning (GAML) microsite.

In terms of selection of data sources, the Protocol for Reporting on SDG Global Indicator 4.1.1 guides countries in selecting the assessment program.

4.i. Quality management

The UIS maintains a global database on learning assessments in basic education. For transparency purposes, the inclusion of a data point in the database follows a protocol and is reviewed by UIS technical focal points to ensure consistency and overall data quality, based on objective criteria that ensure only the most recent and reliable information is included in the database.

4.j. Quality assurance

Information produced by the cross-national and national assessment programs is described in their respective manuals.

4.k. Quality assessment

The criteria used to ensure the quality and standardization of the data are:

  • the data source includes adequate documentation;
  • data values are representative of the national population; if not, they are footnoted;
  • data values are based on a sufficiently large sample;
  • the learning assessment framework covers the minimum set of content in the global content framework, and the proficiency levels are aligned with the minimum proficiency level (MPL) as defined in the global proficiency framework; and
  • the data are plausible, based on trends and consistency with previously published or reported estimates for the indicator.

5. Data availability and disaggregation

Data availability:

Data available at the national level.

Time series:

Data available since 2000.

Disaggregation:

The indicator is published disaggregated by sex and by completion status (Global Indicator 4.1.2). Other disaggregations, such as location, socio-economic status, immigrant status, ethnicity and language of the test at home, are based on data produced by the international organizations administering cross-national learning assessments, detailed in the expanded metadata document and validated by countries. Parity indices are estimated for the reporting of Indicator 4.5.1.

6. Comparability/deviation from international standards

Sources of discrepancies:

Not yet applicable. Data are reported at the national level only.

7. References and Documentation

Minimum Proficiency Levels

http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/07/MPLs_revised_doc_20190506_v2.pdf

Costs and Benefits of Different Approaches to Measuring the Learning Proficiency of Students (SDG Indicator 4.1.1)

http://uis.unesco.org/sites/default/files/documents/ip53-costs-benefits-approaches-measuring-proficiency-2019-en.pdf

Protocol for Reporting on SDG Global Indicator 4.1.1

http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/05/GAML6-WD-2-Protocol-for-reporting-4.1.1_v1.pdf

Global Proficiency Framework for Reading and Mathematics - Grade 2 to 6

http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/05/Global-Proficiency-Framework-18Oct2019_KD.pdf