The state of preschool mathematics education in the Indian government school system
Ei Study, 13 December 2023

A student’s mathematical knowledge is built on a basic understanding of numbers and basic arithmetic operations. We refer to these skills, which should be acquired by class 5, as ‘Foundational Numeracy’ skills. Several studies have shown that students who perform poorly on these skills at the end of kindergarten and class 1 are likely to perform poorly in mathematics through class 4.[1] If students are unable to attain basic numeracy skills by class 5, they tend to fall behind, creating wide learning gaps.

A lack of proficiency in foundational numeracy skills also predicts mathematics learning disabilities.[2] Attaining foundational skills in the early years also ensures that all students get an equal opportunity to perform well in school by making it easier for them to gain new skills.[3]

We have observed, through our Personalised Adaptive Learning Programme Mindspark, and through various student interactions, that class 1 students from government (public) schools tend to exhibit much lower performance and knowledge of foundational skills compared to students from high-fee private schools.[4] (While some consider such comparisons invalid because of the vastly different home resources between these two groups, many countries have shown that children from even the poorest backgrounds can acquire foundational skills well. For us, therefore, high-fee private school performance merely marks the learning level that all students can and must achieve.)

A study conducted by EI on foundational numeracy levels across 3 Indian states (Himachal Pradesh, Rajasthan, and Telangana) showed that the performance of government school students lags significantly behind. This study also revealed skill-wise differences in numeracy tasks for students at the end of class 1. Students at the end of class 1 are expected to master some of the foundational numeracy skills shown below:

Number recognition: While it may seem a very easy skill, the ability of children to quickly recognise and read out one- and two-digit numbers turns out to be a critical one. Indeed, only 33 percent of government school students in Himachal Pradesh (the best-performing state in the study) could accurately recognise the 10 numbers shown, compared to 92 percent of students from private schools. Students were given 10 numbers to read, and the response for each was recorded.

Number writing: Only 26 percent of government school students in Himachal Pradesh could accurately write 10 numbers, compared to 87 percent of students from private schools. Ten number-words were recited to the students one by one, and they were expected to write the numeral for each corresponding number-word.

Number comparison: 76 percent of government school students in Himachal Pradesh could accurately compare 10 pairs of numbers, compared to 98 percent of students from private schools. Students were presented with 10 different pairs of numbers and asked to identify the bigger number, e.g. “Which is bigger, 17 or 71?”
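All three tasks above follow the same protocol: ten items per child, one recorded response per item. Assuming (as the task descriptions suggest) that ‘accurately’ means all ten items answered correctly, the headline percentages can be computed with a scoring rule like the sketch below; the task items shown are illustrative, not the study’s actual items.

```python
def proficient(responses, answer_key):
    """A child counts as proficient on a task only if every one of
    the ten recorded responses matches the answer key (assumed rule)."""
    assert len(responses) == len(answer_key) == 10
    return all(r == k for r, k in zip(responses, answer_key))

def percent_proficient(all_responses, answer_key):
    """Percentage of children who got all ten items right."""
    passed = sum(proficient(r, answer_key) for r in all_responses)
    return 100.0 * passed / len(all_responses)

# Illustrative number-comparison task: pick the bigger of each pair.
pairs = [(17, 71), (3, 8), (40, 4), (25, 52), (9, 90),
         (12, 21), (6, 60), (33, 38), (7, 70), (19, 91)]
key = [max(a, b) for a, b in pairs]
child = [71, 8, 40, 52, 90, 21, 60, 38, 70, 91]  # all ten correct
print(percent_proficient([child], key))  # 100.0
```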

Similar trends are seen across all other skills and paint a disheartening picture. After a year spent in class 1, students in government schools are not even close to where they need to be in order to gain new skills at later grade levels. Repeated practice is undoubtedly a key ingredient in the acquisition of the above skills. Home support is a crucial advantage for private school students – and teaching in public schools must be planned to account for its absence. Testing these foundational skills every quarter, for example, and providing teachers with benchmarks for the levels students must achieve are ways to make this part of the school’s role.

In order to master single-digit addition facts, for example, students should be able to recognise numbers, understand cardinality,[5] employ addition strategies, and know how to write numbers. Writing, in turn, requires developed fine motor skills. Developmentally, acquiring fine motor skills is a slow process that takes sustained practice, as a child moves from scribbling to forming numerals accurately. Probably the biggest challenge is when teachers do not understand the importance of these milestones and – for example – expect a child who cannot yet fluently identify numbers to start learning addition.

Catching them young?
There are countries like Finland that believe formal education should not start before the age of seven. However, that may hold only in contexts where the family can provide the basic educational support needed in those years. When such support is missing – including cases where the parents may be illiterate – pre-schooling that provides ‘learning readiness’ can play a major role in developing pre-numeracy skills. Repetitive, slow-paced exposure, provided to children in the context of games or real-world activities, can help build these foundational skills. The India Early Childhood Education Impact study, conducted by the ASER Centre and the Centre for Early Childhood Education and Development, indicates that children who were exposed to high-quality preschooling were more ‘school ready’ than those who were not. The five-year longitudinal study tracked close to 13,000 children across three states in India and shows that building children’s cognitive, pre-literacy and pre-numeracy skills during the preschooling stage improves their learning outcomes in the early primary classes.

However, balwadis – essentially government-run child-care centres for children below schooling age – focus on health and nutrition (which are certainly of primary importance) but do not have a curriculum that supports the acquisition of foundational skills. A few changes here, the inclusion of activities that develop foundational skills, and the necessary training of balwadi staff would go a long way towards addressing this gap in foundational learning. The power of this is illustrated by a finding in the US that the average cost of preschooling a child may be $6,000–12,000, while the savings in terms of later services are estimated at about $30,000–120,000![6]

A test of foundational skills that we conducted on 30 students across four balwadis in Ahmedabad revealed that many 4.5- to 6-year-olds (who were about to enter class 1) could not draw a straight line using a pencil. In contrast, across most private preschools – driven by the class 1 admission criteria of private schools – students between 4.5 and 6 years are able to write numbers and sometimes even know number names up to 20. Similarly, while most private preschool students are able to recognise numerals, the test in balwadis revealed that less than 20% of students could recognise numerals accurately.

It is therefore important to understand where the gaps exist and how they can be bridged effectively. We see the need for early interventions to ensure student proficiency in early number competencies. In the absence of such interventions in balwadis, we must ensure that the necessary support and focus are given to foundational skills in class 1. This is even more important for students from low-income backgrounds, because these children enter class 1 well behind their peers from middle-income families on numeracy indicators, and this gap widens over the course of the school year.[7]

We also believe that repetitive exposure to such skills, provided with immediate feedback and based on the personalised performance of each child – as is possible through programmes like Mindspark – can help students acquire these foundational skills. Technology-assisted classroom learning, even with exposure durations of 30–60 minutes a week, seems to result in significant improvements. However, states tend to prefer using educational technology tools to improve board exam results rather than foundational learning, and there is a need for larger implementations to verify and scale up these results.

References and footnotes:
1. Duncan et al., 2007, “School readiness and later achievement”; Jordan, Kaplan, Ramineni, & Locuniak, 2009, “Early Math Matters: Kindergarten Number Competence and Later Mathematics Outcomes”; Morgan, Farkas, & Wu, 2009, “Five-year Growth Trajectories of Kindergarten Children With Learning Difficulties in Mathematics”.
2. Mazzocco & Thompson, 2005, “Kindergarten Predictors of Math Learning Disability”, Learning Disabilities Research & Practice.
3. Jordan, Kaplan, Ramineni & Locuniak, 2009, “Early Math Matters: Kindergarten Number Competence and Later Mathematics Outcomes”, American Psychology Association.
4. All the references to ‘private schools’ in this article refer to high-fee private schools which can be considered schools charging above Rs 20,000 monthly fees.
5. Like most concepts in foundational learning, the idea of cardinality (‘the total number of objects in a group’) sounds simple to us but is not trivial for a child to acquire.
6. Abadzi, Helen, 2006, “Efficient Learning for the Poor: Insights from the Frontier of Cognitive Neuroscience”. Washington, DC: World Bank.
7. Jordan & Levine, 2009, “Socioeconomic variation, number competence, and mathematics learning difficulties in young children”, Developmental Disabilities Research Reviews.

Can we use our participation in PISA to improve our maths learning levels?
Ei Study, 13 December 2023

Sridhar Rajagopalan and Nishchal Shukla

India has agreed to participate in PISA in 2021. A small number of randomly selected students from Chandigarh, the Kendriya Vidyalayas and Navodaya Vidyalayas will write the test next year.

India has participated in PISA once before, in 2009-10, when students from Tamil Nadu and Himachal Pradesh wrote the test. We performed very poorly, ranking 73rd among 74 participants and finishing ahead of only Kyrgyzstan.

This result was shocking and many people assume that it must have been an aberration. Maybe the students were not prepared for the test or had to take it in English. (Actually, all students were tested in their medium of instruction.) Maybe only government schools were tested – our private schools would have done much better. (A detailed study conducted by our organization in 2006 and repeated in 2012 established that students of our top private schools perform below international average in class 4.)

Our organization regularly conducts assessments for private schools (through a test called ASSET) as well as for various governments. These assessments (as well as tests like Pratham’s ASER) suggest that we do indeed have a learning crisis in our system. Our learnings and conclusions, based on conducting such assessments for about 20 years in India, are:

  • From a learning perspective, India’s education system is very poor – in our analysis falling behind even some of the poorest performers in Africa.
  • For India to improve its education system, it needs to a) ensure strong foundational language and arithmetic skills by class 5 and b) have a school leaving exam (board exam) focussed on learning and not simply recall.

What is PISA, and can it be more than just a harbinger of (bad) news? Can we learn from assessments like PISA (and good assessments we already have in India) to improve our assessment systems, including the board exams at which all teaching is targeted? In this article, we shall look at what PISA is, delve into its mathematics testing framework in some detail and examine how we can use it to actually improve our learning levels. Since the main problem with our board and school exams today is the poor quality of their questions, we shall take examples from PISA and some other questions and explain why good questions are so critical to attaining good learning.

PISA, the Programme for International Student Assessment, is an international benchmarking test that assesses random samples of 15-year-old students (typically in class 10) on their reading (language), mathematical and scientific literacy skills. It has been conducted every three years since 2000. A country (or region) must voluntarily agree to write the PISA test. Another international assessment, TIMSS (Trends in International Mathematics and Science Study), has similarly been conducted once every four years since 1995, for students of classes 4 and 8. Among all these and a few other assessments, India has participated only once (in PISA, in 2009-10).

Assessments like PISA and TIMSS have shown that countries like Singapore, Finland and South Korea are consistently among the top performers, though ranks vary a bit between years and between the assessments. China has been a late entrant but has performed extremely well, with Shanghai city topping PISA in recent years. The US is an example of a country that has spent a lot of money but has performed only at average levels in benchmarking tests since the 1990s.

PISA mathematics – skill framework and types of questions
The PISA tests “are designed to gauge how well students master key subjects in order to be prepared for real-life situations in the adult world.”

The test contains items from three core domains – reading, mathematics and science. In each round, one of these is the major domain. PISA 2021 will have mathematics as the major domain. This means that there will be more assessment items in the papers assessing mathematical literacy.

The PISA Draft Framework defines mathematical literacy as:
“An individual’s capacity to reason mathematically and to formulate, employ, and interpret mathematics to solve problems in a variety of real-world contexts. It includes concepts, procedures, facts and tools to describe, explain and predict phenomena. It assists individuals to know the role that mathematics plays in the world and to make the well-founded judgments and decisions needed by constructive, engaged and reflective 21st century citizens.”

It assesses the following cognitive and content areas.

So what differentiates a PISA question from other questions and why do students struggle on PISA questions?

PISA questions require students to a) understand the information provided, b) find what needs to be determined or solved, and then c) apply the appropriate procedure. The questions we use in our Indian tests (including board and school exams) tend to be based on a small number of familiar question types, so only c) in the above list is tested. That was probably okay for the well-defined world of the 1950s and 60s, but in today’s circumstances the ability to absorb new information, define the problem and choose what needs to be solved is even more important. Let us look at a few PISA and our own PISA-like questions from ASSET to understand this:

Sample PISA Questions – Mathematics

The purpose of the background information in such questions is to provide a real-life context to the student based on which the questions are asked. That information can be of different types and cover a variety of contexts.

Different types of background information possible:
• passages
• tabular information/data
• images
• graphs or charts
• combinations of the above
• more advanced types of material involving the interactives that PISA uses for computer-based tests

Variety of contexts the background information may cover:
• personal (individual, peer group, family)
• occupational (the world of work)
• societal (local, national or global community)
• scientific (natural or technological world)


While one may point out that an unfamiliar context may cause students to struggle to answer the question, it is important to note that this is one of the characteristics of a good question. Many other international assessments, including TIMSS and our own ASSET test, use questions based on unfamiliar contexts, or questions that appear unlike anything students are used to answering.

For example, consider the following question from ASSET.

Hiten typed in a paragraph about octopuses for a project. His sister Nandita played a prank and changed all the numbers in the paragraph by multiplying each correct number by the same number, say ‘n’.

Now Hiten has to correct the numbers again. What should he put as the actual length of the Atlantic Pygmy octopus? (Hints are given in the visual of the octopus.)
A) 2 cm
B) 3 cm
C) 6 cm
D) 18 cm
Source: ASSET – https://www.ei-india.com/asset

The question uses a factual piece of information and gives the context of what happened when someone changed the numbers in that piece in a certain way. Students are expected to understand this context and use one piece of knowledge they are familiar with – the correct number of arms an octopus has – to solve the problem and answer the question. In fact, even if they don’t know that fact, they can take a hint from the image given. While many students may find answering this kind of question interesting and the process enjoyable, the first reaction of many teachers is that it is ‘out of syllabus’ and so students cannot be expected to answer it. In our sample of around 14,000 private English-medium school students, only 21.5% were able to answer this question correctly. 31.1% selected option D, indicating that they didn’t really understand the context and the question, and blindly selected the (incorrect) length mentioned in the passage.
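The arithmetic the item demands can be made explicit: since every number in the paragraph was multiplied by the same n, the one externally known fact (an octopus has eight arms) pins down n, after which any pranked number can be divided back. The specific passage values below (24 arms, 18 cm) are assumptions for illustration only; they are not reproduced from the actual ASSET item.

```python
def recover_actual(pranked_value, pranked_arms, true_arms=8):
    """Every number in the paragraph was multiplied by the same n.
    The one fact a student knows (an octopus has 8 arms) pins down n,
    which then recovers any other original number."""
    n = pranked_arms / true_arms   # e.g. 24 / 8 = 3
    return pranked_value / n

# Assumed values: if the pranked paragraph claimed 24 arms and an
# 18 cm length, the recovered length would be 18 / 3 = 6 cm.
print(recover_actual(18, 24))  # 6.0
```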

It is important to note that the data for both these questions are from students of private English-medium schools, whose performance tends to be higher than that of government school students. A large percentage of government school students often struggle at the first step itself – reading the given text.

This argument – that students fail to answer due to a lack of familiar context – falls apart when we look at their performance on basic conceptual questions like the one that follows. This question was given to the same 14,000 private English-medium school students. Only 30.7% were able to answer it correctly. 27.7% selected D, indicating that they were used to calculating the area of such shapes from their dimensions and could not think beyond that procedural knowledge.

Two sheets of different sizes were shaded as shown.

Which of the following is equal for the two sheets?

1. Area of the shaded part.
2. Percentage of the sheet shaded.
3. Ratio of the shaded area to the unshaded area.

A. only 2
B. both 1 and 2
C. both 2 and 3
D. Can’t say unless the dimensions of the sheets are known.
Source: ASSET – https://www.ei-india.com/asset

As seen in the above two examples, a good question is one that challenges and stimulates a child to think deeply and to apply concepts learnt. A good question, correctly framed, can help a teacher understand the thought processes of students and how well a child has internalized a concept or mastered a skill.

Interestingly, a majority of participating countries administer the PISA test on computer. The computer-based version of the test includes MCQs (Multiple Choice Questions) as well as other technology-enhanced item (TEI) types. TEIs typically elicit responses that are difficult to capture in a traditional MCQ or even a free-response question format. Here is an example of one such question that allows students to interact with a map, select or deselect a route, and check the time taken to travel through the selected route. It then asks a question that uses this interactivity. The advantage of such a question is that it captures not just the final answer (which an MCQ could also do) but also what the student selected, the sequence of those selections, and the time taken before the student answered the question.

Due to the advantages listed above, the use of TEIs in assessments is increasing. Even ASSET’s computer-based version tests students on MCQs as well as TEIs.
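A minimal sketch of what capturing such interaction data might look like. The class, event names, and fields below are illustrative assumptions of ours, not PISA's or ASSET's actual logging schema:

```python
from dataclasses import dataclass, field
from time import monotonic

@dataclass
class TEIResponseLog:
    """Captures more than an MCQ can: every selection, its order, and timing."""
    item_id: str
    events: list = field(default_factory=list)   # (elapsed_seconds, action, route)
    _start: float = field(default_factory=monotonic)

    def record(self, action: str, route: str) -> None:
        self.events.append((round(monotonic() - self._start, 2), action, route))

    def summary(self) -> dict:
        return {
            "item": self.item_id,
            "routes_tried": [r for _, a, r in self.events if a == "select"],
            "num_interactions": len(self.events),
            "time_to_answer": self.events[-1][0] if self.events else 0.0,
        }

log = TEIResponseLog("map-route-q1")
log.record("select", "A-B-D")      # student tries one route...
log.record("deselect", "A-B-D")    # ...changes their mind...
log.record("select", "A-C-D")      # ...and settles on another
print(log.summary()["routes_tried"])  # ['A-B-D', 'A-C-D']
```

The point of the sketch is that even a wrong final answer leaves behind a trail of attempts and timings that a teacher or an adaptive system can analyse, which a bubble sheet never could.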

Conclusion

Tests like PISA help at multiple levels. While the benchmarking data they provide may serve as a wake-up call, probably even more important is the learning from the type of questions these tests use. Used correctly, a good question can prove to be a highly effective tool in the process of learning and can even enable learning. In that sense, studying tests like PISA, and our own tests like ASSET, is something every teacher must do. While developing questions like these improves the capacity of teachers, analyzing student performance shows how children are thinking and the errors they are making. Understanding these and modifying our teaching methods accordingly may be the best way to move towards improved student learning.

Sridhar Rajagopalan is co-founder and Chief Learning Officer of Educational Initiatives Pvt. Ltd. He can be reached at [email protected]

Nishchal Shukla is Associate Vice President, Pedagogical Research, at Educational Initiatives Pvt. Ltd. He can be reached at nishchal@ei.study

The post Can we use our participation in PISA to improve our maths learning levels? appeared first on Ei Study.

Understanding by Design – A curriculum framework for a computer-based science learning program https://ei.study/understanding-by-design-a-curriculum-framework-for-a-computer-based-science-learning-program/ Wed, 13 Dec 2023 07:22:59 +0000

What is Understanding by Design?

Understanding by Design (UbD) is a planning framework for curriculum, instruction, and assessment. The fundamental tenet of this framework is to start planning from the very end: the understanding that students should be left with. Typically, instructors and curriculum creators approach the teaching-learning dynamic in a “forward” fashion, designing learning activities and assessment materials first and trying to extract goals for the overall lesson as a last step. In contrast, UbD starts from the overall goals of the lesson and places importance on defining learning transfer, the ability to apply learning in different contexts.

UbD in a tech-based product

Research shows that the UbD framework is effective in elevating performance and understanding. A study conducted on grade 8 science students revealed a significant difference in student achievement (measured through a test) after they received instruction guided by the principles of UbD.

This framework changes the role of the teacher from an agent who conveys scientific concepts to a person who promotes the development of scientific understanding and temper in students. This change is propounded not only by the UbD framework but increasingly by educational models around the world. We are moving from a traditional classroom setup with the teacher as the primary “knowledge-giver” to a tech-based learning system where the teacher is a “facilitator”. By using a series of questions, experimental setups, and digital interactive content, students embark on an inductive journey that leads them to construct their own mental models of scientific concepts and theories. Considering the growing pace at which the country is moving towards the use of technology in education, this framework sets a precedent for constructivism as an alternate mode of learning in schools.

So how does one make use of UbD?

Step 1: Identifying desired results

The different ingredients of understanding form the core of the various products at EI. “Understanding” redefines the meaning of education: the goal is not the acquisition of factual knowledge of concepts and skills, but the ability to use this learning in meaningful contexts. Therefore, the first step of the UbD framework is to identify the desired results.

For any learning program and specifically a computer-based one in the science domain, the following questions should be taken into account:

1) Why is this topic important to know in the realm of science?

2) What transfer of learning can students achieve at the end of this lesson? In other words, which are the areas and ways in which students can apply this learning in a meaningful way?

3) What is the scientific understanding that students must retain in the long term (enduring understanding)?

4) What are the common misconceptions that students at this age have about these concepts?

Landmark work on the construction of concept maps by researchers such as Rosalind Driver and Page Keeley, along with curriculum standards like the Next Generation Science Standards and the Benchmarks for Science Literacy, can be used as a guide in this first step. At this stage, care should be taken that the enduring understanding contains both concept goals and transfer goals. That is, besides building knowledge in a student, instruction should also equip them to apply that learning in meaningful scenarios.

Let us take a look at an example to illustrate the application of the framework. Suppose a learning module on respiration in plants, at a grade 7 level, is being created. For this particular topic, the enduring understanding that students should be left with at the end of the learning module includes:

  • Plants require energy to perform life functions just like all other living organisms.
  • Energy is released from the breakdown of food through a series of chemical reactions. These reactions are part of respiration.

Based on these points, key concepts that need to be covered are put down. Here are some examples:

  • Plants are living organisms, hence they also respire.
  • Glucose is the food for the plant which in most plants is created by themselves.
  • Glucose is broken down in the presence of oxygen to release energy.
  • Plants do not have a separate mechanism for breathing. They have stomata that help with the exchange of gases.

At this stage, common misconceptions around the topic should be mapped to specific key ideas. This article later covers methods of addressing misconceptions through the computer-based science learning program we are creating.
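The Step 1 mapping described above can be sketched as a simple data structure. The content is drawn from the respiration example in this article; the field names and the index-based mapping are our own illustrative choices, not part of the UbD framework itself:

```python
# A minimal sketch of a UbD Step 1 plan for "Respiration in plants" (grade 7).
# Field names are illustrative, not prescribed by UbD.
lesson_plan = {
    "topic": "Respiration in plants",
    "grade": 7,
    "enduring_understandings": [
        "Plants require energy to perform life functions, like all living organisms.",
        "Energy is released from the breakdown of food through chemical reactions "
        "that are part of respiration.",
    ],
    "key_concepts": [
        "Plants are living organisms, hence they also respire.",
        "Glucose, which most plants make themselves, is the plant's food.",
        "Glucose is broken down in the presence of oxygen to release energy.",
        "Plants have no separate breathing mechanism; stomata handle gas exchange.",
    ],
    # Each misconception points (by index) at the key concept it conflicts with.
    "misconceptions": {
        "Plants do not respire; they only photosynthesise.": 0,
        "Plants respire only at night.": 3,
    },
}

# Sanity check: every misconception must map to a real key concept.
for claim, idx in lesson_plan["misconceptions"].items():
    assert 0 <= idx < len(lesson_plan["key_concepts"]), claim
```

Writing the plan down in a machine-checkable form like this makes the later steps easier: assessment items and instructional content can each be traced back to a specific enduring understanding or misconception.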

Step 2: Designing assessment items

Once the long-term goals of the lesson have been established, effective assessment tools that provide true evidence of understanding need to be developed. For every goal listed in step 1 as well as the main ideas covered in the lesson, reliable indicators of understanding are created.

Some questions that must be used as guidelines are given here:

1) By answering this assessment item, do we get clear evidence that the concept has been understood? Do the assessment items validate the target that needs to be achieved?

2) What can we accept as evidence for understanding, application, interpretation and perspective (being able to display several viewpoints)?

3) How can we assess understanding in a consistent manner? Is the assessment foolproof and reliable?

For the key ideas mentioned above, some of the assessment items may be:

Q. Respiration is a process where glucose is broken down to release energy required for all the life processes and carbon dioxide is given out. Do plants respire?

  1. Yes, plants respire and it is called photosynthesis.
  2. No, plants do not respire for they give out oxygen.
  3. Yes, plants respire all the time because they need energy.
  4. Yes, plants respire but only during the night, not during the day.

Q. We know that during respiration food is broken down into simpler substances so that energy can be made available. Which two of these substances are the reactants in the process of respiration in plants?

  1. glucose and oxygen
  2. glucose and water
  3. carbon dioxide and water
  4. carbon dioxide and oxygen

Q. Identify X in the following analogy.

nose:humans :: gills:fish :: X:plants

  1. stomata
  2. leaves
  3. chloroplasts
  4. roots
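One way to make such items diagnostic rather than merely scored is to tag each distractor with the misconception it reveals. The sketch below does this for the first respiration item above; the tag labels and the `diagnose` helper are our own illustration, not EI's internal coding scheme:

```python
# Sketch: each distractor is paired with the misconception it reveals, so a
# wrong answer yields diagnostic information, not just a mark of zero.
item = {
    "stem": "Respiration breaks down glucose to release energy. Do plants respire?",
    "options": {
        "1": ("Yes, plants respire and it is called photosynthesis.",
              "confuses respiration with photosynthesis"),
        "2": ("No, plants do not respire for they give out oxygen.",
              "believes only animals respire"),
        "3": ("Yes, plants respire all the time because they need energy.",
              None),  # the correct answer carries no misconception tag
        "4": ("Yes, plants respire but only during the night, not during the day.",
              "believes respiration happens only when photosynthesis stops"),
    },
}

def diagnose(item: dict, choice: str) -> dict:
    """Return whether the choice is correct and, if not, what it suggests."""
    _text, tag = item["options"][choice]
    return {"correct": tag is None, "misconception": tag}

print(diagnose(item, "4"))
```

Aggregated over a class, such tags tell a teacher not just how many students got the item wrong, but which wrong mental model dominates.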

Step 3: Designing instructional materials

After developing assessment tools, the modes of delivery of instruction are finally put down. The ways in which the required support can be delivered to students so that they achieve the desired results is the crux of this step. In a traditional classroom environment, this includes the development of lesson plans, worksheets, in-class activities, projects, homework, etc.

At this stage, these questions may be helpful to answer:

1) What scientific concepts and skills should students be equipped with to be ready to receive/learn new content?

2) What kind of support do students require so that they can transfer their learning to different contexts?

3) Are we preparing them to be assessed by giving them all necessary content knowledge?

Here, the learning module should be created keeping in mind two things: a) the content and skills that need to be covered, and b) the appropriate method to deliver them on an online platform. This includes whether to deliver information through direct instruction as opposed to self-construction of the concept. It also involves decisions around using text, diagrammatic, data-based, interactive, or gamified representations.

The first step can be revising key points covered in previous lessons, testing prerequisite knowledge through simple questions and activities. For this particular topic, the knowledge that students come with includes:

  • Photosynthesis is the process by which most plants produce food.
  • Respiration is a process that releases energy.

Using these ideas as points of departure, appropriate questions can be framed to provide students with the necessary information leading up to the assessment questions. In some cases, when a new concept is being introduced, it is presented through direct instruction: the information is given upfront and students answer learning questions based on it. The thought process that guides this exercise is that one question leads to another, so students are simultaneously learning as well as being assessed at every stage.

Misconceptions

Research in science learning shows that students harbour misconceptions that are developed due to misinterpretation of ideas/phenomena, gaps in the curriculum or ineffective instruction, prior incorrect notions built based on observations in the surroundings, oversimplification of concepts, etc.

The desired goal of the science learning program is for students to use the corrected understanding in a new setting: are they able to apply the newly learned idea to a different context successfully? Assessment items should include questions that check whether the misconception has been cleared; these can take the form of checking the capacity to explain and to apply. Finally, the instructional material should break the mental formation of a concept into simple steps to ensure that the misconception is not formed in the first place. These nuances contribute to the technical soundness of a good technological intervention in science education as well as its thoroughness in producing effective results.

For the above topic, one of the questions used to check whether the misconception has been cleared is as follows:

When do plants breathe?

i) only at night

ii) only during the day

iii) throughout the day and night

iv) depends on the type of plant

Conclusion

The UbD framework is a handy tool for educators in various contexts and is built on the principle of redefining what “understanding” means, similar to what we do at EI. As far as science education is concerned, using the framework to build a learning programme has not been seen before in the country. After months of research and conceptualization, EI’s computer-based science learning program has been christened Mindspark Science and is set to launch in 2021. Currently the product is being piloted in 127 schools.

Although UbD is a planning framework, Mindspark Science uses it simultaneously as a planning as well as creation framework. Since the learning product is built on the idea that students learn better by answering questions, at every point students are being taught as well as assessed.  The framework not only serves as a guide for content creation, but also forms an integral part of its objective – to ensure the development of scientific understanding in all students.

References and further reading

https://cft.vanderbilt.edu/guides-sub-pages/understanding-by-design/

https://www.ascd.org/ASCD/pdf/siteASCD/publications/UbD_WhitePaper0312.pdf

https://www.eujournal.org/index.php/esj/article/view/8853/8513

https://www.chalk.com/resources/what-understanding-by-design-administrators-care/

https://www.netlanguages.com/blog/index.php/2017/06/28/what-is-inductive-learning/

The post Understanding by Design – A curriculum framework for a computer-based science learning program appeared first on Ei Study.

5 key ideas behind successful teachers during COVID times https://ei.study/5-key-ideas-behind-successful-teachers-during-covid-times/ Wed, 13 Dec 2023 04:20:01 +0000

The post 5 key ideas behind successful teachers during COVID times appeared first on Ei Study.

As a resident of Chandigarh, I often receive compliments on the well-organized and planned layout of the city. Being a recipient of such praise in spite of zero contribution to the cause, I often wonder why certain things are the way they are. Sans the architectural element, I have concluded that the human tendency to take the path of least resistance can be a powerful explainer of where we are and how we do things.

Indeed, the physical paths we tread seem to be an outcome of our ancestors and bovines moving through a topography along whatever route was immediately easiest. When a cow saw a hill ahead, she did not say to herself, “Aha! I must navigate around it.” Rather, she simply trod the easiest possible path, the one with the smallest incline. Eventually the path becomes easier to walk on, a part of our working memory, and more clearly defined. No wonder a lot of our city planning debates end up in a moo point, pun intended!

I was trying to think of how this approach has affected me and my fellow educators in these times of COVID and extended lockdowns. Are we hard-wired to follow the path of least resistance, I wonder, and what are some of the successful educators doing?

Privileged to have an exposure to hundreds of educators we work with at EI and being privy to their teaching-learning practices, I am sharing a few learnings from some successful educators who seem to have repeatedly managed to reimagine the teaching-learning process.

  1. Personalised learning is the key

Students who were just benches apart (even if separated by larger learning distances) are now also physically distanced. Some of the successful educators I have seen have understood this and realized that a key element of online teaching is having specific checkpoints and catalysts to make the virtual environment ‘real’ and conducive for everyone. We can no longer tread the same path that was (somewhat) successfully trodden in the classroom/lecture method. Not only the visual experiences but also the immediate external environment of the student has changed, and it is imperative for an educator to be aware of this. Thanks to technology like EI’s own Mindspark, and to learnings from research, we are more capable of personalizing education and making learning student-centric instead of lecture-specific.

Interestingly, the problem of the achievement gap was addressed in a seminal 1984 work by Professor Benjamin Bloom. Prof Bloom concluded that students who were tutored (either 1:1 or in small groups) outperformed 98% of their non-tutored peers!

Furthermore, 90% of tutored students reached the same level of summative achievement as the top 20% of students in traditional classrooms. This quantum leap in student performance has since been confirmed in a swarm of academic studies.

In short, personalisation works, particularly when it is paired with an approach centred around mastery.
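Bloom's famous “2 sigma” label and the 98% figure above are two views of the same number: a two-standard-deviation improvement places the average tutored student at roughly the 98th percentile of the untutored distribution, assuming scores are approximately normally distributed. The arithmetic:

```python
# Bloom's "2 sigma problem": under an (assumed) normal distribution of scores,
# a +2 standard-deviation shift puts the average tutored student above
# about 98% of untutored peers.
from statistics import NormalDist

percentile = NormalDist().cdf(2.0)  # P(Z < 2) for a standard normal
print(f"{percentile:.1%}")  # 97.7%
```

So the “outperformed 98% of peers” claim is simply the normal-distribution percentile corresponding to a 2-sigma effect, rounded up.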

2. The problem is not physical, but transactional distance

One difference I have observed between successful and struggling teachers is an understanding of whether technology is supporting their teaching or their teaching is being dictated by technology. In other words, instruction design and pedagogy should determine how technology is used.

According to Transactional Distance Theory (TDT; Moore, 1996), reducing the psychological space between participants and instructors through pedagogy results in higher learning outcomes in online learning. In other words, transactional distance measures the connectedness between the learner and the learning environment.

There are 3 important factors which influence transactional distance:

  • Dialogue – Communication between teacher and learner
  • Structure – Elements of course design
  • Learner autonomy – Control of students on their learning

3.  Parents are the facilitators

Schools have often, rightfully, been seen as places of change. The role of a parent has been to first admit the child to the right school (I am purposely avoiding the word ‘good’) and then ensure that the child learns the right moral values at home. All academic learning is expected to happen in school. Yet the present situation has made it clear that, in addition to the teacher, the parent has to be a key facilitator in a child’s education. With the brick-and-mortar boundaries vanishing, synergy between parents and educators is critical for effective learning. I have seen a lot of successful schools do this with great poise.

A fellow educator and a parent herself, Ms. Nivvrati Rathore of Noida, Uttar Pradesh, says, “School closure has been hard for the students and harder yet for parents, from coping with anxious teenagers to facilitating the logistics of online schooling. Parents have doubled up as teachers in many ways. So our school has tried to focus on the strategies to bridge the divide and connect with students and the parents to achieve the desired result.”

4. Re-defining school and learning

 In a funny way, I have realized that most people share the thought that what we are doing right now is a stop-gap arrangement. Our attitudes, behaviours and perceptions towards teachers, parents, and examinations have changed because what we used to consider ‘schooling’ is not happening right now. To explain this, I will share an idea.

One of the playwrights I admire, Bertolt Brecht, was once asked by an interviewer to describe theatre. He asked the interviewer to close her eyes and imagine theatre in action. He then asked her to mentally remove each item from the scene, one by one, and say whether it was still fit to be called theatre: remove the furniture, the lights, the costumes, the curtains, and so on, until removing something breaks what we understand as theatre.

I like to view our schools through the same thought experiment. Remove the whiteboard or blackboard, the bell, recess, the notice board and so on. Can it still be called a school? I think yes… until we remove the one phenomenon that is the core of what we expect from a school. And that one thing, I think, is the idea that students are learning. So the next time a child says she is going to school, it should mean that she is going to learn.

5. Competition as positive reinforcement

 Some of the brightest minds today, from fields like social science, neuroscience, game design and UX/UI development, spend their days trying to discover how a child can be better motivated to learn, or be just a bit more engaged. Many of the models they experiment with involve some element of competition.

Yet some educators seem to see competition only as bad. I feel competition can be a positive tool if used properly. It is not about simply wanting to achieve more than one’s neighbour; used well, it opens up a plethora of ways for an individual to blossom.

I have seen schools channel this inherent desire to win by nudging it towards a growth and engagement mindset. This allows a student to stay in a state of flow within their zone of proximal development, competing in order to augment individual identity and embracing collaborative competition. Well-designed competition can boost engagement, and in a recent international survey, educators voiced their concern over plummeting engagement.

***

These are some learnings that I have absorbed from my fellow educators, thought about, often discussed, and put into practice in collaboration with schools, with some exciting results. I have seen leaders who do some of these things achieve measurable positive impact on learning outcomes. Our solutions and approaches to the scenario may be novel, but I believe we agree on one thing: schooling is not educating unless it leads to learning and understanding.

The post 5 key ideas behind successful teachers during COVID times appeared first on Ei Study.

ASSET SUPERTEST – Regular Assessments for Deeper Learning (without the pressure of exams) https://ei.study/asset-supertest-regular-assessments-for-deeper-learning-without-the-pressure-of-exams/ Wed, 13 Dec 2023 03:18:59 +0000


COVID-19 has changed many things, including how students learn. Online classes have replaced physical classes in school. Teachers who were once hesitant to try any form of digital learning are now embracing it and even picking up new skills themselves. Parents are stepping up to ensure that their children are not missing out – whether by helping them attend online classes or by enrolling them in other online courses. In addition to regular classes, there are courses on coding, data analysis and other topics which use videos, interactives, one to one mentoring and specialised learning modules.

 The question that is missing in all this is “Are students really learning through these multiple modes of online learning?” 

In 2001, our company Educational Initiatives (EI) created ASSET with a vision of a world where children everywhere are learning with understanding. The central principle is that effective learning cannot happen without periodic evaluation and feedback. A good diagnosis is a big part of a good treatment, just like a doctor needs to properly understand the problem to prescribe effective medication. In fact, for some students just the fact that the gaps are made visible is sufficient to trigger the process of learning.

Tests are an important tool to help gauge how much has been learnt, and ‘how well’ children are doing. They also provide valuable feedback about the effectiveness of instructional methods. Over the past two decades, ASSET has created an ecosystem of learning where the assessments have been used to improve learning, identify and nurture giftedness. 

What is the role of assessment in these times of a pandemic? To understand this, we need to understand real learning: learning that happens when the focus is on core learning. When regular classes are suspended, regular assessments can serve as a guide to help students and parents.

What does your child need to learn today to succeed tomorrow?

Even in regular times, we instinctively understand that children do not learn merely by attending classes or reading lessons. In some subjects, like mathematics, it is clear that understanding basic concepts serves as the foundation for later concepts. This is actually true in all subjects. So how well is your child really learning? School marks provide an indication, but may not be completely reliable. Some parents find, when they move cities or schools, that a child who was scoring high previously does not score well any more.

Studies also show that understanding may be more important than merely scoring marks. An India Today cover story based on a large study conducted by EI, ‘What’s Wrong With Our Teaching?’, showed that our children may be putting in a lot of effort but may not be learning well. In recent years, ‘inflation of marks’ in Board Exams has created an illusion of enhanced learning while college admission cut-offs keep rising. Noted writer Yuval Noah Harari, in his book ’21 Lessons for the 21st Century’, says that today’s students must be prepared to learn and relearn new skills many times over during their working careers. Are they truly developing the skills they need to be able to do that?

Since children learn in different ways and at different paces, the only way to know what students have really learnt is through assessments. But all assessments are not created equal. Many assessments merely check for textbookish understanding or rote learning. Such information, though easier to measure, tends to consist of facts that students quickly forget after the test. Table 1 compares the characteristics of good and bad assessments.

Table 1. Comparison between good and bad assessments

Good Assessments | Bad Assessments
Tests understanding of key ideas | Tests recall of trivial details
Tests conceptual understanding and application | Tests superficial (textbookish) understanding
Uses questions set in new and unfamiliar contexts | Uses questions set in contexts directly from the textbooks
Tests subject-specific important skills | Does not test skills, tests only knowledge
Is comprehensive in covering all key ideas and skills | Focuses narrowly on a smaller set of ideas and skills
Provides detailed, useful, actionable feedback to students in its report | Provides only a summative score in its report
Questions help identify common errors and misconceptions | Questions only focus on the ‘correct answer’
Identifies key strengths and weaknesses of the student | Identifies only the overall level of the student

New challenges and opportunities in the digital world

One of the key shifts in the last few decades has been the ubiquity of ever-improving digital technologies for work and personal use. This brings both challenges and opportunities. One challenge is that a focus on facts and processes alone – which is what our school education systems tend to do – will be insufficient preparation for the future. The focus has shifted to understanding, application and creative thinking. And merely repackaging textbooks into video lectures with animations is not going to change how our children are learning.

The new digital world also brings big opportunities. One of them is the ability to learn new skills easily and cheaply. Another is the ability for one’s talents to be recognised and rewarded halfway across the globe as traditional barriers of distance and language disappear.  Clearly the advantage of a robust physique reduces when jobs change from manual / agricultural to clerical / managerial. In the same way, rote memorisation which served clerical and routine jobs well in the past will actually be a handicap in a world that will reward creativity and innovation. In other words, students need to have strong conceptual foundations and learning with understanding to be able to build the very talents the world of the future is likely to reward.  

The following graph shows how different types of tasks have changed since the 1960s.

Source: David H. Autor, Brendan Price. “The Changing Task Composition of the US Labor Market: An Update of Autor, Levy, and Murnane (2003)”. MIT Mimeograph, Massachusetts Institute of Technology (2013).

In other words, non-routine analytical and interpersonal tasks have steadily risen in importance while manual and routine cognitive tasks have declined. A 2018 study by ICRIER shows a similar trend in India too. This means that tasks which involve analysing and interpreting information and thinking creatively are becoming more important than routine or structured tasks. Digital technologies have reconfigured access to knowledge and thus made critical thinking, creativity and collaboration more important. Students need to shift their focus from mere acquisition of knowledge to nurturing these ’21st century’ skills. Given the nature of unstructured and creative task demands, attaining degrees and certificates has become less important than acquiring the ability to learn and re-learn.

Real learning is learning with understanding

While it is important to be able to recall important ideas and facts, and to be fluent in subject-specific procedures, these have to be complemented with other important aspects of learning so that students have a ‘real understanding’ of the content.

Fig 1: EI’s Tree of Learning

At EI, we use a framework for designing our learning and assessment offerings which places emphasis on real learning. While there are always multiple learning goals, focusing on aspects that fall under the category of ‘core learning’ is the most important. Core learning constitutes those big ideas in any discipline which form the critical basis for learning other important ideas and skills. These other important ideas and skills form what we call ‘supporting learning’. There are also some facts that are part of a learning unit but may not be very important, which we call ‘peripheral learning’.

 

This hierarchy of learning is represented by EI’s Tree of Learning. Imagine the loss of peripheral learning: that is like a tree losing a few leaves. Not only does the tree barely feel the loss, it also happens in the regular course of events. The loss of a branch, while not catastrophic, can be a significant one. These may be skills that are not core and yet are important and need effort to be learnt. At the very heart of learning is core learning. This is like the trunk of the tree: it is the tree’s lifeline, and as long as the trunk is intact, new branches can grow. Core learning constitutes the most important concepts, basic skills like reading and understanding, and key aspects of any discipline.

A good assessment would focus significantly on core learning while also covering some aspects of supporting learning. Peripheral learning – in today’s world – can be easily accessed or derived from other aspects of learning.

 Components and examples of real learning

 If students have a real understanding of the content, then along with demonstrating recall of important ideas and fluency with procedures, they should also be able to:

(a) demonstrate conceptual understanding: Here is an example of a question that checks only for recall-based learning.

What are the properties of a solid substance?

Compare this with the following question which tests for conceptual understanding.

Which of the following closed bottles DEFINITELY contains a solid substance?

(b) apply their understanding in unfamiliar contexts: Here is an example of a question that asks students to apply their understanding in a context which is different from what they might have seen in their textbooks

The length of this pencil is about:

A. 4 cm   B. 5 cm   C. 6 cm   D. 7 cm

79% of almost 10,000 class 4 students said the above pencil was 6 cm long (probably because textbooks only show objects placed starting at the ruler’s 0 cm mark).
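The misconception is easy to state as arithmetic. Here is a minimal sketch; the original pencil image is not reproduced above, so the 1 cm starting mark used below is an illustrative assumption consistent with the answer options, not a detail from the test:

```python
# Reading a length off a ruler: subtract the mark where the object starts
# from the mark where it ends. (Illustrative values; the pencil image from
# the original question is not reproduced here.)

def ruler_length(start_mark_cm, end_mark_cm):
    """Length of an object lying between two ruler marks."""
    return end_mark_cm - start_mark_cm

print(ruler_length(1, 6))  # 5 -- a pencil from the 1 cm to the 6 cm mark is 5 cm long
```

Students who report 6 cm are effectively computing `ruler_length(0, 6)`: they read off the end mark as if the object always started at zero.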

(c) demonstrate competence in academic process skills specific to different disciplines: Here is an example of a question that expects students to apply the skill of designing a scientific investigation.

Mira wants to check if the same amount of sugar can be dissolved in salt solution as in plain water. She prepares a solution X of 1 teaspoon of salt in 500 mL of water and uses this solution to carry out four sets of experiments A, B, C and D. She checks the amount of sugar remaining undissolved in each beaker after stirring for five minutes.

Which of the sets of experiments will give a correct conclusion?

(d) integrate their understanding from across different areas to make decisions in real-life contexts: One of the tests for real understanding is to see if one can identify the relevant knowledge and skills one has learnt and then correctly apply in real-life situations. For example, the following question tests the students’ comprehension skills in the context of an actual infographic from a newspaper.

What does the need for a tracking system tell you about Alzheimer’s disease?

People with Alzheimer’s disease tend to ______________.

Need for good assessment for deeper learning

Feedback is one of the key components of any learning exercise. A good assessment, through sound evaluation and feedback, helps students:

  • assess their current strengths and weaknesses
  • become aware of their current level of learning
  • understand the next useful steps they should take to improve their learning

Unfortunately, a discussion of educational assessment in India often gets reduced to attainment of marks in Board exams and therefore also to scores in individual class tests and term end exams in schools. High marks in school exams may be taken by many as an indicator of good learning. However, as discussed earlier, in a context where the learning is narrow and ‘textbookish’ and the tests only intend to measure the same, high scores in school tests can only be interpreted as competence in recall and mechanical understanding, and therefore such assessment has very little value.

There has been a longstanding belief and understanding among educators that the manner and design of assessment determines to a large extent what and how students learn in an educational setting. It is natural that the learning habits of students will be affected by what they see as the eventual tests of the whole teaching-learning activity. Therefore, changing the way learning is assessed can have a huge and long-term impact on how a student is learning.

What is ASSET SUPERTEST?

The drill and practice provided by doing textbook questions has to be supplemented by questions that check for understanding. ASSET SUPERTEST includes both kinds of questions. ASSET SUPERTEST is a scientifically designed online test with objective-type questions, covering various Indian and international curricula. The tests assess students’ level of proficiency in the core skills and key ideas underlying the school syllabi. The immediately generated test reports provide personalised feedback to students about their strengths and weaknesses.

These tests have all the key characteristics of good assessment practices discussed earlier. These tests

  • focus on real understanding
  • focus on key process skills
  • focus on common misconceptions
  • use insights from educational research
  • ensure high standards in the design of questions
  • are based on EI’s student performance data from its large-scale assessments

Your child can take these tests online from home. The child will take one ASSET SUPERTEST every week in one of three subjects – English, Maths, Science. The test will cover 1–3 topics from the grade your child is studying in.

EI will suggest a timeline for the tests so that the child can work through the three subjects in rotation and have a manageable portion to work on every week.

You will receive the test report after the testing window closes every week. The report will include the following:

  • total score obtained by the child
  • answers to all questions from the test
  • personalised feedback highlighting the student’s strengths and weaknesses

ASSET SUPERTEST at a glance

Grades : 3 to 10

Subjects : English, Maths, Science

Medium : English

Duration : 40 minutes per test

Frequency : Weekly (1 subject test every week)

Test type : Objective type test – only multiple choice questions

Number of questions : 20 – 25 per test

Total number of tests : 52 per year

Venue : Can be taken from home

Mode : Online

 To know more about ASSET SUPERTEST, stay connected through our social media handles and visit our website www.ei-india.com.

The post ASSET SUPERTEST – Regular Assessments for Deeper Learning (without the pressure of exams) appeared first on Ei Study.

Word Problems – an Effective Tool to connect Maths to Real Life – if designed well https://ei.study/word-problems-an-effective-tool-to-connect-maths-to-real-life-if-designed-well/ Tue, 12 Dec 2023 22:14:00 +0000


Here is an arithmetic word problem.

What percentage of Indian students would correctly answer option D? What percentage of students in classes 3, 5, and 7 would answer 130 years?

Refer to the graph to see how close your guess is.

Response data of English-medium private schools

Note: The data is from about 750 – 2250 students in each of the classes 3 to 7 collected using Mindspark, a computer-based adaptive learning tool.

Only about 50%–60% of students in each of classes 3–7 answered correctly. After class 5, the percentage of students selecting 130 years as the answer seems to decrease. One possible reason behind this drop could be that students start realising that an age of 130 years is unrealistic. However, 20% of students gave Raghu’s age as 130 years even in class 7. Also, the data surprisingly shows an increase in the number of students answering 25 years (option C) from class 3 to 7!

One may argue that students answer in haste. But based on our experience of conducting student interactions in three classes on a similar question, we saw that students did comprehend the question correctly and had a rationale for the incorrect option they picked.

In a 1979 study by French researchers on a similar question, more than three-fourths of class 1 and 2 students arrived at their answers by manipulating the numbers given in the word problem, for example 125 + 5 = 130. Two years ago, Nanchong Shunqing Primary School in south-west China posed such a problem as a free-response question to class 5 students to assess their critical thinking, and it went viral on social media. Such studies have shown that it is useful to ask such non-traditional problems in an open-ended manner. Though we don’t know what percentage of students responded in each way, the responses received included: “I don’t know.”, “I cannot solve this.”, and “We cannot be sure of the captain’s age. The number of the sheep and goats is irrelevant to the captain’s age.” The creative ones were: “The captain should be at least 18 years old because a minor is not allowed by law to operate a vessel.” and “The captain is 36 years old. He is quite narcissistic, so the number of animals corresponds to his age.” Some people criticised the question makers for asking such a question. This inspired us to check the extent of correct responses among Indian students.

We saw students mechanically doing operations on given numbers in the non-traditional word problem described earlier. What about the case of traditional word problems?

The problem with keywords: We conduct student interviews to understand why students answer the way they do. We go to a classroom in a school, pose a problem, invite student responses, and ask students to articulate the reasoning behind their answers. This way of uncovering students’ thinking also helps us understand the misconceptions students have and their extent. We conducted one such student interview in an English-medium private school in Goa.

Gaurav (name changed) is a class 4 student in the school whose mother tongue is English. He is comfortable in English. The problem given to him was as follows.

Ram had 187 marbles. Shyam had 245 marbles. How many more marbles did Shyam have than Ram?

Gaurav’s answer was 432. When asked to explain his answer, Gaurav didn’t have to think twice. Confidently he answered, “When ‘less’ appears in the question one needs to subtract, and when the word ‘more’ appears the numbers are added.” So, when asked a simple arithmetic word problem in English, Gaurav follows keyword-based rules instead of trying to understand the problem and choose an appropriate arithmetic operation.

Our data on a large number of students on different types of addition and subtraction word problems shows that there are many students like Gaurav in classes 1-5 who tend to identify the operation (addition, subtraction etc.) to perform based on keywords like ‘more’, ‘less’ or ‘few’ in a word problem.
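To see concretely why this strategy is fragile, here is a minimal sketch (our own illustration; the function name and rule are hypothetical) of the keyword rule Gaurav described, applied to his marble problem:

```python
# A minimal sketch of the keyword strategy many students follow:
# add when the problem says "more", subtract when it says "less"/"fewer".
def keyword_strategy(problem_text, a, b):
    text = problem_text.lower()
    if "more" in text:
        return a + b
    if "less" in text or "fewer" in text:
        return abs(a - b)
    return None  # no keyword found; the strategy is silent

# Gaurav's problem asks "how many MORE", so the rule adds: 187 + 245 = 432.
# Comprehending the comparison instead gives the correct 245 - 187 = 58.
question = ("Ram had 187 marbles. Shyam had 245 marbles. "
            "How many more marbles did Shyam have than Ram?")
print(keyword_strategy(question, 187, 245))  # 432 (keyword rule, wrong)
print(245 - 187)                             # 58 (correct)
```

The same rule happens to succeed on many ‘join, result unknown’ problems, which is exactly why classroom practice can end up reinforcing it.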

The ability to apply operations in real-life situations shows students’ conceptual understanding of these operations. Since a variety of contexts and situations are encountered in real life, it is important that learners demonstrate their understanding by solving the various types of word problems, covering a good range of real-life situations. In some types of word problems, students might get the answer right simply by mechanically choosing the operation based on keywords. But in other types, they will arrive at the incorrect answer if they apply the keyword strategy without comprehending what is given in the question and what needs to be found. Hence educators, curriculum designers, and teachers need to be aware of all the types of word problems and enrich the meanings of operations through a variety of real-life situations. Researchers have classified addition and subtraction word problems, based on their structures, into 4 major types: join, separate, combine, and compare. Examples of some of these types are given in the table below.

Different types of word problems

Word problem type | Sub-type | Example | Semantic equation
Join | Result unknown | Raju has 9 pencils. Geeta gives him 3 more. How many pencils does Raju have now? | 9 + 3 = __
Join | Change unknown | Amit has 9 pencils. He gets some more pencils from his sister. He has 12 pencils now. How many pencils does he get from his sister? | 9 + __ = 12
Join | Start unknown | Bholu has some pencils in a box. He puts 3 more pencils in the box. There are 12 pencils in the box now. How many pencils were there in the box in the beginning? | __ + 3 = 12
Separate | Change unknown | Ram has 12 pencils. He gives some pencils to his sister. He has 9 pencils now. How many pencils did he give to his sister? | 12 – __ = 9
Combine (part-part-whole) | Whole unknown | Geeta has 3 red pencils and 9 blue pencils. How many pencils does she have? | 3 + 9 = __
Combine (part-part-whole) | Part unknown | Asha has 12 pens. 9 of her pens are red and the rest are blue. How many blue pens does Asha have? | 9 + __ = 12
Compare | Difference unknown | Geeta has 12 pencils. Raju has 9 pencils. How many more pencils does Geeta have than Raju? | 12 – 9 = __ or 9 + __ = 12
Compare | Compared quantity unknown | Raju has 3 fewer pencils than Geeta. Geeta has 12 pencils. How many pencils does Raju have? | 12 – 3 = __
Compare | Referent unknown | Farah has 2 more pencils than Alex. Farah has 5 pencils. How many pencils does Alex have? | __ + 2 = 5

Note: The above table does not show all 14 types of word problems. There are 2 more ‘separate’ sub-types (analogous to the ‘join’ sub-types) and 3 more ‘compare’ sub-types, classified by their structures.
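Each row of the table reduces to the same part-part-whole relation with a different slot unknown. A small sketch (a hypothetical helper, for illustration only) makes that explicit:

```python
# Sketch: every addition/subtraction word problem above maps to
# part1 + part2 = whole, with exactly one of the three slots unknown.
def solve(part1, part2, whole):
    """Exactly one argument is None; return the value of that slot."""
    if whole is None:              # result / whole unknown
        return part1 + part2
    if part1 is None:              # start / referent unknown
        return whole - part2
    return whole - part1           # change / part / difference unknown

print(solve(9, 3, None))   # Join, result unknown: 9 + 3 = __  -> 12
print(solve(9, None, 12))  # Join, change unknown: 9 + __ = 12 -> 3
print(solve(None, 3, 12))  # Join, start unknown: __ + 3 = 12  -> 9
print(solve(None, 2, 5))   # Compare, referent unknown: __ + 2 = 5 -> 3
```

Notice that the referent-unknown case needs subtraction even though the problem contains the word ‘more’, which is precisely where the keyword strategy breaks down.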

Once real understanding of a concept is achieved, a learner is expected to be able to apply the concept in both familiar and unfamiliar situations. As per the National Curriculum Framework (NCF 2005), students are expected to learn word problems on addition and subtraction with numbers up to 1000 by class 3. Our assessment of this skill found that performance varied across the 14 word problem types in different classes. Among the compare type, there were sub-types like ‘compared quantity unknown’ and ‘referent unknown’ where, by the beginning of class 4, almost 50% of private school students were unable to solve such word problems correctly. Here’s a graph showing the performance of Indian private school students on all the compare type word problems.

As can be seen, there is considerable variation in performance across the different compare word problem types. The keyword strategy fails for referent unknown problems. For her classroom instruction to be more effective, a teacher needs to be aware of which types reinforce the keyword strategy, which don’t, and in which types students struggle.

Students should be proficient in solving word problems on the basic operations, as that is an important skill connecting maths with real life and characterises conceptual understanding of the operations. The student response data demonstrates that this is not the case. Students often solve word problems mechanically, performing operations on the given numbers. Often, they don’t check whether their answer makes sense or relate it to their real-life experience. This is apparent when they use the keyword strategy to solve word problems. The strategy often works for typical problems, but breaks down in certain cases which are also important in real-life situations. So, it is important that teachers and textbooks give students exposure to all the types of word problems. School mathematics should have connections to real life, and word problems are supposed to help with exactly that.

Our findings are in line with the findings of the studies documented in the book Making Sense of Word Problems[1]. Those studies indicate that, while solving arithmetic word problems in a school environment, a majority of students tend to apply one or more arithmetic operations to the given data without a realistic consideration of the context. For many students, school mathematics has no connection with their real-life experiences. When solving an arithmetic word problem, they simply apply the arithmetic operations algorithmically, with neither realistic consideration nor the use of common sense. As the book notes, the main reason for such behaviour is the stereotyped way in which word problems are typically presented in school: students are often instructed to follow rules for word problem solving.

As students progress to higher classes, the connection of school mathematics with their real-life experience seems to stagnate, as seen from the response data for the ‘age of Raghu’ question discussed earlier. So, when students find word problems difficult, should it be attributed to their language ability alone? Or does this point to the need for more emphasis on real-life connections in school mathematics, apart from developing problem-solving skills? Further, an analysis of textbook content shows that not all these types of word problems are covered, or covered adequately. This would be fine if students were able to apply their learning to unfamiliar types of word problems. Since this is not the case, it calls for curricula to create learning opportunities by promoting adequate exposure to all the types of word problems, especially the difficult ones (compare and start-unknown problems).

Word problems must involve realistic and relatable contexts to enable students to draw on their real-life experiences. Textbooks and classroom instruction must cover all the types of word problems and increase exposure to the more difficult types. It may be desirable to occasionally pose non-traditional and open-ended problems, like the ‘age of Raghu’ problem, without definite answers. Students should be encouraged to represent the information given in a word problem visually (bar models or tape diagrams really help) and in their own words for better sense-making. The responses to the sheep-and-goats question clearly demonstrate learners’ focus on calculating an answer rather than checking whether the question provides the inputs needed to arrive at a correct response. So, instruction should focus much more on the process of decoding a word problem and estimating the answer before arriving at it systematically. Whether it is a word problem or an abstract mathematical problem, students must also develop the habit of reflecting on the answer obtained and checking whether it is sensible. Classroom instruction needs to emphasise this.


[1] A range of findings has shown how students consistently answer word problems in ways that fail to take account of the reality of the situations described. This book (monograph) by Erik de Corte, Brian Greer and Lieven Verschaffel reports on studies carried out to investigate this “suspension of sense-making” in answering word problems.

The post Word Problems – an Effective Tool to connect Maths to Real Life – if designed well appeared first on Ei Study.

Personalized and Adaptive Learning with Mindspark https://ei.study/personalized-and-adaptive-learning-with-mindspark/ Tue, 12 Dec 2023 19:11:02 +0000


The road is filled with the chatter of children, and Ha—No, let’s call the student ‘X’, and I’ll reveal why later—X joins the others as they pass through the gates of their school. X goes straight to the grade 9 classroom after the morning assembly. The first class is Mathematics. X has been at this school for only a month, and is trying to catch up with the others. As she does every day, the teacher writes a problem on the board and turns to the class to call on a student to help her solve the problem. X avoids her eyes and looks down, trying to look simultaneously calm and inconspicuous. The teacher calls out another student’s name, and X lets out a sigh of relief. X tries to concentrate on what is happening on the blackboard, but after the third step of the calculations, fails to understand how the numbers are transforming. X wonders, “What did they learn in the previous classes? I can’t seem to understand this… Oh no, the first problem’s done and it’s time for the second one now. Don’t look at the teacher, stay cool!”

The teacher’s eyes fall on X’s bent head and she thinks to herself, “There’s X, trying to be invisible again. I know X’s family has moved to four different towns in the past three years, and X is behind on many topics. I’ll give some extra time to X, Y and Z this week. They all really need it. Oh wait, not this week, I need to make the arrangements for the scholarship exam. I’ll do it next week. I can’t ignore the other students right now – they need to prepare for their board exams next year after all. And our school needs to maintain its passing percentages.”

The bell rings and it’s time for the students to head to the computer lab. X sits at the computer and logs into Mindspark, a learning software program. The questions coming up on the screen are challenging and engaging, but not daunting, to X because they are actually from a grade 7 concept. X needs to catch up on various concepts, and Mindspark allows X to do this. But how does it identify what X needs?

Mindspark is a personalized and adaptive program that uses a multi-pronged approach to solve this problem.

Diagnosis:

When a student logs into Mindspark for the very first time in a subject, they are given a diagnostic test that determines the student’s overall level on the learning spectrum. This means that even if a student is in grade 7, they could be given grade 3 content, depending on the level determined by the diagnostic test.

In this way, each student starts their journey at the level best suited to them.
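As a rough illustration of how such a placement might behave (a sketch under our own assumptions; this is not Mindspark’s actual algorithm), the working level can drift down after incorrect answers and up after correct ones, bounded by grade 1 and the student’s enrolled grade:

```python
# Hypothetical placement sketch: adjust the student's working grade level
# based on a stream of correct/incorrect responses.
def place_student(enrolled_grade, responses):
    level = enrolled_grade
    for correct in responses:
        if correct:
            level = min(enrolled_grade, level + 1)
        else:
            level = max(1, level - 1)
    return level

# A grade 7 student who keeps answering grade-level items wrong
# drifts down toward grade 3 content.
print(place_student(7, [False, False, False, False]))  # -> 3
```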

Checking for conceptual learning:

Questions in Mindspark are specially designed to test understanding and to help students clear their misconceptions. When a student answers a specific question or combination of questions incorrectly, the system diagnoses the child’s misconceptions / weak areas. The child may be further provided with a simple or detailed explanation, or be redirected to questions that strengthen basic understanding.

Thus, the system does not allow a student to move up to higher levels without a strong understanding of the basic concepts. This helps students below the average class level to come up to their class level. Similarly, it provides challenging problems to high performing students allowing them to stay engaged, and enabling them to learn more. These decisions are driven by the adaptive logic which is designed to improve over time with increased student usage.

Pinpointing weak areas:

Mindspark examines patterns of errors to target “differentiated remedial instruction.” For instance, if a student makes a mistake on which decimal is bigger (3.27 or 3.3), it may be due to “whole number thinking” (27 is bigger than 3) whereas if they make the same mistake with 3.27 or 3.18, it could be due to “reverse order thinking” (comparing 81 to 72 because the “hundredth place” should be bigger than the “tenth place”). A good teacher may catch this if most of the class is making the mistake, but the likelihood is low if only a few students are making the error.
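The decimal example can be turned into a tiny diagnostic sketch (the item pairs and labels are our own illustration): because the two misconceptions produce different wrong answers on the pair (3.27, 3.18), the answer pattern separates them.

```python
# Hypothetical diagnostic sketch based on the two error patterns described:
# "whole number thinking" treats the digits after the point as a whole
# number, so it errs on (3.27, 3.3) but not on (3.27, 3.18); "reverse order
# thinking" errs on (3.27, 3.18).
def diagnose(picked_bigger):
    """picked_bigger: the student's choices on two comparison items."""
    wrong_on_3_3 = picked_bigger[("3.27", "3.3")] == "3.27"    # 3.3 is bigger
    wrong_on_3_18 = picked_bigger[("3.27", "3.18")] == "3.18"  # 3.27 is bigger
    if wrong_on_3_18:
        return "reverse order thinking"
    if wrong_on_3_3:
        return "whole number thinking"
    return "no misconception detected"

print(diagnose({("3.27", "3.3"): "3.27", ("3.27", "3.18"): "3.27"}))
# -> whole number thinking
print(diagnose({("3.27", "3.3"): "3.27", ("3.27", "3.18"): "3.18"}))
# -> reverse order thinking
```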

Thus, to tackle the need for specific and personalized learning paths for each student, the adaptive logic in Mindspark uses a student’s performance data as well as the conceptual framework for different topics to deliver relevant content.

An example of a personalized learning path for a student is given below:

Based on research at the University of Melbourne by Kaye Stacey, et al.

Remediating learning gaps:

In Mindspark, when a student gets 25% or more questions incorrect in a learning unit, the program allows the student to repeat the learning unit. If the student got the questions incorrect due to a lack of understanding of a fundamental concept, the adaptive logic takes the student to the immediately previous learning unit. If the student got the questions incorrect due to a misconception, a remedial module is given to the student. Remedial modules are designed to resolve the student’s misconception based on cognitive dissonance theory. In the cognitive dissonance method, the system first contradicts the student’s prior understanding (the misconception) and then gives the correct explanation to resolve it. 1
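A simplified reading of this flow (our own sketch with hypothetical names; the production logic is certainly richer) could look like:

```python
# Hypothetical sketch of the remediation flow described above.
def next_step(wrong_fraction, cause=None):
    """cause: 'fundamental_gap', 'misconception', or None."""
    if wrong_fraction < 0.25:
        return "advance to next learning unit"
    if cause == "fundamental_gap":
        return "repeat the immediately previous learning unit"
    if cause == "misconception":
        return "assign remedial module (cognitive dissonance)"
    return "repeat the current learning unit"

print(next_step(0.10))                       # below threshold: advance
print(next_step(0.40, "fundamental_gap"))    # go back one unit
print(next_step(0.40, "misconception"))      # remedial module
```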

Does this approach work?

The Abdul Latif Jameel Poverty Action Lab (J-PAL), a research centre at the Massachusetts Institute of Technology, conducted a pilot randomized evaluation of the Mindspark program, led by Professor Karthik Muralidharan (co-chair of J-PAL’s education sector). The program was deployed through stand-alone centres in Delhi, and a subsequent evaluation was conducted in government schools of Rajasthan.

Some of the key findings were:

  • The program was equally effective for students at all levels of the achievement distribution. Treatment effects in the impact evaluation did not vary significantly by level of initial achievement, gender or wealth. Thus, the intervention was equally effective in teaching all students.
  • However, the relative impact of the program was much greater for low-achieving students, who were making no progress in school. While the absolute impact of Mindspark was similar at all parts of the initial test score distribution, the relative impact was much greater for weaker students because the “business as usual” rate of progress in the control group was close to zero for students in the lower third of the initial test score distribution.
  • The program had large effects on student achievement in Math and Hindi. The usage of the program led to a doubling of student test scores in math and Hindi over a 4-5-month period. There was a linear correlation between usage and gains.

But why was the student named X?


Student X’s anonymity at the start of this article was to highlight the fact that a name is often linked to a variety of characteristics that lead to potential discrimination within the classroom – characteristics like gender, religion, caste, etc. Such experiences compound the problems faced by students on their learning journey.

Learning software like Mindspark does not consider such characteristics, thereby empowering students to focus on their journey. The only aspects Mindspark focuses on are whether the student is learning and what the student is struggling with. This becomes especially important when a student is not able to receive the required support from the home environment to overcome possible challenges at school.

Mindspark is built on a simple principle and vision – helping each student learn with understanding – and its adaptive logic creates personalized paths to help each student realize their potential. In doing so, it also helps the teacher deal with heterogeneity in the learning levels of students within a single classroom.

References:

1 Rajendran, Ramkumar & Muralidharan, Aarthi. (2013). Impact of Mindspark’s Adaptive Logic on Student Learning. Proceedings – 2013 IEEE 5th International Conference on Technology for Education, T4E 2013. 119-122. 10.1109/T4E.2013.36.

Muralidharan, Karthik, Abhijeet Singh, and Alejandro J. Ganimian. 2019. “Disrupting Education? Experimental Evidence on Technology-Aided Instruction in India.” American Economic Review, 109 (4): 1426-60. https://www.aeaweb.org/articles?id=10.1257/aer.20171112

Vincy Davis, Anupriya Singh et al. (2019) Report on Time Allocation and Work Perceptions of Teachers. Accountability Initiative, Centre for Policy Research, New Delhi. https://accountabilityindia.in/wp-content/uploads/2019/06/REPORT-ON-DELHI%E2%80%99S-GOVERNMENT-SCHOOL-TEACHERS-.pdf

The post Personalized and Adaptive Learning with Mindspark appeared first on Ei Study.

Development Impact Bonds: Target Setting and Outcome Evaluation https://ei.study/development-impact-bonds-target-setting-and-outcome-evaluation/ Tue, 12 Dec 2023 18:10:00 +0000

The post Development Impact Bonds: Target Setting and Outcome Evaluation appeared first on Ei Study.

What are Development Impact Bonds?

A Development Impact Bond is a new way of financing in the social development sector. Primarily, there are five parties involved: the service provider (who provides a product or service that will bring a positive impact to the recipients), the risk investor (who puts the money upfront, betting on the service provider’s credentials and ability to deliver the outcomes), the outcome funder (who pays the money with interest to the risk investor, but only if the service provider achieves a pre-decided quantum of impact), the outcome evaluator (who tells us whether the service provider met the target), and the process manager (who helps run the show). In case the outcome funder is a government, it is called a social impact bond (SIB) and behaves similarly. For the purpose of this article, a reference to a DIB can be to either a DIB or an SIB.

It’s called an impact bond because it pays for better social outcomes that create impact, and does not pay otherwise. The foundation of successful evaluation of performance in a DIB lies on two pillars: setting the right targets, and a rigorous process to evaluate the outcomes. This article provides a brief on the role of the outcome evaluator in a DIB, in particular on setting the threshold targets that trigger the payment from the outcome funder to the risk investor.
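In its simplest form, the payment trigger can be sketched as a threshold rule (a hypothetical, simplified model; real DIB contracts often use graded payouts rather than all-or-nothing):

```python
# Hypothetical sketch of a binary-threshold DIB payout: the outcome funder
# repays the risk investor's principal plus interest only if the evaluated
# outcome meets the threshold target set for the programme.
def dib_payout(measured_outcome, target, principal, interest_rate):
    if measured_outcome >= target:
        return principal * (1 + interest_rate)
    return 0.0

# Target met: principal plus interest is repaid; target missed: nothing.
print(dib_payout(0.62, 0.60, 1_000_000, 0.08))
print(dib_payout(0.55, 0.60, 1_000_000, 0.08))
```

The outcome evaluator’s measurement is what feeds the first argument, which is why the rigour of the evaluation process matters as much as the target itself.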

Credit: India Development Review

What is an outcome that can be evaluated?

Choosing the right metrics for evaluation is a key first step in determining targets, because evaluation on these metrics will decide whether the programme has had a sufficient positive outcome. If the theory of change involves an increase in learning outcomes in reading and mathematics, then those are what need to be measured (as opposed to other outcomes, such as the confidence of the child). In past and ongoing DIBs, the outcome metrics have included an increase in student enrolment in schools, a reduction in the health risk of diabetes, reduced carbon emissions, an increase in crop yield, and even reduced recidivism (the tendency of a convicted criminal to reoffend) among prison inmates. Naturally, the outcome must be beneficial for the consumer and denote progress towards a better society.

For example, for a DIB focused on the quality of education, the improvement of students' scores in math and language (i.e. student learning outcomes) can be a metric for evaluation. This can then be broken down further into several areas, such as solving a number puzzle involving multi-step number operations, or identifying the meaning of a word in a given sentence context. These granular learning objectives form the backdrop of the outcome evaluation, which is done by a third party.

This article further cites examples from, and references to, an ongoing DIB in Haryana, India, to better explain the process of target setting, evaluation, and pay-outs.

The Haryana Early Literacy intervention is the first Development Impact Bond (DIB) project in India to leverage CSR (Corporate Social Responsibility) funding for outcome payments focused exclusively on early literacy. It involves the Haryana School Shiksha Pariyojna Parishad (HSSPP) and the Language & Learning Foundation (LLF), in partnership with IndusInd Bank and SBI Capital Markets. This DIB will scale up the Language and Learning Foundation's existing programme in the state of Haryana. Educational Initiatives is playing the role of outcome evaluator in the DIB. Henceforth, this project will be cited as an example and referred to as the "HEL-DIB".

 

How do we set the targets to evaluate these said outcomes?

Setting targets is a multi-pronged exercise. Below are insights from doing it for an early-grade literacy programme for government school students in the State of Haryana (the HEL-DIB).

  1. Talk to experts

Typically, a range of experts in the field are consulted – these include subject/technical experts, assessment experts, statisticians, and ideally beneficiaries of the programme. For the HEL-DIB, the right numbers for the target were decided in consultation with a pedagogy expert (to know what children are expected to know at this age), an assessment expert (to know what kind of assessment will be conducted), and a child psychologist (to know the major developmental milestones at different intervals), and by interacting with some children.

  2. Refer to past literature

For the HEL-DIB, past literature on similar experiments was reviewed, and the pre-post-test gains (improvements in children's learning, understood through various metrics) documented in published research papers were studied. Previous studies[1] done in India that assess students in grades 1–3 (identified as the foundational stage linked to early literacy gains) were read, and similarities and limitations were drawn from their study designs. These studies provide a current benchmark of where students are and what achievement levels we can hold them to, and they helped define the range of the targets.

  3. Set boundary conditions

A maximum limit for the target can be decided based on technical or pedagogical considerations (drawn from field trials and published research). For example, if we know that an average person speaks 125 words per minute with a standard deviation of 15, it might not be prudent to set a target of more than 200 words per minute for a typical intervention. This is also linked to the pay-out at the end of the project (see next section). In this particular case, the number of words in Oral Reading Fluency (ORF)[2] was benchmarked.

  4. Link the target to the duration of the project

Targets are also set considering the duration of the project. In this case, a two-year target was set for the evaluation period. The target could, however, be lowered if the intervention ran for only one year.
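The words-per-minute boundary condition described above can be sketched in code. This is an illustrative sketch only: the three-standard-deviation cap and the function name are assumptions made for the example, not rules from the HEL-DIB.

```python
# Hypothetical sketch of a boundary-conditions check, using the
# words-per-minute figures quoted above (mean 125, standard deviation 15).
# The three-standard-deviation cap is an illustrative assumption.

def target_within_bounds(target, mean, sd, max_sds=3):
    """Return True if the proposed target is within max_sds standard
    deviations of the population mean."""
    return target <= mean + max_sds * sd

# A 200 wpm target sits 5 SDs above the mean -- implausible for a typical
# intervention -- while 160 wpm (about 2.3 SDs above) stays within bounds.
print(target_within_bounds(200, mean=125, sd=15))  # False
print(target_within_bounds(160, mean=125, sd=15))  # True
```

In practice the cap would come from field trials and published ORF benchmarks rather than a fixed multiple of the standard deviation.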

How do we know if we have achieved the set targets?

Identifying the study design is a crucial step in knowing how to evaluate whether the set targets are met. A decision must be taken on whether the gains will be measured for the intervention group alone or net of a control or comparison group. There are four options, in decreasing order of accuracy. The first two use a control/comparison group, and the last two measure the difference in the performance of children at two different points in time.

  1. Randomized Control Trial[3]: Students are randomly assigned to a treatment group or a control group. The randomness in assignment reduces selection bias and allocation bias, balancing both known and unknown prognostic factors across the groups. This establishes an almost equal baseline between the two groups, and gains in the intervention group are netted out against gains in the control group, so that the net gains can be attributed solely to the intervention.
  2. Comparative Study: selecting a group for comparison after a group has already been identified to receive the treatment. Since this is not randomized, the gains may not be as accurately attributed to the intervention as in an RCT.
  3. Comparing the 60th percentile of the group's learning level to the 50th percentile: while the actual numbers could vary, this shows by how much the score distribution curve needs to shift to the right. This was the design chosen for the HEL-DIB.
  4. Independent Baseline and End-line Measurement: measuring outcomes in students at the start and end of the programme to determine the gain in learning. This may not provide accurate results, because cause-and-effect factors vary and externalities are difficult to rule out; there is also no way to say whether the gain is what one would expect without any intervention or whether it is because of the intervention. Hence the gains are compared to a meta-study of "business as usual" gains from similar programmes.
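The percentile-comparison design (option 3 above, used in the HEL-DIB) can be sketched as follows: the target is framed as shifting the score distribution right until the baseline 60th percentile becomes the endline median. The scores below are invented example data, and the `percentile` helper is hypothetical; neither comes from the HEL-DIB.

```python
import statistics

# Illustrative sketch of study-design option 3: has the distribution shifted
# right enough that the endline median reaches the baseline 60th percentile?
# Scores are made-up example data, not HEL-DIB figures.

baseline = [10, 12, 15, 18, 20, 22, 25, 28, 30, 35]
endline = [14, 16, 19, 22, 25, 27, 30, 33, 35, 40]

def percentile(scores, p):
    """Return the p-th percentile (0 < p < 100) using inclusive quantiles."""
    return statistics.quantiles(scores, n=100, method="inclusive")[p - 1]

target = percentile(baseline, 60)   # where the endline median must reach
achieved = percentile(endline, 50)  # the endline median

print("target (baseline P60):", target)
print("endline median:", achieved)
print("target met:", achieved >= target)
```

The same comparison scales to the full score distributions collected at baseline and end-line by the outcome evaluator.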

Okay, but what if the outcome does not meet the targets exactly? Will the pay-out option be just a binary – yes or no?

The nature of a DIB is such that unless all parties agree, it will not come to fruition, and naturally everyone pushes for what is favourable to them. Lower targets tend to favour the service provider and the risk investor, while higher targets tend to favour the outcome funder, who would gain maximum value from their grant 'investment'.

Setting targets is a consultative process that goes through multiple iterations. In multi-year DIBs, the targets for each year may also differ: given the nature of the programme and its effects on the beneficiaries, higher gains may materialise only in the third year of implementation.

Pay-outs need not be binary – the targets may also be a range, with the pay-out a function of the actual gain. In newer DIBs, where the 'rate card' is not yet established, all parties could agree to cap the pay-outs on both sides – for example, 80% of the payment is made regardless of whether there is any gain, and a bonus of up to an extra 20% is paid when gains exceed targets.
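The floor-plus-bonus structure described above can be sketched as a simple pay-out function. The linear ramp between the target and a hypothetical 'stretch' gain, and the example numbers, are assumptions for illustration, not terms of any actual DIB contract.

```python
# Hypothetical pay-out schedule sketching the floor-plus-bonus structure
# described above: 80% of the payment is made regardless of the gain, and up
# to an extra 20% accrues as the gain moves from the target to a 'stretch'
# level. The linear ramp and the numbers are illustrative assumptions.

def payout_fraction(gain, target, stretch):
    """Fraction of the maximum payment owed for a measured learning gain."""
    base = 0.80                      # floor: paid even if the target is missed
    if gain <= target:
        return base
    # linear bonus, capped once the gain reaches the stretch level
    bonus = 0.20 * min(1.0, (gain - target) / (stretch - target))
    return base + bonus

# With a target gain of 0.25 and a stretch gain of 0.50:
print(payout_fraction(0.10, target=0.25, stretch=0.50))   # floor only
print(payout_fraction(0.375, target=0.25, stretch=0.50))  # halfway to stretch
print(payout_fraction(0.60, target=0.25, stretch=0.50))   # fully capped
```

A real contract might instead use step functions or per-sub-task rate cards; the point is only that the pay-out can be any agreed function of the measured gain.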

In the Early Literacy DIB, students' ability to do a list of 13 tasks (testing nine skills across reading, writing, speaking, and listening) was evaluated. However, targets linked to pay-outs were set for only 6 of the 13 sub-tasks.

About the organisation's involvement in DIBs: While EI is the outcome evaluator in the HEL-DIB, EI is the service provider in the UBS Foundation's QEI-DIB in India. In partnership with Pratham Infotech Foundation, EI has implemented Mindspark in 55 schools serving 11,000 students in Lucknow, Uttar Pradesh, to improve learning outcomes in language and math.

Additional Resources to understand more about DIBs:

  1. A short video of thoughts on RCTs, Assessments, and Impact Evaluations by Dr. Lant Pritchett, Dr. Rukmini Banerji, Prachi Windlass, and Dr. Karthik Muralidharan
  2. Understanding Development Impact Bonds and the need for evidence-based decision making by Prachi Windlass (8:03 to 21:45)
  3. A short article by IDR on basics of DIBs
  4. About the current ongoing Quality Education India DIB

[1] Includes Educational Initiatives’ research work on Foundational Literacy & Numeracy, Room to Read’s Scaling Up Early Reading Intervention Project, and RTI’s reports in Mali and South Africa

[2] Oral Reading Fluency is defined as the ability to read with speed, accuracy, and proper expression. In order to understand what they read, children must be able to read fluently whether they are reading aloud or silently.

[3] From Wikipedia: A randomized control trial is a type of scientific experiment that aims to reduce certain sources of bias when testing the effectiveness of new treatments; this is accomplished by randomly allocating subjects to two or more groups, treating them differently, and then comparing them with respect to a measured response. One group—the experimental group—receives the intervention being assessed, while the other—usually called the control group—receives an alternative treatment, such as a placebo or no intervention. The groups are monitored under conditions of the trial design to determine the effectiveness of the experimental intervention, and efficacy is assessed in comparison to the control.

The post Development Impact Bonds: Target Setting and Outcome Evaluation appeared first on Ei Study.

Student Learning Assessments- A key to reforming education? https://ei.study/student-learning-assessments-a-key-to-reforming-education/ Tue, 12 Dec 2023 17:09:00 +0000

“It is important to identify the ailment correctly before prescribing, let alone administering, a medicine.”

The recently released National Education Policy document by the Indian government advocates assessment-led reform, and the word 'assessment' finds itself mentioned more than 40 times in the 40-plus-page document.[1] Governments, international organizations, and other stakeholders increasingly recognize the importance of assessment for monitoring and improving student learning and achievement levels. However, many consider assessment a bad word, and its utility for education reform remains debated. Depending on the nature of different stakeholders' participation in the assessment process, it can evoke reactions ranging from necessary evil, anxiety, pressure, competition, success, failure, judgment, feedback, fairness, and standards to accountability, bureaucracy, and drudgery, to mention a few.

This blog article is based on my experience and work with Educational Initiatives on large-scale assessments; it aims to provide counter-arguments to common criticisms[2] of assessments.

  1. Assessment causes anxiety for children, parents and teachers.

There exist many forms of assessment serving a wide variety of functions, including grading, selection, diagnosis, mastery, guidance, and prediction. Not all assessments are high-stakes, stressful, or meant for selection and evaluation. Many of the assessments advocated for reform are assessments for learning and are low-stakes for students (with no direct implications for test-takers). These assessments are not meant to 'stress' the child but to meaningfully 'care' for the child's learning by paying attention to it.

For instance, low-stakes diagnostic assessments are an example of assessment for learning. These support teachers and administrators to measure student ‘understanding’ of concepts to be followed up with targeted instruction (and additional resources where necessary) to bridge learning gaps at an early stage.

  2. Assessments narrow the focus of education.

Emphasis on measurable learning does not mean ignoring other outcomes of education, such as physical, civic, social-emotional, or artistic development. Focusing on learning—and on the educational quality that drives it—is more likely to lead to those other desirable outcomes. Conditions that allow children to spend two or three years in school without learning to read a single word, or to reach the end of primary school without learning two-digit subtraction, are not conducive to achieving the higher goals of education.

A study in Andhra Pradesh that rewarded teachers for gains in measured learning in math and language found improved outcomes not just in those subjects but also in science and social studies—even though there were no rewards for improvement in the latter two subjects.[3]

  3. We are over-testing our students with an increase in assessments.

Contrary to popular notion, the evidence points to too little reliable measurement of learning in our schools, not too much. The lack of the right measurement means that we are often flying blind—without even agreement on the destination.

While it is true that excessive testing can narrow the intellectual development of high-achieving students, the opposite is true at low levels of learning. There is also evidence to suggest that testing helps with processing learned materials, and even with the learning of untested materials.

Research[4] in cognitive science and psychology shows that testing, done right, can be an effective way to learn. Taking tests can produce better recall of facts and a deeper understanding than an education devoid of exams. High-quality tests developed to assess how well students have met curricular expectations and goals often drive in-depth learning.

  4. Assessments do not test real knowledge, understanding, or application of what students have learnt.

At the heart of a good assessment are "good questions" – questions that test for understanding and application rather than mere recall of facts or procedures mentioned in the textbook. Assessments with good questions can be powerful tools to inform pedagogical practices and policy. Educational assessment can look very simple on the surface but is a highly technical field with many dimensions. Building technical rigour and institutional capacity in designing assessments is the solution, not discarding assessments.

Figure 1: The question should test understanding of the concept taught and not focus on facts. (On the left is a straightforward question; on the right is a question testing for understanding.)

  5. Many assessments like PISA are not useful as they are out-of-context/unfamiliar.

It is believed that Indian students are learning well, but that because tests like PISA posed questions of a type they were not familiar with, they could not perform well in the assessment.[5] This is not true. While unfamiliarity with the questions may have played a small role, the real reason for the poor performance is the extremely low learning levels in our larger school system. Several studies, including many conducted by EI[6], have shown similar results.

As opposed to the regular assessments done in India, PISA doesn’t test a student’s memory or curriculum-based knowledge but aims to judge the student’s competency in reading, mathematics and science. A well-framed question in geometry is not merely a test of knowledge of geometrical concepts, but also tests a student’s ability to analyze, logically deduce and draw conclusions.

Figure 2: Mathematical Literacy, OECD, PISA 2003

Figure 3: Question from ASSET, Educational Initiatives

  6. "Just weighing the pig doesn't make it fatter."

Assessing children is only a necessary condition but not a sufficient one. There is no guarantee that measuring learning outcomes will by itself lead to an improvement. However, it is almost certain that not measuring outcomes will encourage the system to continue on its current course with poor transformation of inputs into outcomes. Evidence[7] points to the fact that organizations (especially bureaucracies) are more likely to deliver on outcomes that get measured.

The author acknowledges that while assessment is not the ultimate answer to all the issues in education, it may truly hold the key to the answers. In our striving to provide quality education for all, assessments help establish two things: 1. where we stand, and 2. where we need to go. Knowing these can provide focus and stimulate action, to achieve our children's right to learn and the education system they truly deserve.

[1] https://www.mhrd.gov.in/sites/upload_files/mhrd/files/NEP_Final_English_0.pdf

[2] Unpacked: The Black Box of Indian School Education, Karthik Dinne; Priorities for Primary Education Policy in India's 12th Five-year Plan, Karthik Muralidharan

[3] Muralidharan, K., & Sundararaman, V. (2011). Teacher Performance Pay: Experimental Evidence from India. Journal of Political Economy.

[4] "A New Vision for Testing", Scientific American 313(2), 54–61 (August 2015). doi:10.1038/scientificamerican0815-54

[5] http://archive.indianexpress.com/news/poor-pisa-score-govt-blames–disconnect–with-india/996890

[6] EI conducts large scale assessments (including ASSET for private schools and state and national learning studies for government schools) which are diagnostic (providing insights on student learning gaps) and also benchmark student learning across schools, states and countries. Over the past two decades, EI has undertaken over 80+ projects with 50+ partners (16+ languages, 40+ detailed studies published) across geographies, socio-linguistic backgrounds in India and abroad, for more than 10 million students across different grades.

[7] Wilson, James Q. 1989. Bureaucracy. New York: Basic Books.

The post Student Learning Assessments- A key to reforming education? appeared first on Ei Study.

Board Exams, Rote Learning and the Learning Crisis – Root cause and solution https://ei.study/board-exams-rote-learning-and-the-learning-crisis-root-cause-and-solution/ Tue, 12 Dec 2023 14:06:01 +0000

The post Board Exams, Rote Learning and the Learning Crisis – Root cause and solution appeared first on Ei Study.
In continuation of the 7th September 2020 blog, Board Exams, Rote Learning and the Learning Crisis – Understanding the Problem In-depth, this article throws light on the causes of, and solutions to, the problems discussed in the previous article.

The article appeared in the June 2018 edition of India Seminar.

http://www.india-seminar.com/2018/706/706_sridhar_rajagopalan.htm

There are serious problems at every stage of the Board Exam, and all of them contribute to the learning crisis:

A. The very validity of questions in the Board Exams:

The NCF 2005 included position papers on many different topics. Here is an excerpt from the 2004-05 position paper on examination reforms[1]:

The core of the exam system is the exam paper. This may seem almost a tautological assertion but, given the lack of attention paid by most boards to the quality of the actual exam paper, it is necessary to make it. The question papers remain seriously problematic.

Question paper sets from the most recent (March 2004) 10th and 12th grade exams were collected for detailed study. Attention was focused on paper sets from five boards popularly perceived to be the best in the country. The exercise was an eye-opener:

  • What is the weight of the pituitary gland? (non-essential, the gland should be studied for its crucial function, perhaps even for its structure, but hardly its weight!)
  • Who were the parents of Benito Mussolini? (irrelevant)
  • How many members are there in the U.N.O.? (transient)
  • Who was called Modern Messiah? (The term was employed by the textbook writer to describe Karl Marx but has no wide currency outside the textbook.)
  • Describe the method of irrigation prevalent in India (takes as a given fact that there is only one such method in India, perhaps because the textbook has mentioned only one!)
  • Our highest import is from (Hong Kong, Italy, Kuwait) (no correct answer provided—at least for any year in the last quarter century.)

Unfortunately, those criticisms of 2005 are equally valid today.

The problem of poor and invalid questions does not plague the board exams alone. As part of an NCTE committee examining the reasons for only 1% of candidates passing the Central Teacher Eligibility Test (CTET), I had the opportunity to analyse questions from the November 2012 CTET test along with the performance data. Note that CTET is set by the CBSE (and is considered a much better test than the state TETs). Here are two questions from the paper:

The basis of the data for the first question (apart from how comparative warmth is measured) is unclear. In the second question, the correct answer was marked as (1), and experts commented that most of the other options seemed better. Let me add that I have not cherry-picked questions; a majority of the questions had issues with validity, ambiguity, or level of difficulty.

Somehow, this issue of the quality and appropriateness of questions in exams has never received much attention. In the developed world it is not seen as a big problem, and researchers there probably assume that questions will be fine. However, a recent working paper by Dr. Newman Burdett compared Board Exam papers from India, Pakistan, Uganda, Nigeria and the Canadian province of Alberta, and found that the extent of rote questioning in India and Pakistan was even higher than in the African countries: “In India and Pakistan, higher-order skills were almost entirely lacking and the focus was very much on recall of very specific rote-learnt knowledge… In the two African countries, this rote-learning approach seemed less extreme” [2]

B. The process to set and check the papers and keep them error-free: The process of making papers has remained a largely manual process where question makers submit papers in sealed covers and a paper is chosen at random for the actual test. These days, it is possible to use software to have questions solved and checked independently so that errors and even ambiguities can be all but eliminated.
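To illustrate the idea (this is a hypothetical sketch, not any board's actual process; the question IDs and answer keys are made up), even a very simple cross-check between two independently produced answer keys would surface errors and ambiguities before a paper is finalised:

```python
# Hypothetical sketch: compare answer keys produced by two independent
# solvers. Disagreements suggest an error; a solver marking more than
# one option as defensible suggests an ambiguous question.

def flag_problem_questions(key_a, key_b):
    """Return question IDs with a short reason why each needs review.

    Each key maps a question ID to the set of options the solver
    considers correct."""
    flagged = {}
    for qid in sorted(set(key_a) | set(key_b)):
        a, b = key_a.get(qid), key_b.get(qid)
        if a is None or b is None:
            flagged[qid] = "missing from one key"
        elif len(a) > 1 or len(b) > 1:
            flagged[qid] = "multiple defensible answers (ambiguous)"
        elif a != b:
            flagged[qid] = f"solvers disagree ({a} vs {b})"
    return flagged

key_a = {"Q1": {"B"}, "Q2": {"C"}, "Q3": {"A", "D"}}
key_b = {"Q1": {"B"}, "Q2": {"D"}, "Q3": {"A"}}
print(flag_problem_questions(key_a, key_b))
# Q2 is flagged as a solver disagreement, Q3 as ambiguous
```

Running every question through such a gate, with disagreements resolved by a third expert, is exactly the kind of mechanical quality control that software makes cheap.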

C. The way Board Exams are scored. The graphs below, from the work of Prashant Bhattacharji[3], show how the marks of ISC (left) and CBSE (right) students in the Class 12 Maths exam in 2013 were distributed. This is data for every student who took the tests.

There are two points to note – each of which represents a serious problem if not a scam.

  1. These are not normal curves, which is very odd and needs explanation. Notably, the spikes at marks like 36, 43 and around 96 in CBSE, and at 88, 95 and other scores in ISC, are inexplicable and unfair.
  2. Both graphs show gaps, i.e. marks that no student scored. This is not possible under normal conditions and suggests the award of grace marks, which is also unfair to other students.
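Both anomalies are trivially detectable by machine. The sketch below (illustrative only; the mark data is invented, not the actual 2013 results) counts how many students obtained each mark, then flags gap marks that nobody scored and spike marks whose frequency dwarfs that of their neighbours:

```python
from collections import Counter

def find_anomalies(marks, max_mark=100, spike_factor=3):
    """Flag gap marks (no student scored them, within the observed
    range) and spike marks (frequency more than spike_factor times
    the average of the two neighbouring marks)."""
    counts = Counter(marks)  # missing keys count as zero
    gaps = [m for m in range(min(marks), max(marks) + 1) if counts[m] == 0]
    spikes = []
    for m in range(1, max_mark):
        neighbour_avg = (counts[m - 1] + counts[m + 1]) / 2
        if counts[m] > spike_factor * max(neighbour_avg, 1):
            spikes.append(m)
    return gaps, spikes

# Toy data: nobody scores 33, 34 or 39, and 35 is hugely
# over-represented, consistent with grace marks lifting
# students to a pass mark.
marks = [30, 31, 32] + [35] * 40 + [36, 37, 38, 40, 41]
gaps, spikes = find_anomalies(marks)
print(gaps)    # [33, 34, 39]
print(spikes)  # [35]
```

That a blogger could uncover these patterns from published results, while the boards apparently run no such check themselves, is telling.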

Why is this happening? It is routinely said, ‘Never attribute to malice that which is adequately explained by stupidity’. We see this more as a lack of capacity in the boards than a desire to manipulate. Yet it is extremely unfair for a high-stakes school-leaving exam. It must be noted that nothing much changed after these revelations in 2014; the author detected the same irregularities the next year.

There is also the issue of comparability of board marks. One can understand that marks of two different Boards are not comparable. But for many years, even marks across CBSE zones were not comparable (students score higher in the Chennai region of CBSE than in the Delhi region). This is apparently done to allow students to compete with the local state boards, but it is unfair to students who may seek admission in a different region.

Earlier, we have discussed in some depth why we feel there is a learning crisis, its original causes and why it persists in spite of efforts and resources. We have also discussed the key role of rote learning and many board exam related problems. In this section, we shall discuss some solutions.

The first principle that we should remember is that deep problems usually do not have quick fixes. All the solutions proposed will take at least three to five years to have an impact. These solutions focus on fundamentally addressing key issues like Rote Learning and low capacity in the system.

Focus on foundational learning: For any student, learning is like constructing a building – there is a foundation, and then there are stages, with each serving as the base for the next. The foundation for all learning is that by the age of 8 or 9, children must be able to read fluently, perform basic arithmetic operations and develop basic critical thinking skills. The age deadline (corresponding to class 3 or 4) is critical. Either by law or through a campaign (like Swachh Bharat), this should be set as the common goal to be achieved by states and all types of schools in the next 5 years (with intermediate goals). A number of enablers in the form of resources for teachers, books and apps for students, and information for parents and society are needed to make this happen. This will include research and assessments. But this will be the first (and most important) step in the battle to defuse the learning crisis.

Build systemic and institutional capacity: Capacity building is often used to refer to the training of teachers and educational personnel. However, this ‘people capacity building’ has to be preceded by ‘institutional capacity building’. This essentially means that key educational institutions – research institutions like NCERT, teacher training institutions like colleges of education and DIETs, assessment institutions like the boards, and others like the NCTE – must be strong and should continuously strengthen their expertise in their core functional area. They should evoke respect from the academic as well as the practitioner community. The steps for this are appropriate leadership and hiring, making research a key role, and granting institutions autonomy. Do we, for instance, see NCERT in the same light as an IIT or IIM? Is that not possible to imagine and make happen?

Systemic capacity is the ‘body of knowledge’ that gets created and advances a sector. For example, how does one get children in very low-income families to become effective readers? How does one create high quality computer-based assessments in the Indian context? While some of this knowledge can and should draw upon what is already known internationally, there will be a lot of knowledge that will have to be created (and hopefully contributed to the international community). If Google can develop techniques of working with low bandwidth connections and offline maps in India (and then spread that to the rest of the world), can we not discover educational approaches that work in our conditions and use that to advance the field internationally?

Build a new Science of Learning: The Science of Learning is a new interdisciplinary science which draws upon school subjects like mathematics, science and language, as well as pedagogy, psychology, cognitive science, neuroscience, AI and related fields, to answer the question, ‘How can human beings, especially children, learn better?’ Just as medical science studies disease, symptoms, diagnostic techniques and treatments, and builds on data to provide answers that doctors can use to treat patients, the Science of Learning uses data on learning, techniques for learning and teaching insights to inform teachers on a regular basis. It allows teachers to practise the art of teaching while drawing upon the Science of Learning whenever they need, leading to improved student learning. Some Science of Learning researchers may study ways to improve reading skills in children, others may research issues with algebra learning, and still others may focus on the effectiveness of AI-based apps for students.

There is no doubt that we are in the midst of a learning crisis today. But we also have rich human resources that we can focus not just on solving this crisis for ourselves, but on coming up with techniques and solutions by strategically concentrating on this area. Every country needs these solutions and there are, as yet, no ready answers. Can we convert this problem into an opportunity?

[1] http://www.ncert.nic.in/html/pdf/schoolcurriculum/position_papers/examination_reforms.pdf

[2] https://www.riseprogramme.org/sites/www.riseprogramme.org/files/publications/RISE_WP-018_Burdett.pdf

[3] http://www.thelearningpoint.net/home/examination-results-2013/exposing-cbse-and-icse

The post Board Exams, Rote Learning and the Learning Crisis – Root cause and solution appeared first on Ei Study.

]]>