– By Sukhpreet Sekhon, MME, ASER Centre
Pratham is one of India’s largest non-governmental organisations working in education. Every year since 2005, Pratham has facilitated the ASER survey, which provides information on schooling and basic learning from all rural districts in India. Since its inception, ASER has highlighted the poor reading and arithmetic skills of children aged 5 to 16. To address this problem, in 2007 Pratham launched the Read India program to demonstrate what could be done to improve children’s basic learning at scale.
In 2013, Pratham’s Read India program had been in the field for six years. The interventions had evolved across two phases of three years each. With every phase, the model for supporting children’s basic learning changed and improved significantly. By 2013, we had designed and piloted the “Learning Camp” model, and now it was time to scale up. We urgently needed to put in place a robust, standardised and independent internal measurement system that could help us better assess what we were doing and provide feedback to the entire program to deliver better outcomes. To get this done, the MME unit was constituted. Our first project was to develop, manage and sustain a robust measurement system for Read India.
Understanding how many hotels and houses we need
Ever played Monopoly and wondered why the number of houses and hotels you can build is limited? Well, it’s fairly obvious. You don’t want more than you can build (on the properties you own), and you don’t want extra pieces cluttering your gaming environment. It works the same way for data. You don’t want to collect more data than you can absorb, understand and utilise in a scaled-up program.
This was our first challenge – identifying the key variables that would allow us to measure what we were doing, while keeping in mind the overarching constraints of time and cost.
To start off, we literally locked ourselves in a room with all the available information – documents and experiences – about the Read India program thus far. We started tearing apart every word in every Read India-related document and sifting through all the past experience in Read India, aiming to understand every detail of the program and to prepare the list of questions we would want answered by the leadership teams and program managers. This first step enabled us to get clarity on the following:
- What had been done in the program so far and how was it implemented?
- What assessments had worked and not worked? Had they given us the information we wanted?
- What were the location specific variables that were influencing the previous implementation models?
- What were the data-related capabilities of the field staff across locations?
- What challenges had been identified while measuring, storing and analysing data in the past?
Once we had these discussions, we thrashed out a rough theory-of-change map and prepared a draft list of indicators. With this background work done, we joined a meeting of the entire Pratham leadership team – this included all Read India leaders from each state as well as senior members of Pratham’s content and training teams. We spent five gruelling days together – discussing threadbare every step of the process, every tool, every format, every table that we wanted. We did mock trials and virtual exercises – using data to actually do everything that anyone at any level of the system would have to do. Together we needed to get a first-hand feel of every step of what the measurement would entail. These intense debates and arguments, exercises and discussions led to the final list of indicators and processes.
We needed a nimble and flexible system. We debated the level at which data would be available – aggregate versus unit. We argued over how much data versus how little. What did we want to know about inputs, processes and outcomes? How fast could data be made available? An important rule of thumb underlying these discussions was that our metrics, methods and mechanisms should help us track our work, assess “success” and feed timely inputs back into the system very quickly. This process should enable different levels in the program to tweak strategies and activities to improve performance. Where did we end up? A basic set of indicators – mostly focussed on participation (attendance) and outcomes (children’s progress in reading and arithmetic) – would be collected by every full-time Pratham team member everywhere that Learning Camps were being conducted in the Read India intervention. Process observations would be done periodically by a smaller team.
Learning everything about the horses and their owners
6/5 – Did you ever put your hard-earned or easily-inherited money on a wager on a horse race? If you did, then odds of 6/5 are fairly decent odds to start your day with. Now consider this: as a person placing a wager, you need a clear understanding of the odds, along with fair knowledge of the background of the horses, the race track and the results of recent races. However, if you were the one responsible for formulating the odds, you would need a deeper understanding of many more variables that will impact the performance of a horse on the day of the race. Similarly, if you were implementing a program at scale, you would want to minimise your odds of failure and maximise your odds of success. To improve your odds of success, you need to continuously obtain detailed information that allows you to learn, adapt and evolve your program design.
From Read India’s perspective, what works in a set of villages in the desert of western India may be very different from what works in the foothills of the Himalayas. What works in big schools in crowded, densely populated villages in Bihar may be different from what works in the tribal hamlets of Chhattisgarh.
Here we stopped and asked ourselves: for the system as a whole, we had been very frugal with data demands. The design decisions for the large-scale system made sense given our needs, resources and capabilities. But were we short-changing ourselves? Could we learn more than we would from the data flowing into our big system? Should we create a space in every state from which we could collect more information and expand our understanding of what we were doing?
The concept of experiential learning is extremely important to factor in for scaled-up programs. Therefore, to understand what was working where, we decided to choose one cluster of schools per state (40-50 schools) where we collected data (on our systems) for each child. This meant that for 5% of our sample we would also have disaggregated data at the child level. Our aim was to use this detailed data set to gain an in-depth understanding of the program while testing program assumptions and enabling the program teams to tweak the design and implementation. As in other locations, the aggregated data from these schools went into the common database, while the disaggregated data was stored in a separate database.
A snapshot of the formats and assessment tool
By choosing to aggregate data at the school level for all locations and collect disaggregated child-wise information from a small sample we did the following:
- Avoided collecting too much data at scale – this usually comes with higher costs and time investment, and often leads to low utilisation of the detailed data.
- Formulated an architecture that served the needs of the program administrators and the program design teams while enabling them to learn from the varying design space that existed due to the scale.
- Developed a robust starting point from which we could build efficient data collection, data analysis and reporting systems (more on this below).
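The two-tier architecture described above can be sketched in a few lines of code: child-wise records are kept only for the sample cluster, while every location uploads a school-level aggregate. This is a minimal illustration, not Pratham’s actual schema; all field names, reading levels and values here are made up.

```python
from collections import defaultdict

# Hypothetical child-level assessment records, as kept for the sample cluster.
# Reading levels follow an ASER-style progression: beginner < letter < word < para < story.
child_records = [
    {"school": "S01", "child_id": 1, "reading_level": "word"},
    {"school": "S01", "child_id": 2, "reading_level": "story"},
    {"school": "S02", "child_id": 3, "reading_level": "letter"},
]

def aggregate_by_school(records):
    """Collapse child-wise records into school-level counts per reading level."""
    summary = defaultdict(lambda: defaultdict(int))
    for record in records:
        summary[record["school"]][record["reading_level"]] += 1
    return {school: dict(levels) for school, levels in summary.items()}

# All locations upload only this aggregate; the sample cluster retains
# child_records as well, in a separate database.
print(aggregate_by_school(child_records))
# {'S01': {'word': 1, 'story': 1}, 'S02': {'letter': 1}}
```

The design choice is visible in the shapes of the two objects: the aggregate is small and cheap to collect everywhere, while the child-wise list grows with every child and is only affordable for the 5% sample.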
CHALLENGE – III
Constructing the mega LEGO set
What if you are on vacation and decide to work with your kids on a much-awaited LEGO project, and as you get started you find that the LEGO pieces in the box do not match the ones mentioned in the construction manual? Or what if those pieces are sent to you a week later? You would not like that, would you?
Like the LEGO project, when you set up your measurement system you want the right pieces available at the right time. Having worked this hard on the MME structure and processes, you will be very impatient and annoyed if incorrect data points come into your systems, clogging up or delaying the data assembly line.
Our next challenge – how to ensure the accuracy and timeliness of data so that different users get timely information about the Read India program in a systematic and sustainable manner. This evidence is critical for actual feedback into program implementation strategies.
Key elements in designing and building the measurement architecture:
- Provided standardised pen-and-paper data tools and recording formats to teams in all locations. These child-wise hard copies remained in the field with the team members, who needed to return to each school for subsequent camps. The aggregate data from each camp was uploaded.
- Provided a data entry access point for a cluster of about 5-10 team members – a laptop and data card in each block (about 200 blocks) where Read India Learning Camps were running
- Tracked, supervised and monitored the process of recording and entering data
- Set a monthly timeline for all the data entry and internal verification of the month’s data
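The internal verification mentioned in the last point can be pictured as a simple pass over each camp’s aggregate record before the monthly upload deadline, flagging anything missing or implausible for the field team to resolve. This is a sketch of the idea only; the field names and checks are illustrative assumptions, not the actual Read India formats.

```python
# A minimal sketch of an internal verification pass run before the monthly
# upload deadline. Field names are illustrative, not the actual schema.
def verify_camp_record(record):
    """Return a list of problems found in one camp's aggregate record."""
    problems = []
    for field in ("block", "school", "children_enrolled", "children_attending"):
        if record.get(field) in (None, ""):
            problems.append(f"missing {field}")
    enrolled = record.get("children_enrolled")
    attending = record.get("children_attending")
    if isinstance(enrolled, int) and isinstance(attending, int) and attending > enrolled:
        problems.append("attendance exceeds enrolment")
    return problems

# A clean record passes; an implausible one is flagged for follow-up in the field.
clean = {"block": "B1", "school": "S01", "children_enrolled": 40, "children_attending": 35}
flagged = {"block": "B1", "school": "S02", "children_enrolled": 30, "children_attending": 42}
print(verify_camp_record(clean))    # []
print(verify_camp_record(flagged))  # ['attendance exceeds enrolment']
```

Checks of this kind are cheap to run at the block-level data entry point, which is why catching an error there is far less costly than discovering it after the data has been merged into the central portal.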
While appropriate and accessible technology was essential (the ability to upload, databases, data platforms), we also knew from all past experience of running assessments and programs that the human interface is absolutely essential for the smooth functioning of the system. Perhaps the most valuable part of the entire measurement set-up was the team of two or more people in each state – members of the state MME team who travelled constantly to meet Pratham team members in the field, explained the metrics, participated in the process of data collection, understood the challenges in the field and actively engaged in solving them. Within the first two months (July and August) of setting up the MME unit for Read India, all team members (over a thousand) conducting Learning Camps had been met – once in a group and at least once in the field, onsite during a Learning Camp. The robustness and timeliness of the data depended crucially on the effectiveness of the interactions between MME team members and the Read India teams.
So where were we? By the beginning of September 2013, data began to flow in. At the beginning of every month we had information from the previous month about the progress of children in the Learning Camp schools – the improvement (or lack thereof) in children’s ability to read and do arithmetic.
This came in from over 2,000 schools through about 200 blocks spread across 15 states. This data had been verified and then updated on our online data portal.
We were on track, but with big data came big responsibility to make sense of it all!
Preparing our 4-course dinner
Setting up a data reporting structure is like preparing a 4-course meal. You want to show certain numbers to get started (your starters), then some basic graphs or tables (the soup), then core tables about performance (the entrée/main course) and finally tables and graphs containing peripheral information about implementation (dessert). Most importantly, just as all your food has to be edible and delicious, all the data has to be actionable! And like good chefs, we wanted our customers to demand more!
Our next challenge – To separate the signals from the noise
We developed a live online reporting system (similar to business intelligence portals that various corporates use) that allowed different users to choose what they wanted to see. Of course everyone could see performance at all levels from school to block, to district, state and across the country. As the data was uploaded in the field, the online reports got updated simultaneously.
Setting up reporting systems was not enough. Again, our MME team was absolutely essential in making the data come alive. We continuously talked to Pratham team members up and down the system. In small groups and big groups, in stand-alone meetings and joint meetings, we conducted evidence-based monthly interactions across all states. The aim of these monthly meetings was to help the program administrators in each region apply the findings from the data and adapt their program strategies accordingly. To add more perspective to the data and provide further insight into program implementation, our team members stationed in each state spent about 10-15 days visiting different schools to observe the intervention and reported back on key process indicators as well.
Historically, missionaries have played an important role in spreading a religion and in expanding the base of believers. Our measurement missionaries – the MME team members to us are as important as the metrics, measures, methods and mechanisms for data collection, storing and analysis. To sustain a culture where curiosity drives innovation and evidence is the key to action, you have to work hard to enable people to taste data and digest it.
Reading performance comparison on the data portal
Evolving the reporting systems is key. To sustain the reporting structures effectively, we took continuous feedback from users, leaders and implementers alike, and tried to constantly understand how best to evolve our reporting portal to make it even more actionable. For instance, based on one set of feedback, we added comparison features to our reports that allowed learning comparisons between any two states, blocks, schools, teachers and so on. This let Pratham state teams continuously compare learning performance across different data levels and set comparative benchmarks.
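A comparison feature of this kind boils down to computing the same learning-level summary for any two units and placing them side by side. The sketch below uses made-up counts and ASER-style reading levels purely for illustration; the actual portal’s data model is not described in this piece.

```python
# Hypothetical aggregate counts of children at each reading level, per unit.
# A "unit" could equally be a state, block, school or teacher.
levels_by_unit = {
    "State A": {"beginner": 10, "letter": 20, "word": 30, "para": 25, "story": 15},
    "State B": {"beginner": 5, "letter": 10, "word": 25, "para": 30, "story": 30},
}

def story_share(unit):
    """Share of children in the unit reading at the highest ('story') level."""
    counts = levels_by_unit[unit]
    return counts["story"] / sum(counts.values())

def compare(unit_a, unit_b):
    """Place the same benchmark for two units side by side."""
    return {unit_a: story_share(unit_a), unit_b: story_share(unit_b)}

print(compare("State A", "State B"))  # {'State A': 0.15, 'State B': 0.3}
```

Because the benchmark is computed the same way at every level of aggregation, the same comparison works for two states, two blocks within a state, or two schools within a block.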
Having to set up a measurement system at scale in a development sector program running in a country such as India is like driving a vehicle while still building it and at times even constructing the highway as well.
There are many things you would want to do but prioritizing and focusing on the important ones is crucial – collect the data you can effectively use, design data collection systems that you can maintain, generate data that can provide you with actionable information in a timely manner, report specific information to different levels of people to make data more actionable, evolve your reporting strategies based on regular feedback and have a sustainable measurement model that allows you to test your program assumptions.
When we started down this path, we weighed our needs, our resources and our capabilities. What we came up with suits the animal that we are and the terrain that we inhabit today. From time to time, we will continue to do this weighing. By then our needs and resources, our capability to do, our appetite for data, the technology and the terrain – all may have changed. Then the measurement framework and architecture will be reworked and revised. This will continue to be an ongoing process.
In the meanwhile, enjoy the driving. Look at the road ahead. But don’t forget to look out of the window to see how far you have come!
 ASER is India’s largest citizen-led survey, covering over 300,000 households and over 600,000 children annually from 2005 to 2014. It is also the only annual source of information on children’s learning outcomes available in India today. To know more about the impact of ASER visit – www.asercentre.org/p/75.html
 A “Learning Camp” is a period of 6-10 days of intensive activity in a school. To begin the work, a simple assessment of reading, number knowledge and operations is done. Children in Grades 3 to 5 are grouped by their learning level rather than by grade. The teaching-learning activities for each group start at the level of the children. This approach is often referred to as “teaching at the right level”. As children make progress they move to the next group. These activities are carried out for 2-3 hours a day during the camp period. One Pratham team member leads the camp along with village volunteers. Often school teachers get engaged as well. This “camp” is then repeated 3-4 times in the next few months. Total camp days range from 30 to 50 days depending on the state and on the initial baseline of children. Pratham’s “Learning Camp” evolved as a result of large scale experiences on the ground and also with inputs from randomized control trials by JPAL that had been conducted on Pratham’s summer camp program a few years ago.
 At the same time, the “Learning Camp” model was also being subjected to rigorous external impact evaluation. The results of this evaluation conducted by JPAL – ‘Using Learning Camps to Improve Basic Learning Outcomes of Primary School Children in India’ – will be published in the coming few months.
 Pritchett, L., S. Samji and J. Hammer (2013) ‘It‘s All About MeE: Using Structured Experiential Learning (“e”) to Crawl the Design Space’ Center for Global Development, Working Paper 322.