Gather Actionable Data to Improve Literacy Assessments
To prepare children adequately for a successful education, teachers need to measure and track students’ literacy levels frequently and thoroughly. When data that measures progress is both readily available and actionable, teachers are better equipped to determine whether students would benefit from additional instruction to build their language and literacy skills.
Many early childhood educators are familiar with paper-and-pencil assessments as a means of taking stock of students' ability to pair letters with sounds, identify words that rhyme, and describe differences between categories. Researchers from the University of Minnesota sought to improve young children's literacy levels with a modern toolkit that not only enhances instruction and progress analysis but also streamlines research on how to assess students and conduct educational interventions in ways that achieve better outcomes.
Integrate User Feedback For Data-driven Testing
The MentorMate team worked closely with researchers to ensure that the application's interface accommodated each end user's specific needs. The UI enables precise examinations of students while providing clear instructions to teachers. Exam results are linked to notifications, so teachers know when they need to teach certain students differently or assess them more frequently.
Software That Supports Childhood Literacy Development
We worked with researchers from the U of M IGDILab to execute the vision outlined in their IGDI-APEL (Individual Growth and Development Indicators: Automated Applications for Performance Evaluation of Early Literacy) research initiative. The new iPad application brings into the classroom one of the foremost early childhood assessment suites for measuring young children's literacy levels and language skills.
The solution addresses the cumbersome nature of paper-and-pencil assessments. Teachers can personalize tests for individual students through a user interface designed specifically for their purposes. The software records the teacher's choice of assessment as well as the student's answers and overall performance results. Over time, the system charts student literacy levels and progress metrics according to the domains of language indicators defined by the U of M's researchers. The teacher can use these data to adapt lesson plans to specific children's needs, accelerating improvement for those falling behind.
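The application's internal data model is not public, but the idea of recording per-item answers and aggregating them into per-domain progress metrics can be sketched roughly as follows. All names here (the domain labels, `ItemResponse`, `AssessmentRecord`, `progress_by_domain`) are hypothetical illustrations, not the project's actual schema:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ItemResponse:
    domain: str        # hypothetical domain label, e.g. "rhyming" or "letter-sound"
    correct: bool

@dataclass
class AssessmentRecord:
    student_id: str
    assessment_name: str       # the teacher's choice of assessment
    responses: list            # list of ItemResponse

def progress_by_domain(records):
    """Aggregate a student's accuracy per language domain across sessions."""
    totals = defaultdict(lambda: [0, 0])  # domain -> [correct, attempted]
    for rec in records:
        for r in rec.responses:
            totals[r.domain][1] += 1
            if r.correct:
                totals[r.domain][0] += 1
    return {d: correct / attempted for d, (correct, attempted) in totals.items()}
```

Tracking results at this granularity is what lets the system roll raw answers up into the domain-level progress views that teachers act on.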
Streamline Classroom Assessments and Optimize Research Efforts
The iPad application relieves teachers of the overwhelming workload generated by outdated test administration. The backend of the assessment structure uses a computer adaptive testing engine that tailors test items to particular students' needs. As the application's supporting database tracks, analyzes, and stores information, teachers can easily digest students' progress and make more informed decisions about adjusting their teaching strategies as needed.
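The project's actual adaptive engine is not described here, but the general principle of computer adaptive testing, selecting the next item to match a running estimate of the student's ability, can be sketched in a few lines. This is a deliberately simplified staircase model; the item pool, the `answer_fn` callback, and the fixed step size are all illustrative assumptions, not the IGDI-APEL algorithm:

```python
def pick_item(items, ability, used):
    """Choose the unused item whose difficulty is closest to the ability estimate."""
    candidates = [i for i in items if i["id"] not in used]
    return min(candidates, key=lambda i: abs(i["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Staircase update: move toward harder items on success, easier on failure."""
    return ability + step if correct else ability - step

def run_adaptive_test(items, answer_fn, n_items=5, start=0.0):
    """Administer n_items adaptively; answer_fn simulates the student's response."""
    ability, used = start, set()
    for _ in range(n_items):
        item = pick_item(items, ability, used)
        used.add(item["id"])
        ability = update_ability(ability, answer_fn(item))
    return ability
```

Production engines typically replace the fixed-step update with a proper item response theory model, but the payoff is the same: each student sees items near their own level, so fewer items yield a more precise measurement.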
While students benefit from a more finely tuned assessment program that aims to instill literacy from an early age, the software also makes the labor of research more efficient. Not only does it facilitate the storage of large amounts of data, it can also be updated with new assessment features and redeployed with minimal effort.
Support Research as the Software and System Are Refined
The researchers from U of M's IGDILab collaborated closely with MentorMate's programmers to refine the code that controls how ongoing assessments map and adapt to particular students' needs. The degree of control this affords over the software's backend opens new doors for the researchers.
Their control over what information is presented to children, and how, combined with the data collected throughout these processes, lets the researchers test different hypotheses easily and explore research opportunities that would not have been possible previously.