Rapid Online Assessment of Reading
Postdoc in Operational Psychometrics and Educational Data Science
Advisors: Jason D. Yeatman, Ph.D. and Carrie Townley-Flores, Ph.D.
The Rapid Online Assessment of Reading (ROAR) bridges the lab, community, and classroom, aligning academic research with practical challenges in education. Our mission is to inspire a virtuous cycle between research and practice: we support equity in education through the open dissemination of evidence-based tools for students and teachers, while advancing the frontiers of knowledge through inclusive research at an unprecedented scale. ROAR empowers educators, families, clinicians, and researchers with research-backed assessments to advance learning, accelerate research on learning differences, and foster equitable access to high-quality, data-driven decision-making for all.
ROAR is now approved as a universal screener in multiple states and supports hundreds of thousands of students in schools across the United States. We have an opening for a postdoc who wants to focus on operational psychometrics, supporting psychometric analyses for policy applications (e.g., state screener lists) and contributing to the technical foundations of the platform.
ROAR is a dynamic and collaborative team science project consisting of graduate students, postdocs, faculty, research coordinators, professional web developers, and school partnership coordinators. This position is ideal for someone with exceptional technical skills who thrives in a collaborative team environment.
ROAR Vision and Mission: A bridge between the lab and the classroom
Assessments are typically time-consuming and resource-intensive to administer: individually administering assessments to each student in a classroom costs a substantial amount of instruction time and requires extensive training for teachers to accurately administer and score measures that are used for high-stakes decisions (e.g., access to intervention). Researchers face these same challenges, creating a bottleneck to research at scale. While education technology companies have built products that lower the demands on teachers, many of these products are expensive, grounded in opaque, proprietary technology, and lack strong research backing. Hence, these products rarely get used in research, creating a disconnect between educational research and practice.
We launched ROAR envisioning a new model: an open-source, open-access assessment platform, grounded in ongoing academic research and co-developed in collaboration with school-district stakeholders. Rather than a one-way street from the lab to society (often with a commercial intermediary), ROAR's goal is to cultivate a virtuous cycle between research and practice. We aim to build a suite of completely automated, lightly gamified, online assessments that are grounded in ongoing cognitive neuroscience research and validated against the current "gold standard" of standardized, individually administered assessments. Our approach is to partner with school districts and community-based organizations at each stage of research and development to ensure that our research is grounded in real-world problems and inspired by the deep knowledge of educators who work with children and youth across a diversity of contexts. Through this "Research Practice Partnership" model, we are working toward a new assessment methodology that is more valid, precise, efficient, and informative. We aim to design this platform around the diversity of learners in the United States (and abroad). We prioritize transparency at every stage: whenever feasible, materials and technology are made public, and each measure within ROAR is published in open-access, peer-reviewed journals, with the goal of building more systemic connections between the lab, classroom, and society.
Qualifications
- Highly motivated postdoctoral researcher with extensive experience in item response theory models, computerized adaptive testing, and related measurement methods.
- Demonstrated ability to bridge research (innovative ideas, writing papers, etc.) and practice (implementing methods at scale for real-world use).
- Ph.D. in quantitative methods for social science (e.g., educational measurement, quantitative psychology) or a related discipline.
- Interest in the measurement of reading development and mechanisms of learning.
- Substantial experience with latent variable models (especially factor analysis, item response theory, and growth modeling) and coding in R.
- Strong collaborative skills, the ability to work well in a complex, multidisciplinary environment across multiple teams, and the ability to prioritize effectively.
- Eagerness to contribute to a vibrant group of faculty, postdocs, and students coalescing around psychometric and longitudinal modeling issues.
To apply, please submit the following to jyeatman@stanford.edu:
- Cover letter (<1 page)
- CV
- Copies of two research papers that demonstrate your research agenda
- Contact information for two recommenders who can provide letters of recommendation