The Carnegie Knowledge Network (CKN) launched its webinar series on value-added measures (VAM) with a conversation with Doug Harris, Associate Professor and Chair of Public Education at Tulane University. CKN's initial work in the educator evaluation arena focuses on value-added measures because they have the most research behind them.
The webinar, along with a policy brief and a recent book by Doug Harris, provides policymakers and practitioners with a solid, "one-stop shop" resource for understanding this complex work and its implications for state and district decision-making. Harris' work is intended to inform policymakers and practitioners as they create evaluation systems that generate valid and reliable conclusions about teacher performance.
While Race to the Top and ESEA waivers have pushed the value-added model, they also spurred development of a menu of measures for teacher evaluation that states and districts are now implementing. These include VAM, structured classroom observations (e.g., Danielson and Marzano), unstructured classroom observations (principal walk-throughs), student evaluations (the Tripod Project surveys), and student learning objectives (SLOs). Harris looks at the correlation of VAM to these other measures and offers a useful primer on considerations of validity, reliability, and practicality for all the measures of teacher effectiveness. His book dives into the research studies that have focused on the validity of value-added measures.
His conclusion on his headline question? Despite the challenges of reliability in comparing any of these measures, Harris concludes that VAMs are correlated with the new menu of options listed above. What VAMs are NOT correlated with are teacher credentials and experience, the traditional indicators for compensation and advancement decisions in education.
During the webinar and in his brief, Harris addresses what more needs to be known on this issue. He identifies some critical questions that cannot be resolved by empirical evidence and must, therefore, be settled by policymakers through professional discourse with stakeholders. What aspects of teaching do we value? Are we mostly concerned with students obtaining academic skills, or do we also value other outcomes such as social skills and creativity? He states, "A valid measure of teacher performance is one designed to capture how well teachers contribute to the outcomes we value most." On this, differences of opinion still exist.
SCEE members should review Harris' brief for the practical implications of how selecting valid and reliable measures of teacher effectiveness affects LEA decision-making. He worries that these performance measures were not tested in high-stakes environments, and there is some initial evidence that when consequences such as pay, tenure, and dismissal are attached to their use, the validity of the measures may be reduced. Harris lays out the challenges inherent in designing systems from an accountability- and economics-based approach, which focuses on summative performance measures for decisions about teacher salaries and careers, versus designing systems primarily to improve practice, which focus on formative measures and information.
How have states and districts using value-added measures addressed these issues? Is there a group of SCEE members who have worked with Doug Harris, or who have read his book (or are willing to), who can spark a discussion on his keen insights?