During the July 10 SCEE webinar, "Implementing Evaluation Pilots," presenters shared many points that are helpful to state leaders as they design and carry out pilots of their teacher and leader evaluation systems. The presenters included Circe Stumbo of West Wind Education Policy; Rebecca Garland, Chief Academic Officer, North Carolina Department of Public Instruction; Jennifer Preston, Race to the Top Project Coordinator for Educator Effectiveness, North Carolina Department of Public Instruction; and Jean M. Williams, Ph.D., President, Research and Evaluation Associates.
The lessons learned and the experiences shared by the webinar presenters provided state and district-level examples of the considerations that are outlined in the pre-reading "Critical Considerations When Designing and Implementing Pilots." The following summary showcases four of the key considerations discussed by Circe, Rebecca, Jenn, and Jean during the webinar.
Purpose of pilots: research, continuous improvement, building capacity
Rebecca Garland explained that North Carolina has been involved in developing new standards and evaluation instruments for principals, teachers, superintendents, and central office personnel since 2007. Currently, the NC Department of Public Instruction is working on designing tools and processes for evaluating instructional technology facilitators, school media coordinators, school nurses, speech language pathologists, occupational therapists, physical therapists, school psychologists, social workers, and school counselors. According to Jennifer Preston, NC is also piloting a student survey to use as another possible source of data in measuring teacher effectiveness. Jenn described NC's purpose for this pilot as an inquiry to help the NC State Board of Education decide whether or not to adopt this student survey instrument and process as part of the teacher evaluation system in their state. Jean Williams shared how NC has used evaluation pilots to validate assessment instruments and processes and to see how the evaluation process worked in every district in NC.
Carefully crafted research questions should align with purposes and processes within the pilot
As Jenn explained how they structured the student survey pilot, it became clear that their research questions intentionally aligned with the purpose. Examples of their research questions include, "Are students' perceptions about their learning correlated with their performance on student achievement assessments? Does the student survey work in their state? Are data in NC the same as in districts from other states that have worked with student surveys?"
From her experiences with pilots in NC and CO, Jean spoke about both beta testing and validity studies and the related research questions. Beta tests or usability tests consider questions such as, "Is the evaluation meaningful? Do individuals interpret items the same way? Are the processes useful for those who will use them in their daily work? Will consumers find the work credible and relevant to their daily jobs?" Validity studies typically deal with questions such as, "Are scores valid for the purposes for which we intend their use?" and "Are they valid under the circumstances as the system is laid out?"
Communicating findings (formative and summative)
Both Jean and Jenn asserted that good communication was critical to getting the buy-in of various stakeholders, and that early in the piloting phase was the best time to establish communication patterns with the districts. Jean explained that the communication strategies used in the pilot were implemented in the statewide rollout. Communication plans should include timelines, clear explanations of what districts are expected to do, and what districts can expect from the Department of Public Instruction for both the pilot and the eventual rollout of the evaluation instrument and processes. Good communication during the pilot phase gives states time to work with districts to prepare leaders and staff for their roles and create readiness.
The NC DPI actively worked with all professional organizations and a variety of stakeholders as they developed the evaluation systems for teachers, principals, and superintendents, which paid off in many ways. Not only were they able to achieve buy-in from the organizations and their constituents, they also found that responding to questions that surfaced during the piloting stage resulted in very few questions during the rollout phase.
What are you actually doing? (Terminology)
Circe stressed the importance of intentionally choosing terminology ("field test," "pilot test," "usability study," and "rapid prototyping") before you launch a pilot. She encouraged SCEE members to read "Pilot Testing and Usability Testing" (2011, National Implementation Research Network) to learn more about piloting and usability. Jean shared how NC used an iterative process that was very similar to a usability study. Like a usability study, their process used a small number of participants and allowed for changes and adjustments while the work was underway. They initially crafted the rubric together, used it with a small group, got feedback, made changes, and pulled in another small group. Once they started the validity study, no changes were made until all the results were in. To learn more about the sequence these pilots followed, the number of participating districts, district selection criteria, and timelines, listen to the webinar HERE.
During the webinar, participants were invited to share information with each other about the purposes, processes, and results discovered through pilots that are underway or already completed. If you are interested in learning more or have information to share, please comment on this blog or start a discussion HERE. (Please note that you must be a member of the Evaluation Private Discussion group in order to post a discussion thread.)