Connecticut’s System for Educator Evaluation and Development (SEED)

Connecticut’s System for Educator Evaluation and Development (SEED) is a model evaluation and support system aligned to the Connecticut Guidelines for Educator Evaluation (Core Requirements), which were adopted by the Performance Evaluation Advisory Council (PEAC) in 2012 and revised in 2014. The Guidelines inform implementation of a model teacher and administrator evaluation and support system, which was piloted in the 2012-13 school year.

The SEED model was informed by research, including the Gates Foundation's Measures of Effective Teaching (MET) study. The MET study and other research have consistently found that no school-level factor matters more to student success than high-quality teachers. To support teachers, we need to clearly define effective practice, provide strong leadership, develop systems/practices that give accurate, useful information about strengths and development areas, and provide opportunities for growth and recognition throughout the career continuum. Connecticut's new evaluation and support system is designed to fairly and accurately evaluate teacher and school leader performance in order to help strengthen practice to improve student learning.

Teacher Evaluation - Design Principles

The SEED model for teacher evaluation, developed in partnership with Education First, adheres to the following design principles:

Consider multiple, standards-based measures of performance

An evaluation system that uses multiple sources of information and evidence results in a fair, accurate and comprehensive picture of a teacher’s performance. The model defines four categories of teacher performance: student learning (45%), teacher performance and practice (40%), parent feedback (10%) and school-wide student learning or student feedback (5%). These categories are grounded in research-based and national standards: the Connecticut Common Core of Teaching (CCT) 2010; CT Core Standards; the Connecticut Framework K-12 Curricular Goals and Standards; the CMT/CAPT and Smarter Balanced Assessments; as well as locally developed curriculum standards.
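The four category weights above combine into a single summative rating as a weighted sum. The sketch below illustrates that arithmetic only; the category keys, the 1-4 score scale, and the function name are illustrative assumptions, not part of the SEED model's published procedures.

```python
# Illustrative sketch of combining the four teacher-evaluation categories
# using the percentage weights stated above (45/40/10/5). The 1-4 score
# scale and all names here are assumptions for illustration.

TEACHER_WEIGHTS = {
    "student_learning": 0.45,
    "performance_and_practice": 0.40,
    "parent_feedback": 0.10,
    "schoolwide_learning_or_student_feedback": 0.05,
}

def weighted_rating(scores, weights=TEACHER_WEIGHTS):
    """Combine per-category scores (e.g., on a 1-4 scale) into one rating."""
    if set(scores) != set(weights):
        raise ValueError("scores must cover exactly the weighted categories")
    return sum(weights[c] * scores[c] for c in weights)

example = {
    "student_learning": 3.0,
    "performance_and_practice": 3.5,
    "parent_feedback": 4.0,
    "schoolwide_learning_or_student_feedback": 2.0,
}
print(round(weighted_rating(example), 2))  # 0.45*3.0 + 0.40*3.5 + 0.10*4.0 + 0.05*2.0 = 3.25
```

Because the weights sum to 1.0, the combined rating stays on the same scale as the category scores.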

Promote both professional judgment and consistency

Assessing a teacher’s professional practice requires evaluators to constantly use their professional judgment. No rubric or formula, however detailed, can capture all of the nuances in how teachers interact with students. Synthesizing multiple sources of information into performance ratings is inherently more complex than checklists or numerical averages. At the same time, teachers’ ratings should depend on their performance, not on their evaluators’ biases. Accordingly, the model aims to minimize the variance between school leaders’ evaluations of classroom practice and support fairness and consistency within and across schools.

Foster dialogue about student learning

This model hinges on improving the professional conversation between and among teachers and administrators who are their evaluators. The dialogue in the new model occurs more frequently and focuses on what students are learning and what teachers and their administrators can do to support teaching and learning. To be successful, educators must master their content, refine their teaching skills, reflect on and analyze their own practice and their students’ performance, and implement the changes needed to improve teaching and learning.

Encourage aligned professional development, coaching and feedback to support teacher growth

Novice and veteran teachers alike deserve detailed, constructive feedback and professional development, tailored to the individual needs of their classrooms and students. SEED promotes a shared language of excellence to which professional development, coaching and feedback can align to improve practice. John Hattie’s (2008) research revealed that feedback was among the most powerful influences on achievement.

Ensure feasibility of implementation

Implementation of this model requires hard work. Throughout each district, educators are developing new skills and learning to think differently about how they manage and prioritize their time and resources. The model aims to balance high expectations with flexibility for the time and capacity constraints in our districts.

Administrator Evaluation - Design Principles

The SEED model for administrator evaluation, developed in partnership with New Leaders, adheres to the following design principles:

Focus on what matters most

The CT Guidelines for Educator Evaluation specify four areas of administrator performance as important to evaluation: student learning (45%), administrator practice (40%), stakeholder feedback (10%), and teacher effectiveness (5%). Since the first two categories make up 85% of an administrator’s evaluation, we focus the bulk of our model design on specifying these two categories. In addition, we take the view that some aspects of administrator practice, most notably instructional leadership, have a bigger influence on student success and therefore demand increased focus and weight in the evaluation model.

Emphasize growth over time

The evaluation of an individual’s performance should primarily be about his/her improvement from an established starting point. This applies to his/her professional practice focus areas and the outcomes he/she is striving to reach. Attaining high levels of performance matters, and for some administrators, maintaining high results is a critical aspect of their work, but the model should encourage administrators to pay attention to continually improving their practice. Through the goal-setting processes described in the SEED Handbook, this model does that.

Leave room for judgment

In the quest for accuracy of ratings, there is a tendency to focus exclusively on the numbers. We believe that the professional conversation between an administrator and his/her supervisor, which a well-designed and well-executed evaluation system makes possible, is just as important to getting better results. So, the model requires evaluators to observe the practice of administrators enough to make informed judgments about the quality and efficacy of practice.

Consider implementation at least as much as design

We tried to avoid over-designing the system for two reasons: (1) the pilot provided a significant opportunity for the state to learn and adapt the model before full implementation; and (2) the model should not be so difficult or time-consuming to implement as to create excessive demands on those doing the evaluation or being evaluated. Sensitive to the tremendous responsibilities and limited resources that administrators have, we designed the model to align with other responsibilities (e.g., writing a school improvement plan) and to highlight the need for evaluators to build important skills in setting goals, observing practice, and providing high-quality feedback.