
CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

A Thesis Presented to the


Faculty of the College of Education
University of Houston

In Partial Fulfillment
of the Requirements for the Degree

Master of Education

by

Wendy L. Hampton

May, 2009
Copyright

By

Wendy L. Hampton

May 2009
CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

A Master's Thesis for the Degree

Master of Education

by

Wendy L. Hampton

Approved by Thesis Committee

Dr. Patricia Holland, Chairperson

Dr. Angus Macneil, Committee Member

Dr. Wayne Emerson, Committee Member

Dr. Robert Wimpelberg, Dean


College of Education

May, 2009
Acknowledgement

First, I would like to thank my committee members, Dr. Patricia Holland, Dr.
Wayne Emerson, and Dr. Angus Macneil for their time, guidance, and suggestions in the
successful completion of this project. I extend a special thank you to Dr. Patricia Holland
for encouraging me to pursue this research project, and continuing to encourage and
support me throughout the process.
I would also like to thank my parents, Bonnie and Lynn Hampton, who have
always believed in me and, at a young age, taught me to believe that I could do anything
that I set my mind to.
Many personal sacrifices go into a project such as this, so I would also like to
acknowledge the Burrell and Booth family for being there to listen, understand, and offer
support during times of frustration. Additionally, thank you for sharing in the excitement
of the project's completion.
Finally, to Brenda, my best friend, biggest cheerleader, and biggest supporter,
thank you. Without your patience, understanding, encouragement, and support, I never
could have finished this research project. You give me inspiration and a determination to
succeed, and be the best that I can be.
CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

An Abstract
of
A Thesis Presented to the
Faculty of the College of Education
University of Houston

In Partial Fulfillment
of the Requirements for the Degree

Master of Education

by

Wendy L. Hampton

May, 2009
Hampton, Wendy L. "Clinical Supervision: A Case Study of Clinical Supervision as it
Relates to the Improvement of Novice Teachers." Unpublished Thesis,
University of Houston, May, 2009.

Abstract

This study explores the potential for Clinical Supervision (Cogan, 1973) to

provide support and improve the instructional processes of novice teachers.

A qualitative single case study design consisting of a teacher and supervisor (the

researcher) engaging in five cycles of the Clinical Supervision process was conducted.

Data from each observation and conference were collected and analyzed using the

constant comparative analysis method.

The results of this study indicate that using Clinical Supervision with a novice

teacher can impact the teacher's instructional practice. Instructional practices that

showed improvement specific to this study included questioning strategies and lesson

design. Leithwood (1992) proposes that there are six stages in the development of

professional expertise for teachers. The results of this study indicate that Clinical

Supervision helped the novice teacher advance from Stage 1 "survival skills" to Stage 2

"basic competence in instruction" according to Leithwood's six stages.

CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

TABLE OF CONTENTS

Chapter Page
I. INTRODUCTION 1

Need for the Study 2


Definition of Terms 4
Research Topic 5
Research Question 5
Purpose of Research 6
Significance of the Study 6
Limitations 7

II. REVIEW OF LITERATURE 9

Introduction 9
Clinical Supervision Overview 9
Supervision as an Instructional Practice 14
Clinical Supervision in the Development of Novice Teachers 16
Empirical Studies Using Clinical Supervision 20
Summary 23

III. METHODOLOGY 25

Introduction 25
Qualitative Case Study 25
Research Design and Procedures 27
Summary 31

IV. CLINICAL SUPERVISION CYCLES 32

Introduction 32
Pre-Conference #1 32
Observation #1 35
Feedback Conference #1/Planning Conference #2 36
Observation #2 38
Feedback Conference #2/Planning Conference #3 40
Observation #3 43
Feedback Conference #3/Planning Conference #4 44
Planning Conference #4 Continued 46
Observation #4 48
Feedback Conference #4/Planning Conference #5 50

Planning Conference #5 Continued 53
Observation #5 57
Feedback Conference #5 59
Summary Interview 60

V. CONCLUSIONS 63

Introduction 63
Summary and Discussion 63
Implications for Professional Practice 67
Implications for Teacher Development 69
Summary 76
References 78

CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

LIST OF TABLES

Table Page

1 Student Off Task Behavior 58

CLINICAL SUPERVISION: A CASE STUDY OF CLINICAL SUPERVISION AS IT

RELATES TO THE IMPROVEMENT OF NOVICE TEACHERS

LIST OF FIGURES

Figure Page

1 Parallel Lines Intersected By a Transversal 51

2 Pythagorean Theorem Model 53

Chapter One: Introduction

Introduction

The current trend in educational reform is the development of Professional Learning

Communities, particularly in large schools (DuFour, 2004). One of the driving principles

of the Professional Learning Community is the shift from ensuring that students are

taught to ensuring that students learn. In Professional Learning Communities, as teachers

collaborate, there are three questions used to define what should take place in the

classroom:

• What do we want each student to learn?

• How will we know when each student has learned it?

• How will we respond when a student experiences difficulty in learning?

The last question, according to DuFour (2004, p. 8), is what separates Professional

Learning Communities from traditional schools. It brings to focus the concept of early

intervention, as opposed to remediation. Teachers must balance, more than ever before,

the increasing demand to meet the needs of every student regardless of how diverse those

needs might be. So, while the focus might now be on student learning, we must not

forget that a key element that ensures learning takes place is still in fact the teacher, and,

in order to ensure that students learn, we must still ensure that students are taught and

taught effectively. We must have schools that are filled with professional teachers. Wise

and Darling-Hammond (1985) define the professional teacher as follows:

one who has sufficient knowledge of subject matter and techniques to make

appropriate decisions about instructional content and delivery for different


students and classes. In other words, professional teachers are able to ascertain

their clients' needs and determine how to meet them (p. 31).

Most of us have encountered teachers who seem to make this look

easy. These qualities seem to come naturally to them. The question, then, is what do we

do with the teachers who walk into a classroom and are not yet professional teachers?

Not only do we need to intervene early with students when they fall behind, but we also

must intervene early with teachers who may not have effective strategies that ensure

student learning. Unfortunately, we often do not realize there is a problem until it

is too late. Thus, the purpose of this research study is to explore how to intervene early with novice

teachers in order to expedite their transition from novices to professional teachers.

Clinical Supervision is a well-established model designed to offer a support system to

facilitate instructional behavioral changes (Sullivan, 1980). According to Goldhammer

(1969), Clinical Supervision has greater potential than any other supervisory process used

in education to remedy instructional weakness.

Need for the Study

According to the Alliance for Excellent Education (AEE), an estimated 157,000

teachers leave the field of education each year and more than 232,000 others change

schools (2008). Together these numbers account for 12% of the total teacher workforce.

Additionally, one third of all new teachers leave the profession after three years, and the

number increases to almost fifty percent after five years (Baldacci, 2006). This is

significant to the study because 37.2% of teachers who either left the profession or

changed schools cited lack of support as one of the motivating factors in their decision.

The problem is not merely that of teacher turnover and attrition, but the impact that it has

on the students in our schools. According to Hanushek (2004), research consistently

shows that first year teachers are much less effective than their more experienced

counterparts.

This problem is exacerbated in urban school settings. According to a 2005

MetLife Survey, teachers who were hired already "at risk" for leaving the profession

were also more likely to be teaching in urban schools with high populations of minority

students and students living in poverty. The teacher turnover rate in high-poverty schools is

20%, compared to 12.9% in low-poverty schools (NCTAF, 2003). According to AEE

(2008), "There is a general consensus that the single most important factor in improving

any student's performance is the quality of the teacher, but researchers have found that

the impact of a higher quality teacher is particularly significant for low-performing,

minority students." Darling-Hammond confirms this idea stating that a quality teacher

correlates strongly with student success even when considering factors such as limited

English proficiency and low-income status of students (1999). Unfortunately, a large

number of new teachers have a negative impact on student achievement (Ingersol, 1999;

Portner 2003). This revolving door of teachers causes urban schools to continue in an

unstable state and adversely affects student achievement; in addition to placing increasing

demands on remaining school staff, it also strains budgets with continuous teacher

recruitment and hiring (Ingersoll & Smith, 2003).

In the 1980s, mentor programs sprang onto the scene to help address many of the

problems new teachers face. However, according to Feiman-Nemser (1996), there have

been no data to validate the process, and few programs can be cited for success.

According to Perez, Swain, and Hartsough (1997), effective new teacher programs should

provide an integrated, systematic approach to prepare confident teachers who will remain

in the profession. These authors add that mentoring alone does not accomplish such a

goal. Additionally, current trends in teacher evaluation practices are not used as the

opportunity to give teachers feedback on their instructional practices and do little to

change a teacher's practice or their effectiveness (Goldrick, 2002).

In the second half of the 20th century, the field of supervision became closely

identified with various forms of Clinical Supervision, initially developed by Harvard

professors Morris Cogan and Robert Anderson (Cogan, 1973). Since that time, numerous

studies have been done in the broad area of Clinical Supervision regarding teacher and

supervisor attitudes towards the process of Clinical Supervision (Weller, 1969; Putnal,

1981; Nsien, 1984; Benjamin, 1987; Langmuir, 1998). Few researchers, according to

Gibson (1986), have studied the effects of Clinical Supervision as it relates to behavior

changes of the classroom teacher. However, Clinical Supervision remains a viable

resource for providing novice teachers with individualized instructional support (Pajak,

2003).

Definition of Terms

• "When supervision is direct, centered in the classroom, focused on teachers'

issues, aimed primarily to helping teachers understand and improve their

teaching, and collaborative, the term Clinical Supervision is often used"

(Sergiovanni & Starratt, 2002, p. 221). A more elaborate explanation of the

process of Clinical Supervision will be discussed in Chapter Two. The

methodology for which this procedure is employed for the purpose of this case

study will be explained in Chapter Three.

In the context of Clinical Supervision, the role of the supervisor is to help teachers

develop, "skills for analyzing the instructional process based on systematic data;

skills for experimentation, adaptation, and modification of the curriculum; and a

broader repertoire of teaching skills and techniques" (Acheson & Gall, 1997,

p. 12).

Research Topic

During my course on Supervision at the University of Houston, I became

interested in the practice of Clinical Supervision and how it could help me work with

teachers in my role as a math curriculum specialist. Over the course of the year, I acted as

a novice supervisor working with two first year teachers and one second year teacher.

While this was not a formal research project, I found that this practice could impact the

way teachers provide instruction, which could also impact student achievement.

Therefore, I wanted to further explore, in this study, what the implications are for

employing the process of Clinical Supervision with a novice teacher. The current study

will be conducted as a case study in a different school setting with a different novice

teacher than in the original informal study.

Research Question

The research topic is framed into the following research question:

How can Clinical Supervision be used to improve and support the instructional practices

of a novice teacher?

Purpose of the Research

"A case study is an empirical inquiry that: investigates a contemporary

phenomenon within its real-life context when the boundaries between phenomenon and

context are not clearly evident, and in which multiple sources of evidence are used"

(Yin, 1984, p. 23). The purpose of this research is to conduct a case study in order to

investigate the following questions:

1. Do a novice teacher's instructional practices change when he/she works

with a supervisor in the Clinical Supervision process?

2. Can the conferences during the cycles of Clinical Supervision begin to

uncover instructional issues that the teacher may not have originally been aware

of?

3. If so, how can these issues be addressed in the Clinical Supervision process?

Significance of the Study

Because teacher effectiveness has a direct impact on student achievement, and

because research shows that novice teachers often are not as effective as those with more

experience, the results of this study may give investigators a better understanding of how

to move the novice teacher along the continuum from novice teacher to professional

teacher, as defined earlier, at a faster rate. This progress, in turn, has the potential to raise

student achievement. According to Johnson (2006), when teachers feel that they have the

opportunity to become even slightly more effective with their students, they are more

likely to remain not only in the teaching profession, but also at their same location.

Limitations

As with any case study involving human subjects, there are foreseeable

limitations to the research.

• Case study participant: The participating teacher in this case study is a

volunteer. Because the participant is a volunteer, there can be no guarantee that

the participant is fully invested in the process of the research study.

• Time constraints: Clinical Supervision is a time consuming process for both the

supervisor and the supervisee. Because of the huge learning curve that novice

teachers face, it could be difficult for them to spare the time that it takes to have

an open discussion during the pre- and post-conferences in the supervision cycle.

• Trust: In order for Clinical Supervision to really be successful, a strong rapport

must exist between the supervisor and the supervisee. Because a first year teacher

might be scared or embarrassed to admit the things they are really struggling with,

the research could be skewed. Additionally, at the time I will be conducting this

study, I will be working as a full time principal intern. This position carries no

administrative or evaluative authority, nor am I certified to conduct evaluations by

the Texas Education Agency. Nonetheless, my position could result in the

participating teacher viewing me with more authority on campus than I actually

have. It might also make it more difficult to establish trust with the participant.

• Supervisory skills: While I have acted in the role of the clinical supervisor in my

previous position and research, I have had no formal training in the practice, and I

am still a novice supervisor.

• Knowledge of student learning: While this study is designed to determine the

effects of the Clinical Supervision process in relation to a novice teacher's

instructional practice, there is no direct way to link those changes to a difference

in student learning.

• Attitudes, beliefs, and values: While Clinical Supervision is to be collegial and

free of judgment, the supervisor will still have values, beliefs, and ideas about

what the teacher is doing in the classroom that go beyond the scope of data the

supervisor is to collect during the observation.

Chapter Two: Review of Literature

Introduction

The purpose of this study is to explore how Clinical Supervision can be used to

improve and support the instructional practices of a novice teacher, if indeed it can be

used at all for such a purpose. In the first section of this literature review, I will examine

the basic purpose and tenets of the Clinical Supervision process. In the second section, I

will explain how Clinical Supervision can be used as an effective instructional practice.

Next, I will examine why this process should be used as an instructional practice to aid in

the development of novice teachers. Finally, I will examine other empirical studies

conducted to determine if Clinical Supervision can be used to effect changes in the

classroom behaviors of teachers.

Clinical Supervision Overview

Clinical Supervision has as its primary goal the professional development of

teachers with its main focus on improving teachers' classroom performance (Acheson &

Gall, 1997). Acheson and Gall (1997) outline more specific purposes for Clinical

Supervision, such as providing teachers with objective feedback on their instruction,

diagnosing and solving instructional problems, helping teachers develop skill in using

instructional strategies, evaluating teachers for promotion, tenure, and other decisions,

and helping teachers develop a positive attitude about continuous professional

development. This leads to the questions of what Clinical Supervision is and how it can

accomplish these goals where other methods of supervision have failed.


Clinical Supervision was first introduced by Cogan and Anderson in the 1950s

(Cogan, 1973; Goldhammer et al., 1980). Cogan viewed Clinical Supervision as the

vehicle for developing professionally responsible teachers who were capable of analyzing

their own performance, who were open to change and assistance from others, and who

were, above all, self-directing (Pajak, 2003). Cogan's goal was to use classroom events in

a collegial environment to improve teacher performance (Cogan, 1973). Though

numerous authors have expanded upon and reinvented Cogan's original theory, the

practice of Clinical Supervision remains a viable source to improve teacher performance

(Pajak, 2003).

Clinical Supervision is a continuous series of cycles in which the supervisor

assists the teacher in developing ever more successful instructional strategies. The three

phases of Clinical Supervision, as explained by Acheson and Gall (1997), are: the

planning conference, the classroom observation, and the feedback conference. In all

three phases, the supervisor and supervisee work together in a collaborative effort to plan

for what teacher or student behaviors may need to be analyzed, how those behaviors will

be observed and recorded, and finally how to address those behaviors in a way that

promotes student achievement and teacher effectiveness (Stoller, 1996). This establishes

a more teacher-centered form of supervision. The teacher and supervisor share a "power

with" relationship instead of a "power over" relationship, which according to Holland

and Garman (2001), is one of the first steps in redefining the evaluation of teachers.

According to Stoller (1996), it is the planning conference that sets the stage for

effective Clinical Supervision. "The goal of the planning conference is to identify and

define an area of genuine concern that the teacher would like to understand better or

improve" (Stoller, 1996, |2). Stoller explains that, when a specific area of concern is

defined for observation, the teacher is more likely to explore solutions or alternatives to

those practices later in the feedback conferences (1996). During the planning conference,

the supervisor and teacher also set the date and time for the observation and determine

how data will be collected during the observation, so that both the teacher and supervisor

have the same set of expectations (Glickman, Gordon, & Ross-Gordon, 2007).

The second phase of the Clinical Supervision process is the classroom

observation. During the observation, the supervisor focuses only on the agreed upon

behaviors or actions discussed in the planning conference using the data collection

method discussed in the conference. Another key to effective Clinical Supervision

(Stoller, 1996) is selecting the right data collection method. Acheson and Gall (1997) list

twenty-four different data collection techniques to elicit data for numerous classroom

happenings. Cogan maintains that inadequate classroom records defeat the entire process

of the Clinical Supervision approach. "Supervisors and teachers find themselves mired

down in fruitless arguments about what did and did not actually occur in the course of

instruction" (Cogan, 1973, p 136). Recording "hard" data during the observation can

alleviate defensiveness that teachers may feel in the post-conference if they perceive the

observation as evaluative (Acheson & Gall, 1997).

The analysis of data takes place in the third phase of Clinical Supervision called

the feedback or post-conference. With proper data collection, teachers are able to note

the differences between what they believe to be occurring in their classrooms and what is

actually occurring (Bennan, n.d.). It is this feedback, if given in a timely manner, that can

lead to improved teacher effectiveness. According to Acheson and Gall, important

information to discuss in the feedback conference should be, "information that is

objective (unbiased), accurate, clear (to both parties), relevant to the agreed upon

concerns, and interpretable in respect to what changes are feasible and reasonable" (1997,

p. 150). In addition, the data presented must be understood and immediately useful to the

teacher. The feedback conference is used to analyze and interpret the data collected as

the basis for deciding what instructional changes may need to be made in the future.

Acheson and Gall (1997) and Crane (2002) agree that the teacher should take the lead in

the majority of this discussion as the supervisor listens, facilitates, and asks questions.

Depending upon the skill level of the teacher, however, Glickman, Gordon, and Ross-Gordon

(2007) explain that the supervisor might use several different approaches during the

feedback conference: directive control, directive informational, collaborative, and

nondirective.

When a supervisor takes a directive control approach, he/she basically takes

ownership of the teacher's problem, decides how best to solve it, and tells the teacher the

solution and the appropriate actions to take (Glickman et al., 2007). Glickman explains

that this is not a type of supervision to use with all teachers all of the time. When used in

conjunction with Clinical Supervision, directive control approaches should only be used

when teachers are functioning at low developmental levels (Glickman et al., 2007). In

addition, Glickman explains that when you are working with a teacher that necessitates a

directive control approach, the goal should be to move toward a directive informational

approach.

Using the directive informational approach during the post conference, the

supervisor is the main source of information, but does solicit and consider teacher

feedback. Instead of telling the teacher exactly what to do and how to do it, the

supervisor will provide several alternatives for the teacher to choose from, and then help

put an action plan in place to address the issues at hand (Glickman et al., 2007). As with

directive control, there is a time and a place for the directive informational approach. It

can still be used with teachers functioning at relatively low levels of development, but

also when the teacher is confused, inexperienced, or at a loss for what to do, and the

supervisor has knowledge of effective practices (Glickman et al., 2007).

The final two approaches that Glickman suggests are for teachers functioning at

higher developmental levels who have moved beyond the directive control and directive

informational stages. The collaborative approach is best used when the teacher and

supervisor share equal expertise on an issue, are both involved in carrying out the

decisions, and are both committed to solving the problem (Glickman et al., 2007). The

non-directive approach is based on the idea that the teacher has the knowledge and skills

to know what instructional changes should be made and how to make them. Decisions

rest with the teacher, and the supervisor acts as a facilitator to assist the teacher in

reflecting on their behaviors and planning for change (Glickman et al., 2007).

The goals of the feedback conference are to provide teachers with actual data

observed; analyze the impacts of the data on instructional practices and student learning;

and discuss strategies teachers can use to implement more effective instructional practice.

Once these goals have been accomplished, it is often the case that the feedback

conference turns into the planning conference for the next classroom observation as the

cycle of supervision continues.

Clinical Supervision as an Instructional Practice

After understanding the process of Clinical Supervision, we are still left with the question

of what benefits this method offers when other methods of supervision fall short.

Cogan states,

Clinical Supervision takes its principal data from the events of the classroom.

The analysis of these data and the relationship between the teacher and supervisor

form the basis of the program, procedures, and strategies designed to improve the

students' learning by improving the teacher's classroom behavior (1973, p. 9).

The important distinction to be made when considering the practice of Clinical

Supervision is that, at its very core, it is designed to improve student learning by

improving the methodologies of the teacher. It is not designed as a checklist to

ensure that an objective is written on the board, classroom rules are posted, lesson plans

are complete, and curriculum is aligned. When done effectively, Clinical Supervision

gives the teacher a lens through which they can examine their own instructional practices

in order to identify which have the greatest impact on student achievement and to alter

the practices as needed. An exemplary supervision model that promotes teacher development,

according to Goldsberry (1998), would include the following:

• Teachers and supervisors hold a pre-observation conference during which

the teacher gives the supervisor important information concerning the

lesson to be observed and the focus for the lesson's success.

• The pre-conference is followed by an observation or multiple observations

during which the supervisor collects data to answer questions concerning

the lesson's success.

• The supervisor and teacher have a period of time to review and interpret

the recorded data.

• A post-observation conference is held during which the teacher and

supervisor share in an exchange of ideas and interpretations concerning

the data with suggestions for future educational activities.

• A time for reflection during which the information from the supervisory

practice is reviewed and followed by changes in the teaching practice if

changes are deemed appropriate.

"Teacher involvement in an exemplary supervisory practice such as in the supervisory

structure outlined above, optimizes teacher professional growth and facilitates the growth

of teacher leadership" (Williams, 2007, p. 13). Clinical Supervision offers teachers just

such an opportunity.

Williams (2007) conducted a study of one school's pilot of the Clinical

Supervision model in lieu of standard teacher evaluation practices. His study was an

attempt to determine whether there were benefits and professional gains to the teachers

who participated in the study. His findings support the idea that Clinical Supervision does

lead teachers to more closely examine their own practice and reflect more on what

transpired in their classrooms. One teacher reported, "I could see myself as a teacher, a

facilitator and not doing the work for them (the students). Allowing them to come up with

their thoughts about the math process" (p. 177). Another teacher explained, "I was

successful using some of the things I learned in the training and I learned that I don't

always need to tell them (the students) what to do - I began reflecting on my own

practice and learned I needed to change some things" (p. 177).

Houk (1999) found similar outcomes in her qualitative study on supervision of

beginning teachers. All five teachers in the study reported changing their behavior to

varying degrees during the process of Clinical Supervision. Houk found that the

teachers who experienced the more "textbook" example of Clinical Supervision made the

most significant behavior changes. Houk reports that one of the participants, Kevin, took

the outcomes of the post conference "into my repertoire of knowledge... The rest of my

lessons for that term... I really worked on gaining their attention before I started... I used

eye contact" (p. 87). Kevin's supervisor had worked with him on how to develop a

concept at a pace appropriate for his students. This proved to have a direct impact on his

instructional practices. During the supervision process, Jaime, another participant, began

to consider different learning styles and methods to keep his students focused. The other

participants in the study did not report significant changes in behavior. However, it is

important to note that these participants had supervisors who did not follow through on post

conferences and deviated from the true cycle of the Clinical Supervision experience

(Houk, 1999).

Clinical Supervision in the Development of Novice Teachers

Before we consider the development of new teachers, let us first take a moment to

consider training programs such as those for aspiring doctors. Those seeking a medical

degree spend a great deal of time learning the theory behind their practice in the

traditional classroom setting. However, this only accounts for a small portion of their

development. They also spend a great deal of time "in the field" through clinical

rotations in a hospital, culminating their "schooling" with a one year internship where

they develop their skills for practicing medicine under the watchful eye and guidance of

licensed medical doctors. Upon completion of their degree, once fully licensed and ready

to practice medicine, they still aren't handed the scalpel to perform the difficult procedures

of their more experienced counterparts. As residents, they continue to learn and perfect

their practice under the guidance of experienced practitioners. Compare that to the

novice teacher.

First-year teachers have often spent little time on an actual school campus

learning the proper techniques and skills required to teach. Novice teachers enter our

classrooms with different levels of experience. Some have spent 12 weeks in

a classroom student teaching with a supervising experienced teacher, some have only

done a few classroom observations, and with the emergence of different alternative

certification programs, some of our novice teachers enter the classroom on the first day of

school for the first time since they themselves were students. On day one, these novice

teachers often have the same classroom responsibilities as those of a 20-year

veteran. We need them to deliver high-quality lessons, engage students at all levels, and

have a positive impact on student achievement by raising test scores. It is no wonder

that first-year teachers report feeling overwhelmed. Each of the five participants in Brian

Coffee's study on mentoring reported feeling overwhelmed in the first months of school.

According to Portner, many new teachers, "must merely cope rather than focusing on

teaching well" (2003, p.4).

Because of this, according to Portner (2003), far too many teachers leave the profession

before they have the opportunity to develop the "art" of teaching, having had no

initial or ongoing support. As explained in the introduction, teachers leave the

profession at alarmingly high rates in their first five years of teaching. New teachers are

even more likely to leave in their first year if they do not receive proper support and

preparation to teach in challenging situations (Portner, 2003; AEE, 2008). This problem

is exacerbated when we consider that urban schools serve more minority students and

students of poverty than their suburban and rural counterparts. It is the students in these

schools that are often exposed to the less trained and experienced teachers (Darling-

Hammond, 1988; Ingersoll, 2004). Because new teachers are often left to "fend for

themselves," they do not become continuous learners, and instead of changing or

modifying instructional practices when they are ineffective, these teachers often become

mired in bad habits and teaching practices that span the length of their careers, if indeed

they continue their careers past the first year (Haberman, 1987). Therefore, it is

imperative that we consider and address the needs of novice teachers, especially those in

urban schools. Teachers who are shown how to increase the achievement levels of their

students are likely to continue teaching longer and are less likely to leave lower-

performing, poorer schools (AEE, 2008).

Teachers need an environment of support. According to Millinger (2004), nearly

one fourth of teachers who left the profession in 2000 did so due to inadequate

administrative support. Millinger explains that by establishing relationships of working

"with one another" instead of "for one another", teachers are more likely to ask questions,

seek assistance, and obtain information that they require.

In many situations the term supervision has a negative connotation. As Holland

and Garman explain, "scholars and practitioners express a desire to redefine their work

by the use of words other than 'supervision,' a term that many believe has become

debased and offensive" (2001, p. 97). Acheson and Gall substantiate this by explaining

that most teachers do not like to be supervised. They are often defensive and do not find

the supervision helpful.

Supervision of new teachers is often linked to evaluation. According to McCann

(2005), new teachers have a strong desire to meet the expectations of their supervisors.

"Teachers are best severed by a supportive evaluation plan that focuses on professional

development and discourages punitive approaches to teacher evaluation" (2005, p. 33).

He goes on to explain that supervisors should engage in professional conversations with

teachers about their observations. The spirit of the supervision should be one of coaching

and support with planned observations and reflective conversations.

According to Stoller (1996), one of the greatest challenges supervisors face is

changing the negative attitude accompanying supervision so that it can, in fact, serve as

professional development for improved instruction. She explains that, in order to change

these attitudes, we must use an approach that is interactive, teacher-centered, concrete,

objective, and focused, rather than approaches that are directive, supervisor-centered,

vague, subjective, and unsystematic. "Honest dialogue and constructive feedback will

lead to professional growth" (Stoller, 1996, ¶ 2).

She concludes that,

Clinical Supervision is one traditional approach that meets the criteria specified

above. An examination of this approach (see Acheson and Gall 1992) reveals that

the use of Clinical Supervision techniques can radically change

supervisor/supervisee relationships, resulting in less stress and anxiety - on the part

of both the supervisor and teacher - and a more positive teacher response to

supervision. (¶ 3)

One of the most significant impacts of Clinical Supervision is the direct role it can

play in changing new teachers' behavior and improving their classroom instruction

(Cogan, 1973; Acheson & Gall, 1997; Sergiovanni & Starratt, 2002). Addressing the

needs of new teachers early in their career can have a lasting impact on teacher retention

and teacher effectiveness (Kaplan and Owings, 2002).

Empirical Studies Using Clinical Supervision with Novice Teachers

In 1985, Thomas Gibson conducted a study entitled "The Effectiveness of

Clinical Supervision in Modifying Teacher Instructional Behavior." Five untenured

teachers participated in Gibson's study. This study was designed to answer the following questions:

1. Do teacher classroom behaviors change after using the techniques of teacher

self-guided training via multimedia packets and self-analysis of instructional

performance?

2. Do teacher classroom behaviors change after using the techniques of

supervisor-guided training along with supervisor analysis of instructional

performance (Clinical Supervision)?

3. Which of the above training procedures lead(s) to the greater improvement in

teacher performance? (Gibson, 1985, p. 3)

Important to this research study is Gibson's answer to question two. Gibson claims that,

"the answer to this question based solely on statistical significance is no" (1985, p. 70).

However, one must consider how the study was conducted. In Gibson's study there were

no pre-conferences for the supervisor and teacher to determine together what teacher

practices would be observed. Instead, Gibson pre-determined three teacher behaviors

upon which data would be collected during the observation. These behaviors included

questioning techniques, teacher praise, and teacher/student contact during seatwork.

Because these behaviors were predetermined, the study violates one of the basic tenets

of Clinical Supervision. As Acheson and Gall explain, it is the planning conference that

sets the stage for effective Clinical Supervision. "A basic purpose of the planning

conference is to provide an opportunity for the teacher to communicate with a fellow

educator about a unique classroom situation and style of teaching" (1997, p. 61).

Because there was no preconference, any determination about the effectiveness of

Clinical Supervision in changing teacher behavior is flawed from the outset.

Initial observations of teachers were conducted to determine to what level each of

the targeted behaviors was occurring. Based on the need, teachers were then given a

multimedia packet designed as a self-study guide for the teacher, which was

explained and discussed in a conference with the principal, who acted as the supervisor in

this study. The teacher was asked to implement the new behaviors, and after seven days,

the principal again began to observe the teachers every day for a period of three weeks.

After each observation, the findings were given to the teacher to review, and at the end of

each week, the principal and teacher had a conference in which the principal "provided

suggestions and methods for improvement, as well as encouragement" (Gibson, 1985, p.

21).

Clearly Gibson's study was not a true study on the effectiveness of Clinical

Supervision to change teacher behavior. Granted, based on his results, the techniques

used in his study did not effect significant change in teacher behavior. However, these

techniques do not follow the true protocol designed by Cogan in the use of Clinical

Supervision. The misuse of the term and the study's outcomes provide further

reasons why we need to continue to research the effectiveness of Clinical Supervision to

change the instructional practices of teachers.

Rauch and Whittaker (1999) conducted a study in which a variation of Clinical

Supervision was used for peer supervision with pre-service teachers. In this study, pairs

of student teachers observed each other in action, provided written feedback, and

conducted conferences with each other regarding the feedback. In this study, teachers

reported that they were given insight into their teaching practices, learned instructional

approaches that worked with different groups of students, and were provided insight that

caused reflection about their own teaching practices. The study does not cite specific examples

of behavioral change. Like Gibson's study, this design also omits the preconference in its

design. Rauch and Whittaker's study also differs from the current study proposed in that

student teachers were being observed by other student teachers. The observers did not

have a great deal more skill or experience in dealing with students, curriculum or other

teaching experiences than did those whom they observed.

One other study related to the Clinical Supervision of novice teachers was

discussed in a previous section of Chapter Two: a 1999 study by Houk. Houk's

qualitative study included five second year teachers who had participated in Clinical

Supervision during their first year of teaching. Those teachers were interviewed and

surveyed in order to determine what themes arose. Houk reported in her study that,

"professional impact was minimal as the teaching behavior of each teacher was not

significantly altered by the Clinical Supervision experience" (1999, p. ii). Possible

reasons for the lack of professional impact, as Houk reports, are that there were

inconsistencies with the purpose and the implementation of the Clinical Supervision

practice. She also reports that each supervision experience had an evaluation tone to it

and different teachers experienced different levels of intensity of the supervision process.

It is important to note, however, that those teachers that did report instructional behavior

changes were those that had supervisors that most closely followed Cogan's traditional

model of Clinical Supervision (Houk, 1999).

Summary

Clinical Supervision has long been a practice that can serve to improve the

instructional practices of teachers. This type of supervision is non-evaluative and gives

teachers and supervisors the opportunity to communicate about instructional concerns

and provides for a collegial relationship. The Clinical Supervision model involves three

phases: planning conference, observation, and feedback conference. When the process is

followed correctly, it offers teachers a way to look at and reflect upon their own practice

in a non-threatening, non-evaluative, collegial environment.

A large number of teachers leave the profession in their first five years, and new

teachers are more likely to leave after one year if they have no initial or sustained support

systems. The problem is even worse in urban schools. Teacher attrition has a direct

impact on student achievement. Therefore, it is imperative that we meet the needs of

beginning teachers and offer professional development to improve their instructional

practices. The Clinical Supervision model is a vehicle through which we can accomplish

such goals.

Chapter Three: Methodology
Introduction

The purpose of this study is to explore how Clinical Supervision can be used to

improve and support the instructional practices of a novice teacher. In this chapter, I will

begin by explaining how the question to be studied lends itself to a qualitative case study

using constant comparative analysis. I will then outline the design for this study,

including how the participant was selected, as well as provide information about the

school environment in which the study takes place. Finally, I will explain how data were

collected and analyzed over the course of this study.

Qualitative Case Study

"Qualitative research is an umbrella concept covering several forms of inquiry

that help us understand and explain the meaning of social phenomena with as little

disruption of the natural setting as possible" (Merriam, 1998, p. 5). There are several

different forms that qualitative research can take; however, Merriam explains five

commonalities found across all forms of qualitative research.

• Qualitative researchers are interested in understanding the meaning people

have constructed, that is, how they make sense of their world and the

experiences they have in the world (p. 6).

• In all forms of qualitative research, the researcher is the primary

instrument for data collection and analysis (p. 7).

• Qualitative research usually involves field work (p. 7).

• Qualitative research primarily employs an inductive research strategy.


• Since qualitative research focuses on process, meaning, and

understanding, the product of a qualitative study is richly descriptive (p.

8).

The procedures for this study, which will be outlined later in this chapter, meet all five

criteria for a qualitative study. Additionally, what often differentiates a qualitative study

from a quantitative study is that the researcher often spends a great deal of time in the

natural setting of the study and is often very closely involved with the participant(s) of

the study (Yin, 1984; Merriam, 1998). This adds another criterion which the current

study meets.

One specific form of qualitative research is the case study. According to Yin

(1984), case studies should be considered as the most appropriate form of study when

dealing with exploratory questions that ask "how" or "why." He further explains that

the case study is a valid research approach when contemporary events are examined,

relevant behaviors cannot be manipulated, and direct observation and systematic

interviewing take place.

Merriam explains that while case studies can be quantitative, in the field of

education they are more likely to be qualitative. Yin gives three applications that give

case studies a distinctive place in evaluative research.

• The most important is to explain the causal links in real-life interventions

that are too complex for the survey or experimental strategies (1984, p.

25).

• A second application is to describe the real-life context in which an

intervention has occurred (1984, p. 25).

• Finally, the case study strategy may be used to explore those situations in

which the intervention being evaluated has no clear, single set of outcomes

(1984, p. 25).

Additionally, in the classic case study, it is an individual who is the primary focus of the

study and the primary unit of analysis. It is the researcher who gathers and analyzes the data

relevant to the case. The current study meets the criteria for a case study as outlined by

Yin.

Data gathered in the study were analyzed using the constant comparative analysis

method, which will be further explained later in this chapter.

Research Design and Procedures

Initial Protocols

Prior to beginning the proposal for this study, several protocols had to be

completed. First and foremost, an application was submitted to the Department for the

Protection of Human Subjects at the University of Houston. Permission to proceed with

the research was granted. In addition to approval from Human Subjects, I also had to

submit a proposal to the research department of the school district in which the study was to be

conducted, which also required approval from the campus principal.

proceed with the study was granted by all parties.

Selection of Participant

Since the focus of this study is on the use of Clinical Supervision with novice

teachers, the participant needed to be a first-year teacher. According to Sergiovanni and

Starratt (2002), the supervisor must "build a relationship based on mutual trust and

support" (p. 227) before every walking into a teacher's classroom in order to set the stage

for effective Clinical Supervision. Because I have eleven years of instructional

experience in the math classroom, I believed that I would have more credibility

supervising a math teacher and that this would help in building a sense of trust with the

participant. Therefore, participation was limited to mathematics teachers. There were

five possible candidates for participation in the study. I met with the candidates as a

group and explained the study I would be conducting and the purpose of the study. Each

candidate was also given a letter explaining in detail the requirements for participation,

the format of the techniques to be used and the time commitment involved. Candidates

were invited to participate on a voluntary basis and were asked to visit with me further if

they were interested in participating. One candidate contacted me via email to let me

know she would like to participate. I met with her and she signed the consent form as

required by the Department for the Protection of Human Subjects. Ms. Cruz (name

changed to protect identity), the participant in the study, is an eighth grade math teacher.

She received her certification through a traditional certification program in the state of

Texas. However, the requirement for student teaching was waived because she spent

four years working on a school campus, during two of which she was employed as a teacher's

aide.

Background of School

Baker Middle School (name changed) is a Title I school with a population of

approximately 1,000 students. The school demographics are as follows: 97% Hispanic;

48% Limited English Proficient (LEP); 65% At Risk; 20% Mobility; 93% Economically

Disadvantaged. The school employs approximately 55 teachers and four administrators.

Baker Middle School has maintained a state rating of Academically Acceptable;

however, the school failed to meet Adequate Yearly Progress (AYP) for the 2007 - 2008

school year based on the performance of LEP students in mathematics on the Texas

Assessment of Academic Skills (TAKS) test and Special Education students'

performance on the TAKS test in mathematics and reading.

Procedures for Data Collection

In this study, I conducted five cycles of the Clinical Supervision process. In the

initial planning conference, I asked questions to help Ms. Cruz determine which areas in

the instructional process she felt most comfortable with. From there we moved to

discussions of other areas to determine what behavior would be observed in the initial

observation. Each observation lasted for a period of twenty minutes. The date and time

of the observation were determined by Ms. Cruz during the conferences.

During the observation, I recorded data on the relevant behaviors. After the

observation, I transcribed the data into an appropriate form to be stored on the computer.

I also kept a reflection journal that was not shared with Ms. Cruz. This allowed me to

make notes of other events that may have occurred during the observation that weren't

relevant to the agreed upon observable behaviors, but that might aid in data analysis for

the purpose of this study. After the initial observation, I scheduled a time to meet with

Ms. Cruz for the post conference during which I shared the data. At this point, we

discussed the data and what theories and best practice research say about the data

observed. Our conclusions set the stage for the next observation.

During each subsequent observation, I collected data and used the constant

comparative analysis method to interpret the data. This method, developed by Glaser

and Strauss in 1967, is a means for developing a grounded theory (Merriam, 1998).

A grounded theory consists of categories, properties, and hypotheses that are the

conceptual links between and among the categories and properties. Because the

basic strategy of the constant comparative method is compatible with the

inductive, concept-building orientation of all qualitative research, the constant

comparative method of data analysis has been adopted by many researchers who

are not seeking to build substantive theory. (Merriam, 1998, p. 159)

Since the purpose behind this research is to explore how Clinical Supervision can be used

to support and improve the instructional processes of a novice teacher, it calls for this

type of analysis. The data gathered on the behaviors observed were continuously

compared to data recorded in previous observations to see if the interventions (as

referenced by Yin previously in Chapter 3) put in place were indeed changing the

instructional practice of Ms. Cruz.

Summary

This qualitative single case study uses Cogan's (1973) model of Clinical

Supervision. I acted as the supervisor and conducted five cycles of Clinical Supervision

with a first year mathematics teacher. Data were collected and analyzed using the

constant comparative analysis method in order to determine if this process does support

and improve the instructional practices of a novice teacher, as well as provide suggestions

and alternatives for future research.

Chapter Four: Clinical Supervision Cycles

Introduction

In this single case study, I conducted five cycles of the Clinical Supervision

process. Chapter Four summarizes each conference and observation. Each observation

summary contains information regarding the data that were collected during the

observation. The chapter ends with a summary interview conducted with the teacher

regarding her experiences during this process.

Pre-Conference #1: November 3, 2008

I began the first pre-conference with a reminder of the purpose and what our roles

would be. I explained that I was here to offer insight into instructional classroom

practices. I reminded Ms. Cruz that all of our conversations were confidential and that

the purpose of this conference was to decide what she would like me to observe during

the first classroom observation. I explained that I would only record data on information

that we agreed upon in the conference.

Without hesitation, Ms. Cruz told me that she wanted me to visit her fourth period

class. She stated, "This is my absolute worst class. I have six level-one students and

seven special education students. I have a very hard time trying to get through the lesson.

I just have a hard time figuring out what to do. I just have bad classroom management.

The kids are horrible." She reiterated that she felt like she could not control the class. In

order to better understand what the classroom dynamics looked like, I asked Ms. Cruz,

"When you say that the students are horrible and that you can't control them, what does

that look like? What specifically happens in class that makes you feel like they are

horrible?"

Ms. Cruz explained that she could not get the students to do their work. She said

that she felt like the students liked her, but they were very behind. She said that she

wastes a lot of time redirecting their behavior. She also explained that she has a co-teacher

(a teacher certified in Special Education who works with regular education teachers to assist

special education students mainstreamed into the regular education classroom) in that class

every other day. This teacher follows the same group of students to

each of their classes. Ms. Cruz told me that she had discussed the behavior of the class

with the co-teacher and he said that when they were in her class, they were actually on

their best behavior.

At this point, I was a little surprised because I had not heard anything from other

administrators, the campus math content specialist, or other teachers or students that

would have indicated to me that Ms. Cruz had any issues with classroom management.

Actually, I had heard quite the opposite. As we talked in the pre-conference, I was

having a hard time figuring out exactly what I would need to observe and give Ms. Cruz

feedback on. In order to assist me in getting a better idea of observable practices, I asked

Ms. Cruz what methods she had tried with the students and what her lesson delivery

usually looked like. She explained that she tried to keep the lesson and the language

simple. She tried to model examples with the students, and she sometimes used videos

from the internet to try and get them engaged. "The students just aren't interested in

learning. The class is so disruptive that one student even told me one day that she

couldn't write in her journal because she couldn't think because of all the distractions,"

explained Ms. Cruz.

The class is a ninety-minute class, so I asked Ms. Cruz in which part of the class she

thought most of the disruptions occurred: the first 30 minutes, the second 30 minutes,

or the last 30 minutes. She said that the students were most disruptive during their

independent practice time. She also explained that she sits with the most disruptive

students during their independent practice. I asked her at what point during the class the

students generally work on independent practice, so that I could observe during that time.

She explained how her class was structured time-wise. During this explanation, she also

told me that when students begin working on their independent practice they move into

groups. This requires movement throughout the classroom. I suggested that sometimes it

is in the transition between activities that disruptions begin to occur, and that I would like

to see the transition from her instruction to the independent practice time. We agreed that

I would do the observation on the following Wednesday from 2:35 to 3:05. However,

after much discussion we still had not agreed on what I would observe.

I asked Ms. Cruz exactly what she would like for me to record so that we could

discuss it after the observation. She said, "Just tell me what I am doing wrong, and how

to fix it." I explained that I understood her frustration, but that I really needed data that

we could look at together because to tell her what she was doing wrong would be a

judgment on my part, and that was not our purpose. She had a difficult time giving me

anything specific to observe. I made the suggestion of recording any verbal student

disruptions along with her responses to the students during the transition from whole

class instruction to independent practice and during independent practice. I explained

that we could look at that to see if there were two or three students that were always

causing the disruptions or if it was a problem throughout the class and that we could see

exactly how many disruptions were taking place. Ms. Cruz agreed that this information

might give her insight into the true nature of the problems.

Observation #1: November 5, 2008

Just as I entered the classroom and took a seat in the back, the teacher was

finishing up her instruction. One student asked a question, and the teacher did not

respond. The student turned around to another student and asked the question again. The

other student responded. The teacher then asked the students to move into their groups. I

observed the students as they moved to different areas of the room to work with their pre-

assigned groups. During this time, only one student called out verbally with a question to

the teacher. The teacher responded to his question. The class moved into their groups

without need for any redirection from the teacher. During the independent practice, one

student left his seat, crawled over his desk, picked something up, crawled back over the

desk and sat down. The same student was later laughing and talking to the student beside

him. While this student did not appear to be on task, his talking and laughter were not

loud enough that I could hear what he was saying and students around him continued to

work. During this observation, I could hear quiet talking around the room, but from my

observations everyone was working on their worksheet and discussion appeared to be

about the work. Some students were speaking in Spanish, so I do not know exactly what

they were speaking about, but they were working and no one was causing loud classroom

disturbances, as the teacher discussed in the planning conference. This behavior

continued for the length of the observation.

Feedback Conference #1/Planning Conference #2: November 10, 2008

As soon as I sat down, the teacher said, "Oh, it went so much better!" I asked the

teacher if they were behaving differently just while I was present in the classroom. She

explained that the class had continuously been going better. She said that she had called

a couple of parents of the students that previously had been the most disruptive. She

explained that she had re-formed the groups and tried to pair students with behavior issues with

quieter students, low-achieving students with higher-achieving students, and the level-one

students with bilingual students who could help interpret for them. I explained that for

group activities those were all research-based strategies for cooperative groups. I

explained to her that my responsibility during the previous observation was to record

verbal disruptions during the transition and independent work, and I shared the

information that I had, though there was not much to share or discuss. I asked what else

she might like feedback on. She thought for a moment and said, "Students don't read.

They know how to do it, but they don't do it. They don't read the problems or the

directions." What I realized at this point was that in both conferences, in which I had

asked what her instructional concerns were or what she would like feedback on, she gave

me student behaviors. I explained to the teacher that a student reading or not reading was

not really something that I could observe or that we could discuss and plan for

improvements on. To which she replied, "They have a lot of trouble still with fractions."

As the supervisor, I had to determine how I could get Ms. Cruz to focus on a behavior

that was more teacher-centered. I explained again that she was really describing a

student behavior and what I really wanted to do was look at something regarding her

practice as a teacher. She thought for a moment, and said "I don't know." I then

suggested we think about it another way.

I said "Think about your end of the year observation that one of the principals

will do. The process that we are doing is something that can help ensure that you get a

really good evaluation. When any of the administrators on campus have done walk-

through evaluations of your class, have they given you any feedback that was either good

or bad that you aren't sure about?" At this point, the teacher's eyebrows lifted and she

said, "Yes, apparently I ask good questions." I told her that was a great starting point,

and asked her why she thought she asked good questions. She said that when Ms.

Maldanado came to her class, she had written on the check-off sheet, "great

questioning strategies." Then she said, "But, I don't really know why she said that." At

that point I suggested that I record the questions that she asks in class, which would give

us real data to look at so that she could examine whether she thought her questioning strategies

were good and why. She agreed that she would like to do that. The observation was

scheduled for December 2, 2008.

Observation #2 and Data Analysis: January 7, 2009

The second observation was originally scheduled for December 2, 2008,

following the planning conference that occurred on November 10, 2008. However, over

the Thanksgiving holiday, Ms. Cruz had knee surgery and consequently was out on sick

leave on December 2. The observation was rescheduled on two other occasions prior to

the winter break, but each was subsequently cancelled, due either to an unforeseen

scheduling conflict arising from complications with Ms. Cruz's surgery or to a conflict with the

supervisor's schedule. Upon returning to school on January 5, 2009, the observation was

scheduled for Wednesday, January 7, 2009 during third period from 11:55 to 12:15.

During a brief visit on Monday, January 5, 2009, I checked with Ms. Cruz

to make sure she was still comfortable with me observing the class and recording the

questions that she asked students. She agreed that this was okay.

When I entered the room, Ms. Cruz was in the middle of a review to prepare

students for an upcoming benchmark exam. She was reviewing material related to

converting between decimals, fractions, and percents, reducing ratios, ordering numbers,

and operations with integers. I remained in the room for twenty minutes and recorded the

questions that she asked.

Upon analyzing the data, I found that Ms. Cruz asked a total of 109 questions in a

span of twenty minutes. Of those questions, 59 questions were at the knowledge level of

Bloom's Taxonomy and basically asked students for the next step in solving a math

problem. Samples of the questions include:

Do I move the decimal to the left or to the right?

What do I do next?

How many spaces?

Forty-one questions were for purposes of clarification, such as "Do you understand?" Of

the nine questions remaining, five were used for student redirection and four could be

considered to be at the level of comprehension on Bloom's Taxonomy. This alone was

concerning to me as the supervisor because this teacher had been told by an administrator

on campus that she asked good questions during her classroom instruction. She did in

fact ask a lot of questions to keep most students' attention. However, when compared to

questions that students will face on the Stanford Test and the Texas Assessment of

Knowledge and Skills (TAKS) test, the questions she asked were at a very low level. As I observed

the class, while I did not record specific data, I also noticed that the teacher answered a

lot of her own questions and rarely allowed enough time between asking the first

question and either answering the question herself or asking the question in a different

manner. Additionally, all of the questions, except for two, were group response

questions. Since questioning strategies were initially something that the teacher felt good

about, I did not want to bombard her with a great deal of data that might be interpreted as

negative in the feedback conference. I decided to simply give Ms. Cruz the lists of

questions and let the conversation flow from there.

Feedback Conference #2/Planning Conference #3: January 13, 2009

I began this conference by asking the teacher how many questions she thought she had

asked the students in a twenty-minute time span. She responded that it was maybe thirty or

forty. When I told her that she had asked 109, she was very surprised, and her immediate

response was to ask, "Is that bad?" I explained that it was not about being good or bad;

it was just information. I showed her a list of the questions that were asked during the

observation, and I asked her to look them over. Her response was, "I think they are very

low." When she spoke of them as low, she was referring to them being low level or

simple questions. She continued to explain that she thinks in another class, they might be

higher. I asked her if she had a copy of the question stems that indicate what category

questions fall into with regard to knowledge, comprehension, application, analysis, synthesis, or

evaluation. She said she did not and had never heard of them. I told her that I had a copy I

could share with her, and she indicated that she would like a copy.

I then asked her to think back to her teaching methods class in college and tell me

what she remembered concerning what research and best practice said about questioning

strategies. She replied, "Don't ask yes or no questions. Always ask why. Show them

how you do it." I asked her what she remembered about whole group response questions

and individual student response questions. She said that she didn't really remember

much about that topic, but responded, "I think when you want one student to answer, you

call on that student and then ask the question." I responded, "When you do that, when

you call on the student first and then ask the question, what does it do for all of the other

students in the class?" She was quiet for a moment, and then said, "Well, I guess they

know you aren't going to call on them."

"Right," I said, "so at that point do they really have to listen and pay attention and

think about the question?" "No, I guess they really don't. So, if you want to call on an

individual student, you should ask the question and then call on the student?" she asked.

I suggested that this would be a good strategy to try because it would give more students

a chance to think about the question and think of an answer. However, I also wanted Ms.

Cruz to think about how much time she gives students to answer the questions before she

either answers the question herself or asks another question. I asked her, "Do you

remember what your class said about wait time?" She answered, "I think like ten seconds

or something like that." That answer really surprised me. I answered by explaining that

generally research says three to five seconds, but longer would be better for students who

have learning difficulties, such as her special education students or her English language

learners (ELL). I asked her if she thought she gave students that much wait time between

questions, and she said, "I don't know. Maybe. Probably." While I did not record wait

time during the observation, I knew that she was not allowing that much time, so I

suggested that we time ten seconds. I asked a question and waited for ten seconds to go

by. When it was over, Ms. Cruz immediately said, "Oh, no, I don't give them that much

time." I explained that it was okay, and that maybe she could make a conscious effort in

future lessons to wait three to five seconds before rephrasing the question or answering

the question herself. When I mentioned this, she said, "I answer the questions myself?

Do I do that a lot? That's bad." I responded, "I don't know how often that you do it

because I did not record that data. I did notice a couple of times throughout the last

lesson that you answered yourself."

I asked her to look at her lists of questions again and decide if she thought they

were mostly group response, individual student response, or whole group response. She

looked over the list and said that she believed they were mostly whole group questions.

She saw only two questions that looked like individual student questions. I summarized

by saying that we had talked a lot about questioning strategies. I asked Ms. Cruz what

she would like to work on so that I could come back and do a follow up observation to

see if there were changes. She said that she would like for me to look at her wait time,

the levels of questions that she asks (referring to Bloom's Taxonomy) and varying how

she asks the questions. She also said she would like to make sure that students are

answering the questions and that she doesn't answer her own questions. I suggested that

the next time I come in, I again record the questions that she asks during the lesson. This

time I would add how much wait time she gives, count how many questions were group

response versus individual response, and make a notation if she answered

her own question. During the next conference, we would look at the data to see if she

was asking a greater percentage of higher order thinking questions throughout the lesson

in addition to the knowledge and comprehension questions, if she was allowing for three

to five seconds of wait time, and if she was varying the questions between group response

and individual response. I also explained that this was a lot to try to incorporate at one

time, so I suggested that we wait a week between this conference and the next

observation to give her time to consciously incorporate the things we had discussed into

her instruction.

Observation #3 and Data Analysis: January 28, 2009

While this is the third observation of this teacher's instruction, it is only the

second observation regarding specific instructional practices. The teacher and I decided

that I would continue to observe during third period to get a consistent picture of the

classroom dynamics and teacher-student interactions. As I walked into the teacher's

classroom, the teacher was standing at the front teaching a lesson involving measures of

central tendency. The teacher used a Power Point slide in which she gave the students

the definition of a term, such as mean, had them copy the definition, and then worked a

sample problem with the students. This lesson structure was similar to the lesson

structure the teacher used when reviewing with the students during the previous

observation. While observing, I recorded the questions asked by the teacher, calculated

wait time after teacher questions, tallied whether questions were group response or

individual response, and tallied the number of times the teacher answered her own

questions.

Upon analyzing the data collected, I found the teacher asked a total of 51

questions: 21 questions at the knowledge level, 10 questions at the comprehension level,

and 20 clarifying questions. While this does represent an increase in the percentage of

questions that are at the comprehension level, there are still no questions that ask students

to apply the knowledge they have gained or extend their thinking to higher levels.

The examples worked in class gave students a set of numbers and asked them to find the

mean, the median, or the mode. Of the questions asked, two questions were directed to

individual students while the rest solicited a class response. There were approximately

ten instances in which the teacher waited less than two seconds and then either answered

her own question or asked a different question. Most concerning to me, however, was the

fact that between one observation and the next, there were no significant changes in the

level of questioning. For the previous feedback conference, I gave the teacher a list of all

questions asked with no other information. For the next conference, I prepared a list of

all the questions asked and categorized them according to knowledge, comprehension,

and clarification. I also recorded the amount of wait time the teacher gave after each

question. Additionally, I marked which questions the teacher answered or followed

immediately with another question.
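
To put the shift in concrete terms, the counts reported for the two observations can be restated as approximate percentages (a simple recalculation of the totals already given, not new data):

\[
\text{comprehension-level questions: } \frac{4}{109} \approx 4\% \text{ (Observation 2)} \;\longrightarrow\; \frac{10}{51} \approx 20\% \text{ (Observation 3)}
\]
\[
\text{knowledge-level questions: } \frac{59}{109} \approx 54\% \text{ (Observation 2)} \;\longrightarrow\; \frac{21}{51} \approx 41\% \text{ (Observation 3)}
\]

Even with this shift, no questions in either observation reached the application level or above.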

Feedback Conference #3/Planning Conference #4

At the beginning of the conference, the teacher immediately asked about her wait

time. I gave her the data and asked her to look it over. As she looked at it, she asked me,

"Is it good or bad?" I explained to her what the symbols meant on the data to be sure that

she could interpret it correctly. Instead of answering her question, I asked what she

thought about it. She said that it looked like most of the time she waited between three

and five seconds and that a lot of the time the students answered immediately. She didn't

really comment on the number of times she answered her own question or gave no wait

time at all. When she commented that a lot of times the students answered the questions

immediately, I asked her what she thought that could tell her about the question itself.

She thought for a moment and said, "I guess maybe the question is pretty easy." At that

point, I asked the teacher to look at the types of questions she was asking. As she looked

at them, she asked me, "What kind of questions should I be asking?" I asked her to think

about what TAKS questions look like and asked her if they were knowledge questions.

She responded with, "No, they are high. They are hard." I replied, "If that is what you

are trying to get students ready for, what kind of questions should you be asking?" She

responded that her questions should be high level questions.

I explained at that point to Ms. Cruz that the questions she asks in class at the

knowledge and comprehension level are not bad questions. As she is teaching them a

skill, she needs to check to ensure that they can answer the questions at the procedural

level; do they know what step comes next? At this point we talked about how the levels

in Bloom's Taxonomy relate to mathematics. I explained that if I give a student a set of

numbers and ask them for the steps to find the median, they are demonstrating knowledge

of the steps. I explained to Ms. Cruz that oftentimes in math we think that when students

can work a problem by themselves they are at the application stage. However, this is

really more of the comprehension phase, where students have the knowledge and

remember the steps to comprehend how to work a problem. Taking the

knowledge and skills they have acquired and knowing how to use those skills in a problem

situation demonstrates application.

Afterwards, I asked her how she could take the process even further up the ladder

past application. She said, "Maybe give them a more real life problem?" At that point, I

felt like she was really unsure as to how to develop higher level questions, so we looked

at the questioning stems I gave her prior to our last observation. I gave her some examples:

"Compare and contrast the uses for the mean and median when analyzing data. Why

would the mean be a better measure of central tendency for this problem than the

median? What would happen if I have outliers in my data?" She said that really helped

her see how to reword questions so that students would have to think more. I reiterated to

the teacher that we have to teach them skills at the knowledge and comprehension level,

but then we must ensure that they can function using those skills at the analysis and

evaluation level too.

I suggested to Ms. Cruz that we sit down and design a lesson together. I told her

that I wanted her to design the lesson and then I wanted us to sit down and look at how

we can raise the level of questioning throughout the lesson. She agreed to do this. She

said that she would like to do this for geometry and that she was teaching geometry next

week. After looking at our calendars, we planned to work together the following

Monday, February 2, 2009 after school. She said that she would put together a rough

draft for the lesson she would be presenting on Tuesday, February 3, and then we could

look at it together and make changes. We scheduled the next observation for February 3,

2009 during third period.

Planning Conference #4 Continued: February 2, 2009

When I arrived on Monday, Ms. Cruz had her lesson ready. The lesson she

planned to teach the following day was on line and angle relationships. Specific topics

that she had included were complementary angles, supplementary angles, vertical angles,

corresponding angles, alternate interior angles, and alternate exterior angles. Ms. Cruz

had designed a Power Point presentation in which she first gave the students the

definition of all six terms which she planned for students to copy into their notes.

Following that, she had an example drawn, from which students had to list each type or

pair of angles defined. It was very similar in format to the two previous lessons I had

observed, and the questioning levels were still at the knowledge and comprehension

level. After reviewing the lesson, I asked Ms. Cruz who was doing most of the work in

the lesson she designed. She did not understand what I was asking her. So I explained,

"In this lesson you are giving the students a lot of information. Basically you are doing

the work. You are supplying the definitions and walking them through the practice

problems. The students themselves are not really doing anything." She responded with,

"Oh, I see what you are saying. I've never really thought about it like that."

I asked her what she thought about designing a lesson in which the students came

up with the definitions themselves that explained each of the concepts she wanted them to

learn. She looked at me with one eyebrow raised and said, "Okay, how do I do that?" I

drew a picture of intersecting lines and labeled the angles a, b, c, and d. I drew a second

set of intersecting lines and labeled the angles in the same manner ensuring that angles a

and c were vertical angles in each example. "What if," I explained, "we give the students

a protractor and have them measure and record the angle measurements for each of the

angles. Then we ask them to look at the relationship between a and c in both examples

and write a description of what they notice about them. Then we ask them to write a

description of the relationship they see between b and d in each picture. Then we

can tell them that angles a and c along with angles b and d in each picture are called

vertical angles because they share certain characteristics. Then we ask them to decide

what those characteristics are and write a definition. Afterwards have each group share

out with feedback from the rest of the class. When you are done, you should have a

working definition of vertical angles, and you really didn't tell the students anything."

She smiled and said, "I like this way a lot better!" We worked through how to do the

same thing for supplementary and complementary angles. Because the students would be

using the protractor to measure the angles themselves, we decided that it might be too

rushed if she tried to teach all of the concepts originally planned, so she planned to do the

rest on Wednesday.
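
As a small worked illustration of the relationships this discovery activity is built around (the 110° measure is a hypothetical value chosen for the example, not a measurement from the lesson):

\[
\text{If } \angle a \text{ and } \angle c \text{ are vertical angles and } m\angle a = 110^\circ, \text{ then } m\angle c = 110^\circ,
\]
\[
\text{while the adjacent angles satisfy } m\angle b = m\angle d = 180^\circ - 110^\circ = 70^\circ \text{ (supplementary pairs);}
\]
\[
\text{two angles are complementary when their measures sum to } 90^\circ.
\]

This is the generalization the measuring-and-describing sequence is meant to lead students toward.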

Once we finished designing the lesson, I talked her through some of the

strategies that she might want to employ since this would be her first time teaching a

lesson in this manner. We discussed that the students might initially have trouble using

the protractor, so in order to help students use and read it properly, she might want to go

through measuring all four angles in the first picture together, and then have students

measure the angles in the second picture individually or as a small group. After our

discussion, the teacher said she felt very confident and excited about teaching the lesson

the following day. We agreed that I would continue to record the same questioning

patterns as I had done previously.

Observation #4: February 3, 2009

When I walked in to begin this observation, the teacher was standing at the front

of the room using the Power Point that she had designed previously for the lesson. The

Power Point had a definition for vertical angles that the teacher asked a student to read.

This was followed by a definition of complementary and supplementary angles. At this

point, I was a bit confused. The lesson we had created together was designed to have the

students define these terms themselves using a guided discovery process. Ms. Cruz then

asked the students what should first come to mind when they

think of complementary and supplementary angles and told them that complementary

angles add up to 90 degrees and supplementary angles add up to 180 degrees.

Next, she explained to the students that they would be doing the lesson a little

differently today. She explained that they would be using the protractor to help them

answer some questions and that they would have to think a little bit harder today. Ms.

Cruz modeled for the students how to properly use the protractor and worked with the

students to measure one angle. She then told the students to measure the remaining

angles so that they could answer the questions together. As the students worked to

measure the remaining angles, Ms. Cruz walked around the class helping students that

were having difficulties. As I listened to the questions, I noticed that many students had the same

problems associated with using a protractor correctly. As some groups began finishing

with their measurements, she told them to go ahead and answer the questions (which she

had previously told students they would answer together).

As in previous observations, I recorded the questions the teacher asked students in

order to see if the teacher was moving up the level of Bloom's Taxonomy and including

more application, analysis, synthesis, and evaluation questions. In the lesson's

introduction and the explanation for using a protractor, the questions the teacher asked

were all knowledge-level or procedural questions. However, these questions were

appropriate for the scenario. Because the teacher spent the largest part of her time

helping students measure their angles, she was not actually involved in asking or guiding

them through the more difficult questions on their discovery activity. This lesson did not

proceed as I would have taught it. However, what I had to remember, as I prepared to go

to the feedback conference, was that the lesson was not about me. It was about assisting

this teacher with her own practice, and I had to make sure to withhold my judgments

regarding what I thought should have happened.

Feedback Conference #4/Planning Conference #5

I began the conference by asking the teacher how she felt about the lesson. She

said, "I think it went pretty good. I think I like that way better because the kids had more

of a way to discover it for themselves, but you know I had to teach them how to use the

protractors. It took a little bit of time to explain to them how to use the protractor and

then I had to go to the groups and explain it to each group, but I think I did like that one

and it went much better...especially that they came out with it. I didn't have to give it to

them. The only thing is that it didn't go over corresponding angles and alternate interior

angles. I tried to come up with work similar to that, but I had a hard time coming up with

something. I couldn't come up with it. It was so hard."

I thought at this point that Ms. Cruz was talking about inserting pictures drawn in

Microsoft Word into the text document, so I asked, "You had a hard time drawing the

pictures?" She replied, "No, not the pictures, it was hard to come up with the questions to

challenge them so that they could come up with the definitions themselves...like the

questions you helped me come up with for the one that I did." Ms. Cruz was referring to

guiding questions that would help students draw conclusions from work that they had

done in order to come up with definitions or characteristics of corresponding

angles and alternate interior angles on their own. She said, "I didn't want to actually tell them. I

wanted them to learn by themselves, and I had such a hard time." She said, "I kind of

know what to do, but I can't put it on paper." I asked her why she thought she could not

put it on paper. She explained that she really didn't know. At that point, I suggested that

we go through the process of coming up with the types of questions that she was having

difficulty in formulating specifically relating to corresponding angles. To begin, I drew

a picture similar to Figure 1.

Figure 1. Two lines cut by a transversal, with the angles formed at the two intersections labeled with letters.

"If angle a and angle e are corresponding angles, what is it that students need to know

about them?" I asked Ms. Cruz. She said the students needed to know they had equal

measures. I explained that in order for them to understand that on their own, after

measuring all of the angles, you would ask students what they notice about angles a and e

and also about angles c and g. I explained that what we want students to see is that each

pair of angles has equal measures. "Oh, I get it," she said. "Angles a and d are equal

because they are vertical angles. Because they are equal, are they also corresponding?"

She looked for a moment, and said "No, they can't be because they are on opposite sides

of the transversal." I explained that she was right, and that in order for students not to

make that mistake, we would need to ask them some more questions about common

characteristics between each set of angles. At this point, I pulled out a textbook to show

Ms. Cruz how I would use the definition in order to help me create questions. As we

looked at the definition, Ms. Cruz said, "So you can just tell them that they have the same

measure and they are on the same side of the transversal." I reminded her that we would

not be telling them, we would be asking them to make their own generalization. I

explained to Ms. Cruz that I would look at the definition in order to find out what I

wanted students to know. I would ask myself what answers I wanted students to give, and from there I

would ask a question that should solicit that response. For example, "What do you notice

about the measures of angles a and e?" I also explained that I would give them several

more examples because what I also want them to see is that they are on the same side of

the transversal. I would also ask, "What do you notice about the measures of angle c and

g? What do you notice about angles b and f?" At this point, Ms. Cruz said, "Okay, okay

now I understand. So, now that they see that we are asking them about these pairs of

angles, now we want to ask them a question about where they are. So maybe we could

ask them about which side of the transversal they are on." I told her she was right.
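
Stated compactly, and assuming the two lines cut by the transversal are parallel (as in the lesson's figures), the relationships this exchange works toward are:

\[
m\angle a = m\angle e, \qquad m\angle c = m\angle g, \qquad m\angle b = m\angle f \quad \text{(corresponding angles, on the same side of the transversal)},
\]
\[
m\angle a = m\angle d \quad \text{(vertical angles at the same intersection)}.
\]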

About that time the bell rang and we were out of time, so I suggested we do

something similar for our next observation. I asked her if she would try and design a

similar lesson on a topic that she would be covering. She explained that she would be

teaching Pythagorean Theorem after spring break, and she really wanted to do something

similar that was also hands on. I explained that she did not have to re-invent the wheel,

and that she could find prepared lessons online that were often effective. She could take

one of those lessons and tweak it to fit her needs and the needs of her students. We

agreed that she would find a lesson, and before the next observation, we would look at it

together, and I could answer any questions that she might have. What I realized after this

conference was over was that we never really looked at the data that I gathered during my

last observation. All of our time was spent discussing lesson design.

Planning Conference #5 Continued: March 23, 2009

Upon returning from spring break, Ms. Cruz and I met to look at the lesson she

designed for the upcoming observation. In this lesson, Ms. Cruz would be introducing

the Pythagorean Theorem to students. Ms. Cruz found a lesson online that engaged the

students in active participation throughout the lesson and had them discover the

formula for the Pythagorean Theorem. Ms. Cruz first asked me to look through the

lesson to determine if it was a good lesson. I asked her if she thought it was a good

lesson. She said that she believed it was because the students would be using tangram

pieces to develop the Pythagorean Theorem on their own. She explained that they would

be cutting out the shapes themselves and manipulating them so that they would be more

involved than if she just told them the formula. This lesson gave the students a picture

similar to Figure 2.

Figure 2. A right triangle with a square constructed on each of its sides a, b, and c.

Tangrams usually consist of seven geometric shapes (a square, a parallelogram, and five

triangles) used to form different shapes. www.encarta.msn.com/dictionary_1861718299/tangram.html

In this activity, students would have two separate sets of tangram pieces exactly

alike. They would use one set to cover the area in squares a and b. The second set of

tangram pieces would be used to cover the area in square c. The purpose of this activity

is for the students to see that the total area in square a and square b is the same as the total

area in square c. This concrete activity is done in order to move the students to the more

abstract idea that a² + b² = c². From this point, the students test the theory by drawing a

right triangle with side lengths of 3 units, 4 units, and 5 units onto grid paper, determining

the area of each square, and determining if the total area of the two smaller squares is the

same as the total area of the larger square. This process will be repeated with two other

right triangles. From this point, the teacher moves into a brief explanation of the

Theorem, and students use their newly acquired knowledge (if a² + b² = c², then the

triangle in question is a right triangle) to determine abstractly if a given triangle is a right

triangle. Additionally, they will find missing side lengths of triangles using the

Pythagorean Theorem.
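
For reference, the numerical check the lesson builds toward with the 3-4-5 right triangle works out as follows:

\[
a^2 + b^2 = 3^2 + 4^2 = 9 + 16 = 25 = 5^2 = c^2,
\]

so the two smaller squares (9 and 16 square units) together cover exactly the same area as the square on the hypotenuse (25 square units).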

After we went through the lesson together, I suggested to Ms. Cruz that when

presenting this type of lesson, it is usually a good idea to anticipate any challenges the

students might face in completing the activity. She began by explaining that she did not

think she would have enough time to complete the lesson during one class period. She

said she thought she would only be able to complete the part where the students use the

tangram pieces to cover the area of the squares. It should be noted that the class period is

ninety minutes long. I explained to Ms. Cruz that part of using lessons like this was

monitoring the time for the students. I suggested that she give the students a time limit

for each activity in order to ensure that students remain committed to the task at hand,

and in order to maximize learning time in the class. Her concern was for the student

groups that might not finish during the allotted time. I suggested at that point, that she

have students from groups that completed the task help those groups that had not yet

finished.

She also expressed concern that once the students used the tangram pieces to fill

in the area of each square, they might not make the connection that the combined area of

squares a and b is the same as the area of square c. I asked her, "How do you think you

can help them make that connection without telling them they are the same? You can tell

them, but if they are not really able to see it or visualize it, then the activity is pretty

meaningless." She thought for a moment and said, "Well, could I have them pick up the

pieces from squares a and b and lay them on top of the pieces in square c?" I asked, "Do

you think that would help the students see that the area of the two smaller squares is the

same as the area of the larger square?" She was quiet for a moment, and said, "Yes,

I really think that would help them see for sure that the areas are the same."

I asked if she had other questions or concerns, and she said she was very excited

about the lesson. She believed the students would really enjoy it and that it would help

them understand the concept better than just using a formula that they are given. At that

point, I shared the data from observation four which we did not have time to look at the

last time we were together. I asked her to look it over and tell me what she saw and give

me her thoughts. "Well, it still looks like I asked a lot of procedural questions," she said.

I responded, "Look at the context of the lesson you were teaching at that point. You were

showing students how to use a protractor to measure angles. In that context, would it be

more appropriate to ask knowledge and procedural or higher level questions?" She

paused for a moment and said, "Oh, I see. So sometimes those questions are good too."

I asked her if there was anything else that she saw in the data. She explained that

compared to the last observation, she answered fewer of her own questions, and gave

students more time to answer. I asked how she felt about that, and she answered, "I

really think it is better. It slows things down a little bit and is not so rushed. I don't

really know if the students notice, but I know those are things that I need to do." At this

point, I asked what data she would like for me to record in the upcoming observation. "I

just want you to watch and tell me how I do. If I do something wrong, I want you to tell

me." I told Ms. Cruz, that I understood that she wanted feedback on how the lesson

went, but I could not really tell her if it was good or bad. I explained again that we

needed some verifiable data to look at and draw conclusions from together. "Okay, okay,

I know. I guess you can just look at the questions again and the wait time? Is there a

way you can tell me if you think the students are doing better?" I answered, "I can record

the questions as we've done in the past. As for the students, I can scan the room and

record on and off task behavior. However, if I try to do both of those things at once, I

probably will not be able to record all of the questions." She said that was okay. The

observation was scheduled for the following day.

Observation #5 and Data Analysis: March 24, 2009

As I entered the classroom, students were already in groups and Ms. Cruz had

passed out the packet students would be using. She was in the process of handing out the

papers that had the tangram pieces on them that students would cut out. I noticed that

one paper was pink and the other was yellow. Ms. Cruz structured this lesson

into chunks of time for students to complete activities. She gave the students instructions

and then asked them to begin cutting out their tangram pieces, to fill in the

areas of squares a and b with the pieces they cut from the yellow paper, and to cover square

c with the pieces they cut from the pink paper. Each group finished in the allotted amount

of time.

At this point, instead of having the students try to answer the questions that

followed on their own, Ms. Cruz took more of a direct role in helping the students think

through the process. She asked questions such as, "Is there anything that you notice

about the area of squares a and b compared to square c?" One student answered, "They

are different colors." As several students laughed, Ms. Cruz said that was a good

observation, but there was more to it than the color. She asked, "What do you think

would happen if you took the pieces from squares a and b and laid them over the pieces

in square c?" One student stated that they would be the same. She had the students try it

and see if that was true. From this point, she helped them to develop the equation for the

Pythagorean Theorem. Throughout the lesson, she asked several questions such as

"What do you think would happen if..." and "Do think this would work for all triangles?"

She asked students to make a prediction and write it in their journal. Then she had the

students experiment with the triangles on the grid paper to see if their conclusions were

correct.

Throughout the lesson, I transcribed her questions, but was unable to always

accurately count the wait time as she requested. I also monitored the room approximately

every minute to record how many students were on or off task at any given time. During

this observation, I stayed a total of thirty minutes as opposed to twenty. Upon reviewing

the data, I found that Ms. Cruz asked significantly more higher-level questions.

Of the thirty questions that I recorded, fifteen of them were at the application level or

higher, seven questions were knowledge or comprehension, and eight were clarifying

questions. The following table indicates the number of students off task during each minute of

the observation.

Minute                   1    2    3    4    5    6    7    8    9

# of students off task   5    0    0    0    0    0    1    2    6

Minute                  10   11   12   13   14   15   16   17   18

# of students off task   4    1    0    1    0    0    0    0    1

Minute                  19   20   21   22   23   24   25   26   27

# of students off task   3    4    5    7    2    1    0    0    0

Table 1. Number of students off task during each minute of the observation.

While I was noting students' off-task activity, I made note of what was happening

in class each time the number of students off task began increasing. Each increase, such as the

one at minute nine and those at minutes nineteen through twenty-two, came at a time when

an activity was nearing completion. As some groups finished more quickly than others,

they began to talk quietly at their tables. In fact some of them could have been talking

about the outcomes of the activities, but from my position, I could not hear the details of

the conversations.
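
Taken as a whole, Table 1 can be summarized with a few simple tallies of the counts above (no additional data were collected):

\[
14 + 7 + 22 = 43 \text{ off-task tallies across } 27 \text{ one-minute scans, or roughly } \frac{43}{27} \approx 1.6 \text{ students per scan,}
\]

with 13 of the 27 scans showing no students off task at all.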

Feedback Conference #5: April 6, 2009

When I walked into Ms. Cruz's room, she started smiling. I asked her how she

felt about the lesson, and she said, "Oh my gosh! It was so much fun!" I asked her why

she thought it was fun. "The students seemed really into it. It was hard for them to come

up with the formula, but they did it. They really did it. I wasn't sure before. And, class

went by so quickly. The bell rang and the students and I both were surprised. They

worked the whole time. I can't believe it. I didn't have to wake any of them up or really

redirect any behavior. Some of them seemed a little reluctant at first, but when they got

the tangram pieces to fit on the first square, they seemed to get a little more excited."

I told her I was glad she was excited about the lesson and that she

enjoyed it, but I asked, "Do you think it helped the students' learning? Do you think it

will help them to remember the theorem?" She told me that she really believed that it

helped them to understand. She said, "Now any time I put up a question like that as we

have reviewed for TAKS, they say, 'Oh, that is the problem like we did with the

squares.'" At that point, I shared with her the data I collected and gave her a chart for off

59
task behavior like that in Figure 1. She was at first concerned when she saw at times

there were several students that appeared to be off task. I explained to her that these were

instances in which groups finished before other groups and that I could not hear their

conversation. However, I posed the question, "What could you do in the future to make

sure the groups that finished before others remain committed to the lesson?" She at first

remained quiet and shrugged her shoulders. I also remained quiet. Then she said, "Oh,

maybe I could give them a journal prompt or something and ask them to summarize what

they just did." I just smiled, and I told Ms. Cruz that she had more answers than she gave

herself credit for. At this point, I asked if there was anything else she wanted to discuss

about this particular lesson, and she said no. I asked if it would be okay to shift to the

summary interview, and she agreed.

Summary Interview: April 6, 2009

I began the interview with the question, "Do you think the process has been

beneficial? Do you feel like it has helped you in any way?" Ms. Cruz said "Yes, because

if I have a question I can go and ask you. And like the time I give the kids, I wait more.

I know I don't always wait the right amount of time, but I am trying very hard to be

patient with them and I am getting better, and I am much more aware of answering my

own questions, and I don't do it as much. I think the way that you explained to me to do

the lessons, it's improving. I can see that it helps them."

I followed with the questions, "Do think that if we had not done this, that any of

those things would have changed from the beginning of the year until now?" She said,

"No. I was observed, but I don't think they would have really noticed any of those

things. It is so general the way they observe us. They wouldn't have told me, 'You are

answering your own questions' or 'You aren't giving enough time to the kids,' or 'Why

don't you try this?'"

I asked, "What if one of the assistant principals had asked you to participate in a

process like this with them? Would you have agreed? Would you have felt as

comfortable with it?" She said, "No, because I think I would be more afraid. I mean

even if they said it wasn't going to affect you, they might think you are a bad teacher.

When anyone comes in, even you, I sweat, I stutter, my mind goes blank, I just get so

nervous. Even with you, even though I know it isn't going to affect anything and that

you are here to help, I still get nervous. If I were doing something like that with one of

the administrators it would be horrible. I couldn't tell them, I was having a hard time

with this or that. Oh no. I would lie. I'll be honest, I would lie to them. I would say,

'Oh everything is fine. I'm not having any trouble.'" I must admit, here I laughed, but I

did ask, "So were you honest with me?" She said, "Yes, I'm honest with you."

I asked her if it was just the fact that the others are in an administrative capacity

that bothers her. She said, "No, it's not just that. I feel comfortable with you because

you were like my mentor. And second of all, I didn't have anybody. I didn't have

anyone that could tell me do this or do that. There were no math teachers in my

department that I could ask; we are all new. We have cool stuff, but we needed someone

to guide us, so I just decided to try it. I decided if I messed up by opening up or being

honest, then I guess I would just learn from that, but I really had nothing to lose."

I asked her for suggestions on how the process could be improved. I asked if

there was anything I could have done differently that would have helped her more or if

the process could have been changed in any way that she thought would make it better.

She said that she really liked the process. She explained that the only thing that could

have made it better would have been more observations. She also explained that, at the same time,

that would have made it worse, because it was so difficult to find the time to meet for the

conferences and plan the observations.

I ended the interview by asking Ms. Cruz what she thought was most important

about this process and what she thought she would carry with her in her teaching

practices. She said, "All of it. I will continue to work on my questioning and not

answering my own questions so much. I will try to be more patient with the kids and

give them more time. I definitely want to do more lessons like the ones that we did

together. I just think they are better for the students. They like them better, and it makes

it easier for them."

Chapter 5: Conclusions

Introduction

The purpose of this research was to answer three specific questions: How does a

novice teacher's instructional practice change when he/she works with a supervisor in the

Clinical Supervision process? How can the conferences during the cycles of Clinical

Supervision begin to uncover instructional issues that the teacher may not have been

aware of? How can these issues be addressed in the Clinical Supervision process? In the

first section of Chapter 5, these questions will be addressed in a summary based on the

interactions with Ms. Cruz described in Chapter Four.

In the second section of Chapter 5, I will discuss implications of Clinical

Supervision for instructional practice, including questioning strategies, changes in lesson

designs, and implications for teacher development. I will follow that with a discussion of

Clinical Supervision as a formative rather than summative process of supervision. In this

section, I will include considerations for the supervisor to take into account as well as

limiting factors in the use of Clinical Supervision. Finally, I will offer considerations for

future research.

Summary and Discussion

The first chapter of this thesis outlines the need for a method that offers

novice teachers a system of support within a non-judgmental, non-evaluative

environment and helps them progress from a novice teacher to a professional

teacher. Using the Clinical Supervision process with Ms. Cruz enabled us to discover

instructional concerns throughout the conferences and observations that Ms. Cruz did not

originally realize could be inhibiting her instructional practice. Through conferences,


observations, and informal conversations, we were able to address these issues in a

professional manner that allowed Ms. Cruz to make changes in her instructional practice.

In this section, I will explain how questioning strategies and lesson designs were

addressed.

During the second planning conference, the teacher indicated that one of the

assistant principals informed her during an observation that she had good questioning

strategies. During the second observation and third observation, I recorded the questions

that Ms. Cruz asked during her lessons and, upon analyzing the data after each

observation, found that Ms. Cruz asked very low-level questions throughout both lessons.

In order to address this concern after the second observation, I gave Ms. Cruz a list of the

questions and tried to take a more collaborative approach to addressing the issue of low

level questioning. Upon reviewing the data, Ms. Cruz quickly acknowledged that she

thought the questions were "low." Because Ms. Cruz acknowledged that the questions

were low, I made the assumption that she could self-correct this behavior. However,

when the third observation yielded similar results, I found that this was not the case.

When Ms. Cruz was presented with the data in the feedback conference and asked what type

of questions she should be asking, I realized that a different approach to correcting this

problem would be needed.

As Glickman explains, a more directive informational approach can be taken with

a teacher in the supervisory process when the teacher is confused, inexperienced, or at a

loss for what to do, and the supervisor has knowledge of effective practices (Glickman

et al., 2007). It was at this point that I suggested we design a lesson together that

incorporated questions from various cognitive levels. During this lesson design, I

developed most of the lesson for the teacher, which might at first sound like Glickman's

directive control style of supervision. However, throughout the lesson planning, I did

solicit the teacher's feedback and input. There was never a point that I told the teacher,

"This is what you have to do."

During the feedback conference after the teacher presented the lesson that we

designed together, another important instructional issue was revealed. As we discussed

the lesson the teacher presented, she explained that there were two concepts that were not

presented in the lesson that she had to teach the next day. She explained that she tried to

design similar questions that would help lead the students to the conclusions they needed

without her having to tell them the information. However, she expressed that it was

really difficult and that she could not write questions of that nature. At this point, I was

able to explain to Ms. Cruz how I developed such questions by looking at what I needed

students to know and using the definition of terms to work backwards to develop

questions for the students.

It is important to note that without the use of Clinical Supervision for the

development of this novice teacher, these issues may have been left undiscovered. First

of all, Ms. Cruz felt good about her questioning ability because she had been told by a

campus administrator with thirty-plus years of educational experience that she "asked

good questions." Had we not developed a lesson together in which students participated

in more of a discovery type of learning, Ms. Cruz would not have realized that she had

difficulty preparing a lesson of this nature with appropriate questions to lead students to

conclusions, rather than just providing them with the information.

During our first planning conference, she explained the format of her instruction.

She generally began with a "bell ringer," an activity the students were to begin upon

entering the class. This was followed by a lecture in which she gave students notes to

copy from a PowerPoint, followed by examples they worked together. This was

followed by a time for independent practice, during which students worked problems by

themselves or with a partner. During the first three observations of Ms. Cruz's class, I

found that, as she had explained, she used this very traditional model of instruction.

The first lesson that we designed together fostered a different approach to student

learning. This lesson followed the 5E lesson cycle approach that is advocated by the

district: Engage, Explore, Explain, Evaluate, Extend. The lesson was designed so that

students would be engaged in their own learning as they used protractors to measure

angles so that they could explore the relationship among certain angles. This would be

followed by a brief explanation by the teacher to clarify any questions or confusion the

students might have. At this point, we did not focus on the Evaluate or Extend phase of

the lesson cycle.

Upon observing Ms. Cruz's lesson during the fourth observation, I saw that Ms.

Cruz had some trouble adhering to the format of the new lesson cycle. She began the

lesson by explaining some concepts the students were to discover for themselves.

However, she did continue with the lesson and had students measure the angles and see

for themselves the relationship among different angles. As mentioned previously, Ms.

Cruz admitted during the following feedback conference that, when she tried to design a

lesson of a similar nature, she was not able to do so. After further explanation of how to

do so, I encouraged Ms. Cruz to try the process again, and recommended that we look at

the lesson together before she presented it. I also explained that I often searched for lessons

online and then modified them as needed to fit with my teaching style or the needs of my

students.

When Ms. Cruz and I met for the planning conference prior to observation five,

she had such a lesson ready. She did look online and found the lesson outlined in

Chapter Four for teaching the Pythagorean Theorem. I looked over the lesson with her,

and we discussed problems that students might encounter and how she might address

those problems. During the next observation, Ms. Cruz followed the lesson cycle with

much more ease than during observation four and, as noted in Chapter Four, students

were highly engaged in the lesson and demonstrated understanding, according to Ms.

Cruz, at a much deeper level than they had on previous concepts in which a more direct

style of teaching was used.

Implications for Professional Practice

One might at first think that the changes in questioning strategies and instructional

delivery are not significant enough to impact student achievement.

However, research indicates that these two changes alone are significant to student

understanding.

Benjamin Bloom and his colleagues defined the cognitive domain as objectives

which "deal with the recall or recognition of knowledge and the development of

intellectual abilities and skills" (1956, p. 7). Bloom's cognitive levels or taxonomy

classify educational objectives into six major categories: knowledge, comprehension,

application, analysis, synthesis, and evaluation. These categories are listed in a

hierarchical order with knowledge being the lowest level and evaluation being the

highest. According to Oliva (1988, p. 386), "A central premise of professional educators

is that the higher levels of learning should be stressed. The ability to think, for example,

is fostered not through low level recall of knowledge but through application, analysis,

syntheses, and evaluation." There has been debate over whether or not questions at the

higher cognitive level lead to increased student learning. According to Orlich, Harder,

Callahan, Trevisan, and Brown (2004, p. 233), "Merely asking questions is not causal to

student thinking. More important, you should realize that your higher-level questions do

act to invite and encourage higher levels of critical thinking in students. Furthermore, it

appears that if teachers systematically raise the level of their questioning, students raise

the level of their responses correspondingly." Acheson and Gall (1997) conclude, from

studies done, that higher cognitive questioning is necessary, but not sufficient for the

development of a student's ability to think.

Because such questions are necessary for developing a student's ability to think,

and because they were virtually non-existent prior to Ms. Cruz's participation in this study,

the mere fact that the students are now exposed to such questioning techniques can have

implications for the students' cognitive development in mathematics if these questioning

patterns continue to be developed. Second, in Texas, students must take the Texas

Assessment of Knowledge and Skills (TAKS) Test. Students in the eighth grade must

pass this test in order to be promoted to the ninth grade. Regardless of whether or not

one accepts the importance of Bloom's Taxonomy and higher order thinking questions,

the questions asked on the eighth-grade TAKS Test are much more

difficult than the questions the students were previously exposed to in Ms. Cruz's class.

It is not merely the change in questioning strategy by Ms. Cruz that has the

capacity to impact student achievement, but also the change in her thoughts and actions

around lesson design. In the more traditional approach to content delivery that Ms. Cruz

used in the first three cycles of observations, students were meant to absorb and process

information. They were not called upon to be active participants in their own learning.

By implementing classroom practices that utilize a more inquiry or discovery based

teaching strategy, students are called upon to be active participants in the instruction.

According to Schlechty (2002), when students are actively engaged in classroom

activity, they are more likely to learn what we want them to learn.

Implications for Teacher Development

Chapter One references the definition of "professional teachers" offered by Wise

and Darling-Hammond (1985):

One who has sufficient knowledge of subject matter and techniques to make

appropriate decisions about instructional content and delivery for different

students and classes. In other words, professional teachers are able to ascertain

their clients' needs and determine how to meet them (p. 31).

Chapter One also poses the question as to whether or not Clinical Supervision can help

move a teacher along the continuum from novice teacher to professional teacher at a

faster pace. This section will take a more specific look at the continuum of teacher

development and what the implications of this research are on teacher development.

Kenneth Leithwood proposes that there are six stages in the development of

professional expertise for teachers (Fullan and Hargreaves, 1992). He explains that after

stage one, each stage includes expertise gained in previous stages. However, each stage

is not absolute. "Expertise at higher stages will begin to develop quite early given

appropriate, formative assessment" (Fullan and Hargreaves, 1992, p. 87). The first stage

in this process of development is referred to as developing survival skills. According to

Leithwood, in this stage, teachers have only a semblance of classroom management, limited

skill in the use of different teaching models (which they select at random), and assessments

that are mostly summative and may not be aligned with curriculum goals. In stage two,

"becoming competent in the basic skills of instruction," the teacher's classroom

management skills and skills in the use of different teaching models are more firmly

established, there is continuous practice in the use of those models, and assessments

become more formative and are aligned with curriculum goals which are easiest to

measure. It is not necessary at this point to expound on the other four stages because Ms.

Cruz's level of expertise shifted between these two stages throughout the process.

While the focus of only one of our conferences was on classroom management

and discipline, based on conversations that we had outside of the conferences, I would

say that Ms. Cruz had a fairly well established discipline-management plan in place in

her classroom. When she had problems, she often sent me an email or called me

soliciting advice on how to handle different problematic situations. Upon entering the

classroom at the beginning of the year, I would estimate that Ms. Cruz was in stage one

on the brink of stage two. As the school year nears its end, Ms. Cruz has

demonstrated stronger, more established discipline management practices, which indicate

that her expertise in this area has crossed over to stage two. I base this on the

observations that Ms. Cruz very rarely had to redirect students when I was in her class,

students are respectful to Ms. Cruz, both in and outside of class, and she has very few

discipline referrals. While this was not a direct result of the Clinical Supervision process,

it does deserve recognition because, as mentioned previously, as a result of this process,

Ms. Cruz considered me to be her mentor and sought out my advice when she

encountered problems she was unsure how to handle.

Prior to beginning this research project and throughout the first stages of this

process, I cannot say with certainty how much knowledge Ms. Cruz had of different

teaching models. However, I can say with certainty that the only model she practiced

was that of direct instruction. While I cannot declare that Ms. Cruz has well-developed

skills in the use of several teaching models, I can say that she is now aware of, and has

practiced, an additional model of instruction. She is very enthusiastic about the results

and differences she has seen in class upon implementing the new model and has

expressed interest in learning about other methods. The Clinical Supervision process has

given her a boost towards increasing expertise in instructional design, as opposed to

remaining stagnant throughout the year, which might have been the case otherwise.

While Leithwood explained other facets of expertise in these two stages, such as

conscious reflection of choice in teaching models and assessment practices, these were

not areas of focus during the Clinical Supervision practice employed with Ms. Cruz, and I

cannot draw any conclusions about Ms. Cruz's professional progress in those areas.

Clinical Supervision as a Formative Process

With students, we use formative assessment as an assessment for learning, which

Stiggins, Arter, Chappuis, and Chappuis (2006) explain as an assessment that takes place

while learning is still happening. "These are the assessments that we conduct throughout

teaching and learning to diagnose student needs, plan our next steps in instructions,

provide students with feedback they can use to improve the quality of their work, and

help students see and feel in control of their journey to success" (Stiggins et al., 2006, p.

31). For teachers, we can use formative evaluation in much the same way. Glickman

et al. (2007, p. 288) define formative teacher evaluation as a "supervisory function

intended to assist and support teachers in professional growth and the improvement of

teaching."

These authors go on to explain that formative evaluation "is usually based on

systematic observation, which is limited to a single aspect of classroom process (e.g.,

questioning techniques, student participation, classroom movement, and so on)" (p. 288).

Consequently, formative observation instruments are agreed upon prior to the observation

and are relevant to what the teacher wants to learn about his/her practice (Glickman et al., 2007).

Finally, Glickman et al. conclude that formative assessments are focused on building

rapport, trust, and a collegial relationship between the teacher and evaluator, addressing

the teacher's specific needs, and improving the teacher's performance. This is significant

because, as Leithwood explains, teachers can begin acquiring professional expertise at

higher levels early if they have the proper formative assessment. Clinical Supervision is

a form of supervision that can offer just such assessment. In this study, using Clinical

Supervision allowed me to focus on a single aspect of instruction agreed upon by Ms.

Cruz. We built a relationship of trust and addressed her specific needs, which did result

in a change in her performance. This change in her performance, as indicated by prior

research on questioning techniques and lesson design and delivery, as well as her

progression from stage one professional expertise to stage two professional expertise, can

be considered an improvement in teacher performance.

There are some researchers who believe that school administrators can

incorporate Clinical Supervision as a formative process into their practice without it

interfering with summative assessments of teachers done annually (Acheson and Gall,

1997). However, there are important limiting factors to such an approach that should be

considered. Cogan intended the use of Clinical Supervision to be a collegial practice to

help teachers develop in their profession (Cogan, 1973). When school administrators

take on the role of supervisor in Clinical Supervision, it has the potential to lead to what

Grimmet and Crehen (see Fullen and Hargreaves 1992) describe as administratively

imposed collegiality. "The administratively imposed type of contrived collegiality

consists of 'top-down' attempts to manipulate directly the practices and behaviors of

teachers as professional educators." Even if this is the case, there could still be some

benefit to the teacher.

However, one of the most important factors involved in the success of Clinical

Supervision is trust between the supervisor and supervisee (Acheson and Gall, 1997;

Sergiovanni, 2002). Based on Ms. Cruz's response when I asked whether she would have

been as apt to engage in Clinical Supervision had one of her campus

administrators asked her to, the trust necessary to sustain the

process might be limited. Ms. Cruz, in fact, said she would be dishonest with her

administrators because she would not want them to think she was a bad teacher and because

she could not be sure that what the administrators saw in her classroom would not be used

against her on her end-of-year summative assessment. Additionally, if the teacher

does feel that the process is summative or evaluative, it could actually be detrimental to

the main objective, which is improving teacher performance, because the teacher would

be less willing to participate openly in the Clinical Supervision process and less willing to

change classroom practice (Glickman et.al., 2007).

Trust is not the only factor that could limit the effectiveness of this practice if

commingled with summative evaluation by campus administrators. Even more debilitating

than the trust factor, perhaps, is the time factor. The litany of things that campus

administrators must deal with on a daily basis is long. At Baker Middle School, there are

eight first-year teachers. For me, it was extremely difficult to dedicate the appropriate

amount of time to the conferences, data analysis, and observations for just one teacher.

As it stands, we did five observations. The practice has implications for greater teacher

development, but those implications can be realized only if the observations and

conferences are conducted more frequently. To do so necessitates more time.

Finally, the supervisor, whether a campus administrator or other personnel, must

consider their own personal attitudes, beliefs, and values when engaged in a supervisory

role. As stated numerous times throughout this research, Clinical Supervision is meant to

serve as a non-judgmental, non-evaluative means of teacher support (Glickman et al., 2007;

Acheson and Gall, 1997; Sergiovanni, 2002; Cogan, 1973). However, it is in our human

nature to make judgments based on our own values, beliefs, and attitudes. As I observed

Ms. Cruz's class, I could not help but form opinions on what I thought she was doing

well and what I thought she might improve. When I watched her present the lesson that

we first designed together, my first thought was, "This is not how I would have done

this." As the supervisor, I had to make a conscious effort to withhold these judgments

and other evaluative thoughts from the conferences I conducted with Ms. Cruz.

However, I cannot say that my opinions in no way affected the outcome of the Clinical

Supervision experiences. The lesson we designed together was based on a format that I

had previously had success using in my own classroom instructional practices. Because I

was concerned about the amount of wait time Ms. Cruz used during questioning, I was able

to raise it during the conference as a general question to her about what best

practice says about wait time. From there we were able to address the issue in the next

observation. Because I was not also responsible for conducting Ms. Cruz's end-of-year

summative assessment, my personal values, beliefs, and attitudes were not as big a

limitation in the Clinical Supervision process as they could have been otherwise.

Implications for Future Research

While this research does provide insight into what implications Clinical

Supervision can have when used with novice teachers, it also leads to other questions to

be considered for future research purposes. First, additional studies are needed to add to

the knowledge base on the use of Clinical Supervision with novice teachers. Some

research has been done over the years, but with the influx of alternative certification

programs come new circumstances that need to be studied.

While this study focused on Clinical Supervision as it relates to the improvement

of novice teachers, further research could be done in relation to veteran teachers. Texas

uses what is called the Professional Development and Appraisal System (PDAS) as the

tool for summative teacher evaluation. It rates teachers as below expectations, proficient,

or exceeds expectations across eight different domains. When a teacher falls below

expectations in any domain, he/she is placed on a growth plan to improve his/her

performance. Additionally, prior to the final summative assessment, if the appraiser feels

that the teacher might fall below expectation in any domain, the teacher can be placed on

a growth plan to improve his/her skills in that domain. A question to be considered is,

"How effective Clinical Supervision might be in improving the instructional practices of

teachers that have been placed on growth plans?" Furthermore, additional research could

also add to the small number of empirical studies that have been done using peers as

clinical supervisors.

Summary

The findings in the research suggest that Clinical Supervision can be used to

effectively provide support for new teachers in order to change and improve their

instructional practice. Additionally, this practice can help teachers move along the

continuum of Leithwood's stages of teacher professional expertise at a faster pace than

might be accomplished if they were left on their own.

Clinical Supervision also fits the formative model of assessment that Glickman

proposes can help improve teacher development by maintaining a trusting, collegial

relationship between the teacher and supervisor and focusing on specific areas of

instructional practice agreed upon by teacher and supervisor. However, there are

limitations to the Clinical Supervision process which should be considered, such as trust,

time, and the supervisor's attitudes, beliefs, and values.

Finally, this study offers implications for future research which include whether

or not the process of Clinical Supervision can be used to improve the effectiveness of

teachers that have been put on growth plans. Additional research can be done to measure

the effectiveness of the Clinical Supervision model when used among teacher peers.

References

Acheson, K.A. & Gall, M.D. (1997). Techniques in the Clinical Supervision of teachers:

preservice and inservice applications. New York: Longman

Alliance for Excellent Education (AEE). (2008). What keeps good teachers in the

classroom? Understanding and reducing teacher turnover. Washington DC.

Retrieved on January 30, 2009 from www.all4ed.org/files/teachturn.pdf

Baldacci, L. (2006). Why new teachers leave. American Educator. Retrieved on

January 20, 2009 from www.aft.org/pubs-

reports/americaneducator/issues/summer06/teacher.pdf

Bay, M., Staver, J., Bryan, T., and Hale, J. (1992). Science instruction for the mildly

handicapped: direct instruction vs. discovery teaching. Journal of Research in

Science Teaching, 29(6). Retrieved on April 5, 2009 from

www3.interscience.wiley.com/journal/112759403/abstract

Benjamin, S. (1987). An investigation of the relationship among teacher perceptions of

clinical supervisory practices, principal authenticity, and supervisory outcomes.

Unpublished doctoral dissertation, Northeastern University

Bloom, B. (1956). Taxonomy of educational objectives: the classification of educational

goals: handbook I: Cognitive Domain. New York: Longman

Brennen, Annick (unknown). Clinical Supervision. Educational administration and

supervision. Retrieved on November 16, 2007 from

http://www.soencouragement.org/clinical-supervision-case-study.ht

Cogan, M. (1973). Clinical Supervision. Boston: Houghton Mifflin Co.

Crane, Thomas (2002). The heart of coaching. San Diego: FTA Press

Darling-Hammond, L. (1988). Teacher quality and equality. Unpublished paper prepared

for the College Board's Project on Access to Learning.

Darling-Hammond, Linda (1999). Teaching and knowledge: policy issue posed by

alternate certification for teachers. Seattle, WA: Center for the Study of

Teaching and Policy, University of Washington

DuFour, Richard (2004). What is a "Professional Learning Community"? Educational

Leadership, 61(8), 6-11.

Feiman-Nemser, S. (1996). Teacher Mentoring: A Critical Review. Michigan State

University

Fullan, M. and Hargreaves, A. (1992). Teacher development and educational change.

United Kingdom: Routledge

Gibson, Robert (1985). The effectiveness of Clinical Supervision in modifying teacher

instructional behavior. Unpublished doctoral dissertation, University of

Montana.

Glickman, C., Gordon, S., & Ross-Gordon, J. (2007). Supervision and instructional

leadership: A developmental approach. Boston: Pearson

Goldhammer, R. (1969). Clinical Supervision: special methods for the supervision of

teachers. New York: Holt, Rinehart, and Winston.

Goldhammer, R., Anderson, R.H., & Krajewski, R.J. (1980). Special methods for the

supervision of teachers. New York: Holt, Rinehart, and Winston.

Goldsberry, L. (1998). Teacher involvement in supervision. Handbook of Research on

School Supervision (pp. 428-462). New York: MacMillan Publishers.

Goldrick, L. & National Governor's Association, Washington DC Center for Best

Practices (2002). Improving teacher evaluation to improve teaching quality issue

brief. Retrieved January 20, 2009 from ERIC Document Retrieval Service (ED

480 159).

Haberman, M. (1987). Recruiting and selecting teachers for urban schools. New York:

ERIC Clearinghouse on Urban Education, Institute for Urban and Minority

Education

Hanushek, Eric (2004). Why public schools lose teachers. Journal of Human Resources,

39(2), 326 - 354

Holland, P. and Garman, N. (2001). Toward a resolution of the crisis of legitimacy in the

field of supervision. Journal of Curriculum and Supervision, 16(2), 95 - 111

Houk, Tracy (1999). The Clinical Supervision experiences of new teachers: a qualitative

study. Unpublished master's thesis, University of Regina.

Ingersoll, R.M. (1999). Teacher turnover, teacher shortages, and the organization of

schools. University of Washington, Seattle: Center for the Study of Teaching

and Policy.

Ingersoll, R. M. & Smith, T. M. (2003). Do teacher induction and mentoring matter?

National Association of Secondary School Principals Bulletin, 87(638), 28-40.

Johnson, S. (2006). Why new teachers stay. American Educator. Retrieved on

January 20, 2009 from www.aft.org/pubs-

reports/americaneducator/issues/summer06/teacher.pdf

Kaplan, L. and Owings, W. (2002). Enhancing Teacher Quality. PDK Fastbacks

Publication 499. Bloomington, IN: Phi Delta Kappa Educational Foundation.

Langmuir, D. (1998). Making sense of teacher collaboration: A case study of two

teachers' engagement in Clinical Supervision. Unpublished doctoral dissertation,

The University of British Columbia (Canada)

Leithwood, K. (1992). The principal's role in teacher development. In Fullan, M. &

Hargreaves, A. (Eds.) Teacher development and educational change.

(86 - 103).United Kingdom: Routledge

McCann, T., Johannessen, L., & Ricca, B. (2005). Responding to new teachers' concerns.

Educational Leadership, 62(8), 30-34.

Merriam, Sharan. (1998). Qualitative research and case study applications in education.

San Francisco: Jossey-Bass

Millinger, Cynthia (2004). Helping new teachers cope. Educational Leadership, 61(8),

66-69.

Nsien, E. (1984). The perceptions of Clinical Supervision by experts and instructional

personnel in a public secondary school. Unpublished doctoral dissertation,

University of Pennsylvania

Oliva, P. (1988). Developing the Curriculum. Illinois: Scott, Foresman and Co.

Orlich, D., Harder, R., Callahan, R., Trevisan, M. & Brown, A. (2004). Teaching

strategies: A guide to effective instruction. New York: Houghton Mifflin Co.

Pajak, E. (2003). Honoring diverse teaching styles: A guide for supervisors. Alexandria,

VA: Association for Supervision and Curriculum Development.

Perez, K., Swain, C., and Hartsough, C. (1997). An analysis of practices used to support

new teachers. Teacher Education Quarterly. Retrieved from

http://teqjournal.org/backvol/1997/24 2/1997v(24)n205.pdf on January 20, 2009

Portner, H. (2003). Mentoring new teachers: Updated edition. Thousand Oaks, CA:

Corwin Press

Putnal, J. (1981). Factors identified by selected teachers and supervisors as problems in

Clinical Supervision which might contribute to its limited acceptance.

Unpublished doctoral dissertation, Florida State University

Rauch, K. and Whittaker, C. (1999). Observation and feedback during student teaching:

Learning from peers. Action in Teacher Education, 21(3), 67 - 78.

Schlechty, P. (2002). Working on the work: an action plan for teachers, principals, and

superintendents. San Francisco: Jossey-Bass

Sergiovanni, T., and Starratt, R. (2002). Supervision: A redefinition. Boston:

McGraw Hill

Stiggins, R., Arter, J., Chappuis, J., and Chappuis, S. (2006). Classroom assessment for

student learning: doing it right - using it well. Columbus, Ohio: Pearson

Stoller, Fredricka (1996). Teacher supervision: moving towards an interactive approach.

Forum, 34(2). Retrieved from

http://exchanges.state.gov/forum/vols/vol34/no2/p2.htm

Sullivan, C.G. (1980). Clinical Supervision: a state of the art review. Virginia:

Association for Supervision and Curriculum Development.

Weller, R. (1969). An observational system for analyzing Clinical Supervision of

teachers. Unpublished doctoral dissertation, Harvard University

Williams, Robert (2007). A case study in Clinical Supervision: Moving from an

evaluation to a supervision mode. Unpublished master's thesis, Pennsylvania

State University

Wise, Arthur & Darling-Hammond, Linda (1985). Teacher evaluation and teacher

professionalism. Educational Leadership, 42, 28-33.

Yin, Robert. (1984). Case study research design and methods. Beverly Hills: Sage

Publications.

