
Christa Roy

EDU 360

Technology Assessment Project I Analysis:
A. District Results:
I. When breaking down the three grade levels, Grade Six students were grade-level proficient in all five school years. Moving to Grade Seven, students were grade-level proficient in two of the five years, or 40% of the time. Unfortunately, Grade Eight had only one year of grade-level proficiency. Looking at the data from a collective standpoint, there were five school years of data per grade, for a total of 15 grade-years of collected information. Overall, eight of the 15 were grade-level proficient, which meets the 50% criterion.
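The 50% criterion applied above can be sketched as a short calculation (a hypothetical illustration only; the counts are taken directly from the paragraph above):

```python
# Years of grade-level proficiency per grade, from the five-year dataset:
# Grade 6 in all five years, Grade 7 in two years, Grade 8 in one year.
proficient_years = {"Grade 6": 5, "Grade 7": 2, "Grade 8": 1}
total_years = 5 * len(proficient_years)   # 15 grade-years of data

overall = sum(proficient_years.values())  # 8 proficient grade-years
meets_criterion = overall / total_years >= 0.5  # 8/15 is about 53%

print(overall, total_years, meets_criterion)
```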

II. There are three cohort groups that completed 6th, 7th, and 8th grade. For the sake of this assignment, these three complete groups were used in the cohort data; other partial cohort groups were not included. One similarity among all three cohorts is that as the students progressed to the next grade level, their grade-level proficiency declined. The one exception was Cohort 2, where grade-level proficiency went from 48% to 49%, a change that may be statistically insignificant. Cohort 1 and Cohort 3 had grade-level proficiency for two of the three years (the 6th and 7th grade years). Cohort 2 had grade-level proficiency only for the 6th grade year. No cohort group had grade-level proficiency in the 8th grade year. Two questions have to be asked: Why were students not grade-level proficient in 8th grade? And for the last year of collected data, 2011-12, why was there a 12-percentage-point decline in 8th grade proficiency (49% to 37%)?

B. Grade Level Results:
III. Proficiency scores can be an important measurable factor to gauge how well the teacher's instruction connects with the students' learning. Looking at this grade-level dataset, I broke it down into four proficiency groups: Advanced (90-100), Proficient (80-89), Basic (70-79), and Below Basic (below 70). While several demographic categories collected during the assessment may have affected a student's score, I first looked only at the assessment score without factoring in any other measurements. It was disappointing to find that as many students scored at the Basic level (42%) as Advanced and Proficient combined (16% and 26%, respectively). Next, to validate the above point further, I applied the 50% criterion by combining the Advanced/Proficient and Basic/Below Basic categories. Two points:
First, as I could already tell by the earlier analysis, more students fell into the lower
category. Secondly, because of this fact, the 50% criterion was not met.
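The combined-category check described above can be sketched as a small calculation. This is an illustrative assumption only: the Below Basic share is taken as the remainder once the three stated percentages are subtracted from 100%.

```python
# Proficiency-group shares from the grade-level results above.
advanced, proficient, basic = 0.16, 0.26, 0.42
below_basic = 1.0 - (advanced + proficient + basic)  # remaining ~16% (assumed)

upper = advanced + proficient  # Advanced/Proficient combined: 42%
lower = basic + below_basic    # Basic/Below Basic combined: 58%
meets_50 = upper >= 0.5        # the 50% criterion is not met
```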

IV. For the second part of this analysis, I chose to include the students' English proficiency, labeled as English Proficient (EP) and Limited English Proficient/English Learner (LEP/EL). Furthermore, I wanted to see whether there was a difference between Teacher A and Teacher B as it relates to the above categories. Was one teacher more effective than the other? Was there a correlation between a student's English proficiency and their score? These questions would be answered by breaking down the data. As you can see in the table below, for Teacher A, more EP students scored Proficient and Above than Basic and Below (53% and 47%, respectively). Also, more LEP/EL students scored Basic and Below than Proficient and Above (83% and 17%, respectively). These results are what I expected to see. Teacher B, on the other hand, showed the opposite results with the EP students: more EP students scored at Basic and Below than Proficient and Above (55% and 45%, respectively). This was unexpected, because one would think a student would score better if they had stronger English language skills. As was the case with Teacher A, Teacher B had more LEP/EL students score at the Basic and Below level than Proficient and Above (71% and 29%, respectively).

                                          Teacher A   50% Criterion Met?   Teacher B   50% Criterion Met?
EP Students, Proficient and Above            53%             Yes              45%             No
EP Students, Basic and Below                 47%                              55%
LEP/EL Students, Proficient and Above        17%             No               29%             No
LEP/EL Students, Basic and Below             83%                              71%
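The criterion check in the table can be sketched as a short recomputation (an illustrative sketch; the rates are copied from the table, and only the Proficient and Above rows are tested against the 50% threshold):

```python
# Proficient-and-Above rates by teacher and English-proficiency group.
rates = {
    ("Teacher A", "EP"): 0.53,
    ("Teacher A", "LEP/EL"): 0.17,
    ("Teacher B", "EP"): 0.45,
    ("Teacher B", "LEP/EL"): 0.29,
}

# Apply the 50% criterion to each group.
criterion_met = {group: rate >= 0.5 for group, rate in rates.items()}
# Only Teacher A's EP group meets the criterion.
```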

It is interesting to note that with Teacher B, 29% of LEP/EL students scored Proficient and Above, which is 12 percentage points more than with Teacher A. So, looking at the big picture, Teacher B was not as effective as Teacher A with students who had proficient language skills, but had a higher percentage of students with limited English language skills score Proficient and Above. It should be noted that there were only 13 LEP/EL students in total: Teacher A had six and Teacher B had seven. That is too low a number of subjects to draw any substantial conclusions. Only Teacher A met the 50% criterion for EP students at the Proficient and Above level. Neither teacher met the 50% criterion for LEP/EL students. What can we take away from these results? What changes could be made to improve students' scores, regardless of English proficiency? First, it is important to make sure both teachers are teaching the same curriculum throughout the year. This is vital because the assessment was taken on the same day, so students need to be learning similar material at the same level. Second, how cognizant are the teachers of their students' specific learning needs and proficiencies as they relate to language skills? Again, it is puzzling, because Teacher B had poorer scores with EP students but higher scores with LEP/EL students. Even with the low number of LEP/EL students in the analysis, this suggests Teacher B may not be considering a student's ability to absorb the material based on their English language skills. Perhaps some additional training on being more sensitive to language barriers would benefit Teacher B.
