Wednesday, November 25, 2015

Searching for Solutions Using Benchmark Data

Most homegrown or commercially available benchmarking packages provide stakeholders with similar outputs: aggregate or individual academic growth data (when using a pre/post model), item analyses, individual student profiles, and so forth. The data discussions that follow typically highlight troublesome areas where student performance fell below a targeted threshold. 

Unfortunately, the recommended interventions often follow the logic of the overused definition of insanity: doing the same thing over and over and expecting a different result. Suggestions such as re-teaching the concept, assigning students to an after-school program, or duplicating study worksheets are, at best, quasi-effective attempts at filling possible voids in students' understanding of the content that produced such lackluster results.

Perhaps we, as educators, are looking on the wrong side of the equation. If we truly believe that quality instruction impacts achievement, why are we not examining the level of teaching with the same tenacity that we apply to benchmark scores? What if the issue with student academic progress is not just a content or student issue, but a pedagogy issue? In other words, perhaps the level of teaching innovation is not commensurate with what students are asked to perform on benchmarks that have the “look and feel” of high-stakes assessments.

Consider the following set of benchmark results and Level of Teaching Innovation (LoTi) results that were captured via walkthroughs during a recent benchmarking period.
 
| Grade | Students Completed | Checkpoint 2 Test (Mean) |
| --- | --- | --- |
| Grade 5 - RST Social Studies | 122 | 45.9% |
| Grade 6 - RST Social Studies | 122 | 39.4% |
| Grade 7 - RST Social Studies | 96 | 37.3% |
| Grade 8 - RST Social Studies | 95 | 50.5% |
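For readers who want to sanity-check the numbers, here is a quick sketch (in Python, with the figures transcribed from the table above) that computes the overall checkpoint mean, weighting each grade by the number of students who completed the test:

```python
# Checkpoint 2 means by grade, transcribed from the table above:
# grade -> (students completed, mean score %)
grades = {
    "Grade 5": (122, 45.9),
    "Grade 6": (122, 39.4),
    "Grade 7": (96, 37.3),
    "Grade 8": (95, 50.5),
}

# Weight each grade's mean by its number of completers.
total_students = sum(n for n, _ in grades.values())
weighted_mean = sum(n * mean for n, mean in grades.values()) / total_students

print(total_students)            # prints 435
print(f"{weighted_mean:.1f}%")   # prints 43.2%
```

However the averaging is done, every grade level sits at or barely above 50%, which is the pattern the walkthrough data below helps explain.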

| LoTi Level | % Observed | Danielson Practice Score Projection |
| --- | --- | --- |
| 0 | 0% | Ineffective |
| 1 | 22% | Partially Effective |
| 2 | 34% | Partially Effective |
| 3 | 42% | Effective |
| 4 | 2% | Highly Effective |
| 5 | 0% | Highly Effective |
| 6 | 0% | Highly Effective |
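The rollup by Danielson rating is simple enough to verify by hand, but a short sketch (again in Python, using the figures from the table above) makes the grouping explicit:

```python
from collections import Counter

# LoTi level -> (% of walkthroughs observed, projected Danielson practice rating),
# transcribed from the table above.
loti = {
    0: (0, "Ineffective"),
    1: (22, "Partially Effective"),
    2: (34, "Partially Effective"),
    3: (42, "Effective"),
    4: (2, "Highly Effective"),
    5: (0, "Highly Effective"),
    6: (0, "Highly Effective"),
}

# Sum the observed percentages within each projected Danielson rating.
totals = Counter()
for pct, rating in loti.values():
    totals[rating] += pct

print(totals["Partially Effective"])  # prints 56
print(totals["Effective"])            # prints 42
```

LoTi Levels 1 and 2 together account for the 56% figure discussed below.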

Notice that, based on the walkthroughs, 56% of the observed instruction (LoTi Levels 1 and 2) was projected to be “Partially Effective” under the Danielson Framework for Teaching. What impact does Partially Effective instruction have on student academic progress? Should we be surprised that the Checkpoint results were so low?

Making sense of benchmark scores requires that we consider the entire teaching/learning process; otherwise, we will continue to make untested assumptions about the quality of instruction as we design follow-up student interventions.