Experiment: Explain an example of when the “worst case scenario” actually happens — THE REAL LIFE EXAMPLE
This entire series of posts (see chapters 1, 2, 3, and 4) was prompted by a few Facebook messages about John Hattie’s “visible learning” research.
I recently contacted a former colleague looking for a favor, and they mentioned a silly reflection form required of all the secondary teachers at their new school. This colleague wondered, with bemusement, what my response would have been. Well, that was enough to get me to write several thousand words on the topic. This is a real-life example of how potentially good research can be distorted through implementation.
The “reflective survey” is a 23-question instrument with 21 rating-scale items and two open-ended questions. The scale runs 1-7, with 7 labeled “high” and an N/A option available for items that are not applicable. The items are related to Hattie’s academic achievement interventions but, as I’ve mentioned, Hattie did not intend for his research to become the guiding purpose of any school. The teachers were asked to reflect on the previous “formative and summative aspects of their practice”. I tend to use those words only in relation to assessment, not teacher pedagogy, and that kind of introduction tells quite a bit of the story. But I’m getting ahead of myself. Let’s take a look at some of the ways this school’s document distorts the research.
The small distortions that undermine the purpose of the survey:
- A poor rating scale design (7 is “high” and N/A means not applicable) – None of the other numbers is defined, so there is no way to know for sure what any other answer means. Seven could just as usefully be labeled “empty” or “pudding”. Technically, this significantly reduces the reliability of the rating scale; substantively, it makes all answers meaningless. It also sends the message that no answer other than 7 is acceptable. This is obviously not a rating scale that the teachers are intended to grow with.
- All but one of the items are worded so that they could be answered with yes or no, which makes them unsuitable for a rating scale. The items read “Did you ensure that. . .” or “Was the material . . .”, which does not fit rating-scale answers anchored at “high”. (Unless you’re writing about this really late at night, in which case “high” seems like a hilarious answer to every question. . . . ehm, okay, moving on)
- An appeal to authority logical fallacy – There is no appeal to improving education, improving discourse within the school, or increasing reflection on teaching practice. The purpose seems simply to coerce teachers into bowing to the authority figures placed before them and judging themselves against external expertise.
- Unnecessary information included – After each item prompt is an intimidating-looking remark about which of Hattie’s interventions the item relates to, along with a report of the effect size. This is unnecessary and seems intended to justify the usefulness of the intervention without any real discussion of the purpose of including it in the school. It is something of a proof by assertion.
- Personally, I think the term “reflective survey” doesn’t make sense. This is a very picky criticism, but it makes me laugh. It is personification. Is the survey being reflective? I don’t get it 😉
The medium distortions that change the purpose of school and teachers:
I conducted a quick content analysis of the survey, labeling each item with one or two conceptual themes. The clear message of the survey is that academic achievement is defined as high exam scores. The prevalent themes of the items are preparation for exams, assessments, revision or practice papers, and the exams themselves. The definition and understanding of assessment criteria was also a major theme. Student growth and improvement, along with the interactions between students and teachers or between teachers and parents, formed a smaller theme. Curriculum and the scope and sequence of material was a minor theme. Although the introduction of the survey defined its purpose as increasing student success in coursework and external assessment, the largest focus is on the external examinations. The clear message to teachers is that their job is to be cogs in a machine delivering the commodity of education.
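The kind of quick content analysis described above can be tallied in a few lines of Python. To be clear, the item themes below are hypothetical stand-ins of my own invention, not the school’s actual survey items; this is just a sketch of the method of labeling each item with a theme or two and counting how often each theme appears.

```python
from collections import Counter

# Hypothetical theme labels for each of the 21 rating-scale items.
# These are illustrative stand-ins, NOT the actual survey content.
item_themes = [
    ["exam preparation"], ["exam preparation", "assessment criteria"],
    ["assessments"], ["revision/practice papers"], ["exams"],
    ["assessment criteria"], ["exam preparation"], ["assessments"],
    ["student growth"], ["teacher-student interaction"],
    ["exam preparation"], ["assessment criteria"], ["exams"],
    ["revision/practice papers"], ["assessments"], ["exam preparation"],
    ["curriculum/scope and sequence"], ["assessment criteria"],
    ["exams"], ["teacher-parent interaction"], ["assessments"],
]

# Tally how often each theme appears across all items,
# then list themes from most to least prevalent.
counts = Counter(theme for themes in item_themes for theme in themes)
for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```

Even with made-up labels, the exercise shows how quickly the dominant themes surface once each item is coded: the exam-related themes swamp everything about student growth or relationships.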
The big flaw that distorts the purpose of education and negatively affects the lives of the people in it:
The big flaw is that this distorted implementation of the research has severe consequences for teachers. I understand: administrators are busy, and the team behind this was obviously positivist, hoping to solve the problems of the school by making sure all the teachers fall in line with what “we know works”. Unfortunately, the repercussions of subjecting your teachers to this kind of instrument are unacceptable. Teachers will either comply, filling out a poorly constructed and meaningless survey in order to conform, or they will resist. Neither leads to better education. Resistance may take many forms, several of which are emotionally and ethically painful. This kind of survey may be one of the most effective ways to cause active, thoughtful professionals to leave a workplace.
In research on a school similar to this example’s school, Keddie, Mills, and Pendergast (2011) found that “prescriptive and top-down management of teachers’ work, its narrow focus on measuring academic outcomes and its strong focus on teacher surveillance and accountability compromise the quality of schooling” (p. 79). They found that teachers within the school lost their feeling of ontological security: the reality and expectations of the school seemed to shift constantly in a way that undermined the teachers’ concepts of their own work, their purpose as teachers, and themselves as human beings. In my opinion, no salute to Hattie is worth that.
Next time, I’ll discuss the ramifications of this kind of evaluation instrument on teachers. This is the part that makes me emotional.
Hattie, J. (2012). Visible Learning for Teachers. Routledge. Retrieved 6 January 2016, from <http://www.myilibrary.com?ID=363783>
Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Keddie, A., Mills, M., & Pendergast, D. (2011). Fabricating an identity in neo‐liberal times: performing schooling as ‘number one’. Oxford Review of Education, 37(1), 75-92. doi:10.1080/03054985.2010.538528