
Dummy Runs and Schooled Writing


In December I had the pleasure of joining a group of 5th graders in the high desert mountains of Utah. That week, my niece, Alaina, and her classmates had just asked their teacher if they could have time to write to children in Newtown, Connecticut after the Sandy Hook Elementary School tragedy. In a discussion with Alaina about how she decided what to write about, it was clear she (and her classmates) were very attentive to the audiences she hoped would eventually read her note. She was thinking about the children who survived, and how they may be frightened by the thought of going back to school. She also talked about how helpless she imagined the community members must feel. To address these weighty matters, she decided to share a fear of her own that could work as a metaphor for moving forward:

So let’s not look for the rain
Let’s look for the rainbow
Let’s look for new hope
There is always hope

Over the next week I had several conversations with Alaina about writing in school. For instance, she was working on an essay comparing and contrasting earthquakes with volcanoes. In class, they had been introduced to the Venn diagram as a way to jot notes. They had lists of transition words for comparison. She was set up for some great content area writing.

Then the time came when Alaina was trying to decide what information to include in her essay. To help her decide, I asked her for whom/to whom she was writing this assignment. I was surprised when she didn’t understand what I was asking–especially considering her attentiveness to audience in her note to the youth in Newtown. She didn’t consider her teacher the audience or her peers who would read it in small groups. There was effectively no audience.

James Britton and others have long ago argued for more attention to audience in school-based writing tasks. In our text Developing Writers: Teaching and Learning in the Digital Age, Richard Andrews and I reviewed Britton’s studies and contended:

The influence of audience is one of the most well-known findings from this section of the study. Fifty per cent of the 500 written pieces analyzed which were deemed as immature, i.e. with no distinguishable function or audience, were from work completed for English language arts courses. Many of these pieces were considered by the researchers to be ‘dummy runs’ or student products written merely to show a teacher capacity to complete a certain written task (Britton et al., 1976, p. 106). To this day, the importance of creating written assignments with ‘real’ audiences or audiences logically aligned with the purpose of the written task and beyond the teacher as audience is looked upon as instrumental in ensuring student engagement in writing a product, as well as higher quality end products.

Her school district had also begun to use a computerized writing assessment system that has become popular in recent years. In talking with Alaina’s teacher, I learned she was concerned that Alaina’s scores were not reflecting Alaina’s writing abilities. Determinations about placement and advancement were based on these scores. When I asked Alaina what she took into consideration when writing to the computer program’s prompts and being assessed by the program, she–again–wasn’t sure how writing changed when the rhetorical frame changed. Not only did she lack the declarative knowledge to articulate rhetorical frameworks, she wasn’t demonstrating the kind of procedural knowledge she readily applied when writing for her own purposes.

In our digital age, we have more access to distribute written pieces to audiences we previously could have imagined but not practically reached. We can compose in varying genres and more easily design with multiple modes to address topics previously out of reach. In other words, our rhetorical frameworks (form, message, audience) can be realized in the writing we do in school (and out of school) in ways that were far more difficult just a decade ago. However, we’re still seeing “dummy runs” dominate schooled writing, and we are using our digital technologies in ways that essentially distance our students from the “real” audiences they actually have access to. I see many critiques of computer-based writing assessment, but I have yet to see the argument taken up that these programs take writing out of its communicative framework. I think that is an argument we need to make moving forward.

I was pleased to be invited to join Alaina’s class to teach during their next hour dedicated to writing. In my next post, I will share the mini-lesson and guided practice we completed together on the topic of audience. We then extended that discussion into considering what writing for an audience means in contemporary times. The young people in that class shared great advice for meeting the demands of writing in a digital, networked age. I can’t wait to share it with you!



Duh, Duncan

Education is coursing through the veins of public media, from Wisconsin’s attacks on unions (and Jon Stewart’s apropos responses) to Capitol Hill’s review of No Child Left Behind (NCLB). And what is the sound we are hearing?

[Image: Money Prop]

Money. Money. Money. (And we aren’t talking about this guy’s chump change.)

We’re talking about the business–the big business–of education.

Last week US Secretary of Education Arne Duncan took an important stand on the necessity of revising No Child Left Behind, and though he was correct in his appeal to Congress, he failed to make the link between the law’s impairments and the big business the law has generated. In his report to the US Congress, Duncan shared his projection for next year’s No Child Left Behind school passing and failure rates.

Wait for it.

82% of US schools are expected to fail.

For those of you unfamiliar with the law, it may not be immediately obvious that this number is not actually a sign of the general failure of education in the United States.

Duncan explained:

“The law has created dozens of ways for schools to fail and very few ways to help them succeed. We should get out of the business of labeling schools as failures and create a new law that is fair and flexible and focused on the schools and students most at risk.” (emphasis added)

One of NCLB’s “dozens of ways” is that each year schools are required not only to have a higher percentage of students passing (independently a nice goal), but to have them passing at increasingly higher rates. Schools fail if a pre-determined percentage of students isn’t passing, and then schools fail again if they aren’t passing at a percentage higher than in previous years. Sam Dillon of The New York Times humorously, and astutely, made this comparison:

Critics of the law say it is a bit like requiring all city police forces to end certain crimes — like burglary and drug trafficking — by 2014.

He continued:

They have also long predicted that the law will, over time, determine that all but a handful of schools are failing — a label that would demoralize educators, lower property values and mislead parents about the instructional climates in their schools. (emphasis added)
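
To make the escalating-targets mechanism concrete, here is a minimal sketch with hypothetical numbers (each state set its own trajectory, though all were required to reach 100% proficiency by 2014). Notice that a school improving every single year still ends up “failing” once the target line outruns it:

```python
# A minimal sketch of NCLB-style escalating proficiency targets.
# The trajectory is hypothetical; each US state set its own,
# but all were required to reach 100% proficiency by 2014.

def yearly_targets(start_year, start_pct, end_year=2014, end_pct=100.0):
    """Linear ramp of required passing percentages, year by year."""
    step = (end_pct - start_pct) / (end_year - start_year)
    return {year: start_pct + step * (year - start_year)
            for year in range(start_year, end_year + 1)}

targets = yearly_targets(2002, 40.0)

# A school that improves 3 points every year still "fails"
# once the rising target outruns its real gains.
school_pct = 55.0
for year, target in targets.items():
    status = "pass" if school_pct >= target else "FAIL"
    print(f"{year}: target {target:.0f}%, school {school_pct:.0f}% -> {status}")
    school_pct += 3.0
```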

One of the other “dozens of ways” that misleads parents (and all consumers of our educational system) is that students’ gains are not actually measured. Rather, averaged assessment results from one year are compared with averaged results from the next year, and the difference is called “gain” or “loss.” This model has been, and still is, used in US states to measure effectiveness at all levels of education. For a recent criticism of this issue as applied to rating teacher effectiveness, take a look at another NY Times article.
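
Here is a second minimal sketch, again with hypothetical scores, showing why a cross-sectional comparison of two different cohorts is not a measure of any student’s gain. In this example the reported “gain” is negative even though every individual student in the earlier cohort improved:

```python
# A minimal sketch (hypothetical scores) of why comparing one year's
# cohort average with the next year's does not measure student gain.

cohort_2010 = [72, 68, 80, 75, 65]   # 4th graders tested in 2010
cohort_2011 = [70, 66, 78, 73, 63]   # a *different* group of 4th graders, 2011

avg_2010 = sum(cohort_2010) / len(cohort_2010)   # 72.0
avg_2011 = sum(cohort_2011) / len(cohort_2011)   # 70.0

# The cross-sectional calculation reported as "gain" or "loss":
print(f"Reported 'gain': {avg_2011 - avg_2010:+.1f}")   # -2.0, a "loss"

# A true gain score would follow the SAME students across years.
# Hypothetical 5th-grade scores for the 2010 cohort, one year later:
cohort_2010_later = [78, 74, 85, 80, 72]
gains = [after - before for before, after in zip(cohort_2010, cohort_2010_later)]
print(f"Average longitudinal gain: {sum(gains) / len(gains):+.1f}")   # +5.8
```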

Let’s look at another of the “dozens of ways,” one that applies directly to the focus of this website: writing development. How exactly are writing gains measured and these supposed gains and losses determined? In the US, this determination is relegated to each state. Jeffery (2009) analyzed the end-of-year writing assessments from 41 US states and reported that the tests varied not only in terms of prompted genres, but also in terms of the ways the written products were assessed on the rubrics (see the table in Jeffery’s article for the variance). In other words, passing and failing writing in each state differs widely. It isn’t comparable. It doesn’t mean the same thing. A “failing” school using one writing measure could be “passing” if only it used a different measure.

And we aren’t done with all the reasons why the NCLB passing and failing rates are faulty in terms of writing assessment. Jessica Lussenhop of CityPages recently reported what was to some a shocking truth about the business of writing assessment–a $2.7 billion industry. Namely, writing assessments are scored by inexperienced temporary workers with two days’ training, who are paid by the quantity of papers scored and judged on whether their scores match others’ 80% of the time. Lussenhop’s article is quite a read; she tells the stories of three such scorers who worked quickly, quietly and, admittedly, poorly and unethically. One scorer and supervisor summarized the issue as such:

“They get paid money to put scores on paper, not to put the right scores on papers,” he says. “They have a bottom line. Why anyone would expect anything else is beyond me.”
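
As an aside on what “80% matching scores” means in practice, here is a hypothetical sketch: a scorer’s reliability is typically judged by how often a score agrees with another reading of the same paper, not by whether either score is right. Rushed readers who default to the middle of the scale can meet such a quota without ever engaging the writing:

```python
# A hypothetical sketch of a score-matching (agreement) quota.
# "Reliability" here means agreement between two readers, not accuracy.

scorer_a = [4, 3, 5, 2, 4, 3, 4, 5, 3, 4]   # one reader's scores
scorer_b = [4, 3, 4, 2, 4, 3, 4, 5, 2, 4]   # a second reading of the same papers

matches = sum(a == b for a, b in zip(scorer_a, scorer_b))
print(f"Agreement: {matches / len(scorer_a):.0%}")   # 80% -- quota met

# The metric cannot see whether either reader actually engaged the writing:
# two readers who both rush toward the middle of the scale will "agree"
# at high rates regardless of what the papers deserve.
```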

The NCLB formula is nearing the moment when 82% of schools will not be able to pass. The methods of measurement are faulty–at best–for measuring student gain and teacher effectiveness. And the business of test assessment is a sweatshop-style, invalid and unreliable number cruncher. With states cutting funding for teachers and class size, now is not the time to continue to fund this assessment machine. For more reasons than Duncan pointed to this Wednesday, it is time to get out of the BIG business of failing schools.

Jeffery, J. (2009). Constructs of writing proficiency in US state and national writing assessments: Exploring variability. Assessing Writing, 14(1), 3-24.


The Two-Faced Coin (Part 2 of 2): Education’s Two-Face–Time

[Image: Flip a Coin]

What do you think? Is it going to be heads or tails? At this moment, can you tell? What will determine on which side it will drop? A gust of wind? The momentum of the roll? (Someone with a physics degree chime in with a comment. I am sure we’d all love to know the actual factors that will contribute to the outcome.)

When it comes to the educational development/deficit coin, though, we have only one factor to consider: Time. “Oh, no, no,” you might be saying, “it’s the quality of the product that determines whether a writer is developed or not.” Or you might argue that it’s the sophistication of the writer’s composing processes. You may even ask us to consider the resources he/she considers and draws from, or the repertoire of genres with which he/she has facility.

I’d agree with you that each of these dimensions of a writer are important, but these characteristics aren’t what determine development in education. Let’s take a look at a piece of writing to see what we find.

How would you determine whether the writer is developed, developing or in deficit? Of course, as indicated above, you might ask if it is a draft or if it is considered good “for a poem.” But then let me ask you: What if I told you it was written by a second grader? Would your judgment change? What if I told you it was a college student? What if I told you it was the first time the person had tried this genre, or if it was after five years of participating in a community of poets? At the heart of any of these approaches to deciding whether writing or a writer is developed are questions of time: how long? how old? what grade?

We should then ask: Where do we get those ideas of what is expected at certain ages, after certain lengths of time and at certain grades? One such source is a study by Britton, Burgess, Martin, McLeod and Rosen conducted in 1976. They studied the audiences and functions (to persuade, to entertain, to tell) of the written products and writing tasks of students ages 11-17 in classes across the UK. From their results, they suggested a curriculum of increasing cognitive abstraction in written products, from personal experiences, to argument, to tautological statements. This suggestion has been taken up and is pervasive in the educational field, in both curricula (e.g. the first version of the National Curriculum for English in England) and research studies (e.g. McKeough & Genereux, 2003).

Buried in their study report was the statement that the audiences and functions of students’ written products aligned closely with the writing tasks assigned to them in school. From this the researchers reasoned that the range of written products in schools was the result of teaching curriculum and methodology rather than students’ independent writing development or even current skill sets:

We are clear about one thing: the work we have classified cannot be taken as a sample of what young writers can do. It is a sample of what they have done under the constraints of a school situation, a curriculum, a teacher’s expectations, and a system of public examinations which itself may constrain both teacher and writer. (p. 108)

In essence, then, the developmental model offered by Britton, Burgess, Martin, McLeod and Rosen (1976) is a model of the development of school curriculum—how to characterize the sequence of tasks assigned to students in the first, third, fifth and seventh years of secondary school in the UK. The implication is that writing development is intricately tied to the writing experiences that have been afforded, and a common denominator of young persons’ development is the set of experiences required in school. Britton et al.’s (1976) developmental scheme, however, is not an indication of students’ cognitive or writing capacity, nor reflective of the entire range of audiences or functions of students’ writing.

The point here is simply this:

Chronological time is the ultimate determiner of development in writing. Our benchmarks on this linear scale of time come from studies and curricula based not on how youth actually develop as writers, but on how we organize products, practices and participation across a linear scale.

When schools determine one child is developed and another is at deficit, we are just at the mercy of units of time we have segmented and decided should correlate to a set of practices. We aren’t actually saying anything about the child’s abilities or capacities. Yet the consequences of being thus labeled are left to the child, and deficit always leaves a mark.

I know. Ouch.

[Flip a Coin by The Bartender 007 / © Some rights reserved. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike license]
Britton, J., Burgess, T., Martin, N., McLeod, A., & Rosen, H. (1976). The development of writing abilities (11-18). London: Schools Council Publication.
McKeough, A., & Genereux, R. (2003). Transformation in narrative thought during adolescence: The structure and content of story compositions. Journal of Educational Psychology, 95(3), 537-552.

The Two-Faced Coin (Part 1 of 2): Development and Deficit

Alright. Let’s go in. Progress, improvement and development are—in essence—the project of education. Sounds pretty good, right? But it’s not so simplistically altruistic.

For one, there have been many people who have pointed out problems inherent in this project. Developing countries, for instance, can definitely benefit from the implementation of certain social and physical structures that have improved the quality of life for others in the world—like public sewage systems or public education. At the same time, these “improvements” historically have come at a steep price of subjugation and even imperialism. It is useful for us to pause and ask who gets to determine what a “quality life” is for another. Discussions that help us illuminate disparities between intent and result are important, but they aren’t what I want to focus on right now.

I’d like to talk about something a bit more fundamental to the concept of development in education: The flip side.

If we turn over the coin with development’s face, we’ll find deficit on its tail. In the name of progress and in our efforts to further development, we are constantly creating deficit. In Discipline and Punish: The Birth of the Prison, Michel Foucault gave an example of how this occurs in the human sciences. He explains that the project of studying human behavior entails defining what is normal, healthy, desirable (i.e. the good girl, the law-abiding citizen). In the process, this act of defining a norm creates abnormality (i.e. the criminal, the crazy person). Entire professions are then brought in to rectify the deviants from the norm—a “deviation” the field itself created. Foucault quipped at another time:

…if you are not like everybody else, then you are abnormal, if you are abnormal, then you are sick. These three categories, not being like everybody else, not being normal and being sick are in fact very different but have been reduced to the same thing…

Demented, right? It reminds me of Two-Face from the Batman series, portrayed by Aaron Eckhart in The Dark Knight. Your chances of life or death, sanity or insanity, and, in education, development or deficit are left to the flip of a coin. Today you may be developed, but that same activity tomorrow may be deficit.

http://www.youtube.com/watch?v=jOOjM08zH5o

In Batman, it’s Two-Face who flips the coin and chance determines your life or death. In the human sciences, it’s the psychological, behavioral and sociocultural rating scales and evaluation measures. What flips the coin in education? Although, as in the other human sciences, assessment measures could be seen as the coin-flipper, I humbly submit that education’s Two-Face is Time. And that’s Part 2 of 2. See you then.

Michel Foucault, (2004) ‘Je suis un artificier’. In Roger-Pol Droit (ed.), Michel Foucault, entretiens. Paris: Odile Jacob, p. 95. (Interview conducted in 1975. This passage trans. Clare O’Farrell).