Punctuated Equilibrium, Progress and Schools

Punctuated Equilibrium is a theory in evolutionary biology that seems to fit well with progress in students’ learning.

What is Punctuated Equilibrium

Punctuated Equilibrium was first proposed in the 1970s by Niles Eldredge and Stephen Jay Gould. They argued that while most of us think evolution happens gradually, the fossil record shows that it happens in spurts. Stasis (or equilibrium) is the norm, then there are bursts of activity (the equilibrium is punctuated) and then stasis reigns again.

There is a helpful post here explaining it in more detail, but the difference between gradualism and punctuated equilibrium can be shown as in the diagram below.

As a model, it has been attacked by Dawkins and Dennett, who called Punctuated Equilibrium “evolution by jerks” (to which Stephen Jay Gould’s response was that gradualism was “evolution by creeps”).

That said, the punctuated equilibrium dynamic seems to happen outside the natural world too. In industry, there is often stasis, then a new environment (often triggered by an innovation) leads to a burst of new prototypes before these are whittled down to a handful of product categories. Bicycles seem a good example of this. As BicycleHistory says:

Between 1817 when Nicéphore Niépce created his first velocipede and 1880 when first “safety bicycles” became highly popular across Europe, bicycle designs were highly varied

It seems it might be a useful lens through which to assess learning too.

Punctuated Progress

David Didau has written a number of good posts about the myth of progress. He points out that, as in Hugh Macleod’s Gaping Void cartoon, while we act like progress is linear, it’s more confusing than that.

It’s a great diagram for pointing out the shortfalls of thinking about progress as linear, but it doesn’t help much in terms of “where next”. I’m curious to know if the punctuated equilibrium model is more helpful.

Some implications are obvious: if progress is characterised by periods of stasis, then there will be lessons in which students are not performing substantially differently. Threshold concepts look to be a tidy fit with the model, and I’m going to read further to see how these might help. If anyone has any pointers, I’d love to hear.

Good News, Bad News

[Infographic: Students Love Technology — via OnlineEducation.net]

So, the good news is that Twitter can help students boost their grades. The bad news is that many students are device-o-holics.

Or perhaps it’s all bad news. Perhaps it’s just that students without Twitter lose marks because the Delirium Tremens they are wrestling with after being told they can’t use their phones makes it harder for the poor lambs to focus on the test in front of them.

They’re wonderful things, out-of-context statistics.

The Pedagogy of Oxford Tutorials

It’s funny how blind one can be.

This article, by Robert Beck, outlines the Pedagogy of the Oxford Tutorial system, the jewel in the University’s crown.

Essentially the process is research (reading, writing, lectures, chatting with friends) – essay – presentation of essay – discussion with tutor.

A couple of things caught my eye, now that I have a teacher’s hat on.

First, a comment about marks:

there is an extreme aversion among the Oxford tutors in my study to provide letter grade evaluations to essays. While formative feedback, nuanced notes and other annotations are used copiously, there was no tendency to grade essays, which is regarded as inhibiting motivation. Why? Perhaps, because grading violates the open-ended quality of the tutorial and suggests a sense of finality or, at least, may be taken that way

Second, an observation about feedback loops:

When a tutor asks a question about some claim within a student’s essay or presentation, he or she is requesting information from the student, but the intent may also range from uncertainty, to doubt, and even outright dispute and opposition. While the phrasing of the question may be subtle, relatively non-specific, and indirect (“what are you getting at here?”) or direct and specific (“why do you claim that economic factors alone led to WWII?”) or challenging (“Aren’t you dead-wrong about this?”), in each case the tutor is referring to possible errors in the student’s argument. At the very least, the tutor is indicating that more information is needed to answer the question and is offering clues in potentially useful directions. But when the student responds to such questions, the answer may indicate further problems in the student’s thinking, and the tutor’s subsequent feedback in the next exchange(s) will indicate how adequate the answer was, thus pointing out additional errors; for example, the student may not have understood the question or may have provided answers that are deficient in evidence or a relevant warrant (Toulmin, 1958).

This process is very different than the mindreading and guessing games some teachers employ when they ask: who knows the capital of Wisconsin? Rather, in tutorials questions and feedback are used to induce students to repair their reasoning, although some direct corrections of information are inevitable. … In fact, on close examination of this process, I have observed that the tutorial hour involves an almost continuous formative assessment of students’ arguments that result in the identification of many points of error, some of which may be repaired successfully by students. And, in this process, contrary to argumentation theory, the object is not explicit agreement between tutor and student, but to induce the student to make his own repairs to his argument and thus, to learn to think for himself.

So there’s metacognition, project-based learning, assessment for learning and more in the Tutorial System.

I grew up in Oxford. My father’s a don. I worked as a research associate in Oxford for a couple of years. And I have only just made the link between home turf and modern schooling. Depressing really.

Taking student feedback seriously

Here’s something any teacher (and probably by extension any school) should be thinking about:

How useful are the views of public school students about their teachers?

Quite useful, according to preliminary results released on Friday from a $45 million research project that is intended to find new ways of distinguishing good teachers from bad.

Teachers whose students described them as skillful at maintaining classroom order, at focusing their instruction and at helping their charges learn from their mistakes are often the same teachers whose students learn the most in the course of a year, as measured by gains on standardized test scores, according to a progress report on the research.

Makes sense intuitively. And makes me think again how important it is to take on board feedback from students. The next step, I suppose, is to design a feedback form (to go along with the “how can I improve” board).

Quantity is quality (2)

This post from Victoria made me think. She asks:

If we told students that we would give them ONE test a year and that their entire grade for the whole year rested on that ONE test, nothing else. What would we see?

We would see parents yelling. We would see students crying. We would see legislators acting against those “horrible teachers” who don’t teach.

It reminded me of the pottery story. And it made me think of that story in different terms. Whereas before I had thought of it simply in terms of learning from mistakes, now, especially having marked the school’s summer exam papers and written their reports, I’m thinking of it in other terms.

If quantity is quality, if the one-test-a-year approach fails, shouldn’t we be continually testing and appraising? Like those potters who were asked to make as many pots as possible, don’t we need to be shortening the feedback loop for learners (and with that, their teachers)? Perhaps one major test a year will always get worse results (and, as Victoria hints, worse behaviour) than numerous, smaller, less pressurised tests.

On a scale of one to five

One of the ways I have used in classes to get a quick snapshot of how well children are understanding things is the thumbs-up/thumbs-down approach advocated by Assessment for Learning gurus like Dylan Wiliam. You ask the children to show a thumb to indicate how well they feel they are understanding things: thumbs up means they get it, thumbs sideways means they think they get it but aren’t sure, and thumbs down means they’re struggling.

I’d always thought it was quite an effective tool, but a recent experience has made me doubt it.

I’m in hospital at the moment, after an operation, and one of the questions they have asked me during recovery is how much pain I’m feeling. To help me, they asked me to put it on a scale of 1 to 10, where 1 is no real pain and 10 is agony. (I should add here that this is not a complaint. Apart from one ludicrously self-important nurse, the care here has been outstanding. Really, truly outstanding.) Anyway, putting the pain on a scale of 1 to 10 is quite hard. It is hard because I wasn’t sure I had enough experience to know what pain rated as what. Having an arm sawn off, for example, sounds like a 10. Or breaking one’s spine. At the same time, I didn’t want to downgrade my pain, because that would mean less morphine, tramadol et al.

It occurred to me that the same might be true for the children being asked to rate their learning. Do they feel they have enough experience to rate it confidently? Equally importantly, are they aware enough to know that their own understanding may not be a ten out of ten? (Ben Goldacre has a wonderful graph of levels of ignorance: those who know, those who know they don’t know, and those who don’t know they don’t know.) The reason this matters to me is that, if they aren’t giving sensible feedback, then it limits the effectiveness of any help I can give. I either overdo or underdo the morphine.

Which leads to another parallel. In hospital, they’re keen to get you off the morphine as soon as you can manage it. Perhaps it should be the same in schools. Perhaps we teachers should be aiming to get students off the adult help as soon as they can manage on their own.

Danish pupils use web in exams

On the morning of the exam, the exam room floor is covered in cables. IT experts are busy helping the teenagers set up their laptops, making sure they all work.

At five to nine, the room falls silent. CD-roms and exam papers are handed out together. This is the Danish language exam.

One of the teachers stands in front of the class and explains the rules. She tells the candidates they can use the internet to answer any of the four questions.

They can access any site they like, even Facebook, but they cannot message each other or email anyone outside the classroom.

At nine o'clock the exam begins.

Source: here

How Should Teacher Effectiveness Be Assessed?

In a report titled "The Widget Effect," the nonprofit New Teacher Project found that in public schools nationwide, teacher effectiveness is not measured, recorded or used to inform decision-making in any meaningful way. The result, according to the study, is a system where teachers are treated as interchangeable parts.

Source: here