What is PaCT and Why Did It Cost $6M?

June 16, 2013 – 5:44 pm

(this post is about the New Zealand education system. I wanted to say more than Twitter made easy)

NZ has an amazing education system. We went through the “Tomorrow’s Schools” revolution in the 1990s, which devolved governance of schools to the communities in which those schools sit. A Remuera school will teach and value different things than an Otara school, despite being only a few km away from each other, and that’s okay. But in a world of devolution, how does the state ensure that schools don’t suck?

They set out expectations, then monitor and enforce them. The expectation for “what gets taught” is the New Zealand Curriculum (NZC). It talks about subjects (“learning areas”), competencies, and values for the NZ educational system. It’s deliberately vague, though, and schools must spend time unpacking the NZC for their local context.

The NZC lays out achievement objectives, which are grouped into levels. Read an example for English and you’ll see how vague it is:

Show an increasing understanding of how language features are used for effect within and across texts.
INDICATORS:

  • identifies oral, written, and visual features used and recognises and describes their effects
  • uses an increasing vocabulary to make meaning
  • shows an increasing knowledge of how a range of text conventions can be used appropriately and effectively
  • knows that authors have different voices and styles and can identify and describe some of these differences.

This forms a progression through the material: a kid at level 2 can do these things and will have to do these other things to get to level 3. Schools would sometimes report on the level that kids were at, sometimes not.

Notice that nowhere in here are expectations about what kids should be able to do by when. Levels aren’t tied to years in school. Enter National Standards. National Standards are expectations that by (some year of schooling) students should be working at (some curriculum level). Schools must report to parents in plain English about where their kids sit against these National Standards.

The implementation of National Standards was a shitfight for at least three reasons: (a) it was done to rather than with the teachers and principals and boards; (b) they pulled the levels largely out of thin air (or, rather, the levels “were arbitrary and didn’t reflect enough research”); and (c) they required schools to label kids “below National Standards for their age”, which isn’t something educators were comfortable doing. Now we have them, and parents get to see how their kids measure up.

School-wide summary statistics (variations on “how many kids are at, below, or above the National Standards level for their age”) are what get reported to the Ministry of Ed, and those numbers are “ropey” (to use our Prime Minister’s word). They’re ropey because National Standards isn’t a national exam: there’s no standardised test to determine whether a child is at a particular level or not, no single day upon which the child must perform or else be ruled to have failed. America is driven by these standardised tests, to the detriment of its educational system.

So, kudos NZ for not having standardised tests!
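
To make those “school-wide summary statistics” concrete, here’s a minimal sketch (Python, with field names and labels I’ve invented for illustration; this is not the Ministry’s actual reporting format) of the kind of roll-up a school might produce from its per-child judgements. The ropey part isn’t the counting, it’s that the judgements feeding the counts, and the formats they’re reported in, vary from school to school.

    from collections import Counter

    # Each record is one child's Overall Teacher Judgement against one standard.
    # Field names and labels are invented for this sketch, not the Ministry's format.
    judgements = [
        {"year": 4, "subject": "reading", "otj": "at"},
        {"year": 4, "subject": "reading", "otj": "below"},
        {"year": 4, "subject": "reading", "otj": "above"},
        {"year": 4, "subject": "reading", "otj": "at"},
        {"year": 5, "subject": "reading", "otj": "below"},
    ]

    def summarise(records, year, subject):
        """Count how many kids are below/at/above for one year group and subject."""
        counts = Counter(
            r["otj"] for r in records
            if r["year"] == year and r["subject"] == subject
        )
        return {label: counts.get(label, 0) for label in ("below", "at", "above")}

    print(summarise(judgements, year=4, subject="reading"))
    # {'below': 1, 'at': 2, 'above': 1}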

But that still leaves us wondering how to decide whether a child is at, below, or above the expectations set by National Standards. Teachers look at the achievement objectives (those broad statements like “can identify different voices and styles”) for the appropriate level (as determined by the National Standards) and then try to find examples of this in the student’s work, or use one of the many and various testing systems (STAR, PAT, e-asttle, etc.) to elicit evidence of whether the kid can do it.

This bit is messy. It’s called an Overall Teacher Judgement (OTJ). It’s awesome that teachers have this flexibility to find evidence and make their decision, but it’s time-consuming and not at all standardised. By which I mean, teachers in two adjacent schools might be making very different interpretations of what kind of work meets the achievement objectives. A child might be ruled “below” at one school and “at” at the next. Schools are currently working around this by trying to “moderate” their judgements, namely having teachers from different schools sit down and agree on what counts as “at”, “above”, or “below” for each of the levels.

Enter Progress and Consistency Tool (PaCT).

This is a web-based tool chock full of examples of work illustrating aspects of what achievement looks like at a particular level. As a teacher, you indicate what year and subject you’re looking at, and you’ll get a bunch of exemplars to look at. One by one you go “yup”, “no”, “no”, “no”, “yup”, “no” and eventually it’ll say “sounds like a no to me”. You (the teacher) can still disagree but you’ll have to defend your judgement.
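
As a rough sketch of that interaction (and only the interaction: per my workshop notes below, the real judgements rest on psychometric work by NZCER, and the level tags and the “at least half” threshold here are assumptions I’ve made up for illustration), imagine each exemplar tagged with the curriculum level it demonstrates, and the tool picking the highest level the teacher’s yups support:

    from collections import defaultdict

    def best_fit_level(responses):
        """responses: (curriculum_level, teacher_said_yup) pairs, one per exemplar.

        Returns the highest level where at least half the exemplars got a 'yup',
        or None if no level clears that bar.
        """
        by_level = defaultdict(list)
        for level, yup in responses:
            by_level[level].append(yup)
        fitted = [
            level for level, answers in by_level.items()
            if sum(answers) >= len(answers) / 2
        ]
        return max(fitted) if fitted else None

    # "yup", "no", "no", "no", "yup", "no": four level-3 exemplars, two level-2
    responses = [(3, True), (3, False), (3, False), (3, False), (2, True), (2, False)]
    print(best_fit_level(responses))  # 2, i.e. "sounds like a no to me" for level 3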

PaCT obviously replaces moderation and raises the question: where did the exemplars come from? My understanding is that they’ve had a reference group of teachers working on this. I took notes at a workshop at the NZ School Trustees Association conference last year. One of my notes:

PaCT org is Ministry driven. Technical development (IT guy), Psychometric development (NZCER), Rubric development, Teacher Advisory Group, Ministry Personnel. Psychometry used to manage presentation and use of statistics to ensure people understand them when they make a decision.

This might go some way to answering “$6M for checkboxes?!” (Ben Gracewood’s words).

You might ask “why, if it’s so good, is there such a shitfight going on around its introduction?”. You probably didn’t have to ask that: it could easily be interpreted as a lack of faith in the ability of teachers to make consistent and accurate judgements. Furthermore, the National Standards ropiness is partially because every school reports numbers in a different table and format, which hinders their compilation and analysis. The National Standards roll-out timeline includes PaCT, and it is very much seen as a facilitator for building and gathering the information that will permit ranking of schools by achievement; schools whose students have low achievement against National Standards will be deemed to be “failing”. Not every teacher agrees with the research showing socioeconomic status and home life are way down the list of effects on educational achievement.

PaCT will integrate with the School Management Systems which hold data on students. PaCT will be run by the Ministry, and although it is intended to save teachers time that’s been lost to National Standards overheads (yay!), the teachers who use it don’t feel a sense of ownership of the exemplars (meaning they feel as disempowered as they did when NatStds was rolled out).

And, of course, the Ministry of Education is widely seen as a pack of overworked, bureaucratic, politically-driven fuckwits by the educational sector (“politically-driven” is, of course, the nature of the beast for a Ministry), and so any tool that is used to gather data is likely to also be Trojaned to give that data to the Ministry; suddenly the act of assessing a child is political and not purely educational. Furthermore, once the Ministry holds National Standards assessments broken down by teacher, it has the data necessary to start cocking around with performance pay and other things that the profession is highly allergic to.

Why did it cost $6M? In the words of the PaCT site:

The PaCT tool is being developed in consultation with the sector, and will be trialled within schools. The project has established a Teacher Advisory Group, comprised of teachers and principals, to provide advice to the project team regarding the development of PaCT.

The development is being undertaken by a number of external providers, drawing on the specialist expertise of various curriculum, psychometric, and technology groups. Each group is actively working with the sector in developing and testing the tool in regard to these areas. The project expects to engage and seek feedback from schools in over 500 interactions throughout the development process.

Consultation + external providers + expertise + 500 interactions = time + money + money + time = money * 4. My understanding is they’ve used an agile-like process, not the Novopay Mongolian Clusterfuck(tm) process. It’s due to be rolled out starting next year, and my sense is that the headlines are around turf protection and not IT project management (“They say PaCT amounts to a national test and are concerned about how the data will be used.”)

Hope this context helps! Any questions? I’m @gnat on Twitter.

2 Responses to “What is PaCT and Why Did It Cost $6M?”

    Nat this is superb – thanks.

    By lancewiggs on Jun 17, 2013

    I think the real ‘cost’ of PACT is not so much the money, but the implications of it all.

    PACT measures (thus values) only numeracy and literacy. The implication of this is that it narrows our curriculum.

    PACT assumes that learning is linear. The implication of this is a focus on areas of weakness (alleged gaps) as opposed to strengths.

    PACT (NS) levels come from working backwards from level 2 NCEA. The formula:
    All 5 year-olds = Year 12 minus 7 years
    The implication of this is that the labels At, Below, Above get distributed by age 6. Once ‘below’ is issued a child has to work twice as hard to get to ‘At’, as the ‘gap’ is cumulative. BUT REMEMBER this is only in numeracy and literacy. So if you did happen to have an edge in another curriculum area you can forget about that… You won’t have time to pursue that strength as much, with all your spare learning time being spent on remedial reading.

    PACT assumes tidy year levels – the implication of this is factory model pedagogy.

    PACT allows NS to become the focus of assessment. The implications are that newer teachers are now only assessing to the National Standards (at, below, above) and not to curriculum levels.

    PACT (NS) values Pakeha ways of knowing over Maori and Pasifika pedagogies. The implication of this is assimilation. (Whose standards?!)

    PACT values other assessment tools that assume achievement and success are something carried out in isolation (e.g. eAsttle writing), or worse, assessments that reward one clean correct answer (STAR, PAT). The implication of this is a future population who have been rewarded for rote learning and problem solving (in timed isolation) independently – as opposed to being collaborative, creative, critical thinkers.

    I could go on and on (in fact I probably have); the point being there are all sorts of costs going on – money being the least of my worries.

    By taratj on Jun 17, 2013
