It’s possible to get a bit of a feel for the planning zeitgeist from some of the email requests we receive. Increasingly over the last few months we’ve had requests for help from people wanting to understand their costs, their performance, and whether they stack up against their peers. We are currently working on a fairly chunky piece to do just this – and this post will help explain a little of what we’re up to and, perhaps more importantly, what we are *not* doing. I’ve covered some of this ground before, but my thinking seems so out of step with the world that I’ll repeat it until someone explains what I’m missing.
What is a “good” planning authority?
We don’t have the luxury of delving into the back-office systems of all planning departments for the real info. From our external perspective there isn’t much to go on. We have decision-making stats that obscure more than they illuminate (more still to come on this hobbyhorse). We have appeal decisions that are low-volume and only pick up errors of omission (not commission). We have periodic applicant feedback (last done in 2006, and if there is a collated version I can’t find it).
Beggars can’t be choosers, so what can we do with a series of imperfect stats? I was swimming recently at our local and newly renovated pool. On the snack machines there is now a small card that gives the nutritional breakdown of each product. From memory, there were levels of sodium, sugar, saturated fats and something else considered sub-optimal. Most importantly, each quantity wasn’t just displayed but ranked against the rest of the contents of the machine. Finally, all the options were listed in descending order – the truly-terrible-for-you consistently at the top, the only-a-bit-bad-for-you towards the bottom. I was quite struck with this idea – imperfect I’m sure – that allows you to compare the relative evilness of a wheat crunchie and a Mars bar. [As an aside, I remember being struck by how evil the Marathon bar was – despite being named after a really, really, really long run.]
[I’ve used the most up-to-date statistics (not very) from CLG and PINS for the following graphs. I’ve had to exclude some of the tinier authorities, and those without complete stats. I’ll make the workbook available if there is any demand.]
Back to our LPAs. Alongside each authority are four columns – majors, minors, others and appeals. For majors, minors and others, ‘high’ is good; for appeals, ‘low’ is good. We’ll rank their performance in each category, then rank them again treating each ranking as if it were a number. Come first in every category and your rankings will add up to 4, and you’ll be first overall. Come last in every category and your rankings will add up to 4 x 329 = 1,316, and you will not have done at all well.
It sounds a bit fiddly, but takes longer to explain than do.
Treating all these things as being equal (as opposed to weighting one type of performance as being more important than another) you can simply add up the ranks.
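The rank-of-ranks method above is easy to sketch in code. A minimal illustration in Python – the authority names and figures below are invented for the purpose of the example, not taken from the real CLG/PINS data:

```python
# Rank-of-ranks sketch: rank each column, sum the ranks, rank the sums.
# Authority names and percentages are made up for illustration only.
authorities = {
    "Alphaville": {"majors": 93, "minors": 96, "others": 98, "appeals": 23},
    "Betatown":   {"majors": 80, "minors": 92, "others": 95, "appeals": 24},
    "Gammaford":  {"majors": 91, "minors": 88, "others": 95, "appeals": 22},
}

def rank(values, high_is_good):
    # Map each name to its rank (1 = best) within one column.
    ordered = sorted(values, key=values.get, reverse=high_is_good)
    return {name: pos + 1 for pos, name in enumerate(ordered)}

# Majors/minors/others: high is good. Appeals allowed: low is good.
columns = [("majors", True), ("minors", True), ("others", True), ("appeals", False)]
per_column = {col: rank({n: d[col] for n, d in authorities.items()}, good)
              for col, good in columns}

# Sum each authority's four ranks, then order by the total (low = best).
totals = {n: sum(per_column[col][n] for col, _ in columns) for n in authorities}
league = sorted(totals, key=totals.get)
for pos, name in enumerate(league, start=1):
    print(pos, name, totals[name])
```

As in the post, an authority that came first in all four categories would total 4; one that came last in all four would total four times the number of authorities.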
And then when you rank the ranks – you get to what everyone seems to want – a league table. According to this – imperfect – ranking process the best 10 planning authorities in England are:
- South Bucks: Majors 93%, Minors 96%, Others 98%. Allowed appeals 23% of 53
- Hambleton: Majors 97%, Minors 92%, Others 96%. Allowed appeals 27% of 48
- Runnymede: Majors 91%, Minors 88%, Others 95%. Allowed appeals 22% of 50
- North Kesteven: Majors 82%, Minors 91%, Others 96%. Allowed appeals 19% of 42
- Rochford: Majors 93%, Minors 91%, Others 99%. Allowed appeals 29% of 28
- Cheltenham: Majors 93%, Minors 91%, Others 94%. Allowed appeals 26% of 34
- Barking and Dagenham: Majors 82%, Minors 90%, Others 96%. Allowed appeals 24% of 25
- Wigan: Majors 80%, Minors 92%, Others 95%. Allowed appeals 24% of 34
- Thanet: Majors 89%, Minors 90%, Others 95%. Allowed appeals 27% of 62
- Rushmoor: Majors 94%, Minors 79%, Others 96%. Allowed appeals 15% of 13
NB While these performances are undeniably, empirically fantastic (walk tall South Bucks) I don’t publish them thinking that this ranking represents reality in any meaningful way. And not just because it fails to account for statistical foibles like the unjustly treated City of London, who lost their solitary appeal and so are ranked last with 100% allowed. Read on! Not much more to go!
What else do the stats show us?
The temptation (at least in my head) is to look at a league table as if it represents an evenly distributed set of performances. This would imply that a difference of 10 places between 1st and 11th is roughly the same as the difference in performance between 100th and 111th. Looked at another way, is it as easy to go from half-way to top-quartile as it is from bottom to half-way?
“No” would appear to be the answer. Although the distributions aren’t entirely ‘normal’ (apart from the suspiciously uniform appeal stats) you can see that there are two tough realities here. Let’s look at just the majors:
The first (and probably hardest) lesson is that we can’t all be top quartile. No, really. It still happens with dispiriting regularity – and usually framed in a “I’ve been told we need to be top quartile” way. My suspicion is that it is quite a difficult thing to pin down – do you mean that your performance at some point in the future would be good enough to place you in the top quartile as currently calculated? Or do you just know what everyone else’s performance is going to be, so you can magically predict what the quartile threshold will be? And because it’s framed as being true at some point in the future, it relies on someone remembering to come back and check whether it happened. It’s just cobblers, and I reserve the right to get quite cross with this type of thinking.
The second is how little it takes to gain or lose a significant number of places in a league. Although the majors distribution is the gentlest, a difference of only 10 percentage points separates the quartile boundaries (i.e. you can drop from the top quartile to below the median by losing just 10 points). This is bound to happen from time to time given the small numbers of major applications these figures track – these numbers are volatile despite the heavy aggregation.
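The volatility comes straight from the small denominators. A toy calculation (the application counts here are assumed for illustration, not real figures): with only ten major applications in a year, a single decision slipping past the deadline moves the headline rate by ten percentage points – enough on its own to cross a quartile boundary.

```python
# Toy illustration of small-number volatility (assumed figures).
def in_time_rate(decided_in_time: int, total: int) -> float:
    """Percentage of applications decided within the target period."""
    return 100 * decided_in_time / total

before = in_time_rate(8, 10)  # eight of ten majors decided in time
after = in_time_rate(7, 10)   # one decision slips past the deadline
print(before, after, before - after)
```

The same single slipped decision in an authority handling 100 majors would move the rate by only one point, which is why comparing small and large authorities on these percentages is so treacherous.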
So what’s the answer?
None of this is rocket science – school league tables have provoked much thought and better thought-through criticism than I’m able to generate. In short order:
- the perverse outcomes from using NI 157 will not go away, and any further increases to the targets will make things worse. To call for 100% compliance with the targets (as some people who should know better have recently done) is just to go from bad to worse. Our work should not add any further emphasis on this indicator.
- thinking about performance requires an acknowledgement of the different viewpoints of the stakeholders. What is important to an applicant will differ from what matters to a rate-payer or to English Heritage.
- to try and retrofit any aspects of performance onto the publicly available data is bordering on pointless. We need access to application-level data held in the back office, not an average of an average compliance with an arbitrary target.
Therefore the work we are currently involved in tries to address these issues. It doesn’t have a name as such – it is being talked of as “Managing Excellent Planning Services”, inevitably abbreviated to MEPS. Key headlines:
- Cost – using the esd toolkit and the rough-and-ready approach to costing espoused in the NPIP work last year (despite making systems thinkers mad), this tracks how much money it takes to generate the performance
- Time – as experienced by the customer, including all the fun-and-games that happens pre-validation
- Free goes – what work are you doing for free
- Decision-making – what (and how) are you deciding
- And all measured using your own back-office data and benchmarked against peers, rather than just setting up some goalposts
We hope to bind it into other strands of work currently underway:
- the performance management improvement work by the IDeA more generally (PMMI)
- the customer survey work by CLG
Then, in true PAS fashion, we will take it to the sector and listen (without getting too upset) as people rubbish, ridicule and improve it. Between us we will make something useful. Watch this space.
All very helpful, Rich. I couldn’t agree more about the need to start measuring something helpful that reflects both the reality of the customer experience and the difference we are trying to make.
We are starting to report on average times for applications, numbers of refusals, invalids (terrifying) and withdrawals. I would be interested to know if others are either measuring this sort of stuff or planning to do so.
Also v. interested in the development of some form of cost/value model. With the spectre of 10 lean years for public sector finances, we really need to find an effective way to identify our cost drivers, recognise where value is added, and share effectively with others to understand where we are and how we might work differently in the future.
We have been measuring numbers of invalids for a while and you are right, Ed, the numbers are terrifying. I am hopeful that the KP review changes will simplify the system so that agents and staff can understand the requirements.
We also measure % of refusals and % of appeals.
The good news that I was told today by a nice man in DCLG is that top quartile “is not a target”. This message has not quite reached the outside world, though.