John Seddon is “irritating”, but that doesn’t mean “wrong”

I am a long-term fan of John Seddon’s brain. His first book “I want you to cheat” remains one of my favourites, and I’ve probably reread it every year since it was published. I also subscribe to his newsletter; this week he says:

“In the wake of the new government’s abandonment of central targets and specifications the IDeA (which, I have to admit, has changed its name, but I love calling it ‘no idea’) fills the void by recommending that we do the same wrong thing. The boss argues we need to establish a series of targets and benchmark services on unit costs. To support his argument he cites an opinion survey amongst ‘performance managers’ and ‘policy officers’ in local government; what would you expect them to say?”

Now PAS is about to launch our biggest ever project in November. What sort of project? I’m glad you asked. It’s a benchmarking project. This isn’t the first time that John has lambasted “activity based costing” – what follows explains why I’m not losing any sleep over his latest angry little missive.

Our planning benchmark
In case you missed it, our benchmark runs in November. A really short explanation of it is:

  • A timesheeting exercise against a set of 40-odd standard categories of activity
  • A run-through of the direct and indirect costs incurred by the authority in delivering its services
  • Some questions about volume of work, fees and income
  • Some questions about the authority’s approach to producing strategic plans
  • A short questionnaire that captures how applicants feel they were treated

The results for the authority that fall out the far end are:

  • Some unit cost information (cost / volume)
  • A picture of where the spend lies (as a percentage)
  • An idea of the nett cost of processing some aspects of the work (given that some aspects of the service attract fees and other income)
  • An imperfect user questionnaire
  • An imperfect summary of their use of resources to make strategic plans

All these things are useful by themselves. Of course, they become an order of magnitude more useful when compared with a group of peers chosen by the authority as being similar.
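The figures above boil down to simple arithmetic. As a minimal sketch, here is how unit cost, nett cost and spend shares might be computed from an authority’s return – the function names and all the figures are invented for illustration and are not PAS’s actual model:

```python
def unit_cost(total_cost, volume):
    """Cost per unit of work (cost / volume)."""
    return total_cost / volume

def nett_cost(total_cost, fees_and_income):
    """Spend left over after fees and other income are netted off."""
    return total_cost - fees_and_income

def spend_shares(spend_by_activity):
    """Where the spend lies, as a percentage of the total."""
    total = sum(spend_by_activity.values())
    return {activity: 100 * cost / total
            for activity, cost in spend_by_activity.items()}

# Invented example figures:
print(unit_cost(420_000, 1_200))    # cost per application
print(nett_cost(420_000, 150_000))  # spend less fee income
print(spend_shares({"majors": 210_000, "minors": 140_000, "other": 70_000}))
```

None of these numbers means much on its own; the point of the peer group is to give each one something to stand next to.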

Why benchmarking remains legitimate
I didn’t read Rob’s article in LGC (no link – it is subscriber content). I’m not here to defend the new boss – he strikes me as someone who can fight his own battles. I am happy to defend my own approach to benchmarking for the following reasons.

We don’t set targets: We don’t (ever) lay down rules about what “good” is. We don’t use this process to make recommendations, or have opinions about what the authority should think or do. This isn’t because the measures are pointless (far from it!) but because it doesn’t make sense to create arbitrary targets and apply them to a variety of organisations all doing a similar but different job. Let the authorities decide. Do they need to improve? It’s up to their priorities and their resource allocation decisions. We are in interesting territory – the results of benchmarking could lead to authorities deciding to lower standards to save cost.

We are cheap: Our benchmark is cheap to do. We have the luxury of being grant funded (and therefore free at the point of consumption), benchmarking scales really well and the bulk of the input is opportunity cost. What that means for the authority is that they only have to write a cheque for about 600 quid.
What follows, in terms of thinking / testing / challenge, can make it more expensive, and the benchmarking doesn’t itself change anything. But £600 for detailed knowledge about yourself alongside places that are like you just makes sense – leaving aside the fact that the process might help you understand the implications of possible changes to the fee regulations.

We recognise that all we’re making is a snapshot, not providing simple answers: We work to a maxim of “roughly right, not exactly wrong”. Common sense, pragmatism and a sense of common purpose have been our guides – right from the very earliest days with our lovely pilot authorities.

Benchmarking = mediocrity?

I continue to value John’s brain, but we’re not continuing benchmarking through blind faith, or because we’re afraid to acknowledge sunk costs (or even because we’re wrong-headed). Actually, I think John is probably correct – many bad things could flow from putting faith in unit costs. But this is a straw man. Blind faith in benchmarking, like handing a consultant a blank cheque, is an abdication of management responsibility (as well as being wrong).

To take an example from our world: it is very likely that a planning authority could process units of work for 10% less than some kind of target we set. Would they be “good”? Well, if they achieved this stellar performance by breaking a development up into lots of mini-pieces, or by applying lots of conditions, each of which required its own subsequent discharge application, their customers might not think so.
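The arithmetic of that gaming is worth spelling out. With invented figures, the same total spend looks far “better” on unit cost if one development is split into several mini-applications, because the volume goes up while the work does not change:

```python
# Invented figures: one development, total processing cost £5,000.
total_cost = 5_000

# Handled as a single application: £5,000 per unit.
print(total_cost / 1)  # 5000.0

# Split into five mini-applications: £1,000 per unit –
# the unit cost falls 80% with no improvement for the customer.
print(total_cost / 5)  # 1000.0
```

A target set on the unit cost alone would reward exactly this behaviour, which is why the number has to sit inside a bigger conversation rather than act as a target.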

Will our benchmarking catch this latest kind of perverse behaviour? Not this year. But next year – when we’ve built our high capacity stats engine – all sorts of clever things will be available. And it will continue to be cheap, targetless and a snapshot to be used as part of a bigger process.
