‘…how much responsibility for agents’ poor performance lies at the council’s door – are we managing services that reward agents that do the wrong thing?’
Some agents are great. Others not so great. The poor ones submit shoddy schemes that require rework – and the cost of that rework is presently paid for with public money. [As an aside: if local fees happen, that rework will be paid for by the good agents. But that’s for another day.]
Earlier this year I wrote a piece asking if publishing data on how quickly and successfully different agents get planning decisions would lead to better quality applications. It’s an uncomfortable way of looking at the world – it’s a council’s-eye view, it bashes agents and leaves councils unaccountable for their part.
A better (balanced) picture
I have, instead, decided to look at the world from the customer’s (agent’s) end. I’ve asked: What’s my experience working with different councils? Do I have to play by different rules each time? Why does council ‘A’ decide my application quicker than council ‘B’?
What we’ve done
Taking data from some neighbouring district councils, we’ve looked at the processing times and success rates of agents that have submitted 10 or more applications across those councils. And, because not all work is equal, we have grouped similar work into three bands: A = quick/easy (certificates, NMAs), B = more complicated (minors, householders), C = difficult (majors etc.). We can now compare agents that do similar work and their experiences of working with different councils.
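The banding and comparison described above can be sketched in a few lines of Python. The band mapping, field names and the `agent_summary` helper are illustrative assumptions, not the actual analysis used here:

```python
from statistics import median

# Hypothetical mapping of application types to complexity bands --
# the post's scheme is A = quick/easy, B = more complicated, C = difficult.
BANDS = {
    "certificate": "A", "nma": "A",
    "householder": "B", "minor": "B",
    "major": "C",
}

def band_of(app_type):
    """Map an application type to its complexity band."""
    return BANDS[app_type.lower()]

def agent_summary(decisions, min_apps=10):
    """Median end-to-end days per (agent, council, band).

    `decisions` is a list of dicts with keys: agent, council,
    app_type, days (submission to decision). Agents with fewer
    than `min_apps` decisions overall are excluded, mirroring the
    10-or-more-applications cut-off in the post.
    """
    counts = {}
    for d in decisions:
        counts[d["agent"]] = counts.get(d["agent"], 0) + 1

    grouped = {}
    for d in decisions:
        if counts[d["agent"]] < min_apps:
            continue
        key = (d["agent"], d["council"], band_of(d["app_type"]))
        grouped.setdefault(key, []).append(d["days"])

    return {k: median(v) for k, v in grouped.items()}
```

Grouping by band before comparing is the key design choice: it stops an agent who only does quick certificate work from looking "faster" than one handling major schemes.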
What we’ve found
The agent-centric view – no surprises: there are good and bad agents!
[Notes: This is a list of agents working across a pair of district councils. ‘Days’ is the total end-to-end days taken to receive a decision, and the numbers across A–C are the total number of decisions within each band. If you were a punter choosing an agent, who would you call first?]
Question 1: if some agents can get things through quickly and successfully, why can’t they all?
Question 2: Which agents get quick permissions (and do they provide a service for my ‘B’ type application?)
Question 3: Is the difference down to good/bad agents or good/bad processes?
Same agent, different story
This box plot compares validation and decision days at two different (neighbouring) councils.
Question 1: Why does this agent experience a difference?
Question 2: The application going through Place A ‘catches up’ when you compare the decision days. Is there something these councils can learn from each other that has nothing to do with the agent?
Question 3: Is anyone bothered about validation taking twice as long if the decision is reached earlier?
As an applicant interested in a quick decision, it would be difficult for me to confidently choose this agent over another. I would rather choose which council my application got submitted to (!)
All Agent view
The box plot within each picture represents each agent’s (A–P) experience at two different places.
Same set of agents – decision times:
This is where it gets really muddy or clear, depending on which way you look at it. Agents sometimes get similar levels of validation, a few experience something like consistency in the time to decision, and, almost invariably, the council that validates slowest issues decisions quickest. What is more important? What costs more?
So what? Where are we going with all this?
You tell us. Who cares about this? We can demonstrate how clever we are by producing interesting pictures but why bother if all it amounts to is an interesting project? We are not interested in rides that have no destination. Here are the options.
The ‘nuclear’ option
Stack agents one on top of the other, publish the facts and let the market decide who to use. Quick, easy, not customer friendly, creates enemies and most scarily, might mean some agents no longer receive work.
The customer-centric option
Classic customer focus: understand the customer experience, act on the bits that I (the council) am responsible for, and, by taking this act of ‘leadership’, seek to influence and change the bad behaviour of your customers (the agents). Councils improve, agents improve and overall things get better.
Both of the above options are real and possible. The first is unpalatable, and the second lacks ‘bite’ – councils have agent forums to deal with these sorts of issues, and yes, this data may help them target their conversations, but will things change significantly unless we know whether this is hurting us or not?
The ‘Stellar’ moment
What is the cost saving if the poorest 30% of agents could be as good as the best 30%? Councils taking the lead and improving their end is absolutely the way to start influencing change among agents. But, and I am also absolutely convinced of this, unless we can put a ‘£’ sign next to the additional work that poor agents/processes cost us, it will be hard to get anyone to do anything significant about this.
So what’s next?
You tell me – I mean it. I am going to continue fine-tuning the work to date with our council group and, most importantly, try to create the £ sign for all of this.
My next instalment will be the one that tells us whether we have an interesting project or something that will force some widespread change.
If you would like to see / hear more about this work or can help me make it better, let me know.