In advance of the official updates to our guide and templates, I thought I would share where I got to with my thoughts on the Housing Delivery Test and the very first set of Action Plans.
These thoughts are the result of the dozen workshops we held throughout spring and summer with representatives from 60 councils, along with some follow-up work on a mixture of action plans.
We will be doing some more robust testing and thinking once the councils' action plans are published. This is a "seat of the pants" reaction to the many and various thought-provoking conversations I had with the wonderful delegates over the last 5 months.
To start with a few weasel words. This is the first ever cohort of action plans, and the timetable for their production couldn't have been much worse. The clock started without any proper lead-in on February 19th, and the six month deadline therefore falls in mid-August.
Not only is August just a dreadful time to get sign-off and agreement on documents, but the fact that meaningful actions require agreement and consultation with various other stakeholders means that the first iteration of these plans is likely to be low-key. Expect lots of "investigate further the reasons behind …" and "explore options with stakeholders" without much actual stakeholder engagement.
Part of what we asked councils to do was to understand their land supply and the causes behind slow and stalled sites. And it won't be a surprise to anyone close to the work that the world of monitoring is not a happy place, with much of it manual, painful, and difficult to marshal into something useful at short notice. #plantech please come back, all is forgiven.
Lastly, and this may be an obvious thing to say, there just isn’t a bunch of silver bullets lying around waiting to be fired by LPAs to “fix” the delivery of housing in a single bound. All councils are already trying a variety of approaches to boost housing, and most action plans will recount projects that are already to some extent underway.
Causes of housing under-delivery
"All happy families are alike; each unhappy family is unhappy in its own way." – Tolstoy, Anna Karenina
I can't now remember where I read it, but I was expecting to find that while there may be broad macro reasons for under-delivery, there would be a local and specific story for each unhappy major site. I was expecting a complex patchwork of causes, making my job of drawing out some common themes difficult.
Instead, what I found was that many councils take the view that the cause of failing the test is the test itself: it uses standard method numbers rather than locally appropriate numbers.
[as a slight aside, what I took away from many of these conversations is that the approach of Part 1 strategic policies / Part 2 allocations is a bad idea for many places. The regulatory goalposts move too quickly – the higher level plans move too slowly – the electoral cycle revolves. The allocations appear late (or not at all) which makes it difficult to bring forward big sites and leads to appeal friction. Far better for many LPAs to move to a simpler all-in and review every 4 years approach in my view.]
Slightly depressingly, when taken at face value, this suggests that the cohort are behaving towards the test in a predictable way. Don’t like the test result? Change the inputs to the bottom of the fraction and get a better result.
In the longer term I'm sure a more nuanced picture will emerge. It's possible that much of it will never appear in public documents, but we have had many fascinating and slightly hair-raising conversations about how delivery is tested at examination and how the various incentives align to create a conspiracy of optimism that quickly unravels.
This is the first time around the HDTAP loop and we were all learning together. I’m afraid I was quite unflattering about some of the action tables in draft plans. I struggled to articulate something more helpful until we had a very useful debate in Manchester about who the audience was for the action plan. Once the audience and purpose of the plan is clear, many other questions become easier to answer.
In short – you are not writing the HDTAP for the benefit of a civil servant in MHCLG. There are no marks for completeness, and no one is interested in your “list of things to do”. However, if the purpose of the action plan is to get some people to do things differently, write for them in their language and keep it tight so they don’t get bored.
In our KHub you can see my “top tips” that I delivered at our final “grand finale” event at AECOM’s office in London on the hottest July day ever. As someone said at the time, it would have been good to know them at the outset rather than the close but that is not the way inventing the art works.
It is obviously early days for the Housing Delivery Test as a policy. My informal canvassing at our workshops suggests that while the policy attracts criticism for being one-sided, the requirement to understand delivery from other perspectives [and, importantly, away from the context of a local plan examination] has been extremely useful.
The test could lead to a culture change more generally, where planners are forced to confront the fact that delivery is a team game that requires local knowledge, intervention and creative thinking. It might also have a useful knock-on effect on local plans – framing them more clearly as delivery plans.
However, all good projects start with evaluation, and I worry that the most important voice of all is missing. Taken simply, the purpose of the test is to push councils into taking action to enable and facilitate development that might not otherwise have happened. The people who are in a position to judge this are developers. We need some way of capturing their perspective to ensure that the HDTAP isn't just another annual report that no one reads.
As mentioned at the top, we learned a lot along the way and we know how to make our guides and templates stronger. And even without any more input from us, I'm sure the cohort of councils repeating the HDTAP process in November 2019 will be able to produce a better document than they could first time around, just by virtue of having more time for meaningful consultation and input.
There is the updated PPG too, with a fairly clear suggestion of “assessment and delivery groups” and some other changes that need a bit more thought. The change of stance on annual position statements makes it clearer in my mind than ever that we need to rethink monitoring more generally, and get the most possible value from the least possible work. The current situation with a mishmash of different timetables and requirements is intolerable – I will ask some of my new, geeky, monitoring friends how we might reorganise and improve this shortly.