Agent Accreditation is wrong – discuss

Anyone who has ever seen me deliver presentations will be unsurprised to learn that I prepare what I think quite carefully, but not so much how I’m going to say it. At our opening PIPE event I mentioned (in an unplanned aside) that I thought agent accreditation was wrong. Several people challenged me on this, so I offer my opinion in two parts. Today I’ll try to set out why these accreditation schemes are fundamentally wrong. Shortly I’ll share some thoughts on what might be a better way.

Even more than usual I’m grateful to colleagues in local govt, several of whom have shared their thoughts and inside knowledge on agent accreditation schemes.

How do agent accreditation schemes work?

Briefly, the premise is that there are good agents and bad agents. Good agents can submit good applications that are ready to proceed, bad agents submit sloppy applications that require rework before they can go on to consultation.

Good agents submit a certain number of flawless applications to a particular authority – usually 3. Following this test, they receive a “gong” – they become an official accredited agent. Presumably their name is listed somewhere on the council website, and they can use this gong as evidence of some kind of competence in their own promotional material. Because they are now an accredited agent, any subsequent applications they submit bypass the initial validation process and go straight to a planner – reducing cost and time.

So what’s the problem with agent accreditation schemes?

Before we begin, just take a short pause. We are (all of us) conditioned to think of process as bad. Process = red tape. Any time a process can be removed, don’t think or question it – just strike it through. I’m asking you to set aside these Pavlovian responses for a moment. Let’s begin by examining the claims made for these schemes:

Accredited agents bypass validation

For those who are not close to the process of planning applications, this is the skinny: the validation process is almost the first thing that happens to an application. Recent news coverage would suggest that it involves collating the newt surveys, and ensuring that the design codes have been modelled in sonar for the bat consultation groups.

Actually, the bulk of the validation process is about ensuring that the proposed development is clear and complete, so that neighbours can understand what it means for them. A surprising number of applications fail to include all the sections, or use projections in a slightly interesting way.

It is the duty of the planning authority to ensure that the correct information is made available to consultees. So I asked a long-term accreditor of agents: what happens when an unchecked application goes out for consultation and it’s wrong? The temptation is to think that somehow the risk has been transferred from planning authority to agent. But that’s not so. When I pressed, it became clear that the council could not (and would not) bear the risk of maladministration. So they gave accredited agents a “light touch” validation rather than bypassing validation altogether.

How much time is saved by moving from a “full and thorough” process to a “light touch” validation? I suppose it depends on the application. I can tell you averages, but that’s not what’s important. What’s important is that when you discuss time and productivity with validators you’ll discover that they already know who is a good, competent agent and who isn’t. They already offer a “light touch” service to agents they know and who understand how that council likes things to be done. The accreditation scheme buys you nothing – your people are already differentiating between good and bad agents.

Accredited agent schemes reduce cost and time

It stands to reason that reducing process (no! let’s call it bureaucracy!) reduces costs. And, perhaps I’m wrong in the previous section. Maybe the validation process really is bypassed in councils and I’ve just found a couple of makework people in backwards councils.

How do we reduce costs? And has anyone done it?

Let’s start with an awkward fact. The validation process takes up 4.2% of the resources of a planning department. Really. You think you can reduce headcount from this? When most departments have a headcount of about 40? (4.2% of 40 people is fewer than two full-time posts.)

And the agent? The accreditation process demonstrates that the agent can jump through the hoops, not that there are fewer (or simpler) hoops. Moreover, she is now bearing the risk of the consultation process being found faulty and therefore really annoying the council*. I can’t see that this saves cost for her either. And nowhere does there appear to be any critical review of whether the council has adopted a sensible approach.

And time? I don’t know. It’s entirely possible. Let’s not guess. We already know whether online applications are quicker than paper. Someone will give me a dataset at some point in the future and then we’ll know.

Accredited agent schemes improve service

The accredited agent gong feels to me like something close to nudge theory: applicants will choose agents who have a gong, therefore all agents will want one, and so the quality of applications improves.

But. Let’s be clear about this. We are talking about a small part of a whole. Applicants should not need to know (or care!) about the validation process. Does the gong demonstrate that applications are determined quicker? Or that they’re more likely to be approved than the average? [Spoiler: see more on this in my next post …]

More worrying for me is the council-centric view of life that goes along with this position. This is a monopoly service that sets out what “correct” looks like and then marks agents against its own version of reality. The agent accreditation scheme is a reward for playing the game by our rules. And are our rules congruent with the needs and wants of our customers? Ah.

And, in some other work that I’ve never polished up for publication, I’ve done the analysis comparing agents across councils. Some of my forward-thinking pilot people recognised that their validation requirements were open to interpretation, so put their own performance up for comparative review. The dataset is smaller than I’d like, but as you would expect there are agents who achieve good rates of “valid on day 1” at council A and poor rates at council B. Councils should not further exercise their monopoly by treating “them” as the problem and never “us”.

In conclusion

It’s not often we say “that’s wrong” in the improvement sector. It feels slightly non-collegiate. And it is true that I’ve written to provoke discussion. But let’s set it out in summary:

The gains are (at best) marginal. We don’t have to argue this out – someone will give me a dataset and then we’ll know. But my off-the-record sources tell me that it does not do what it says on the tin, although I feel sure that the agent forums and other, more open forms of communication that accompany these schemes must themselves improve the situation.

No one has talked about removing accreditation. There is an inbuilt problem with accreditation, even if it is dressed up as impartial. If the gongs are valuable, then losing one is going to hurt. Councils have a terrible track record of coping with challenge, and if the gong does end up providing competitive advantage then a new source of challenge can be expected to appear.

This scheme has been set up with the wrong “customer”. The scheme puts the council’s validation service in the role of customer – the scheme delivers a reduction in work to the validators. This is the source of the problem. [As an aside, I have another improving book to commend to you – especially if you struggle to define your customers.] I’d say more on this, but it is going to be covered properly in my next post.

But, just so we’re clear, these are all fixable problems. And this is not a “pop” at the places that have introduced an accreditation scheme – I know most of the people at the heart of this and they are genuine in their struggle to make things better for everyone. And maybe I’m wrong – discuss.


10 thoughts on “Agent Accreditation is wrong – discuss”

  1. We operate an accredited agent scheme and it was set up mainly for Householder applications to cut down on the number of “rogue” agents. It has always been about improvement. We work with unaccredited agents who wish to get involved to help them improve the standard of the submissions. This was in a time when work was plentiful and lots of people with little experience were declaring themselves as agents and providing a poor service for applicants.
    We add to the scheme by holding agents’ forums where we alert our regular agents – accredited or not – to forthcoming changes in legislation and guidance, and include them in our consultation processes. Many of our agents are one-man bands who appreciate the contact with other agents and the chance to keep up to date with changes in procedures and legislation.
    We do remove agents if we have to declare their applications invalid during the process of the application (2 critical issues or 4 minor issues a year). We did operate the scheme as a “Fast Track” when it first began but dropped that after the first 2 or 3 years.
    We clearly inform applicants, when advising them of the scheme, that these people have demonstrated a competence in validation legislation and Waverley’s local list; it does not mean that the application will be found more acceptable.
    However, the closer working relationship this fosters with planning agents does mean they can much more accurately point applicants towards what is likely to be acceptable in planning terms. It also means they are much more likely to call us if they hit a difficult application and are not sure what they need to submit. This saves time, money and energy for all concerned.

  2. As we have been running an AA scheme on a trial basis for 9 months, a few thoughts on Richard’s blog. First up, a few facts. Our scheme is for Householder applications only, and during the 9 months we have decided around 650 HH apps, of which 85 have been submitted by our AAs. For the AA apps we are getting 81% to the case officer within 24 hours, and 67% of the applications are decided within 6 weeks (compared to 11% for all HH apps). The approval rate across all HH apps is 86%, and for the AA apps only it is 93%.
    Our scheme (and from others I have looked at, it is the same) doesn’t bypass the validation stage. What we do is have a streamlined validation process where we check the fee (easy for HH apps as it is always the same), the form, the certificates and the site plan. We don’t check to see that it has the right supporting info or that the plans match what’s proposed. The reasoning being that to be accredited you need to have demonstrated that you are capable of getting these things right, so why check it?
    If I was to use an analogy, I would say that it is a bit like being in a supermarket and making the choice between waiting in a queue to have your shopping scanned and packed for you, or moving to a self-service checkout and doing it yourself. In both you would still pay, but the self-service is likely to be quicker.
    Turning to the “why are we doing it” question, the PAS benchmarking highlighted what we probably already knew: that the validation process took a lot of time and imposed significant costs on the service. I don’t disagree that it’s actually only a small % of the costs of running a planning application processing service, but would argue that the marginal gains of improving every part of your process, even by a small percentage, do add up.
    BM 2010 showed us to be above the average of our group both in terms of hours spent on validation and how much each hour costs. This caused us to rethink the way we did validation, both the process and who did what. BM 2011 showed that although the average costs per application had increased across the piece, we were now below the average for the group – not sure whether it was our changes or better time recording throughout the BM process that made this difference.
    But, whatever the reason, the data still showed that we were spending significant resources on the process, with around 65% of applications being valid first time, meaning that we were sending around 1600 requests to agents asking for more information/plans etc.
    Looking at the reasons why applications were not registered first time, it was apparent that often it was because of simple errors, eg the wrong fee, the wrong form, the wrong certificate, missing required documents (eg D&As) and plans not reflecting what was applied for or being inconsistent (eg a floor plan not matching an elevation). When looking at data for how our regular agents performed, you could see that some were always getting their applications validated first time, and some (and these were making regular applications) were getting none validated first time.
    Given the requirements of the DMPO, simply sending the invalid applications back with an invalid stamp was not an option, so we looked at possible actions to improve the quality of applications as submitted, and an AA scheme was one of them.
    At its simplest, it does give agents an incentive to achieve a standard when submitting their applications, and although it’s been popular with agents (we have 26 AAs), the number of applications made by the AAs is still a small % of the total, and it is not going to make a real dent in the overall time and costs of the validation process or do much to increase the ‘validated first time’ number.
    We do think that AA HH applications will have less contact time than non-AA applications, on the basis that an agent who understands the validation requirements is likely to have a good understanding of what will get consent, and we are now time recording on a case-by-case basis to see if that’s the case. The higher approval rate for AA apps might also reflect that, but as there are other factors at play, it’s not a claim you can make.
    One thing that you can’t count, though, is the relationship-building point that Val has touched on in her post. Although we have had agents’ forums for years, they do tend to be dominated by the bigger planning and architectural consultancies, and our attempts to reach out to the smaller practices and those who specialise in smaller-scale work have not worked. But with the AAs we are starting to change this, and we now have a closer working relationship; this includes good attendance at our 3-monthly meetings, attended by the officers (planners and tech support) who deal with the AA applications and the managers who make the decisions (to grant or refuse).
    On the ‘gong’ point, our scheme is clear about what being an AA means, and I am unconvinced that an applicant will choose an agent just because they have a gong. Is it not more likely that they will look for ‘trip advisor’ type recommendations, and then follow it up with a bit of research online to see how that agent has performed, ie the quality of the application and the success rate (% of approvals)? Richard does refer to a new source of challenge if there is a competitive advantage, and this was discussed in the Planning Portal Director’s blog, ie whether such schemes are unlawful, illegal and discriminatory.
    So in conclusion, and although I am waiting for Richard’s thoughts on the ‘better way’ (especially as we have a meeting in April to decide whether to carry on or not), I still think that such schemes have value, and do save money.

  3. Pingback: A replacement for the accredited agent scheme | Planning Advisory Service

    • A few thoughts on Richard’s suggestion.

      First, the essence of the scheme is that with full public information on how agents perform against certain criteria (ie % validated first time, % determined in 8 weeks and % approved), prospective clients will make informed decisions on who to employ, and once they start doing that, the bad agents (ie those who score the lower percentages) will either change or disappear.

      While I have no argument with the logic that better information should enable better decisions (by potential clients), all of the information Richard is suggesting we publish is (and I can only speak for Bristol) already in the public domain. Admittedly it is not in one easy-to-read spreadsheet/league table, but a couple of clicks on the web will get you these details, and if you are looking at a couple of agent options, getting their last 12 months’ work is easy.

      But this all hangs on customers making decisions based on such stats (or indeed on an AA gong). And, I’m still not sure that clients would do this over and above a ‘trip advisor’ type recommendation.

      So where does this leave us? Well, it is something to feed into our decision on whether to make our AA scheme permanent or not, so we will have a go at producing the stats spreadsheet Richard suggests, for agents making Minor and Other applications. We can then see what it looks like, discuss it with our agents (both the Accredited Agents and the others) and get their take on it, ie what their view would be on us publishing the data, and whether they would use it as a marketing tool.

      Meanwhile it would be interesting to hear the outcomes of Richard’s R&D (not sure the blog said what this was?) and of others’ experiences.

  4. “But this all hangs on customers making decisions based on such stats”

    Exactly. This is the thing to work back from, as all the data gathering and wrangling are trivial.

    And I reckon there are two things in our favour:

    – we have “.gov.uk” web sites that google really likes (try putting “best planning agent bristol” into google and you have the top two links)
    – the agents themselves will want this to work.

    Let us know what your agents say. I think a couple of schemes along these lines are going to go public in the next month or two.

    • Swindon Borough Council operates an accredited agent scheme. Like Bristol, it is for householder planning applications only. As we see it, the main advantage of the scheme is that it enables us to devote (a little) less staff resource to low-impact householder apps and (a little) more time to bigger schemes. In this it has been a success. But the jury is out on whether the scheme has improved the quality of planning submissions in Swindon.

      My concern with our accredited agent scheme is the potential confusion it may be creating for the public. Swindon Borough Council’s Building Control Service is part of the LABC Partner Agent scheme, and our Trading Standards Service operates the Buy With Confidence scheme, which a number of local architects and plan-smiths have signed up to. Our Accredited Agent scheme is effectively a third Swindon Borough Council initiative to push another set of agents onto the Swindon public. This can’t be good.

      We have attempted to present our Planning Accredited Agents and Building Control Partner Agents in one place on our website at http://www.swindon.gov.uk/planning/householder but we are conscious that this approach is far from perfect. Does any LA out there operate a joined up approach to flagging up good local agents to the public?

      • I started at the other end and asked google – thinking that this is what most “real” people would do to start.
        The good news is that when I search for “good planning agents in swindon” your page comes second.

        I’m not a real webmaster but I’d suggest you’d get better results by having a smaller, shorter page that uses less formal language and covers only the choice that an applicant has to make.

        The other suggestion is prompted by the fact that the RTPI online directory of planning agents is the 5th search result. We should be working with them on this.

        I hadn’t thought about the multitude of standards regimes until you brought it up. I suppose that my instinct is to dump the others (which don’t really have any meaningful quality feedback loops) but that’s easy for me to say.

  5. We have had a look at Richard’s suggestion on producing ‘league tables’, and discussed it with our Accredited Agents and the wider Planning User Group.

    First, on the stats/number-crunching: we focused on agents who had made Minor and Other applications only. In my mind this was just down to pushing a button, but the reality was that it took a lot of effort and data cleansing to get data that we felt was reliable.

    We used the ‘more than 5 applications per year’ threshold and looked back over 12 months. We put all the data in a spreadsheet with the 56 agents who met the 5+ applications per year threshold. We then did 3 ‘sorts’ of the data, and found quite a range of performance, ie:

     % Validated first time – 92% to 0%
     % Approved – 100% to 33%
     % decided within 8 weeks – 100% to 31%

    Occupying the top 10 positions in each category were 19 agents. One agent appeared in all three categories, and 4 appeared in 2 categories.
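
    [To make the “three sorts” concrete, here is a minimal Python sketch of the calculation described above. The agent names and records are invented for illustration, not taken from Bristol’s actual extract; a real run would read a 12-month export from the back-office system.]

```python
# Minimal sketch of the league-table calculation: three percentages per
# agent, a minimum-applications threshold, and one sort per measure.
# Agent names and records below are invented for illustration only.
from collections import defaultdict

# One tuple per decided application:
# (agent, valid_first_time, approved, decided_within_8_weeks)
applications = [
    ("Agent A", True,  True,  True),
    ("Agent A", True,  True,  False),
    ("Agent A", False, True,  True),
    ("Agent A", True,  False, True),
    ("Agent A", True,  True,  True),
    ("Agent B", False, True,  False),
    ("Agent B", False, False, False),
    ("Agent B", True,  True,  True),
    ("Agent B", False, True,  True),
    ("Agent B", False, True,  False),
    ("Agent C", True,  True,  True),   # below the threshold, so excluded
]

MIN_APPS = 5  # the "5+ applications per year" style cut-off

by_agent = defaultdict(list)
for agent, *flags in applications:
    by_agent[agent].append(flags)

def pct(rows, index):
    """Percentage of an agent's applications where the given flag is True."""
    return round(100 * sum(r[index] for r in rows) / len(rows))

league = [
    {
        "agent": agent,
        "% validated first time": pct(rows, 0),
        "% approved": pct(rows, 1),
        "% decided within 8 weeks": pct(rows, 2),
    }
    for agent, rows in by_agent.items()
    if len(rows) >= MIN_APPS
]

# Three 'sorts' of the data, one league table per measure.
for measure in ("% validated first time", "% approved", "% decided within 8 weeks"):
    print(measure)
    for row in sorted(league, key=lambda r: r[measure], reverse=True):
        print(f"  {row['agent']}: {row[measure]}%")
```

    [The arithmetic is the easy part; as the comment notes, the real effort is in the data cleansing needed before records like these can be trusted.]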

    Turning to the big question, i.e. what did the agents think, the best way I could describe their reaction was ‘unenthused’. The discussion at the user group is captured in the minutes (http://www.bristol.gov.uk/page/planning-user-group) and we will discuss it again when the agents have had more time to reflect on what this means, and to look at some of the trials that are going on.

    One comment recently received perhaps typifies the reaction:

    ‘I’m not sure about the league table idea as I think while it’s nice to have our names on a list on your website, in most cases clients probably choose Architects or designers by other recommendation….”

    • Thanks Bryan, I think you’ve laid out the issues very clearly.

      Thus far, I think we’ve established that this is technically possible, that there is a real issue here and there will be a reward if we crack the problem.

      But, it feels there are some concerns about going to the next step of being able to write:

      “The head of planning in Bristol today named the top 3 agents for domestic work, saying they consistently achieve excellent performance for their customers. If you want a good planning agent in Bristol, I recommend one of these guys.”

      If we can’t affect clients’ choosing behaviour, then this will never really fly (although there is a part of me that hopes that agents are professionals, and the peer-pressure thing will work to a degree).

      I think there are probably two routes from here:-

      – Councils individually become more relaxed about recommending Agent ‘A’ over Agent ‘B’. Speaking more clearly. More findable for people making choices.
      – Councils collectively build a brand that has something approaching national coverage that promotes good agents. Maybe with the Portal. Or LABC.

      A rough reckoning from the benchmark says that we collectively spend £70m on receipt and validation, so it’s worth struggling on with. Any other thoughts?

  6. I agree that there is something in this.

    But I would caution that some of the things that LPAs might think are important will not be what the customer sees as important. Nor will they be things that other Council Service Areas consider to be important.

    For example Building Control is likely to say that an agent’s ability to produce accurate structural calculations is more important than the agent’s ability to get an application validated first time. I would agree with my Building Control colleagues on this.

    Any collective Council approach to promoting particular agents will need to consider this. We have attempted (albeit very crudely) to have a joined up approach between Building Control and Planning to promoting good architects and agents in Swindon within our Planning Guidance for Householders webpage. It applies to domestic extensions only. It is far from perfect, but it’s a start.
