Online self assessment (or not)

I have a slightly Jekyll & Hyde relationship with the PAS online self-assessment tool. It was being developed while I was working for a local authority, and I’d volunteered to be part of its user-testing panel. As a punter, I thought it was great – a useful evolution of the paper-based self assessment we’d previously been doing. It (and systems like it) may be about to get a bit of a shot in the arm.

The online self assessment (I’m struggling not to call it ‘OLSA’) has been on my list of things to sort out for a while. It’s reached the kind of “kill or cure” moment that ultimately has to be artificially invented for any product that isn’t commercial. I was prompted to think of it again recently when I read about, and talked to, John Hayes at the IDeA about the plans being made at capital ambition. What follows is my reflection on our experiences and a few ideas.

What is the purpose?

To begin at the beginning, why should we bother generating this sort of thing, and why should anyone else bother taking part in it? In line with traditional expectations, I’d suggest that participants should expect to get:

  • a sense of shared values and critical success factors
  • a standard against which they can periodically assess themselves and their progress
  • an external challenge that they can use to parcel up some change initiatives
  • a sense-check that they’ve covered all their important bases and are approaching things in a way that recognises risk / reward

Similarly, we like it because:

  • it can help knit together our work into a consistent framework
  • it can be used to identify high flyers that might want to take part in a case study
  • results can be aggregated to give us a snapshot of the entire sector
  • it can be a good can-opener and a way of cross-selling other types of support

What have we learnt so far?

There are many points that we could make about making an online self assessment a success – most of them are fairly obvious:

  • it might look like a technology project, but it isn’t.
  • it will not run itself, and people will need help / support / encouragement (this is not just a one-off investment)
  • the benefit of being able to benchmark against peers requires a critical mass – these early adopters are crucial and are your marketing collateral
  • the longer it takes for users to receive any value the more likely they are to drop out
  • it can be lonely and boring – don’t make it worse by asking people to input information that is already in the public domain

One of the tricky questions put by John is deceptively simple. Underpinning the whole OLSA is the well-understood process:

  1. create benchmark
  2. assess against benchmark
  3. gap analysis
  4. project plan

(repeat until perfection is obtained)
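The cycle above can be sketched in a few lines of code. This is a toy illustration only – the criteria, the 0–4 scoring scale, and the scores are all invented, not taken from the actual OLSA:

```python
# A minimal sketch of the assess / gap-analysis / project-plan cycle.
# Criteria, scale, and scores are invented for illustration.

def gap_analysis(scores, benchmark):
    """Return the criteria where self-assessed scores fall short of the benchmark."""
    return {criterion: benchmark[criterion] - score
            for criterion, score in scores.items()
            if score < benchmark[criterion]}

benchmark = {"online payments": 4, "e-consultation": 4, "document access": 3}
self_assessment = {"online payments": 4, "e-consultation": 2, "document access": 1}

gaps = gap_analysis(self_assessment, benchmark)

# Each gap becomes a candidate project, biggest shortfall first;
# the cycle repeats after each round of delivery.
project_plan = sorted(gaps, key=gaps.get, reverse=True)
```

The point of writing it out is how little of the value lives in the loop itself – everything interesting is in where the benchmark numbers come from, which is the question the rest of this post circles around.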

Is this model still useful and relevant? I’d argue the answer is “it depends on the benchmark”.

When is a benchmark not a benchmark?

Back in the day, PARSOL published a set of standards for planning services that evolved into our planning service benchmark. This was in the ‘pendleton’ era when points meant prizes – or, more accurately, when offering a suite of e-planning components resulted in a cash award called planning delivery grant. It was meant to accelerate the take-up of e-planning services by removing the need to demonstrate the ‘business case’ usually required for capital expenditure (and it did).

The latest incarnation of the benchmark has been updated to incorporate the principles of the comprehensive area assessment (CAA). We’ve received positive feedback on it, and I think it is genuinely useful. However, it’s not a benchmark. Benchmarking is the comparison of the performance of similar processes across organisations. Often the organisations are similar (I compare my validation process with yours), but comparisons can also be illuminating when they are not (I compare my use of customer feedback with Amazon’s). This is not a quibble about words. Benchmarking requires comparative measurement – not just measurement against an arbitrary target (13 weeks, anyone?) but against a real-world process carried out by real people for real customers.

This is important because it guards against the big risk for organisations like ours that help define standards. The law of entropy states that all systems decay and lose energy, sometimes expressed as nature tending to replace ‘order’ with ‘disorder’. In local government, I’d suggest that our systems decay from being focussed on outcomes to being focussed on internal process. Even while we’re aware of this tendency, and approaches like OBA provide a model for stamping it out, it feels inevitable that something that starts out as a good standard becomes bloated as subsequent hot potatoes (climate change, credit crunch) are stapled onto the side. (I must do something about my use of metaphor.)

The new ‘notepad’ and associated metrics are therefore a fantastic improvement on our existing benchmark. For me, this move away from what is only one step removed from opinion is the most exciting part of the capital ambition idea. It may not be new – wasn’t CIPFA supposed to do something similar years ago? – but the combination of the OLSA cut with real metrics feels like a winner. It does, however, require metrics to be collected at a departmental level: a local authority may spend 9% of its staffing budget on agency staff, but this figure might mask a 60% spend in one department as against a 3% spend in another. Some types of agency spend are probably associated with risky outcomes, others not. The OLSA should hide some of the statistical mashing needed to benchmark (say) the london planning services or to compare (say) the authorities in the north-east with those in the south west.
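The masking effect is easy to demonstrate with a few lines of arithmetic. The department names and budget figures below are entirely invented; the point is only that an authority-wide ratio of roughly 9% can coexist with departmental ratios of 60% and 3%:

```python
# Invented figures: staffing budget and agency spend per department (£).
dept_staffing = {"planning": 1_000_000, "housing": 8_000_000, "legal": 1_000_000}
dept_agency   = {"planning":   600_000, "housing":   240_000, "legal":    30_000}

# The authority-wide ratio everyone quotes...
overall = sum(dept_agency.values()) / sum(dept_staffing.values())   # ~8.7%

# ...versus the per-department ratios it conceals.
by_dept = {d: dept_agency[d] / dept_staffing[d] for d in dept_staffing}
# planning: 60%, housing: 3%, legal: 3%
```

This is why the aggregation has to happen inside the OLSA, from departmental data upwards, rather than being fed a single authority-wide figure from which the departmental picture can never be recovered.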

My £0.02

There are three parts to this venture: some standards, a community and a place to capture and compare data. 

I use ‘money saving expert’ when I have one of my occasional attempts at being a grown-up about money.


This system runs on peer-generated trust – would you bother to answer Phyzelda? Well, they are a recent but active contributor who has also been helpful to other members. The site uses a hierarchy (based largely on usage) along with an easy way to capture and record ‘thanks’. It also captures how many eyeballs are on each post (useful for letting people know that the site is active even when there isn’t a response). Lastly, the informality of using nicknames is problematic for some people, but it removes the risk of contributors feeling they might be seen as speaking on behalf of their employers – essential for energising debate. We need a system like this to be able to ask a question like “who is respected by their peers and already demonstrating that they are happy to contribute?”.

One of my earliest posts was about manyeyes.

[manyeyes visualisation: appeals and refusals]
This is a ready-made solution that feels like the right mix of open and free access to data with the ability to create custom views and user-editable datasets. We need a way of combining datasets already in the public domain with some user-driven data to fill in the gaps. Flexible systems like this allow peer-driven benchmarking (rather than a top-down series of standard benchmarks, or the infuriating infatuation with chasing the top quartile) and are open-ended enough for us all to learn on the job which statistics actually matter.
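The “public data plus user-driven gap-filling” idea above can be sketched as a simple overlay merge. The authority names and figures are invented, and this is not manyeyes’ actual data model – just an illustration of the principle that user contributions fill gaps without overwriting the published record:

```python
# Invented example: a published dataset with a gap, overlaid with a
# user-supplied figure. Only missing values are filled in.

public_data = {
    "Anytown":   {"appeals": 40, "refusals": 120},
    "Otherford": {"appeals": 25, "refusals": None},  # gap in the published data
}

user_supplied = {"Otherford": {"refusals": 90}}

def merge(public, supplied):
    """Overlay user-supplied figures onto the public dataset, filling gaps only."""
    merged = {authority: dict(vals) for authority, vals in public.items()}
    for authority, vals in supplied.items():
        for field, value in vals.items():
            if merged.get(authority, {}).get(field) is None:
                merged.setdefault(authority, {})[field] = value
    return merged

combined = merge(public_data, user_supplied)
```

Keeping the published figures authoritative, and letting contributions fill only the holes, is what makes the combined dataset trustworthy enough to benchmark against.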

What about the standards?

Lastly, then, I would suggest that the ‘standards’ follow all this activity rather than starting it all off. One of my lunchtime read-arounds is a US blog called signal vs noise. One of their claims to fame is a book called ‘getting real’. It’s about software development, but it covers many issues that we’ll all recognise from the design and delivery of public sector improvement programmes. It is based on their experiences setting up and running a design and software company, and I challenge anyone to read it without grimacing as they recall over-specc’d and under-delivered projects. In particular, they hold very strong views on getting a working product to customers in the shortest possible time. Transposed to the situation that capital ambition are in, its message would be something like:

  • get some people benchmarking something as soon as possible
  • don’t waste time writing a whole suite of standards
  • don’t delay until the whole package is in place
  • learn from what real people are prepared to pay for (even if the investment is only their time)

Looked at in this way (alongside “embracing the constraints”) the lack of time and budget might even work to this system’s advantage. I look forward to seeing it in action.


One thought on “Online self assessment (or not)”

  1. What I’m drawing out of this is:
    – get something to market ASAP, as you really have no idea how the market or the system will behave until you have real experience to base decisions on
    – trying to make those decisions in advance is fooling yourself
    – having stats does not knowledge make, and in any case there are complex interactions between variables which can nullify the meaning
    – provide value to users first and foremost

    I agree with you, and would add that the strategic direction embodied in the principles upon which the system is based should be discussed. This is not to restrict or define the ways in which people should do things, but to decide for now on:
    – what do we want to achieve
    – how do we think it can be achieved
    – what do we offer in light of that at least as a first best guess

    The alternative is just to offer what we already have in another form. No! Not more web forms!

    Your earlier point on benchmarking alludes to this. As you point out we need decisions on what benchmarking is in this context.

    It may be contexts and not measures, which allow groups of operators to make decisions about where they are in relation to others and where they might want to be (and hopefully how to get there). This begins to sound remarkably like a social group.

    If it is contextualised data and a mapping of an organisation’s position in relation to others, that has direct consequences for how systems are built. Establishing these principles does not need to be the familiar and tortuous process we all know and hate. It can be quick, straightforward, and ensure everyone knows what to expect. Moreover, it provides a basis for other ideas and a reference point for future development. We must take a position to begin with and modify that position in light of what we learn later when, as you say, we see it in action.

    Thanks for the interesting and enlightening thinking.
