Case study: Supported identity verification for benefits claimants

Before accessing many government services, people may have to prove they are who they say they are.

There’s a big push within government to have people do this online, but in researching this process with benefits claimants, we found that some people – particularly those with low digital skills, confidence or literacy – find this very difficult.  

Our research and design team set out to clarify user needs and identify potential opportunities for these users.  

Establishing user needs

I spent multiple days in benefits centres across England, talking to and observing users and support staff; in all, we conducted about 120 interviews and observations.

We gained incredible insight into the unique problems of our user group. I led the team in a group analysis session over several days, in which we narrowed our findings down to 12 user needs specific to these users.

Ideating solutions

We ran an ideation workshop to come up with as many broad, “blue sky” concepts as we could, and then narrowed these down to those feasible within our identified constraints. We then mapped the remaining concepts to user needs and evaluated them as a team based on factors like cost, impact and risk.  

The remaining concepts were storyboarded and then validated with users in focus groups as well as one-on-one interviews. 

Prototyping and validating solutions

We set out to prototype the concepts that met user needs and had been evaluated as viable. One was a digital ID verification booth that people could access in person; we were able to tap into existing technology at the Post Office to try this with users. The other was a one-on-one identity interview on a tablet, carried out with a support rep at a community hub. I ran around 20 sessions with users over four two-week iterations, refining both concepts with each round of testing.

Up next: a pilot?

Research indicated that both concepts could meet user needs, but proving this with certainty would have required a full pilot, which wasn't viable within our timeline due to budget and technology constraints. Things also move fast in an agile environment: by the time our project was wrapping up, the programme's priorities had shifted.

So, our team took time to document each decision, iteration and finding from the project, in case it is picked up by future teams.

So what did I learn?

  • I learned the value of storytelling when presenting research findings. Team members were much more passionate about helping users when I included some (non-identifying) details of their situations than when I presented them as anonymous participants.
  • I learned the value of a team retrospective after each research round – by actively seeking my team’s feedback on my research method, we were able to refine and improve the work we did. 
  • I learned to not get too attached to solutions – what I like may not be what users need. 
  • I learned that team members like getting involved, as long as you are enthusiastic about the project and create opportunities for them to take part (I had some team members take on acting roles, and they loved it!).
  • I learned, or re-learned, how great it is to try something new – we used some unfamiliar methods in this project, which a few colleagues questioned, but they really helped us narrow down our approach. I'm glad I took the risk.
  • I learned the importance of saying no. I took on too many research activities in this project in the hopes of meeting my objectives; it almost burnt me out and, more worryingly, it also affected the junior researcher I was managing. It was a much-needed lesson in time management, both for my own sake and the sake of my team.