I’m fortunate to be currently working on a contract with an excellent product manager. Despite his years within healthcare, he remains pragmatic and open to new challenges. He frequents NHS Hackdays, and at times I wonder if he does anything other than solve the digital challenges currently facing healthcare providers. One of these recent challenges was the correct and safe capture of gender. On the majority of online forms, gender is gathered by a toggle switch of ‘male’ or ‘female’. More progressive organisations may have changed to a drop-down box with additional options such as ‘rather not say’ or ‘other’. Neither approach meets the needs of society or of clinical systems.
The solution he came up with was based not only on the needs of clinicians but, more importantly, on those of the user. Questions were asked in a manner that explained why they were being asked in what is currently a non-standard way. The work was recognised as unique, and thankfully the solution was demonstrated at one of our project reviews. With project and stakeholder buy-in, it will be built into our user research model and optimised further.
A question did emerge quite quickly from the research:
"What were the Information Governance and Information Security implications of how we capture and store gender?"
The Security Aspect of Gender
The more information you gather about a ‘user’, the more likely you are to be able to narrow that entity down to a single person. It doesn’t take many data items to say with reasonable confidence who a user could be.
In law, the ‘gender’ field is just one more data item that helps identify a person. A surname combined with a date of birth could match one person or many. Adding gender captured as ‘male’ or ‘female’ roughly halves the number of people it could be. If we move beyond ‘male’ or ‘female’ and capture gender in a more precise form, the likelihood of matching to an individual person increases sharply.
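The halving effect can be sketched with a toy calculation. All the population figures and field cardinalities below are illustrative assumptions, not real statistics, and the uniform-and-independent model is a deliberate simplification:

```python
# Toy illustration: each extra captured field divides the expected
# anonymity set (the number of people sharing one combination of values).
# All numbers here are illustrative assumptions, not real statistics.

population = 1_000_000

# Approximate number of distinct values each captured field can take.
field_cardinalities = {
    "surname": 10_000,
    "date_of_birth": 36_500,   # roughly 100 years of possible dates
    "gender_binary": 2,        # 'male' / 'female'
}

def expected_anonymity_set(pop, cardinalities):
    """Expected people per value-combination, assuming (unrealistically)
    that fields are uniformly distributed and independent."""
    size = pop
    for n in cardinalities.values():
        size /= n
    return size

without_gender = expected_anonymity_set(
    population,
    {k: v for k, v in field_cardinalities.items() if k != "gender_binary"},
)
with_gender = expected_anonymity_set(population, field_cardinalities)

print(without_gender)  # expected set size from surname + date of birth
print(with_gender)     # halved again once a binary gender field is added
```

Replacing the binary field with one of, say, ten more precise values would divide the set by ten instead of two, which is the sharper identification risk described above.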
Society is also changing, and genders other than ‘male’ or ‘female’ are increasingly accepted. This is progressive, but it does increase the chance of the information being used in ways that could cause harm or distress. It is possible that storing gender in healthcare systems as simply ‘male’ or ‘female’ is no longer compliant with the fourth principle of the Data Protection Act.
The Labelling of People
You may have noticed that so far in this blog post I have put ‘male’ and ‘female’ within quotation marks. This is because they are being used as labels, and digital forms only capture labels. Information systems can only capture information that fits within predefined parameters, placing everyone within a labelled box. We capture surnames the same way, but we allow for a billion kinds of box label. Classifying gender using something other than the traditional two ‘labels’ requires careful consideration: migrating previously captured data could introduce inaccuracies and destroy data integrity. We need to be more security-savvy about how we store gender data. Inappropriate new ‘labels’ have the potential to cause great harm and allow for more refined identification.
The Clinical Aspect
The solution I mentioned at the start of this article was developed to assist within a digital triage process. The process is anonymous at the point it needs to start asking about gender – and hopefully it stays anonymous unless the user needs referring to an NHS service.
We identified why we needed to collect this information. Everyone should be doing this to comply with the first and third principles of the Data Protection Act.
The point of asking the user to select their gender is to identify what might be the reason behind an illness. Symptoms are mapped against anatomy, and for that reason we need to identify the biology of the person being triaged. A user may identify as ‘female’ but be fully biologically ‘male’ (and vice versa). Additionally, a patient may be in the process of transitioning from one gender to another, and for those purposes we need to capture what their genetic sex was – as well as what stage of their transition they may be at.
The task of creating and predefining ‘labels’ for gender became that much harder.
For our solution we worked closely with our clinicians and stakeholders. We aimed to reduce the number of ‘labels’ down to those that actually made a difference to the outcome. We carefully generated questions that allowed for the gathering of relevant information with the levels of sensitivity required. Most importantly, we avoided the blunt implementations mentioned above.
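One way to picture this approach in a data model is to replace a single ‘gender’ field with pathway-specific answers that are only requested when they change the outcome. This is a minimal sketch of that idea; every field name, question and symptom set below is a hypothetical illustration, not the actual questions our clinicians designed:

```python
# Sketch: capture clinically relevant answers instead of one 'gender' label.
# All field names, questions and symptom sets are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TriageProfile:
    # Sensitive answers start as None and are only asked for when a
    # symptom pathway actually needs them.
    could_be_pregnant: Optional[bool] = None
    notes: list = field(default_factory=list)

    def needs_pregnancy_question(self, symptom: str) -> bool:
        """Ask the sensitive question only when this pathway requires it
        and it has not already been answered."""
        pregnancy_relevant = {"abdominal pain", "nausea"}  # illustrative set
        return symptom in pregnancy_relevant and self.could_be_pregnant is None

profile = TriageProfile()
print(profile.needs_pregnancy_question("abdominal pain"))  # True: ask now
profile.could_be_pregnant = False
print(profile.needs_pregnancy_question("abdominal pain"))  # False: answered
```

The design choice here is that no ‘gender’ label is stored at all: the system records only the answers that affect the clinical outcome, which is both less blunt for the user and a smaller identification risk to manage.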
In terms of the Data Protection Act we are still gathering personal identifiers, but it is no longer true to say that we are capturing ‘gender’. Instead, we ask the person about what could biologically be wrong. That is something more sensitive than gender, and we have recognised it as such. By identifying the risks associated with the information we are gathering, we can implement the appropriate controls to protect our users.
Potentially we could have alienated our users and over-complicated the solution. This was prevented by taking a secure, best-practice approach that allowed us to introduce or amend our data items quickly, safely and securely:
- Identify the true need of capturing a ‘new’ data item
- Use user research to determine its scope
- Define how it must be captured, stored and processed
- Perform a Privacy Impact Assessment
- Determine the risks to the data subject
- Implement controls to reduce or eliminate risks
- Review, assess and amend
The new EU/US collaboration to replace Safe Harbor has been launched this week. Given...