With enterprises constantly on the lookout for stronger security, biometrics is increasingly seen as a better way to authenticate digital identities than outdated methods such as passwords.
And while InfoSec teams are studying up on the latest in biometrics breakthroughs, IT professionals need to be aware of several issues that crop up when storing and securing all that highly personalized data. The irony is this: Biometrics is making the enterprise more secure, while creating more data that, if exposed, can have dire consequences.
First, let's be clear about what biometric identification is: any technology that identifies you by a part of your body, or uses that data to authenticate your identity. Doing this requires creating and storing a record of something unique to who you are.
It can be anything from facial or voice recognition to iris scans to fingerprints, ear shapes, tongue prints and more. Since these identifiers are literally a part of you, there is no risk of forgetting or losing them. However, because they are unique to you, they can't be reset like a password. If your biometric data is compromised, it is a much bigger problem than a stolen password.
Even so, there are currently no laws in the US or European Union that specifically govern the storage of biometric data. Instead, laws such as the EU's General Data Protection Regulation (GDPR) and Illinois' Biometric Information Privacy Act -- the most stringent US law on protecting and using personal data -- require organizations to handle biometric data, like all personal data, with security measures appropriate to the damage its loss could cause.
As with any data, biometric information is only as secure as the system that protects it. There is nothing inherent in raw biometric data that makes it more secure. However, stolen biometric data is significantly more difficult to exploit than, say, a stolen password.
There are three places this data can be stored: on servers, on the end-user's device, or in a distributed-data model that stores part of it on the device and part on servers.
Let's consider each of these.
There are two significant problems with storing all the data on centralized servers. First, they are a very rich target for hackers, who need only break into one place to get many people's information. This happened in 2015, when 5.6 million sets of fingerprints were among the personal data taken in a breach at the United States Office of Personnel Management (OPM). The other problem with centralized storage is the risk of inappropriate cross-linking of data across systems, which can put organizations in legal jeopardy for illegal use of information.
Keeping the data on the end-user's device solves both those problems. Unless hackers are looking for information about a particular person, an individual device is of far less interest to them because it requires a tremendous amount of effort for a very small return. This method has the added benefit of simplifying organizations' compliance with data privacy regulations, like GDPR, by placing responsibility for security on the individual.
The most secure method is probably the distributed data model.
Upon enrollment, visual cryptography breaks the biometric data into files that each look like miscellaneous noise. Some of that data is stored on the user's device and some on distributed servers. To prove identity, the data on the device is checked to see whether it matches up with the data stored on the network; neither piece reveals anything on its own. This is the digital version of two people who don't know each other using the halves of a torn document to prove who they are.
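The article doesn't specify the exact scheme, but the split-and-recombine idea can be illustrated with XOR-based secret sharing, the principle underlying visual cryptography. In this toy sketch (the function names and the stand-in template are illustrative, not any vendor's actual API), neither share alone reveals the enrolled data; only combining both reconstructs it:

```python
import os

def split_template(template: bytes) -> tuple[bytes, bytes]:
    """Split a biometric template into two noise-like shares.
    Each share on its own is indistinguishable from random bytes."""
    device_share = os.urandom(len(template))  # kept on the user's device
    # XOR the template against the random share to produce the server share
    server_share = bytes(t ^ d for t, d in zip(template, device_share))
    return device_share, server_share

def recombine(device_share: bytes, server_share: bytes) -> bytes:
    """XOR the shares back together to recover the original template."""
    return bytes(d ^ s for d, s in zip(device_share, server_share))

# Stand-in for real enrollment data
template = b"example-biometric-template"
dev, srv = split_template(template)
assert recombine(dev, srv) == template  # both halves together match
```

A breach of the server share alone yields only noise, which is the property that makes the distributed model attractive.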
Of course, biometric identification can fall prey to spoofing.
There are facial and iris recognition devices which have been fooled by photographs. And in 2016, researchers at Vkansee, a mobile-security firm, unlocked an iPhone with fingerprints collected with Play-Doh. To counter that, fingerprint scanners can now detect a pulse, and facial-recognition software can measure depth of field. No doubt there will need to be more improvements as more ways of spoofing are discovered.
However, such spoofing works at an individual level, not wholesale, which underlines another strength of biometric security. Right now, there are no known instances of biometric data being used to commit large-scale fraud -- as has been done with passwords. Surely, if those 5.6 million fingerprints stolen from the OPM were being used en masse for fraud, it would have come to light by now.
For the average person, using a biometric like a fingerprint to secure a smartphone or personal computer makes it effectively secure. The chance of a hacker going to all the trouble of getting a person's fingerprint, making a model of it, and then getting access to the device itself is infinitesimal. Unless that person is a high-profile or high-net-worth individual, or has access to incredibly valuable information, the return on the investment of time and money isn't great enough for a criminal to bother. Biometric security also makes it harder for hackers to simply throw huge amounts of computing power at guessing a correct password.
And it severely restricts, if not totally eliminates, the usefulness of nearly all phishing attacks.
Biometric security systems, with properly secured data, can make it so difficult and time-consuming for hackers that it is likely a great many will find attacking them not worth their while.
— John Callahan is the chief technology officer of Veridium.