After the Millennium Bug, a side-effect of 9/11 was the quest for a “silver bullet” in global and personal security.
The proposed answer? Biometrics (the Wikipedia article is a good starting point for your quest into what it means).
Pardon my over-simplification: I will treat biometrics, whether the actual measurement of physical, unchangeable characteristics of a human individual or the profiling of behavior, as part of a broader trend of using technology to remove human complexity from the decision-making process.
The funny part about automation is that producing and collecting each additional bit of information is almost free, once a general collection and distribution system is in place.
Therefore, once you start collecting… you just keep piling up.
I characterize the current debate on the various “flavors” of biometrics as something shifting between two extremes: the gullible and the paranoid.
Before you ask: I hold no certification, but for various reasons I have been involved since the early 1980s in applications of technology to security and “expert systems” (and other ways and means to “profile” or “model” the decision-making patterns of human beings, either as individuals or groups).
Therefore, I kept abreast of technological and conceptual developments- something that eventually came in handy when I needed those conceptual frameworks for consulting on, say, setting up a certification authority or compliance with privacy and data access regulations and best practices.
Conferences (online and offline)? Quite a few- but probably the only one consistently worth the cost is InfoSecurity in London (and now also its constant stream of webinars; if you plan to get one of the certifications, attending webinars qualifies for the required annual credits).
And now, having explained why I have a couple of thoughts about biometrics, I will summarize in this article some (hopefully money-saving) observations and practical suggestions.
Paranoid vs. gullible
Or the other way around.
The paranoid are obviously those who constantly see a privacy threat- some of them making a decent living by writing about it or being invited as “professional scaremongers” on TV, panels, and so on.
The gullible are those who think that allocating the budget to buy the most expensive security technology will automatically give them the “edge” against an assorted albeit undefined “enemy”.
Both share a characteristic: the source of their advertised anxiety and complacent relief is the continuous stream of announcements from a fledgling industry about new security solutions.
Look around you in any G8 country (or just open your fridge and wallet): you are surrounded by sensors and other gadgets tracking your behavior- from the way you walk, to the usual timing, size, and frequency of your cash withdrawals, and their linkup with your other expenses via various payment instruments.
And if you live in London, you have long been used to reading in newspapers that your mobile phone, your car (via its license plate), your cards, and the CCTV cameras scattered everywhere allow a detailed profile of your daily activities to be built- from closet to wallet.
As in many other cases, I believe that the paranoid should get more practical and focus on equal rights of access to information, and the gullible should simply remember some basic lessons about real human behavior.
That behavior is far, far away from the rational homo oeconomicus (again, see Wikipedia) assumed by most legislators- and ignored by political and corporate marketeers.
In my perspective, the real issue with biometrics is the linkup between security and knowledge: from supermarkets profiling you in order to restructure their shops so that you will add more products to your basket, to unelected security officers cross-linking databases to get ever closer to a “preventive justice” model (yes, as in Minority Report).
Well, it is nothing new- it is actually a XIX century model applied to XXI century information technology.
Maybe you do not know this- but in quite a few countries, any published newspaper had to be registered with the authorities, and you still need police authorization to hold a public event.
The basic failure that I see is coupling this XXI century piling up and cross-checking of information with a reduced human element.
Yes, I know that it sounds funny, written from somebody from the left of the political spectrum (see in my profile where I came from).
But I think that biometric technology is far too often focused on the “static” element: your retina, your fingerprints, your immutable behavior- and on a “common wisdom” definition of acceptable behavior.
And who decides what is acceptable? Certainly not elected officials.
For and against models
Any technology processing data is based on algorithms, “rules” on how data are transformed, and any biometric technology (physical or behavioral) results in a model of reality.
And models have a tendency to be built by simplification.
Reality is cumbersome, and therefore, to reduce the cost and the number of data items kept, models are usually a crude representation of reality- or of individuals.
That is acceptable- if the model evolves and is open to public debate.
But most proponents of biometric technologies try to shift the focus toward “improved efficacy and greater efficiency”, stating that their technology has to become “the” standard in order to reduce costs.
As each model is built on an initial, limited set of information, usually in a controlled environment, focusing on just one model increases the “tunnel vision” of the analysis it produces.
Certainly, nobody expects the proponent of a model to state: “our solution is the best- but we suggest also adding X, Y, Z technologies to gain a better perspective, and tuning the model periodically”.
Why? Because it would become too complex. And complexity does not work well in 30-second sound bites.
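To make the “tunnel vision” point concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the model names, scores, and thresholds are invented for illustration, not taken from any real biometric product. It contrasts a single model flagging anything unusual to *it* with a combination of independent models that flags only on agreement.

```python
# Toy sketch: a single biometric model vs. a combination of independent ones.
# All model names, scores, and thresholds below are hypothetical illustrations.

def single_model_flags(score: float, threshold: float = 0.7) -> bool:
    """One model, one threshold: anything unusual to *this* model is flagged."""
    return score > threshold

def combined_flags(scores: dict, threshold: float = 0.7,
                   min_agreeing: int = 2) -> bool:
    """Flag only when at least `min_agreeing` independent models agree,
    reducing the tunnel vision of any single simplified model."""
    agreeing = sum(1 for s in scores.values() if s > threshold)
    return agreeing >= min_agreeing

# A gait model alone finds this person highly anomalous...
scores = {"gait": 0.9, "fingerprint": 0.1, "spending_pattern": 0.2}
assert single_model_flags(scores["gait"])   # single model: flagged
assert not combined_flags(scores)           # combination: not flagged
```

The design choice is exactly the one the vendors avoid mentioning: requiring several perspectives to agree makes the system more complex and harder to sell in a sound bite, but less likely to mistake one model's blind spot for reality.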
The basic failure
Biometrics (and not just fingerprints or retina scans) could actually improve your access to services and products.
And by removing the human element, it could also improve access for those who do not fit the “common wisdom”- if you will, it could remove the social prejudices that reduce your access to what is your right.
Or- what is supposedly your statutory right.
Over the last decade, all around Europe, I have still seen far too often differential access to services and products based on how the human agent delivering them judged the potential customer.
As I wrote in the previous sections, the fallacy lies in an inappropriate mix of technology and human judgment.
I will explain with a small example, focusing on the use of biometrics for security purposes, and assuming that you are mixing, say, physical measurements (from fingerprints to retina scans to tracking your own GSM phone via AGPS) with behavioral analysis of your movements.
We humans, as individuals, have been adapting to our environment for a long time.
And when seen as a group, we adapted the environment to our needs.
As individuals and as groups, we are probably still the best “pattern matching” equipment on the planet.
Sorting it out, the human way
If we see that a behavioral pattern does not produce the desired effects, we change our behavior- interestingly, in my experience in change management, the change is easier when you have less experience (or less status to lose).
If we remove humans from the decision-making process, we will probably replace the human agents with software mirroring their thinking process, i.e. the patterns they use to identify potential security risks in specific behaviors.
But even a complex model would be based on “average judgment”- converting human decision-making processes into committee decision-making.
And while committees are wonderful at identifying all the details of any issue… the old joke about the horse designed by committee (“a camel is a horse designed by committee”) still holds.
Learning new patterns and unlearning old ones?
When the old patterns are designed by committee, there is no possibility of expressing a value judgment, as the committee has no “background” to use in discarding its own decisions and adding new ones.
If you got lost by now- do not worry, it was intentional.
Because I wanted to show why software-based decision-making and human decision-making are not mutually exclusive.
Making it work
It is really not difficult- just use your vote.
What is required is just a variant of Isaac Asimov's three laws of robotics (see Wikipedia).
The basic idea? If biometrics is supposed to enhance our security or possibility to choose, then it should make our life simpler- and safer.
Therefore, have one of the scientific/commercial/political committees build the basic rules- and require that any technology embedding a model also embeds those rules, as a litmus test for any new “learned” decision-making pattern.
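The litmus-test idea can be sketched in a few lines of Python. To be clear, this is a toy under invented assumptions: the rule names, the flags on a “learned” pattern, and the checks themselves are all hypothetical stand-ins for whatever basic rules such a committee would actually define.

```python
# Toy sketch of the "litmus test": before a system adopts a newly "learned"
# decision-making pattern, the pattern must pass committee-defined basic rules.
# Rule names, pattern flags, and checks are hypothetical illustrations.

BASIC_RULES = [
    # a decision must always remain appealable to a human
    ("keeps_human_appeal", lambda p: p.get("human_appeal", False)),
    # an immutable trait must never be the sole basis of a decision
    ("no_immutable_trait_as_sole_basis",
     lambda p: not (p.get("uses_immutable_trait") and p.get("sole_basis"))),
]

def passes_litmus_test(pattern: dict):
    """Return (accepted, list of violated basic rules) for a learned pattern."""
    violations = [name for name, check in BASIC_RULES if not check(pattern)]
    return (not violations, violations)

# A learned pattern that decides solely on an immutable trait, with no appeal:
learned = {"uses_immutable_trait": True, "sole_basis": True, "human_appeal": False}
accepted, violated = passes_litmus_test(learned)
# accepted is False; both basic rules are violated, so the pattern is rejected
```

The point of the sketch is the architecture, not the specific checks: the rules are data the committee can debate and revise in public, while the software merely refuses to adopt any learned pattern that fails them.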
Lining up behind common wisdom is sometimes dangerous- more so when common wisdom requires a radical change (e.g. removing critical human judgment) while the alternative is still untested.
In my experience there are areas where full automation (e.g. taxation) could create a seemingly more intrusive but in reality more privacy-oriented and fairer (i.e. simpler) allocation of resources.
But if your only tool is a hammer, every problem looks like a nail.
Get a larger toolset.
It would take more than 30 seconds to explain.
But it would last longer than the time needed by the companies that created the miracle biometric solutions (a.k.a. XXI century snake oil) to recover their investment…
… or the current electoral cycle.
In information technology, I remember seeing computer software used not for the five years it was supposed to last, but for decades longer- so long that its programmer had become the CIO of the company.
Happy new year and decade!