Very late, but better than never: some notes from a workshop I attended earlier in the year.
2nd Multidisciplinary workshop on Identity in the Information Society, London School of Economics, 05/06/09
Kevin Bowyer – Notre Dame – what happens when accepted truths about iris biometrics are false?
Showed how some of the accepted truths about iris biometrics are false, but argued that this will not hugely change the field. Post 9/11 there was a rapid expansion of biometrics. Showed a video of the use of iris biometrics by the UNHCR in Afghanistan to prevent aid recipients from making multiple claims.
Gave a good account of how iris biometric technology works, including taking a circular image and unwrapping it into a straight strip that can be digitised and turned into a code of 1s and 0s; it should be the case that everybody's code is statistically different. These images use near-IR illumination and therefore look quite different from normal-light photos (this is because some people have dark irises that are hard to distinguish from the pupil).
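To make the circular-image-to-bit-string pipeline concrete, here is a minimal Python sketch of the general idea rather than Kevin's (or Daugman's) actual algorithm: it unwraps the iris annulus into a rectangular strip, binarises it with a crude local threshold standing in for the Gabor-phase quantisation used in real systems, and compares two codes with a fractional Hamming distance. The function names, parameters and the random 'image' are my own illustrative assumptions.

```python
# Illustrative sketch only; real systems do careful segmentation,
# Gabor-phase quantisation and occlusion masking.
import numpy as np

def unwrap_iris(img, centre, r_pupil, r_iris, radial=32, angular=256):
    """Map the annular iris region to a rectangular strip (polar unwrapping)."""
    cy, cx = centre
    thetas = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, radial)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].astype(float)

def iris_code(strip):
    """Binarise the strip: 1 where a pixel is above its row's mean
    (a crude stand-in for the phase quantisation used in practice)."""
    return (strip > strip.mean(axis=1, keepdims=True)).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; same-eye pairs should score low."""
    return np.mean(code_a != code_b)

# Hypothetical usage with a random 'image', just to show the shapes involved.
eye = np.random.rand(480, 640)
code1 = iris_code(unwrap_iris(eye, (240, 320), 40, 110))
code2 = iris_code(unwrap_iris(eye, (240, 320), 40, 110))
print(hamming_distance(code1, code2))  # 0.0 for identical captures
```

Same-eye comparisons should score close to zero while comparisons between unrelated eyes cluster around 0.5, which is what makes the statistical distinctiveness claim testable and is what the error distributions below are about.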
Governments are interested because of claims of very high accuracy for this technique, with tiny error rates, and these extreme claims about performance appear to have a good theoretical background. However, there is often confusion between types of errors (which may be the result of advertising copy written by marketers rather than scientists or engineers). To be able to tell how accurate the biometrics are, you need both the match and the non-match distributions.
(How often does the system match when it shouldn't, a false match, and fail to match when it should, a false non-match?)
A comparison of two images of the same eye never gives zero difference; there is always some difference, depending on the engineering and how controlled the capture environment is. Engineering decisions place a threshold between the two error distributions. The equal error rate is the threshold at which there are the same number of errors on either side, so you essentially have to trade off the two types of error.
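As a toy numerical illustration of that trade-off (the score distributions below are invented for the sketch, not figures from the talk): pick a decision threshold, measure how often genuine pairs fall above it (false non-matches) and impostor pairs at or below it (false matches), and the equal error rate is simply the point where the two rates cross.

```python
# Toy illustration of the threshold trade-off with made-up score distributions.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.11, 0.03, 100_000)   # same-eye comparison scores
impostor = rng.normal(0.45, 0.03, 100_000)  # different-eye comparison scores

def error_rates(threshold):
    fnmr = np.mean(genuine > threshold)   # genuine pairs rejected
    fmr = np.mean(impostor <= threshold)  # impostor pairs accepted
    return fmr, fnmr

# Sweep thresholds and pick the one where the two error rates are closest:
# that crossing point is the equal error rate (EER).
thresholds = np.linspace(0.0, 0.6, 601)
rates = np.array([error_rates(t) for t in thresholds])
eer_idx = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
print(f"EER ~ {rates[eer_idx].mean():.4f} at threshold {thresholds[eer_idx]:.2f}")
```

Moving the threshold left or right in this sketch just moves errors from one column to the other, which is the point Kevin was making.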
Then Kevin got stuck into several accepted truths in iris biometrics:
· Pupil dilation doesn't matter (in fact, a greater difference in dilation between the two images increases the false non-match rate)
· Contact lenses don't matter – you can wear them or not (contact lens wearers are about 20x as likely to get a false non-match, i.e. the system doesn't recognise you as you)
· Templates don't age – you can have one enrolment for life (matching becomes much less accurate the longer ago the enrolment, with a measurable increase in false non-match frequency)
· It’s not a problem when you upgrade your sensors
Kevin identified that there was approximately a 1 in 1.2 million chance of a false match. However, this was for a zero-effort impostor, chosen at random; it did not include anybody making any effort to try and beat the system.
He also asked why biometric systems do not automatically update with every successful access. For example, having enrolled in a building access system, if the system recognises me as me (and lets me through the door), and you have confidence in the system's accuracy, why shouldn't the picture taken at that moment replace the one in the database?
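A hypothetical sketch of what such a self-updating scheme might look like; this is my own illustration, not a description of any deployed system, and the names and the threshold value are assumptions:

```python
# Hypothetical 'update the template on every confident match' policy.
import numpy as np

UPDATE_THRESHOLD = 0.20  # only replace the template on a very confident match

def verify_and_maybe_update(database, user_id, fresh_code):
    """Verify fresh_code against the stored template; on a confident match,
    replace the stored template with the fresh capture so it tracks ageing."""
    stored = database[user_id]
    distance = np.mean(stored != fresh_code)  # fractional Hamming distance
    accepted = distance < UPDATE_THRESHOLD
    if accepted:
        database[user_id] = fresh_code  # template now reflects today's capture
    return accepted

# Hypothetical usage with random bit codes standing in for real iris codes.
db = {"alice": np.random.randint(0, 2, 2048, dtype=np.uint8)}
probe = db["alice"].copy()
probe[:100] ^= 1  # a slightly noisy new capture of the same eye
print(verify_and_maybe_update(db, "alice", probe))  # True, and template replaced
```

One design wrinkle the sketch glosses over: a single false match would overwrite the template with an impostor's code, which is presumably one reason deployed systems are cautious about this; the update threshold here is deliberately stricter than a typical decision threshold.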
He also asked which problems the government plans to solve with biometrics (ease, access or security?), as this will affect the design of any future systems.
Roger Clarke – a sufficiently rich model of (id)entity, authentication and authorisation
Identity expert Roger Clarke presented his attempt to rework the vocabulary used in identity and identification issues. This was an attempt to be positive rather than simply critical. There is a need for a deep technical discourse, and our ordinary-language terms are not sufficient for this because of the baggage and the multiple, unfixed meanings that they carry. He was frustrated by the language and wanted something internally consistent and useful for analytical purposes.
It includes 50 concepts (my personal favourites are 'entifier' and 'nym').
I think I was sceptical about the assumption that there was/is a 'real' identity that language confusingly covers over. I'm also cautious that this would result in a technical jargon so distanced from ordinary usage that it becomes elitist and inaccessible to ordinary folk. The way that people think and talk about identity is important. It is also, as Clarke points out, messy and confused. This confusion causes some problems, but I don't think the response is to create an entirely new language: that is opting out of a discursive environment rather than engaging with it.
Seda Gurses – Leuven – Privacy Enhancing Technologies and their Users
Seda set out the story of privacy in computer science since the 1970s/80s, retelling a story from surveillance studies as a software engineer. She claimed to be nervous beforehand, which didn't come across at all. For software engineers, security is confidentiality, integrity and anonymity.
She argued that PETs (privacy enhancing technologies) are, in general, poorly named. They are mainly anonymity systems, aimed at making the individual indistinguishable within a set, using a probabilistic model. PETs are based upon technocentric assumptions; they do not solve all privacy problems, but they are still essential. They are technocentric in that the technology leverages a human act and performs an instrumental function, and the technology is thought to be exogenous, homogeneous (assumed to work everywhere), predictable, stable, and to perform as designed.
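For the 'indistinguishable in a set' point, a standard way of quantifying this probabilistically is the entropy-based degree of anonymity used in the anonymity literature; the numbers and code below are my own illustration, not something presented in the talk.

```python
# Entropy-based degree of anonymity: how uncertain is the adversary about
# which member of the set performed the action? Probabilities are the
# adversary's beliefs over candidate users (invented figures for the sketch).
import math

def degree_of_anonymity(probabilities):
    """Entropy of the adversary's distribution over candidates, normalised
    by the maximum possible entropy (a uniform distribution over the set)."""
    entropy = -sum(p * math.log2(p) for p in probabilities if p > 0)
    max_entropy = math.log2(len(probabilities))
    return entropy / max_entropy if max_entropy > 0 else 0.0

# Perfect anonymity: all 8 users equally likely -> degree 1.0
print(degree_of_anonymity([1 / 8] * 8))
# Skewed beliefs: one user is the likely actor -> the degree drops sharply
print(degree_of_anonymity([0.65] + [0.05] * 7))
```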
PET assumptions are that 1) there is no trust on the internet, 2) users are individually responsible for minimising collection/dissemination of their data, 3) if they know your data, they know you, 4) collection and processing of personal data have a chilling effect, 5) technical solutions are preferred to a reliance upon legal solutions.
Seda countered this by drawing in a surveillance studies perspective of networks, categorisation, construction and feminism. Data receives meaning through its relation to other data; it is a creation of knowledge about a population. Statistical data reveals things about individuals who don't participate in data revelation (see Wills and Reeves, 2009 for more on exactly this). Social network structures are more difficult to anonymise, so the very idea of individual responsibility is problematic (data is not private property?), but it is difficult to create collective counters. She drew attention to the idea of a digital commons.