Wednesday, 13 June 2012

State of Surveillance - Living In Surveillance Societies conference, Barcelona


State of Surveillance. LISS-COST conference 3 – Barcelona.
28th May to 1st June 2012

I was in Barcelona this week for the third ‘living in surveillance societies’ conference, and as is becoming fairly routine, I’ll blog some of my notes from the conference.  There are a few summative thoughts at the end of this. 

William Webster (Uni. Stirling) opened the conference with a talk about surveillance studies as an ‘x-ray’ into the state, potentially offering a different perspective on government, public admin, and the nature of ICTs. This drew upon Jack Taylor’s theory of informatisation as an x-ray. In order to understand the state, we have to understand its information flows, and how these interact with technology and existing practices. We can’t understand the state without understanding information and IT, and this will give us a deeper understanding of government. Information is seen as a critical resource, rapidly taken up by the state, and made central. Institutional arrangements are shaped by and shape informatising powers. William proposed surveillance as a concept for interpreting the state: as a factor of production, as a normal but increasing part of life, as an understanding that all ICTs have a latent surveillance potential (I like this framing), and as a lens that shines light on why information is collected, how, for what purposes, and what the informational relationships are between the surveilled and the surveiller. For Webster, states are always concerned with surveillance; the capture of information for the provision of services and protection is part of the core purpose of the state (e.g. tax). The state is information intensive and collects and processes vast quantities of information. Surveillance platforms emerge from state investment in the technology. The state normalises surveillance: it habituates us to the exchange of information for services, which is then used elsewhere. There is also explicit surveillance for protection of the state, which then enters the domestic sphere. The state also plays a key role in regulating surveillance in terms of rights and protections.
William addressed a few potential concerns with this perspective, including the misrepresentation of a dystopian picture of the state, the strong normative base of the field, top-down views of relations between state and citizens, the objection that eGovt is not surveillance, that the theory is the same as informatisation, and that there is nothing new about the surveillance perspective. I was concerned about making a distinction between the argument that we need to understand information or surveillance in order to understand the state form, and the argument that we automatically have a privileged perspective by using surveillance. I think there’s certainly something to this focus upon information and surveillance practices, and how they make up the state, although the question and answer session after the paper suggested that there were a lot of different perspectives across the different disciplines in surveillance studies. I’d quite like to go over this paper from a political philosophy perspective. I have a sneaking suspicion that some of the strong sociological threads in surveillance studies might be leading to some reification of the state, which might well be understudied given the sympathy for the subjects of surveillance.

Michael Nagenborg spoke about Anonymous, collective identities and hacker practices – something I’d looked at a little in my own paper ‘This is not a Cyber-war....’. Michael drew quite strongly upon the work of Gabriella Coleman on Anonymous, which he strongly recommended reading. He was not offering a defence of Anonymous, but argued that they are not ‘others’, outsiders to western liberal traditions, but rather make relatively familiar claims to freedom of the individual, free speech and opposition to censorship. However, one divergence from this ethical tradition is the rejection of the visibility of an identifiable person behind political statements and ethical acts. He drew upon the Hacker Ethic (1984) and its moral core. He argued that this set of ethics is still recognised in places, including, as an example, Mark Zuckerberg’s claims that information wants to be free. There are tensions now between the ethic of early hacking and the digital mainstream of today – the powerful centralising effect of Facebook, for example. The hacker ethic included a strong egalitarian position in which hackers would be judged on their hacking rather than on irrelevant personal identity characteristics. Michael also pointed to circular media reactions to hacking (often starting with a young male caught for computer crime) and how early hackers became a threat to national security almost by accident. This includes outright misrepresentation, such as news reports that suggested people would be unable to use their credit cards due to Anonymous, when only the public-facing website of MasterCard was affected by DDoS attacks, and was presumably unconnected to the payment systems. He drew attention to the meme that using the internet anonymously is morally irresponsible – this allowed him to move to his second theme: the ethics of the mask.
The mask allows the wearer to be present in space, but harder to identify. It also prevents reading of the face. It shifts power to the wearer of the mask, but does not silence the wearer. It prevents non-presence. The pseudonym is similar, allowing comment or action without identification. In hacker culture bragging is an important feature, and there is a need to attribute hacks to their authors. Michael engaged with Agamben’s work on persona to look at the space between the actor and the act that the mask allows, and how this might itself be an ethical space allowing possible reflection. He connected this to ‘Being Anonymous’, where some people using the Guy Fawkes mask wake up to a larger identity and awareness through this practice. Michael seemed to be seeing this as an individual process, but on further reflection, it sounds a little like some of the group consciousnesses that can emerge from participation in collective action. Michael did identify the totalitarian potential within anonymous masked collectives – which would also resonate with some of those powerful group-subordinated identities. Trust dynamics seemed to be important, but also incredibly problematic given anonymity. This suggests to me that there might be some value in a language for practices of trust in situations of anonymity. I’m guessing that this might potentially emerge from within collectives like Anonymous, rather than be written for it by somebody outside, but it’s certainly something to look for.

Ben Wagner, from the European University Institute, spoke about internet censorship and surveillance before, during and after the Jasmine Revolution in Tunisia. He started with Ben Ali’s final speech, which was full of concessions, including the opening up of the internet, with no censorship and no surveillance. Whilst these concessions were ineffective (Ben Ali fled the country within 12 hours of making the speech), Ben thought it was important to ask why he had felt it necessary to include these particular concessions. Ben spoke about how the infrastructure of Tunisian internet surveillance simply stopped – it was turned off within two hours, as evidenced by the explosion in Tunisian access to YouTube. This ability to turn things off hints at a strong degree of centralisation. Ben also spoke about the role of Tunisian telecoms providers, which attempted, post-revolution, to position themselves not as censors and surveillers, but just as service providers. Ben’s historical account of developing surveillance and censorship through the late 1990s to 2011 was an interesting picture of privatisation, increasing bandwidth, and shopping expeditions for surveillance technology. The Tunisian regime apparently bought similar, overlapping pieces of surveillance technology from several different providers so as to avoid being locked in to any particular vendor. This was combined with a typical authoritarian move of splitting up surveillance and censorship roles across different agencies, although the boundaries blurred under pressure. Ben remarked on the local inability to develop these systems, leaving countries like Tunisia dependent on international markets. Centralisation of architecture and institutions makes the regime possible; functional differentiation protects, stabilises and allows the regime to adapt. International markets are crucial for access to technology, consultants and systems.
Concluding the first panel session was Pete Fussey, talking about the centrifugal and centripetal governance of UK counter-terrorist surveillance. Counter-terrorist practice is conflicted and fragmented, and not as coherent as we might think. This brings its own problems of accountability. The state acts as both the ‘risk-holder’ and a target of terrorism, but responsibility is dispersed (for example to the private sector or local government). There is surveillance in different forms throughout the CONTEST strategy, and this is often in tension with other parts of the strategy. He drew upon a number of terrorism events that have been particularly influential on UK counter-terrorism strategy (notably the Herrhausen assassination by the Red Army Faction in Germany and the bombings in London by David Copeland). These events produced narratives about what is learned from them. This now leads to a focus on upstream prevention, hostile reconnaissance, owning suspicion and the normality of space (including attention towards ‘matter out of place’). Drawing upon the policing literature, Pete suggested that when discretion increases in policing, more stereotypes are used, and there is a greater play upon ‘respectable fears’. He used the case study of Operation Champion (the placing of CCTV cameras for counter-terrorist purposes around ethnic minority communities in Birmingham) as an example of the levels of government involved. This project was cancelled due to public backlash.

In the second session Peter Lauritsen spoke about research into video surveillance in Danish police work. This highlighted the challenges of establishing video surveillance, with police hesitating to adopt and use the new technology despite legal reforms and political pressure for them to use it. The solution (CCTV) was politically determined prior to the specific problem it was to be used to solve. Police were not convinced that CCTV would make their work more effective, and rarely used it in solving crimes. Lauritsen described these issues as ‘oligoptic bugs’ and highlighted the fragility of surveillance systems. Opinion on the usefulness of surveillance technology was ‘yes’, but not for serious crime or safety: it helps in understanding the sequence of events and dealing with regular, normal events (e.g. crowd flows at football matches). Tom de Schepper and Paul de Hert spoke about the use of CCTV in Flemish cities and municipalities, arguing that efficiency objections (CCTV doesn’t do much) will have to be looked at again (possibly constantly) as the technology improves and develops. There is a need to know the details of this, but also to open discussion on comparative numbers and legal cultures. This was a fairly quantitative paper, which attempted to draw some large-scale models of the likelihood of CCTV use across different jurisdictions. Tanguy le Goff spoke about his ethnographic research on municipal CCTV workers in France. This work fits into a tradition of CCTV control centre ethnographies which is getting pretty developed now. Tanguy focused upon the relative social invisibility of the workers in the control centre compared to their cameras, and also to the subjects of surveillance.

Kevin Macnish gave a philosophical paper on authority and surveillance. This was based around a definition of authority, with strong links to context, persons in roles, and the appropriate delegation or attribution of authority. I liked the way that Kevin broke up the potential sources of authority into top-down/peer/bottom-up (democratic). I had a couple of thoughts about this approach. Firstly, I think the routinisation of surveillance makes some changes to actual perceptions of necessity and authority. Secondly, this approach of holding ‘all other things’ consistent in order to focus upon authority is probably analytically necessary, and does introduce some clarity into the political theory of surveillance. It sits uncomfortably for a lot of people: firstly, because there’s a sense that all other things are not equal, and that surveillance actors are often not legitimate, necessary, or whatever other qualifier we might apply; secondly, because there’s a sense that a scheme like this might legitimate surveillance. It will, because it is intended to, and it exists in a world in which there is some legitimate surveillance, carried out by legitimate actors. What this does is open up the whole set of institutions in contemporary society to challenges about the sources of their authority to act, and how accountable this is – it is not just surveillant institutions that might be lacking in just authority. ‘Authority’ in practice just doesn’t play out in analytically neat ways, but in complex, negotiated, challenged, contested power and politics.
The second day of the conference kicked off with an early panel on ‘Scandinavian Exceptionalism and surveillance studies’, which I saw part of. I think this revolved around the extent to which several Scandinavian countries could be thought of together in relation to their experiences of and attitudes towards surveillance (contrasted perhaps to ‘anglo-saxon’ or ‘continental’ models and traditions). Looking at the discussion, the answer could well have been... maybe, sort-of, but in a very careful and contingent way that recognises the significant differences even within that grouping.

The next panel session had a presentation from Massimo Ragnedda on student attitudes towards Facebook in Italy. He was drawing upon a similar model of research by PVNETs, and also Christian Fuchs, to assess student knowledge about surveillance in society, and their practices and concerns about personal data. He found that students were much more worried about personal surveillance by their peers than by institutions or for marketing purposes. Massimo described this as ‘strong against the weak, and weak with the strong’, and identified what he saw as an underestimation of the use of data and targeted advertising. Jason Pridmore spoke about sign-on surveillance and transitions in consumer surveillance. This included the shift from knowing customers and relationship marketing, with web-based information platforms for companies, to social networking. Social sign-on includes logging in with Facebook, and includes the reliance on third parties to say who you are. This is a delegation (Jason was drawing upon Actor-Network Theory to make sense of this). Marketers have delegated data acquisition to social networks, while social sign-on creates active participation, towards the dream of a targeted market. This creates a particular personalised world at the intersection of social media and big data. Bence Sagvari gave a presentation on children’s online safety. He linked this to an ideology combining a desire for a risk-free environment, a notion of childhood innocence, and moral blindness. A culture of fear drives disproportionate reactions, combining the fear of the new, exaggerated media panics, and reverse socialisation. Monitoring software is constructed as the thing for responsible people to use; its use is binary (either used or not), with reduced room for negotiation, communication and development over time. He related this to trust, suggesting that trusting leads to trustworthiness over time. He looked at other alternative strategies in use for managing safety online, including co-use, active mediation, restrictive mediation, monitoring, and technical restrictions. Across the EU, there was relatively low use of monitoring, but it was most widespread in the UK, Poland and Ireland. Bence identified a shift from a paradigm of self-regulation and strict state/governmental control towards a more flexible and faster co-regulatory regime.

After lunch, Evgenia Alexandropoulou and Maria Nikita spoke about the Greek regulatory framework for personal data protection, with specific attention to the way that this affects the placement of CCTV in various different locations in Greek society. The general European data processing requirements (consent of data subjects, appropriate security measures, notifying the DPA) all apply to video surveillance, except when it is used for security reasons. The framework seems much stronger than that in place in the UK, with some quite sensible distinctions between different types of place, but I was unable to find out how they got to this situation, or what the politics of it were. One particularly strong element is that the Greek constitution prohibits the use of ‘evidence’ collected by systems not in compliance with data protection (unless this is the only way you might prove your innocence of a crime). Eleni Crysopoulou spoke about surveillance and the investigation of organised crime, and Lilian Mitrou presented on naming and shaming in Greece as a form of social control. Public stigmatisation raises feelings of guilt and shame, and shaming practices impose psychological and social costs. She gave a short history of branding and similar practices of ‘spoiled identities’ as a mechanism for preserving social order. The growth of a strong central state and increased mobility shifted this model, but she believes we now see a rebirth of shaming, including as a tool of law enforcement. Anti-sex-offender legislation in 2007 was based around the concept of a right to informed living – the right to exercise informed choice about those you associate with. The previous criminal behaviour of a person is not counted as part of personal information, and doesn’t benefit from the protection associated with it. Now there is the naming of tax evaders, a comeback of the scarlet letter, as part of an acknowledgement of the supposed limits of traditional methods. Mitrou described this as ‘social control by the man next door’ and identified that there was no evidence of the usefulness or efficacy of shaming in dealing with crime. I wanted to make a distinction between public identification of criminals and public shaming (including mainly the direction of identification, the purposes of knowledge-release, and the forms that this takes).

The final session of the day introduced work by Andreas Pap on the practice and political philosophy of regulating public access to criminal data. This involves asking how technologies interact with constitutional requirements – for example, access to the courtroom. Is there a legal difference between physical presence and online access/broadcasts? What are the publicity expectations of testimony? Arguments for courtroom transparency include controlling the judicial process, courts as upholders of community norms, presenting the law in action, providing legitimacy for law and law enforcement, and judicial offices being public places. Arguments against include the invasion of privacy, identity theft, victims being afraid to report crimes, cameras intimidating jurors, and secondary victimisation. Andreas identified different tensions in different countries, including free speech vs fair trial in the US, privacy in a transparent democracy in Sweden, and democracy vs privacy in Hungary. In the US the general rule is that reporters are no different from any single individual member of the public. There are also businesses making money out of providing searches of court records on people, with significant regularity. In Sweden, criminal data is not available, but employers can ask individuals to get it on themselves and show it. In Hungary, privacy is the heritage of the dictatorship, but there are complications in that there are no official hate crimes in Hungary, because the police ‘don’t know’ minority status (as a protection). The transparency deficit is also a democratic deficit.
Heidi Mork Lomell gave a really interesting presentation on the role and impact of faulty statistics in surveillance policy debates. Drawing upon case studies from Norway, she looked at the way that controversial surveillance initiatives are legitimated with the help of persuasive but dubious statistics (that won’t go away). Proposals for open street CCTV in 1993 looked to the UK, and claimed 30-60% crime reduction effects. These claims were not challenged, but rather made it ‘almost impossible not to try [CCTV]’ in Oslo. After introduction, the success criteria changed from deterrence of crime to detection of crime. There is a shift from numbers to belief and conviction. The new numbers may not document any achievement, but police officers and security agents believe in them, despite what ‘researchers’ might claim. Later, sober and critical evaluations of CCTV from the UK didn’t have much effect on the debate. Similar effects appeared with the expansion of the DNA database (alongside looking towards the UK for initial statistical evidence). Success criteria shifted from detection rates to the number of registrants in the database, and how many hits there were when the database was used. Interpreters do not say that researchers are ‘wrong’, but rather that they still believe in the questioned practice. The persuasive power of numbers is used to bolster weak arguments and to cast doubt on opponents’ statistics. But nothing happens when numbers are proved wrong. Statistics prove what you already think. Further research requires a focus upon the preliminary stages: what numbers, what sources, and how are they used? How do politicians use research? The numbers allow the making of decisions without seeming to decide – not ‘we want this controversial thing’. These are ridiculous numbers that play a central political role, especially when linked to concepts of proportionality. Some of the reasons for this include a lack of mathematical competence, technology optimism, a lack of scepticism and critical reflection, and a state of emergency/necessary evil. I really enjoyed this presentation because I’d never really thought about numbers in relation to discourses of surveillance before; they’re an intimate part of legitimating and representing surveillance practices.

I was at the parallel Doctoral School session first thing on the morning of the third day of the conference, looking at the PhD work of Maria Murphy and Philip Shultz, so I missed a panel on surveillance and ethnography. When I rejoined the main conference stream, Rosamunde Van Brakel was presenting on using the concept of play to better understand some forms of surveillance and our interaction with it. Separating her work from playful representations of surveillance, she ran through a number of interesting projects and made a strong argument that further work on the relation between surveillance and play is necessary. Louise Norgaard Glud and Sofie Stenbog spoke about the work of Chinese artist and dissident Ai Weiwei, who makes use of surveillance as a device in his art but who has also been put under surveillance by the Chinese authorities. This presentation touched on issues of self-surveillance and empowerment, the multiple audiences for surveillance, and the way that the camera (and presumably other surveillance technologies) can act as a sign of surveillance as much as a technology of it. Susanne Wigorts Yngvesson spoke about perception, surveillance, logics of seeing, interpretation and various other aspects of the theory of vision, particularly drawing upon the work of Merleau-Ponty. Finally in this session, Kat Hadjimatheou spoke about the research the Detecter project at Birmingham had done with counter-terrorist professionals and their perceptions of the practical and ethical factors in their use of surveillance. This included the perception of technology as a double-edged sword: it both enabled and could protect against invasions of privacy, could reduce the effectiveness of oversight and make legal regulation obsolete, could both increase and reduce trust in CT practitioners, and allowed the maintenance of ‘back doors’ useful to police but at the same time a source of security breaches. Technology was represented by one participant as ‘the only alternative to repression’. Police officers felt prevented from doing ‘normal’ technological things, for example using mobile phones to send MMS to each other. There was also concern about a ‘CSI effect’, in which high public expectations of police technologies were not met, with the concern that if things were not recorded, they were not happening. Security practitioners worried about collateral damage in terms of the non-suspected caught up in surveillance operations, but not too much about false positives. In terms of what they thought about ethics per se, Kat reflected that ethics primarily meant proportionality, which meant The Law. In the questions following, Pete Fussey drew parallels with the police studies literature, and the sense of beleaguerment running through cop culture.

After lunch, Jerome Ferret spoke about policing in the risk society and panoptical violence. This drew upon the sociology of the state, something he felt was underrepresented in Anglo-American sociology but more present in the French tradition. Two points Jerome highlighted were the difference between terrorism policing and risk policing, and the practice of symbolic distrust of the police by politicians, which he interpreted as the state saying to its ‘troops’, ‘you are not working well, I’ll turn to the private sector’. Francisco Klauser spoke about the surveillant management of space at sporting mega-events, using the Euro 2008 football tournament as a case study. Francisco drew upon Foucault’s work on security, territory and population to talk about space as a mediator of power, not just in fixed, isolated spaces such as the panopticon, but also in flows through open space. With the interrelation of terrorism and mobility systems, the challenge is how to secure control without breaking those mobility systems. There is a temporally and spatially dynamic patchwork of access and passage control points, monitoring, restricting and filtering, but also facilitating and speeding up, different forms of movement. I was also in this session, giving my own talk on aerial photographic reconnaissance during the two world wars, and what surveillance studies could draw from military history, which I’ll write about separately.

Picking between two parallel sessions, I attended a talk by Ian Tucker on visibility in new media, and the constant engagement and informational interaction it involves. Ian’s approach is social psychological, with a conception of subjectivity as fluid, transformed, and produced through relational processes. Ian was interested in the relationships between power and affect, and how affects combine. This might include the way that we use technology and new media for our own benefit, to enhance our capacities to act, while these technologies may also limit other capacities over time. All new media technology is affective – this engages with its ability to alter our ‘power to act’, but also retains a non-deterministic notion of the human. Foucault’s ‘care of the self’ suggests something strategic, but it really isn’t: there’s too much information. Ian suggested we’re fundamentally still learning how to live surveillantly. Darren Ellis took a dive into the literature on trust, both emotional and psycho-social, in relation to citizens’ perceptions. In relation to privacy and technology, he found a relatively unsophisticated understanding of trust, linked to polls and surveys, and not accounting for trust dynamics. Darren wanted to break down the opposition of trust and distrust on a single continuum, and also challenge how certain levels of trust were interpreted as distrust. Similarly, trust is often thought of as good, something we should do, with distrust bad, a psychological disorder (and this certainly has a politics). He drew upon Luhmann to suggest that trust and distrust are potentially coexistent, and are both mechanisms for managing information, uncertainty and complexity. It could potentially be dangerous to increase one without the other (vulnerability and paranoia). Impersonal trust in the functioning of a system requires something of a leap of faith and a suspension of doubt, in which we accept assurances or look for further safeguards. The question is how to do this in surveillance contexts. Turning to Giddens, Darren looked at the way trust in abstract systems is achieved through ‘access points’ where facework and impression management occur, and the suspension of doubt is managed. Surveillance systems often have very restricted access points, and continue to remain faceless. This leaves a gap in how to negotiate distrust. David Harper gave a presentation on conspiracy and urban myths in relation to surveillance, looking at rumour, contemporary legends and the public understanding of the use of personal information. He drew upon the literature on folklore and contemporary legends, showing several urban myths about surveillance, but asked why there were not more conspiracy stories about surveillance, and suggested that this was actually due to an absence of information. He understood conspiracy stories as a form of social epistemology – a collective attempt to solve problems. The origin of urban myths was strongly linked to media portrayals, and linked into classic fears about cameras and screens that were very culturally available (he even showed a screenshot from the 1990s gameshow Noel’s House Party to demonstrate this). Urban myths are strongly socially stratified, and make use of strong arguments from analogy, corroboration and invitations to empirically verify them. Part of this area of cultural and social engagement with the knowledge of surveillance is managing our own labelling as paranoid – rhetorical inoculation against later hostile accusations.
Urban myths allow us to be seen to be in the know, possessed of a counter knowledge, and resisting authority.  Listening to this presentation, I honestly wondered how much of this we do in surveillance studies. 

Concluding Thoughts:
I came away from the LISS-COST conference with a few ideas in my head, which probably say as much about the way I listened and interacted as they do about the topics people wanted to talk about.

1) The state – The conference theme was The State of Surveillance, attempting to capture both potential meanings – the current state of surveillance, but also the role and relation of the state in surveillance (often thought of in terms of surveillance societies). I’d hoped for some discussion of this and found a bit. I think it’s one of those areas that slightly complicates the interdisciplinary interaction that typifies surveillance studies. There’s a strong sociological tendency which, as Jerome Ferret described, often puts the state off to one side, thinking of it as a single entity separate from society, but acting upon it. There are research traditions which tend to focus their attention upon the subjects of surveillance, out of a genuinely well-placed concern for the impact of surveillance upon people. I attempted to do some of my own thinking about the state and surveillance in ‘Surveillance and Identity’, where the state forms part of the subtitle, and is engaged with through governmentality theory. There was good representation at this conference by political scientists of various types (which is often a mix in itself) and lawyers, who tend to engage with the state in more detail, as a result of the history and traditional focus of their respective fields. Kevin Macnish’s work on authority (and what I see as the inevitable step backwards towards the legitimacy of institutions) also points towards one way that surveillance studies needs to engage with the state – through the role of the various institutions that comprise the state in performing legitimate social functions.

2) Discourses of surveillance – another regular interest of mine, that came to mind a couple of times during the conference, particularly in technologies as signs of surveillance, and the role of numbers and statistics in justificatory discourses of surveillance. 

3) Institutional learning and knowledge formation – This came up a few times too – the processes and practices through which institutions (the state, the police, the military and others) make sense of the world and come to believe certain things as truth. Academia is probably implicated in this in some way, but I’m interested in the penumbra of institutional research (it’d be ‘operational research’ in a military context) and especially its relation to security politics. Examples at the conference included Pete Fussey’s work on UK counter-terrorism, and the way that several key case studies and the ‘lessons learned’ from them shaped future CT policy and strategy. I suspect this is part of any coherent understanding of contemporary governmentality (ways of seeing and making sense of the world). But I also suspect that governmental discourses play some role in which cases are included for examination, and what lessons are drawn from them.
