Tuesday 10 April 2012

'Watch This Space' Surveillance Studies Network conference, Day 2

Day two of the conference saw more parallel sessions, and it looks like I spent more of the time in the 'Internet Theme'. The first panel was Lucas Melgaco (VUB), William Chivers and myself. Lucas spoke about surveillance in educational spaces, both in terms of security-motivated surveillance and in terms of the surveillance potential built into virtual learning aids such as Moodle or Blackboard. His overriding theoretical framework looked at surveillance in terms of rationalisation, counter-rationality and rationalism. Rationality was the focus upon predictability, calculability, efficiency and control. Rationalism was the reduction of reality to perfect rationality, with a focus upon cause-and-effect relations and no complexity - often with a focus on fixing one problem or deficit to the exclusion of others. I'd question how we distinguish rationalisation from other attempts to effect change, but I'd be happy to understand rationalisation as a particular strategy or tactic of governance. Secondly, the concept of rationalism would certainly fit as a form of ideology (in the sense of a reduction of contingency). I'd suggest caution about speaking about general (global) processes of rationalisation, and rather go for specific processes in specific contexts first.

Wil Chivers (Cardiff) is working on conceptualising resistance to surveillance combined with digital research methodologies. He spoke about WikiLeaks as an example of information politics, speaking truth to power, and with an awareness of surveillance (spy files, global intelligence files etc). Wil's argument was that resistance is a networked behaviour, and that whilst the individual level is important (for example in the motivation of WikiLeaks' sources to leak material) networks are pivotal, and it is networks that are amplified by the internet. Wil gave a few examples of the way that social network analysis (using software such as NodeXL) can be used to examine links on Twitter around groups such as No2ID. I think there's some potential for conducting social movement and political communication research using tools like this, and it's part of the reason that I'm interested in data visualisation. I'd also be concerned about the ways that these visualisations exclude important types of information, and about the idiosyncrasies of the way that we learn to read these computer-generated network maps (worst case scenario, we only read what we already know, or we forget that generating a graph is not the end of the research process). I'd assume that the best way to make use of this would be as part of a multi-method, triangulated research design.
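To give a flavour of what that kind of analysis involves (a hedged sketch only, in Python with the networkx library rather than NodeXL itself, and with invented accounts and tweets rather than any real dataset), a mention network can be built and drawn in a few lines:

```python
# Minimal sketch of a Twitter mention network, in the spirit of the NodeXL-style
# analysis described above. The tweets below are invented placeholders; a real
# study would pull them from the Twitter API or an existing archive.
import re

import matplotlib.pyplot as plt
import networkx as nx

tweets = [
    {"user": "campaigner_a", "text": "Join the #no2id demo on Saturday @civil_liberties_org"},
    {"user": "civil_liberties_org", "text": "RT @campaigner_a: Join the #no2id demo on Saturday"},
    {"user": "local_blogger", "text": "Interesting thread from @campaigner_a on ID cards #no2id"},
]

G = nx.DiGraph()
for tweet in tweets:
    author = tweet["user"]
    G.add_node(author)
    # Add an edge from the author to each account they mention.
    for mentioned in re.findall(r"@(\w+)", tweet["text"]):
        G.add_edge(author, mentioned)

# A simple centrality measure: who is most mentioned within this sample?
print(nx.in_degree_centrality(G))

# Draw the network map. The caveat above stands: reading the resulting
# picture is the start of the analysis, not the end of it.
nx.draw_networkx(G, with_labels=True, node_color="lightgrey")
plt.show()
```

Even a toy example like this makes the earlier caveat visible: the graph only contains the kinds of ties you chose to extract, and everything else drops out of the picture.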

I was speaking at this conference about the games design for privacy education research that the VOME project has been working on, talking about the reasons we adopted games as an alternative method of communication and a way of allowing people to interact with privacy and online personal information issues, as well as about the theoretical framework and the research process. More information here. There was quite a lot of interest from the audience (including some interest in translating the game into other languages and in getting hold of copies of the game to play with students). I hadn't ever thought about translation possibilities before, although I don't think it would be too hard to do. My concern would be that the game was based upon research conducted in the UK and built accordingly. I would be interested in seeing the reaction to a straight translation in other countries though, and I'd guess that some issues would resonate more strongly than others. I suspect that if one wanted to adjust the game to make it more local then it would be the event deck (where privacy events culled from the news over the last year or so crop up to mess up player plans and introduce some randomness into the game) that would need alteration. The game is Creative Commons licensed though, so I'm very happy to see what people do with it.

After the break Val Steeves and Jane Bailey gave a very engaging presentation on 'doing "girl" online', their research into the online presentation of gender. Using the publicly available Facebook profiles of Canadian late teens and young women, they produced a 'composite' Facebook profile for 'Tiffany'. Tiffany was a very stereotypical sociable young woman, expressing herself through socialisation and her relationship with a boyfriend, and using a familiar range of photographs, including swimwear and 'duckface' photos. Steeves and Bailey used this composite in interviews with young women in the same demographic. Their finding was that whilst young women did not want to 'be' Tiffany, they found they had to continually negotiate between their own online identity and that of the stereotype, to make space to be something different. The degree of 'openness' of the profile was part of this, with a large number of friends or an open profile being associated with youth, and then later on with the potential for slut-shaming. Participants had a sense of Facebook as a commoditised space in which (at least in part) they were the commodity - pictures (including the potentially risqué) were important in selling a product. In deciding what to depict, they drew upon media representations of women. Participants were also conscious of putting others under surveillance, with two forms of stalking - the 'creepy' and the normal, everyday, everybody-does-it variety. Steeves and Bailey found serious social implications to exercising privacy and sharing controls, and that the openness that results is put to the back of the mind, with the social network being actively imagined as a smaller number of people. The exposure that gives status when young is costly and damaging when older, and young women considered the relationship management activity to be heavily gendered. They conjectured that more of girls' social interactions might be captured by Facebook because it is a social, communicative medium - they suggested that boys deal with their disagreements and conflicts offline. Liisa Makinen spoke about webcam surveillance, involving themes of participation, membership, and the uncritical acceptance of self-installed camera surveillance.

In the same panel, Jennifer Whitson presented her paper on the relationship between surveillance and games, particularly the concept of 'gamification' growing in the marketing and business literature. The idea behind gamification is that it takes mechanisms and models from games and applies them to other contexts to take advantage of the fun and engagement that games can generate. Jen described this as the 'Mary Poppins' effect. She was rather critical of the gamification movement, which she saw as largely taking things that were tangential to games (scoreboards, points, achievement badges) and using those to quantify achievement and success within institutions, with clear parallels with workplace and educational surveillance practices. I was minded to think about the use of such games and practices in terms of behaviour change and self- or other-directed limitations. A service such as 750words, which aims to help you write more, is a self-chosen goal (albeit with some hidden underpinnings in code and architecture, as well as the inevitable sell-user-data-to-others social media business model), whereas being forced to play a 'game' at work, through which decisions about your employment status, remuneration etc will be evaluated, is clearly other-directed. There's obviously a step behind this, which is 'why do you want or need to write so many words, and why do you want to look like Tiffany?'. I'm very sympathetic to Jennifer's perspective on gamification, which might seem a little odd given that I've been working on game design recently, but I can definitely see the difference between a game with a purpose and gamification. One of the primary differences is that the privacy game is explicitly intended to encourage discussion over the value and architecture of privacy, not just to reinforce and assess it.
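To make the critique concrete: stripped of their framing, the mechanics in question are often little more than counters, thresholds and badges attached to a log of activity. A minimal, entirely hypothetical sketch of such a 'game layer' (not based on 750words' or any other service's actual implementation) might look like this:

```python
# Hypothetical sketch of a gamification layer: nothing game-like underneath,
# just metrics, thresholds and badges attached to a log of activity.
from dataclasses import dataclass, field


@dataclass
class WritingTracker:
    daily_goal: int = 750                 # words per day, in the 750words spirit
    points: int = 0
    streak: int = 0
    badges: list = field(default_factory=list)

    def log_day(self, words_written: int) -> None:
        """Record a day's writing and update the 'game' state."""
        self.points += words_written      # raw activity becomes a score
        if words_written >= self.daily_goal:
            self.streak += 1
        else:
            self.streak = 0               # miss a day, lose the streak
        if self.streak >= 30 and "30-day streak" not in self.badges:
            self.badges.append("30-day streak")


tracker = WritingTracker()
tracker.log_day(820)
print(tracker.points, tracker.streak, tracker.badges)
```

The same counters could just as easily be reported to an employer as to the writer, which is where the self-/other-directed distinction above starts to bite.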

Minas Samatas and Mike Zajko shared a panel on telecommunications. Minas spoke about interception scandals in capitalist democracies, suggesting that media scandals only occur when established interests are invoked or challenged (for example when a newspaper hacks into the voicemail of a politician or an actor), rather than around the 'real scandal' of the whole telecommunications industry and information capitalism. He spoke about the powerful and celebrities in this context, but I think it's worth broadening that out into some account of the symbolic capital necessary, in a media environment, to effect public debate around an issue. Minas used a phrase - I'm not sure if it's a 'theory' or not - 'Security Capitalism', which seems particularly evocative and potentially useful. That said, Minas spoke about the dual primary motives of security and profit, when these motives can occasionally be strongly in tension. Another thought arose from Minas repeatedly saying 'there is no privacy'. Given the various countries with a 'reasonable expectation of privacy' element to their law, that's a problematic statement for a field to go around making. I think I'd prefer the phrase 'there is surveillance' - it keeps the mechanical/technical dimension that the speaker is aiming for, and has a normative dimension, but it doesn't reject privacy as a potential legal or social mechanism for responding to that surveillance.

Mike Zajko was putting together a theoretical framework around responsibilisation and governmentality, including the extension of state control through intermediaries. He identified two forms, state-directed and state-leveraged. The first is the unfolding of the state into civil society, the other the enfolding of civil society into the formal political sphere. He's trying to capture conventional governance alongside the type of influence that media content companies have exercised over copyright reforms, where the state has been effectively captured by these industries and used to responsibilise other private actors (ISPs, YouTube etc). I did wonder if states have ever really been able to govern without private actors (the examples of the British East India Company or the Hudson's Bay Company come to mind). I also wondered if all areas of the state were equally liable to capture by external interests (I'm guessing not, and that this would be an empirical question) or if there were different forms of capture operating in different areas. For example, defence might seem less liable to capture in general, but might be more vulnerable to regulatory capture by specific parties. There's likely a question of technological and knowledge asymmetry in this politics. Mike's work is reminiscent of the book I'm currently reading, Imagining Security by Jennifer Wood and Clifford Shearing, which looks at nodal governance in security and policing.

In the final panel session of the day Marie Griffiths and Maria Kutar (University of Salford) spoke about their ongoing research on the Day in the Digital Life project, an attempt to understand exposure to surveillance over a 24-hour period. It's an ambitious project, and they acknowledge the difficulty, especially in capturing those elements of the digital footprint that occur far from the subject (for example in corporate or government databases). The project seems primarily exploratory - how might we do this? Is it possible? What are the biases and systematic exclusions? They're generating loads of data (in the GB/TB sense) and managing it is a problem. For me this makes the research almost agit-prop - in demonstrating the difficulties that even a geared-up, dedicated research team has in understanding the extent of an individual's data profile, they demonstrate as a fiction the idea that a normal individual, also living their own life, could understand their data profile in a systematic way, let alone 'manage' it. They're also looking at ways to visualise or illustrate the data they acquire. I asked a question about the concept of digital identity and the focus upon the 'digital', when various important components of one's surveillant identity (I didn't use the term in the question) are paper-based rather than digital (even when they have digital components).

Following this, David Philips spoke about his research on the Quantified Self movement - these are people who use all manner of data sources and measuring devices in order to better understand themselves. This brought up topics of accessible, democratic surveillance infrastructures, and of surveillance as a technique of knowledge production. David's theoretical position draws from surveillance, queer theory and infrastructures. His talk gave an interesting overview of the practices and tools, purposes and goals, and institutionalisation of the QS movement. The various technologies are used for a range of purposes around self-reflection, sense-making, goal-reaching, self-knowledge, auto-ethnography and self-improvement. David's take on the motivations behind this drew attention to the goal of being a healthy, energetic, productive member of an entrepreneurial economic order (see the parallel between this and gamification, and the pursuit of even seemingly self-directed goals?). There was a preponderance of apolitical yet strongly normative endeavours, in a 'geeky and nice' way. Externally, David showed how even this self-directed surveillance ties into more complex external practices, for example the intense interest of the healthcare industry in the data produced by QS enthusiasts, and the relationship between the service providers and data servers, including hacking the technology. QS seems to fit into a long philosophical tradition of knowing oneself as a positive goal, but it is very interesting how that fits in with a broader politics. I'm not sure how many of my own personal illusions I'd want shattered though!

The closing plenary of the conference was a talk by Kevin Haggerty (co-written with Dan Trottier) on Surveillance and/of Nature - monitoring beyond the human. This was interesting, and an attempt to outline the scope of an area we tend to miss in surveillance studies - the surveillance of the non-human. I can see why this is important, but I can also see why I personally tend towards surveillance of the human - it's a disciplinary and professional, political/social thing for me - that's what I'm drawn to research first. From the perspective of a broader field it's important though, especially for theoretical completeness. The early part of the talk situated the rest, and also acknowledged the constructed and contested concept of 'nature', which reminded me of the classes on ecological politics I took with Mathew Humphrey at Nottingham. For Haggerty, 'nature' is culturally important because it sits on one side of a whole load of (unstable) dichotomies (culture, science, society, technology) and its characterisation has important material consequences. Kevin drew out four areas where surveillance intersects with nature. The first of these was the area of learning, dominating and conserving, often associated with science, where visibility regimes and new ways of seeing are part of the process. Part of these processes is making nature/animals/environments more amenable to governance, but also implicated is the relationship between science and entertainment that finds its expression in the nature documentary. There are drives for knowledge for conservation, but as part of this, an extension of 'man's domination over nature' and the maintenance of animal populations etc at ideal thresholds for humans. Secondly, there are animal agents of surveillance, where the gaze of animals is instrumentalised and incorporated into different governmental agendas. Thirdly, there is the directing of, and capitalising upon, sensing abilities that animals have developed and humans have not, generally by training an animal to signal when it senses something, or by learning animal signals. Amber Marks has written about this sort of activity in her book 'Headspace'. The final intersection is biomimicry, the growing area (or idea) in science of drawing examples from nature. The literature here often has an environmental, progressive tone, but heavy military-industrial involvement. There are also potentials in biomimicry for resistance to surveillance, drawing upon camouflage and crypsis techniques. Haggerty concluded that surveillance of/and nature should be on our agenda, perhaps prompting a change in our definitions of surveillance to include things that other disciplines would happily call surveillance, and pushing further back into the social construction of nature and technology - by looking at the ideational phases of technology development - which I took to mean the ways that problems, and concepts of what is necessary or desirable, are developed.



Overall, an interesting and useful conference. I'm left thinking about surveillance in general and the next steps for the privacy games project, but also about the politics of surveillance in terms of self- or other-directed activities and structures, although this is clearly a contingent distinction that will definitely break down in many places.

1 comment:

  1. Thanks for the thorough summary and the tip on "Imagining Security" - will check it out.
    As for telecom governance, I did write a brief section explaining that these sorts of state-private arrangements are nothing new (although it appears that it wasn't until the early 20th century that the value of these networks for intelligence gathering/interception was widely recognized). In the 19th century, states recognized the value of private companies for gaining access to other nations where an explicitly foreign state-run telecom would have been blocked.

    -Mike Zajko
