Tuesday 13 December 2011

Responsible Innovation in ICT

Last week I attended an afternoon event at the Oxford e-Research Centre - the first wider network meeting for the Framework for Responsible Research and Innovation in ICT (FRRIICT). This is an EPSRC-funded research project into how the ICT research community views and enacts responsibility. Part of the project will be to fund a series of case studies into responsible ICT innovation and store them within an 'observatory', making them available to others and building a growing resource on responsible innovation. The process was driven by the EPSRC's perceived need for stronger research ethics guidelines (akin to those of the other funding councils), given that funded engineering and physical science projects are increasingly interdisciplinary and involve doing research with people and communities. The FRRIICT project will therefore feed directly into future research ethics guidelines.

Bernd Stahl from the Centre for Computing and Social Responsibility at De Montfort University gave an introduction to responsible innovation. The core of this, for Stahl, was communication with stakeholders (scientists, funders, policy makers, civil society) and the consequences of innovation. The issue is pressing now because of the uncertainty of the future, the severity and importance of outcomes, globalisation, and the fragmentation of authority. The important principles were those of ethics, human rights and democracy. Responsible innovation had two mechanisms, or points of contact: product and process. There was also discussion about whether responsibility was for or in ICT. Science was seen as 'systematically irresponsible', with a need to broaden the conversation and encourage reflexive thinking in design and in the education and training of computer scientists - including, for example, value sensitive design and building reflexive space into research proposals. Also important were the choices made in the application of technology, which too often rolls straight from the military world to the consumer world.

Derek McAuley from the Horizon lab at the University of Nottingham gave a talk on Personal Information Repositories, an attempt to give people a way to control their own data and make informed decisions about how, and in what contexts, they would disclose it to companies or other requesters. He alluded to an important distinction between information we 'give' and information we 'give off', and noted that granular data about their lives could be useful to individuals but could also be very sensitive. I also learnt that you can now get Mosaic geo-demographic software on the iPhone. The model presented involved the individual holding the data themselves, with companies asking permission to run a supplied algorithm over it and receive only the result (if the individual agreed). McAuley argued that informed consent has failed on the internet, and probably everywhere, and that other models (such as consumer protection) were therefore needed.
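To make that model concrete for myself, here is a rough Python sketch of how such a repository might work. The names and interface are entirely my own invention for illustration, not anything from the actual Horizon work:

```python
# A minimal sketch of the Personal Information Repository model as I
# understood it: raw data never leaves the individual's store; a
# requester submits a query function, the owner approves or refuses,
# and only the computed result is returned.

from typing import Any, Callable


class PersonalInformationRepository:
    """Holds an individual's data locally and mediates all access."""

    def __init__(self, data: list[dict]):
        self._data = data  # raw records stay on the individual's device

    def handle_request(self, requester: str,
                       algorithm: Callable[[list[dict]], Any],
                       consent: Callable[[str], bool]) -> Any:
        # The owner decides per request; nothing runs without approval.
        if not consent(requester):
            return None
        # The supplied algorithm runs here, inside the repository;
        # only its result crosses the boundary, never self._data.
        return algorithm(self._data)


# Example: a supermarket asks for an average weekly spend, rather
# than the itemised purchase history itself.
repo = PersonalInformationRepository([
    {"week": 1, "spend": 42.50},
    {"week": 2, "spend": 38.20},
])

average_spend = repo.handle_request(
    requester="supermarket",
    algorithm=lambda records: sum(r["spend"] for r in records) / len(records),
    consent=lambda who: who == "supermarket",  # stand-in for a real prompt
)
print(average_spend)  # 40.35
```

The point of the design is that the raw records never cross the boundary; the requester only ever sees the output of a computation the individual has approved.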

In one of the group discussion sessions (which I generally found informative and stimulating), one of the groups came up with a very interesting set of questions, initially applied to Personal Information Repositories but applicable elsewhere.

What things are being assumed about a technology in design?
  1. about predictability and control
  2. about users (who? how? diversity or discrimination?)
  3. about usability (is the user bombarded with requests or overloaded with information?)
  4. about trustworthiness
  5. about ownership (of algorithms, of the system)
  6. about what the system is (a computer system or a socio-technical system)
  7. about the desired future (what futures are supported or closed down?)
  8. about purposes and benefits
  9. about flexibility (can we change the system if it plays out in a way we don't like?)
  10. about responsibility (litigation, personal, co-responsibility)
  11. about the problem itself
I found this list really powerful, especially the idea of imagined futures.

Marina Jirotka from the e-Research Centre at Oxford spoke about Informed Consent. The main model came from biomedical research, as the main mechanism for protecting individuals from the power of medical researchers, and became codified as part of the Nuremberg process after the Second World War. The applicability of this biomedical model to other areas of research has been critiqued - especially in the social sciences, which have a different research tradition, but also a different (although not non-existent) set of power dynamics between researchers and participants. Social science might feature a lower order of risks and different processes where it is not possible or feasible to predict all risks, and be more open-ended and uncertain. Biomedical research is often one-way and paternalistic, whereas the social sciences can involve ongoing and negotiated consent to participation. Social science also privileges engaging with and moving towards dilemmas, rather than trying to anticipate and remove them all in advance. The biomedical model has also been criticised as problematic in relation to massive data sets, where getting consent from every individual featured in the data set would be impossible. The purpose of valid consent and ethics review boards is to provide mechanisms of protection, fulfilling the imperative to do no harm - though even the exact meaning of that imperative can be uncertain. Jirotka felt it was important to avoid encouraging a tick-box approach to informed consent.

In our following group discussion, we played around with a set of alternatives (or supplements) to informed consent as the central pillar of research ethics. These included: some form of co-design (involving people at the earlier, problem-identifying stages, allowing for the failure of a research project, and granting the status of partner rather than participant, even if this meant partners carrying responsibility for ethical behaviour too); cultural norms and expectations that would be visible and accessible to non-professionals; and mechanisms for challenging research projects on ethical grounds (both internally and externally).

Friday 9 December 2011

This is not a Cyber War

I've a new article out in the first issue of the new journal The International Journal of Cyber Warfare and Terrorism. The article is called 'This is not a Cyber War, it's a...?: WikiLeaks, Anonymous and the Politics of Hegemony' and is available here.


What I wanted to do with this article was apply securitization theory from International Relations, Gramsci's theory of hegemony, and Laclau's concept of democratic demands to the back-and-forth hacktivism and contestation between WikiLeaks, Anonymous, and various banks, financial services and governments in the wake of the diplomatic cable releases. The paper follows on theoretically from the forthcoming Securing Virtual Space article, which is coming out in Space and Culture in the new year, though it should hopefully stand on its own. It is a response to some of the more alarmist accounts of online conflict.