Bernd Stahl from the Centre for Computing and Social Responsibility at De Montfort University gave an introduction to responsible innovation. For Stahl, the core of this was communication with stakeholders (scientists, funders, policy makers, civil society) about the consequences of innovation. The issue was important now because of the uncertainty of the future, the severity and importance of outcomes, globalisation, and the fragmentation of authority. Important principles were those of ethics, human rights and democracy. Responsible innovation had two mechanisms, or points of contact: Product and Process. There was also discussion about whether responsibility was for or in ICT. Science was seen as 'systematically irresponsible', with a need to broaden the conversation and encourage reflexive thinking in design and in the education and training of computer scientists, including, for example, value sensitive design and building reflexive space into research proposals. Also important were the choices to be made in the application of technology (which too often rolls from the military world to the consumer world).
Derek McAuley from the Horizon lab at the University of Nottingham gave a talk on Personal Information Repositories, an attempt to create a way for people to control their own data and make informed decisions about the ways and contexts in which they would disclose that information to companies or other requestors. He alluded to an important distinction between information we 'give' and information we 'give off', and also spoke about how granular data about their lives could be useful to individuals, but could also be very sensitive. I didn't know this previously, but I also learnt that you can now get Mosaic geo-demographic software on the iPhone. The model presented involved the individual holding data themselves, with companies asking if they could run a supplied algorithm on it and have the result returned (if the individual agreed). McAuley stated that informed consent has failed on the internet, and probably everywhere, and that because of this other models (such as consumer protection) were needed.
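To make that model a little more concrete, here is a minimal sketch in Python of how "the algorithm comes to the data" might work: the raw records stay inside the individual's repository, a requester submits a query function, and only the computed result is released, and only with the individual's approval. All the names here (PersonalDataStore, average_weekly_spend, and so on) are hypothetical illustrations, not Horizon's actual design.

```python
from typing import Any, Callable

class PersonalDataStore:
    """The individual's repository: raw data never leaves this object."""

    def __init__(self, records: list[dict]):
        self._records = records

    def run_query(self, algorithm: Callable[[list[dict]], Any],
                  approved: bool) -> Any | None:
        """Run a requester-supplied algorithm locally; release only the
        result, and only if the individual has approved this request."""
        if not approved:
            return None
        return algorithm(self._records)

# A company-supplied algorithm: it sees the records only inside the store.
def average_weekly_spend(records: list[dict]) -> float:
    spends = [r["spend"] for r in records]
    return sum(spends) / len(spends)

store = PersonalDataStore([{"week": 1, "spend": 42.0},
                           {"week": 2, "spend": 58.0}])

# The individual decides, per request, whether the result is disclosed.
print(store.run_query(average_weekly_spend, approved=True))   # 50.0
print(store.run_query(average_weekly_spend, approved=False))  # None
```

The point of the design is that the company learns the aggregate answer (average spend) without ever receiving the underlying week-by-week records.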
In one of the group discussion sessions (which were generally informative and stimulating for me), one of the groups came up with a very interesting set of questions, initially applied to the Personal Information Repositories, but applicable elsewhere.
What things are being assumed about a technology in design?
- about predictability and control
- about users (who? how? diversity or discrimination?)
- about usability (is the user bombarded with requests or information overload?)
- about trustworthiness
- about ownership (of algorithms, of the system)
- about what the system is (a computer system or a socio-technical system)
- about the desired future (what futures are supported or closed down?)
- about purposes and benefits
- about flexibility (can we change the system if it plays out in a way we don't like?)
- about responsibility (litigation, personal, co-responsibility)
- about the problem?
Marina Jirotka from the e-Research Centre at Oxford spoke about Informed Consent. The main model came from biomedical research, as the main mechanism for protecting individuals from the power of medical researchers, becoming codified as part of the Nuremberg process after the second world war. The applicability of this biomedical model to other areas of research has been critiqued, especially in the social sciences, which may have a different research tradition, but also a different (although not non-existent) set of power dynamics between researchers and participants. Social science might feature a lower order of risks, different processes where it is not possible or even feasible to predict all risks, and be more open-ended and uncertain. Biomedical research is often one-way and paternalistic, whereas the social sciences can involve ongoing and negotiated consent to participation. Social science also privileges engagement with and movement towards dilemmas, rather than trying to anticipate and remove them all in advance. The biomedical model has also been criticised as problematic in relation to massive data sets, where getting consent from every individual featured in the data set would be impossible. The purpose of valid consent and ethics review boards is to be mechanisms of protection, to fulfil the imperative to do no harm; however, even the exact meaning of 'harm' can be uncertain. Jirotka felt it was important to avoid encouraging a tick-box approach to informed consent.
In our following group discussion about this, we played around with a set of alternatives (or supplements) to the role of informed consent as the central pillar of research ethics. These included some form of co-design (involving participants in the earlier problem-identifying stages, allowing for failure of a research project, and granting the status of partner rather than participant, even if this meant partners carrying responsibility for ethical behaviour too), cultural norms and expectations which would be visible and accessible to non-professionals, and mechanisms for challenging research projects on ethical grounds (both internally and externally).