Sunday, September 1, 2019

Data science ethical considerations: a systematic literature review and proposed project framework

Abstract

Data science, and the related field of big data, is an emerging discipline involving the analysis of data to solve problems and develop insights. This rapidly growing domain promises many benefits to both consumers and businesses. However, the use of big data analytics can also introduce ethical concerns, stemming, for example, from the possible loss of privacy or the harming of a sub-category of the population via a classification algorithm. To help address these potential ethical challenges, this paper maps and describes the main ethical themes identified via a systematic literature review. It then identifies a possible structure for integrating these themes within a data science project, thus helping to structure the ongoing debate about the ethical situations that can arise when using data science analytics.
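To make concrete what integrating such themes into a project lifecycle could look like, here is a minimal sketch in Python. The phase names (borrowed from CRISP-DM) and the theme-to-phase assignments are illustrative assumptions, not the framework actually proposed in the paper.

```python
# Hypothetical sketch: attaching ethical review items to the phases of a
# data science project. Phase names follow CRISP-DM; the theme-to-phase
# mapping is an illustrative assumption, not the paper's actual framework.

ETHICAL_CHECKPOINTS = {
    "business_understanding": ["Is the project's purpose itself ethically acceptable?"],
    "data_understanding": ["Was the data collected with valid consent?",
                           "Does the data expose identifiable individuals (privacy)?"],
    "data_preparation": ["Could sampling or cleaning choices encode bias?"],
    "modeling": ["Could the classifier systematically harm a sub-category of the population?"],
    "evaluation": ["Are error rates balanced across affected groups?"],
    "deployment": ["Is there recourse for people affected by automated decisions?"],
}

def review_phase(phase: str) -> None:
    """Print the ethical questions to sign off before leaving a phase."""
    for question in ETHICAL_CHECKPOINTS.get(phase, []):
        print(f"[{phase}] {question}")

review_phase("modeling")
```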

Objections to Simpson’s argument in ‘Robots, Trust and War’

Abstract

In “Robots, Trust and War” Simpson claims that victory in counter-insurgency conflicts requires military forces and their governing body to win the ‘hearts and minds’ of civilians. Consequently, forces made up primarily of autonomous robots would be ineffective in these conflicts for two reasons. Firstly, civilians cannot rationally trust such robots, since they cannot act from a motive based on good character; and if they ever did develop this capacity, the purpose of sending them to war in our stead would be lost, because there would be no moral saving. Secondly, if robot forces did offer a moral saving, this would signal that the deploying government could not be trusted to be committed to the conflict. I disagree with both claims. I argue, firstly, that there are less demanding grounds on which robot forces could be trusted sufficiently to be effective whilst still achieving a moral saving over the deployment of human ones; and secondly, that this moral saving would not necessarily signal that the deploying body lacked commitment, because its interpretation would be highly context-dependent. I conclude, therefore, contra Simpson, that robot forces could plausibly be effective in counter-insurgency engagements in the foreseeable future, and I suggest that there may be a case for developing a more finely grained understanding of the opportunities for, and challenges of, their use.

Ethical challenges of edtech, big data and personalized learning: twenty-first century student sorting and tracking

Abstract

With the increasing costs of providing education, concerns about financial responsibility, heightened consideration of accountability and results, and elevated awareness of the range of teacher skills and student learning styles and needs, more focus is being placed on the promises offered by online software and educational technology. One of the most heavily marketed, exciting and controversial applications of edtech involves the varied educational programs to which different students are exposed based on how big data applications have evaluated their likely learning profiles. Characterized most often as ‘personalized learning,’ these programs raise a number of ethical concerns, especially when used at the K-12 level. This paper analyzes the range of these ethical concerns, arguing that characterizing them under the general rubric of ‘privacy’ oversimplifies them and makes it too easy for advocates to dismiss or minimize them. Six distinct ethical concerns are identified: information privacy; anonymity; surveillance; autonomy; non-discrimination; and ownership of information. Particular attention is paid to whether personalized learning programs raise concerns similar to those raised about educational tracking in the 1950s. The paper closes with a discussion of three themes that are important to consider in ethical and policy discussions.

Reassessing values for emerging big data technologies: integrating design-based and application-based approaches

Abstract

Through the exponential growth in digital devices and computational capabilities, big data technologies are putting pressure on the boundaries of what can or cannot be considered acceptable from an ethical perspective. Much of the literature on ethical issues related to big data and big data technologies focuses on separate values such as privacy, human dignity, justice or autonomy. More holistic approaches, which allow a more comprehensive view and a better balancing of values, usually follow either a design-based approach, which seeks to embed values in the design of new technologies, or an application-based approach, which seeks to address the ways in which new technologies are used. Some integrated approaches do exist, but they are typically more general in nature. This offers a broad scope of application, but may not always be tailored to the specific nature of the ethical issues big data raises. In this paper we distil a comprehensive set of ethical values from existing design-based and application-based ethical approaches for new technologies and focus these values on the context of emerging big data technologies. Four value lists (techno-moral values, value-sensitive design, anticipatory emerging technology ethics and biomedical ethics) were selected for this purpose. The integrated list consists of ten values: human welfare, autonomy, non-maleficence, justice, accountability, trustworthiness, privacy, dignity, solidarity and environmental welfare. Together, this set of values provides a comprehensive and in-depth overview of the values to be taken into account for emerging big data technologies.

Digital health fiduciaries: protecting user privacy when sharing health data

Abstract

Wearable self-tracking devices capture multidimensional health data and offer several advantages, including new ways of facilitating research. However, they also create a conflict between the individual interest in avoiding privacy harms and the collective interest in assembling and using large health data sets for public benefit. While some scholars argue for transparency and accountability mechanisms to resolve this conflict, the average user is not adequately equipped to access and process information about the consequences of consenting to further uses of her data. As an alternative, this paper argues for fiduciary relationships, which place deliberative demands on digital health data controllers to keep the interests of their data subjects at the forefront and to cater to the contextual nature of privacy. These deliberative requirements ensure that users can engage in collective participation and share their health data at a lower risk of privacy harms. The paper also proposes a way to balance the flexible and open-ended nature of fiduciary law with the specific nature and scope of the fiduciary duties that digital health data controllers should owe to their data subjects.

Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios

Abstract

Through modern driver assistance systems, algorithmic decisions already have a significant impact on the behavior of vehicles in everyday traffic, and this impact will become even more prominent in the near future as autonomous driving functionality develops. The need to consider ethical principles in the design of such systems is generally acknowledged, but the scope, principles and strategies for their implementation are not yet clear. Most current discussions concentrate on situations of unavoidable crashes in which human lives are existentially affected. In this paper, we argue that ethical considerations should be mandatory for any algorithmic decision of an autonomous vehicle, rather than limited to hazard situations. Such ethically aligned behavior is relevant because autonomous vehicles, like all traffic participants, operate in a shared public space, where every behavioral decision impacts the operational possibilities of others. These possibilities concern the fulfillment of a road user’s safety, utility and comfort needs. We propose that, to operate ethically in such a space, an autonomous vehicle must make its behavioral decisions according to a just distribution of operational possibilities among all traffic participants. Using an application on a partially autonomous prototype vehicle, we describe how to apply and implement concepts of distributive justice in the driving environment and demonstrate the impact on the vehicle’s behavior in comparison to an advanced but egoistic decision maker.
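As a hedged illustration of the contrast between an egoistic and a distributively just decision maker, the sketch below scores each candidate maneuver’s “operational possibilities” per road user and compares a purely egoistic rule with a Rawlsian maximin rule. The scores, maneuver names and the choice of maximin as the justice criterion are assumptions for illustration, not the paper’s implementation.

```python
# Hypothetical sketch: choosing a driving maneuver by a distributive-justice
# criterion instead of an egoistic one. Scores, maneuver names and the
# maximin rule are illustrative assumptions, not the paper's implementation.

from typing import Dict

# For each candidate maneuver, an estimated "operational possibility" score
# (safety/utility/comfort) for every affected traffic participant.
candidates: Dict[str, Dict[str, float]] = {
    "keep_lane_fast":  {"ego": 0.9, "cyclist": 0.2, "oncoming_car": 0.8},
    "slow_and_follow": {"ego": 0.6, "cyclist": 0.7, "oncoming_car": 0.8},
    "overtake_wide":   {"ego": 0.7, "cyclist": 0.6, "oncoming_car": 0.4},
}

def egoistic_choice(options: Dict[str, Dict[str, float]]) -> str:
    # Maximize the ego vehicle's own score only.
    return max(options, key=lambda m: options[m]["ego"])

def maximin_choice(options: Dict[str, Dict[str, float]]) -> str:
    # Rawlsian maximin: maximize the score of the worst-off participant.
    return max(options, key=lambda m: min(options[m].values()))

print(egoistic_choice(candidates))  # -> keep_lane_fast
print(maximin_choice(candidates))   # -> slow_and_follow
```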

Privacy challenges in smart homes for people with dementia and people with intellectual disabilities

Abstract

The aim of this paper is to analyse the ethical issues relating to privacy that arise in smart homes designed for people with dementia and for people with intellectual disabilities. We outline five different conceptual perspectives on privacy and detail the ways in which smart home technologies may violate residents’ privacy. We specify these privacy threats in a number of areas and under a variety of conceptions of privacy. Furthermore, we illustrate that informed consent may not provide a solution to this problem. We offer a number of recommendations that designers of smart homes for people with dementia and people with intellectual disabilities might follow to ensure the privacy of potential residents.

From privacy to anti-discrimination in times of machine learning

Abstract

Thanks to machine learning, new breakthroughs are being achieved with remarkable regularity. Using machine learning techniques, computer applications can be developed to solve tasks that had hitherto been assumed unsolvable by computers. When these achievements involve applications that collect and process personal data, this is typically perceived as a threat to informational privacy. This paper discusses applications from the fields of both personality and image analysis. Such applications are often criticized by reference to the protection of privacy. This paper critically questions that approach: instead of relying solely on the concept of privacy to address the risks of machine learning, it is increasingly necessary to consider and implement ethical anti-discrimination concepts as well. In many ways, informational privacy requires individual control over information. However, not least because of machine learning technologies, such control has become obsolete. Hence, societies need stronger anti-discrimination tenets to counteract the risks of machine learning.
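To illustrate what an operational anti-discrimination tenet could look like in code: the abstract names no specific metric, so the use of demographic parity below is an assumption; it simply checks a model’s decisions for unequal favorable-outcome rates across groups.

```python
# Illustrative sketch only: the abstract names no specific metric, so this
# assumes demographic parity as one concrete anti-discrimination check.

from typing import Sequence

def demographic_parity_gap(decisions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-decision rates between any two groups.

    decisions: 1 for a favorable outcome (e.g. loan approved), else 0.
    groups:    group membership label for each decision.
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Toy example: 75% approval rate for group "a" vs 25% for group "b".
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
print(f"demographic parity gap: {gap:.2f}")  # -> 0.50
```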

The disciplinary power of predictive algorithms: a Foucauldian perspective

Abstract

Big Data are increasingly used in machine learning to create predictive models. How are predictive practices that use such models to be situated? Many practitioners in the field of surveillance studies assert that “governance by discipline” has given way to “governance by risk”: the individual is dissolved into his or her constituent data and no longer addressed. I argue that, on the contrary, in most contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance with a norm occupies centre stage; suspected deviants are subjected to close attention as the precursor of possible sanctions. The predictive modelling involved uses personal data from both the focal institution and elsewhere (a “Polypanopticon”). As a result, the individual re-emerges as the focus of scrutiny. Subsequently, short excursions into Foucauldian texts discuss his discourses on the creation of the “delinquent” and on the governmental approach to smallpox epidemics. It is shown that his insights only mildly resemble prediction based on machine learning; several conceptual steps had to be taken for modern machine learning to evolve. Finally, the options available to those subjected to predictive disciplining are discussed: to what extent can they comply, question, or resist? Through a discussion of the concepts of transparency and “gaming the system” I conclude that our predicament is gloomy, in a Kafkaesque fashion.
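As a hedged sketch of the disciplinary pattern described here: a predictive model pools personal data from several sources, scores individuals against a norm, and flags “suspected deviants” for closer attention. The linear model, feature names and threshold below are illustrative assumptions, not anything drawn from the paper.

```python
# Hedged sketch of the disciplinary pattern the paper analyses: a predictive
# model scores individuals and those above a risk threshold are singled out
# for closer scrutiny. Model, features and threshold are assumptions.

from typing import Dict, List

def risk_score(record: Dict[str, float], weights: Dict[str, float]) -> float:
    # A toy linear risk model over personal data pooled from several sources.
    return sum(weights.get(k, 0.0) * v for k, v in record.items())

def flag_suspected_deviants(records: List[Dict[str, float]],
                            weights: Dict[str, float],
                            threshold: float) -> List[int]:
    """Return indices of individuals whose predicted risk exceeds the norm."""
    return [i for i, r in enumerate(records)
            if risk_score(r, weights) > threshold]

people = [
    {"missed_payments": 0.0, "address_changes": 1.0},
    {"missed_payments": 3.0, "address_changes": 4.0},
]
weights = {"missed_payments": 0.5, "address_changes": 0.3}
print(flag_suspected_deviants(people, weights, threshold=1.0))  # -> [1]
```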

Privacy in the digital age: comparing and contrasting individual versus social approaches towards privacy

Abstract

This paper takes as its starting point a recent development in privacy debates: the emphasis on social and institutional environments in the definition and defence of privacy. Recognizing the merits of this approach, I supplement it in two respects. First, an analysis of the relation between privacy and autonomy shows that in the digital age individual autonomy is threatened more than ever. The striking contrast between offline vocabulary, in which autonomy and individual decision-making prevail, and online practices presents a challenge that a social approach cannot meet. Second, I elucidate the background of the social approach. Its importance is not exclusively tied to the digital age: in public life we regularly face privacy-moments, when within a small, distinct social domain a few people are jointly involved in common experiences. In the digital age the contextual integrity model of Helen Nissenbaum has become very influential, but it has some problems. Nissenbaum refers to a variety of sources and uses several terms to explain the normativity in her model; the notion of ‘context’ is not specific and faces the reproach of conservatism. We develop the most promising suggestion: an elaboration of the notion of ‘goods’ as it can be found in the works of Michael Walzer and Alasdair MacIntyre. Developing criteria for a normative framework requires making explicit the substantive goods that are at stake in a context and taking them as the starting point for decisions about the flow of information. Doing so delivers stronger and more specific orientations that are indispensable in discussions about digital privacy.
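Since the discussion turns on Nissenbaum’s contextual integrity model, a schematic rendering may help; this is purely an illustrative assumption (the paper itself stays in prose), treating a flow of information as appropriate when it matches an entrenched, context-relative norm.

```python
# Illustrative assumption only: a schematic rendering of Nissenbaum's
# contextual integrity model, in which an information flow is appropriate
# when it matches the informational norms of its context.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str      # e.g. "healthcare", "education"
    sender: str
    recipient: str
    attribute: str    # the type of information flowing
    principle: str    # transmission principle, e.g. "with consent"

# Context-relative norms: which flows are appropriate in which context.
NORMS = {
    Flow("healthcare", "patient", "physician", "symptoms", "with consent"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is appropriate iff an entrenched norm of its context covers it."""
    return flow in NORMS

ok = Flow("healthcare", "patient", "physician", "symptoms", "with consent")
bad = Flow("healthcare", "physician", "advertiser", "symptoms", "sold")
print(respects_contextual_integrity(ok))   # -> True
print(respects_contextual_integrity(bad))  # -> False
```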
