The normative significance of identifiability
Abstract
According to psychological research, people are more eager to help identified individuals than unidentified ones. This phenomenon significantly influences many important decisions, both individual and public, regarding, for example, vaccinations or the distribution of healthcare resources. This paper aims to present definitions of various levels of identifiability, as well as a critical analysis of the main philosophical arguments regarding the normative significance of the identifiability effect, which appeal to: (1) ex ante contractualism; (2) fair distribution of chances and risks; (3) anti-aggregationist principles that recommend the distribution of bad effects and the concentration of good ones. I will show that these arguments, although connected with interesting philosophical problems regarding, e.g., counterfactuals, aggregation, or probability, are unconvincing.
|
Privacy in the digital age: comparing and contrasting individual versus social approaches towards privacy
Abstract
This paper takes as its starting point a recent development in privacy debates: the emphasis on social and institutional environments in the definition and the defence of privacy. Recognizing the merits of this approach, I supplement it in two respects. First, an analysis of the relation between privacy and autonomy shows that in the digital age individual autonomy is threatened more than ever. The striking contrast between offline vocabulary, in which autonomy and individual decision making prevail, and online practices is a challenge that cannot be met within a purely social approach. Secondly, I elucidate the background of the social approach. Its importance is not exclusively related to the digital age: in public life we regularly face privacy moments, when, within a small, distinct social domain, a few people are jointly involved in common experiences. In the digital age the contextual integrity model of Helen Nissenbaum has become very influential. However, this model has some problems: Nissenbaum refers to a variety of sources and uses several terms to explain the normativity in her model, and the notion of ‘context’ is not specific and faces the reproach of conservatism. I elaborate on the most promising suggestion: the notion of ‘goods’ as it can be found in the works of Michael Walzer and Alasdair MacIntyre. Developing criteria for defining a normative framework requires making explicit the substantive goods that are at stake in a context and taking them as the starting point for decisions about the flow of information. Doing so delivers stronger and more specific orientations that are indispensable in discussions about digital privacy.
|
From privacy to anti-discrimination in times of machine learning
Abstract
Due to the technology of machine learning, new breakthroughs are currently being achieved with great regularity. By using machine learning techniques, computer applications can be developed and used to solve tasks that have hitherto been assumed not to be solvable by computers. If these achievements concern applications that collect and process personal data, this is typically perceived as a threat to informational privacy. This paper discusses applications from the fields of both personality and image analysis. These applications are often criticized by reference to the protection of privacy; this paper critically questions that approach. Instead of solely using the concept of privacy to address the risks of machine learning, it is increasingly necessary to consider and implement ethical anti-discrimination concepts, too. In many ways, informational privacy requires individual information control. However, not least because of machine learning technologies, information control has become obsolete. Hence, societies need stronger anti-discrimination tenets to counteract the risks of machine learning.
|
Democratizing cognitive technology: a proactive approach
Abstract
Cognitive technology is an umbrella term sometimes used to designate the realm of technologies that assist, augment or simulate cognitive processes, or that can be used for the achievement of cognitive aims. This technological macro-domain encompasses both devices that directly interface with the human brain and external systems that use artificial intelligence to simulate or assist (aspects of) human cognition. As they hold the promise of assisting and augmenting human cognitive capabilities both individually and collectively, cognitive technologies could have, in the next decades, a significant effect on human cultural evolution. At the same time, due to their dual-use potential, they are vulnerable to being coopted by State and non-State actors for non-benign purposes (e.g. cyberterrorism, cyberwarfare and mass surveillance) or in manners that violate democratic values and principles. Therefore, it is the responsibility of technology governance bodies to align the future of cognitive technology with democratic principles such as individual freedom, avoidance of centralized control, equality of opportunity and open development. This paper provides a preliminary description of an approach to the democratization of cognitive technologies based on six normative ethical principles: avoidance of centralized control, openness, transparency, inclusiveness, user-centeredness and convergence. This approach is designed to universalize and evenly distribute the potential benefits of cognitive technology and to mitigate the risk that this emerging technological trend could be coopted by State or non-State actors in ways that are inconsistent with the principles of liberal democracy or detrimental to individuals and groups.
|
Just research into killer robots
Abstract
This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the new system can plausibly generate a superior risk profile for all morally relevant classes and is not intrinsically wrong. The paper then suggests that these conditions are satisfied by at least some potential LAWS development programs. More specifically, since LAWS will lead to greater force protection, warfighters are free to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will not apply to LAWS, or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or create ‘responsibility gaps’—are rejected on the basis that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.
|
The disciplinary power of predictive algorithms: a Foucauldian perspective
Abstract
Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies, many practitioners assert that “governance by discipline” has given way to “governance by risk”: the individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance with a norm occupies centre stage; suspected deviants are subjected to close attention, as the precursor of possible sanctions. The predictive modelling involved uses personal data from both the focal institution and elsewhere (“Polypanopticon”). As a result, the individual re-emerges as the focus of scrutiny. Subsequently, small excursions into Foucauldian texts discuss his discourses on the creation of the “delinquent” and on the governmental approach to smallpox epidemics. It is shown that his insights only mildly resemble prediction based on machine learning; several conceptual steps had to be taken for modern machine learning to evolve. Finally, the options available to those subjected to predictive disciplining are discussed: to what extent can they comply, question, or resist? Through a discussion of the concepts of transparency and “gaming the system”, I conclude that our predicament is gloomy, in a Kafkaesque fashion.
|
Meaningful human control as reason-responsiveness: the case of dual-mode vehicles
Abstract
In this paper, in line with the general framework of value-sensitive design proposed by the authors, we aim to operationalize the general concept of “Meaningful Human Control” (MHC) in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions the authors investigated: the so-called ‘tracking’ condition. Our investigation is conducted in relation to one specific subcase of automated systems: dual-mode driving systems (e.g. Tesla ‘autopilot’). First, we connect and compare meaningful human control with a concept of control very popular in engineering and traffic psychology (Michon 1985), and we explain to what extent tracking resembles and differs from it. This helps clarify the extent to which the idea of meaningful human control is connected to, but also goes beyond, current notions of control in engineering and psychology. Second, we take the systematic analysis of practical reasoning as traditionally presented in the philosophy of human action (Anscombe, Bratman, Mele) and adapt it to offer a general framework in which different types of reasons and agents are identified according to their relation to an automated system’s behaviour. This framework is meant to help explain what reasons and what agents (should) play a role in controlling a given system, thereby enabling policy makers to produce usable guidelines and engineers to design systems that properly respond to selected human reasons. In the final part, we discuss a practical example of how our framework could be employed in designing automated driving systems.
|
Trust and resilient autonomous driving systems
Abstract
Autonomous vehicles, and the larger socio-technical systems that they are a part of, are likely to have a deep and lasting impact on our societies. Trust is a key value that will play a role in the development of autonomous driving systems. This paper suggests that trust in autonomous driving systems will impact the ways that these systems are taken up, the norms and laws that guide them, and the design of the systems themselves. Further, in order to have autonomous driving systems that are worthy of our trust, we need a superstructure of oversight and a process that designs trust into these systems from the outset. Rather than banning or avoiding all autonomous vehicles should a tragedy occur, we want resilient systems that, despite carrying some level of risk, can survive tragedies and indeed improve from them. I will argue that trust plays a role in developing these resilient systems.
|
Introducing the pervert’s dilemma: a contribution to the critique of Deepfake Pornography
Abstract
Recent technological innovation has made video doctoring increasingly accessible. This has given rise to Deepfake Pornography, an emerging phenomenon in which Deep Learning algorithms are used to superimpose a person’s face onto a pornographic video. Although Deepfake Pornography is intuitively unethical to most people, it seems difficult to justify this intuition without simultaneously condemning other actions that we do not ordinarily find morally objectionable, such as sexual fantasies. In the present article, I refer to this contradiction as the pervert’s dilemma. I propose that the method of Levels of Abstraction, a philosophical mode of enquiry inspired by Formal Methods in computer science, can be employed to formulate at least one possible solution to the dilemma. From this perspective, the permissibility of some actions appears to depend on the degree to which they are abstracted from their natural context. I conclude that the dilemma can only be solved when considered at low levels of abstraction, where Deepfakes are situated in the macro-context of gender inequality.
|
Splintering the gamer’s dilemma: moral intuitions, motivational assumptions, and action prototypes
Abstract
The gamer’s dilemma (Luck in Ethics Inf Technol 11(1):31–36, 2009) asks whether any ethical features distinguish virtual pedophilia, which is generally considered impermissible, from virtual murder, which is generally considered permissible. If not, this equivalence seems to force one of two conclusions: either both virtual pedophilia and virtual murder are permissible, or both virtual pedophilia and virtual murder are impermissible. In this article, I attempt, first, to explain the psychological basis of the dilemma. I argue that the two different action types picked out by “virtual pedophilia” and “virtual murder” set very different expectations for their token instantiations that systematically bias judgments of permissibility. In particular, the proscription of virtual pedophilia rests on intuitions about immoral desire, sexual violations, and a schematization of a powerful adult offending against an innocent child. I go on to argue that these differences between virtual pedophilia and virtual murder may be ethically relevant. Precisely because virtual pedophilia is normally aversive in a way that virtual murder is not, we plausibly expect virtual pedophilia to invite abnormal and immorally desirous forms of engagement.