Sunday, November 24, 2019

The Hidden Dimensions of Human–Technology Relations
The original version of this article unfortunately contains an incorrect sentence.

Correction to: A Tale of Two Deficits: Causality and Care in Medical AI
The original version of this article unfortunately contains unconverted data in footnotes 5, 9 and 13.

Autonomous Vehicles: from Whether and When to Where and How

Drawing Inferences: Thinking with 6B (and Sketching Paper)

Abstract

This article discusses the epistemology of design as a process, arguing specifically that sketching and drawing are essential modes of thinking and reasoning. It demonstrates that the commonly accepted notion of a spontaneous and intuitive vision in the mind’s eye—encapsulated in the cliché of the napkin sketch—obscures the exploratory inferences that are made while scribbling with a pencil on a sheet of paper. The draughtsperson, along with their work tools (such as the 6B), modes of notation, specific techniques, and epistemic strategies as well as the resulting design artefacts form milieus of reflection that facilitate complex processes of exploration. Case studies, including the genesis of the Mini by Alec Issigonis, samples of work by Alvar Aalto, and a reinterpretation of student sketches from a classical design study by Gabriela Goldschmidt, serve to illustrate how drawing inferences with pencil and paper occurs.

Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Abstract

We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse (and in some respects better), and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.

The Design of Socially Sustainable Ontologies

Abstract

This paper describes the role of information architecture in the design of socially sustainable pervasive information spaces. The framing of information architecture as an essential part of Design Thinking extends current and historic notions of the field of information architecture. The discussion introduces the notion of the ‘contrived ontology’, which can be understood as the intentional meaning that design infuses into its artefacts, services and systems. Further, we argue that contrived ontology aligns with central themes within humanistic frameworks which view reality as a subjective construct. This forms the central theoretical meditation herein: we contend that while design is always an act of interpreted cultural determination, at the scale of Floridi’s infosphere, the immediacy and immersive social reality of technology will become frictionless within our human experience. As this occurs, there is a moral and ethical imperative to ensure social sustainability and, to this end, to make visible and accountable the meanings and intentions that inform the mature design of our human-made world. It is towards this end that information architecture can make a valuable contribution.

Logic of Subsumption, Logic of Invention, and Workplace Democracy: Marx, Marcuse, and Simondon

Abstract

Through a comparison of the logic of socio-economic and technical development in Marx with the logic of technical invention in Simondon, I argue the thesis that workers’ democracy is the forgotten political form that offers a viable alternative to both capitalism and Soviet-style Communism, the dominant political regimes of the Cold War period that have not yet been surpassed. Marx’s detailed account of the capitalist technical logic from handwork through manufacture to industry is a logic of continuous concretization in Simondon’s sense. Its immanent teleology is the exclusion of living labor through automation, such that freedom is understood as free time apart from labor and technical activity. A post-capitalist society would require a conception of freedom in labor, such as that held by the early Marx, which demands a leap from this logic of concretization to a new technical object. Such a new technical object would require workers to engage in technical activity that continues the activity of invention in Simondon’s sense. Through these interpretive and argumentative links, Simondon’s possibility of transindividual technical activity and knowledge can be seen as, in socio-political terms, aiming at workplace democracy. In philosophical terms, it aims to displace the priority of thought and imagination over activity and to locate both within an ongoing impersonal task that contains the possibility of individual and social self-realization.

Building General Knowledge of Mechanisms in Information Security

Abstract

We show how more general knowledge can be built in information security through the building of knowledge of mechanism clusters, some of which are multifield. By doing this, we address in a novel way the longstanding philosophical problem of how, if at all, we come to have knowledge that is in any way general, when we seem to be confined to particular experiences. We also address the issue of building knowledge of mechanisms by studying an area that is new to the mechanisms literature: the methods of what we shall call mechanism discovery in information security. This domain offers a fascinating novel constellation of challenges for building more general knowledge. Specifically, the building of stable communicable mechanistic knowledge is impeded by the inherent changeability of software, which malicious actors exploit by constantly changing how their software attacks, and also by an ineliminable secrecy concerning the details of attacks, maintained not just by attackers (black hats) but also by information security defenders (white hats) as they protect their methods from both attackers and commercial competitors. We draw out ideas from the work of the mechanists Darden, Craver, and Glennan to yield an approach to how general knowledge of mechanisms can be painstakingly built. We then use three related examples of active research problems from information security (botnets, computer network attacks, and malware analysis) to develop philosophical thinking about building general knowledge using mechanisms, and also apply this to develop insights for information security. We show that further study would be instructive both for practitioners (who might welcome the help in conceptualizing what they do) and for philosophers (who will find novel insights into building general knowledge of a highly changeable domain that has been neglected within philosophy of science).

Taking Stock of Engineering Epistemology: Multidisciplinary Perspectives

Abstract

How engineers know, and act on that knowledge, has a profound impact on society. Consequently, the analysis of engineering knowledge is one of the central challenges for the philosophy of engineering. In this article, we present a thematic multidisciplinary conceptual survey of engineering epistemology and identify key areas of research that are still to be comprehensively investigated. Themes are organized based on a survey of engineering epistemology including research from history, sociology, philosophy, design theory, and engineering itself. Five major interrelated themes are identified: the relationship between scientific and engineering knowledge, engineering knowledge as a distinct field of study, the social epistemology of engineering, the relationship between engineering knowledge and its products, and the cognitive aspects of engineering knowledge. We discuss areas of potential future research that are underdeveloped or “undone.”

Autonomous Driving and Perverse Incentives

Abstract

This paper discusses the ethical implications of perverse incentives with regard to autonomous driving. We define perverse incentives as a feature of an action, technology, or social policy that invites behavior which negates the primary goal of the actors initiating the action, introducing the technology, or implementing the policy. As a special form of means-end irrationality, perverse incentives are to be avoided from a prudential standpoint, as they prove to be directly self-defeating: they are not just a form of unintended side effect that must be balanced against the main goal or value to be realized by an action, technology, or policy. Instead, they directly cause the primary goals of the actors—i.e., the goals that they ultimately pursue with the action, technology, or policy—to be “worse achieved” (Parfit). In this paper, we elaborate on this definition and distinguish three ideal-typical phases of adverse incentives, only in the last of which is the threshold for a perverse incentive crossed. In addition, we discuss the various actors potentially relevant to implementing autonomous vehicles and their goals. We conclude that even if some actors do not pursue traffic safety as their primary goal, as part of a responsibility network they incur the responsibility to act on the common primary goal of the network, which we argue to be traffic safety.
