ALGORITHMS II: ALGORITHMS, AI, AND THE RESHAPING OF OURSELVES / Vol. 86, No. 4 (Winter 2019)

May 13, 2019

Arien Mack, Journal Editor

 

Ebby Abramson

Dolunay Bulut

Endangered Scholars Worldwide

 

Part 1: The Turn to Quantification and Its Historical Appeal

 

Joseph E. Davis

Toward the Elimination of Subjectivity: From Francis Bacon to AI

The ardent embrace and imposition of algorithms, AI, and other technologies of quantification cannot be explained by their instrumental value or actual productivity. Their appeal is rooted in deeper commitments, moral and philosophical, that have been playing out over the whole course of modernity. These commitments—to achieve certainty and “objective” knowledge for the sake of human enhancement and social harmony—have been conveyed in both science and administration in a mode of representation and language of numbers that effaces persons and their subjectivity. Algorithms and AI represent these ethical and epistemological ideals in their purest form yet.

 

Daniel Doneson

The Conquest of Fortune: On the Machiavellian Character of Algorithmic Judgment

Algorithmic judgment is the latest, most comprehensive version of modern rational control, an effort to master Fortuna by means of science and its fruits, technology. The origins of this moral and political ideal date to the dawn of modernity and Machiavelli’s The Prince. Comparing Fortuna with a blind force, Machiavelli holds that the right stance toward her is not deference but audacious opposition. Mastery requires the unflinching exercise of our reason and innovation, and ushers in a profound simplification of our world and ourselves. It yields characteristic instruments and techniques that, like algorithms, do not so much reflect our reason or require our virtue as seek to replace them.

 

Part 2: Ethnographic Explorations of Quantification and Its Reshaping Effects

 

Natasha Dow Schüll 

The Data-based Self: Self-Quantification and the Data-Driven (Good) Life

The capacity to collect, store, and analyze data drawn from everyday physiological, behavioral, and geolocational experience is growing rapidly and spreading to an ever-wider range of social domains. A polarized discussion has unfolded around the threats that data technologies pose to human agency and the asymmetries they introduce between those who track and mine data and those who are tracked. Less explored is how data tracking might also serve as a means and medium for self-understanding and creative transformation. Such an inquiry allows a richer understanding of datafication and its dynamics, and more effective critique of its asymmetries and discontents.

 

Marie David

AI and the Illusion of Human-Algorithm Complementarity

Machine learning has proven powerful in some narrow contexts but is now being applied across any and every possible domain. The justification of this extension is greater control and efficiency, a safer and less risky world. This bright promise, however, rests on an illusion of human-algorithm complementarity. The direction of AI applications is not toward complementarity with human abilities but toward their devaluation and displacement. In submitting ourselves to machine judgment, we are undermining our own and depriving ourselves of skills, ways of learning, and means to face unforeseen events and conflicts. The world is not thereby safer; it is more fragile.

 

Justin Mutter

A New Stranger at the Bedside: Industrial Quality Management and the Erosion of Clinical Judgment in American Medicine

Since the late 1980s, quality management, a concept that originated in the manufacturing and engineering sectors, has dramatically altered the political economy of everyday clinical practice. In a departure from prior conceptions, “quality” was reconstituted hierarchically as primarily the province of regulatory experts and epistemically as a narrow set of process-oriented metrics. As clinical guidelines have been assimilated into bedside medicine through regulated performance numbers, this powerful reconstitution has transformed the exercise of clinical judgment. Quality as “information” now encourages a form of clinical reasoning that is preferentially statistical and algorithmic in nature, eliding critical contributions from provider-patient relationships.

 

Part 3: The Irreducible Value of Human Judgment

 

Paul Scherz

The Displacement of Human Judgment in Science: The Problems of Biomedical Research in an Age of Big Data

In the wake of the Human Genome Project, biomedical research has become ever more data-driven, to the point that some commentators confidently predict the end of hypothesis-driven, human-led research in favor of algorithmic data analytics. Despite the optimism for the possibilities of the new paradigm and the research funding behind it, the results of post-genomic biology have been disappointing. These difficulties stem from cultural assumptions and structural arrangements rooted in fundamental misunderstandings of the role of human judgment and tacit knowledge in scientific discovery and interpretation. A better research paradigm demands a better understanding of human judgment.

 

William Hasselberger

Ethics Beyond Computation: Why We Can’t (and Shouldn’t) Replace Human Moral Judgment with Algorithms

“Morality algorithms” (e.g., the “death algorithm” of self-driving cars) are part of an effort to codify moral norms in algorithms. They raise two questions. First, can algorithms capture the structure of moral reasoning and ethical judgment? From an Aristotelian perspective, many “inputs” to moral deliberation are simply not quantifiable, including relevant features of a situation, the context, and the proper criterion of moral judgment. Second, should we rely on algorithmic decision-aids? Exploring the “outputs” reveals the crucial difference between performing an activity formulaically and performing it on the basis of understanding what is good, worthwhile, or justified. Reflection on these questions reveals a practical, and not merely philosophical, problem with “morality algorithms.”

 

Jarret Zigon

Can Machines Be Ethical?: On the Necessity of Relational Ethics and Empathy for Data-Centric Technologies

There are good reasons why we are experiencing what I call an ethical demand made by the data-centric situation we now find ourselves in. Building on the phenomenological critique of artificial intelligence made by Hubert Dreyfus, this essay makes a critical phenomenological response to this ethical demand by arguing for what I call relational ethics. In contrast to the dominant data ethical arguments that call for the delineation of principles or rules, I argue that only if data-centric technologies were capable of empathic attunement could they legitimately be called ethical machines.
