Arien Mack, Editor
This issue and the forthcoming winter 2019 issue focus on the subject of algorithms. While it is rare for us to publish two issues that look at a single subject, this particular topic appears to us important enough to warrant it. Given the widespread use of algorithms and our ever-increasing reliance on them, it seems more than timely to bring the discussion of algorithms out of the realm of technology and into the realm of critical social inquiry. By doing so we hope to make their use more transparent and comprehensible to those who are not tech mavens, while simultaneously examining their impact on society in general and on the way we as individuals live our lives and will be living them in the future.
The essay discusses the ethical challenges of algorithms and artificial intelligence: from issues of freedom of will and human sovereignty over technology to the paradigm shift from a deontological to a utilitarian perspective prompted by a technology that favors a quantifiable ethics. The treatment is philosophically framed by Hegel’s concepts of the Weltgeist and the cunning of reason and considers the inventors of artificial intelligence as sub-workers in the service of the self-development of the absolute spirit. The essay concludes with the notion that dataism could be deemed a new religion, positioning AI as God and allowing humans to return to paradise.
In response to a cycle of ethical and political crises, Silicon Valley technology companies have begun placing significant resources into “ethics” initiatives, including assigning executive-level staff to coordinate product design practices and review policies across their organizations. These new “ethics owners” are tasked with responding to external challenges to the core logics of Silicon Valley that fuel its outsized power over individuals and society—meritocracy, technological solutionism, and market fundamentalism—by producing “ethics” practices that remain largely bounded by those logics. “Doing ethics” in tech companies consists of working through this tension.
The embedding of artificial intelligence (AI) in services is leading to scrutiny of this technology from an individualistic human rights perspective. However, AI is a technology that standardizes and automates processes, thereby creating immaterial infrastructure through software. It is disrupting society by disrupting the concept of infrastructure and permeating sectors where this infrastructural dimension was unthinkable. Infrastructure facilitates and distributes power, creates the conditions for societal inclusion, and shapes the public space. To assess and harness the impact of AI, a different way of thinking about its nature is needed.
A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed to avoid taking race (or other factors) into account. They can also be constrained to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
Catherine Malabou writes of the accident as an “explosive transformation,” the becoming of “someone else, an absolute other, someone who will never be reconciled with themselves again.” I define three accidental transformations—the use of statistics to invalidate the signature of a multimillion-dollar will, the use of statistics to objectify racial categories in the case of People v. Collins, and the accidental algorithmics that led to the lethal collision of a Tesla vehicle operating autonomously—to demonstrate how statistics and algorithms are fundamentally transformative, resulting in the production of an epistemic other, a “someone else” that escapes our own metaphysical assumptions.
This paper examines potential impacts of autonomous weapons on critical social, economic, and political issues. Recent discussions have focused on military issues and risks to international stability raised by autonomous weapons. However, these weapons also present dire threats to human and civil rights, the ability of publics to organize politically, and democracy itself. Autonomous weapons could further empower tyrants, erode democracies, and lead to greater levels of social, economic, and political inequality than have previously been known with human police and military forces. The automation of violence through algorithms will best serve those most willing to deploy violence at scale.
The emergence of algorithmic culture is fundamentally changing the social production of time and our experience of the present. Algorithms can be understood as culture machines that produce experiences of temporality through carefully mediated illusions of immediate feedback and causal relationships in interactive systems. These mechanisms carry much older traditions of technologically ordered time onto new planes of temporal activity, on the scale of the microsecond and beyond. This radically changes our engagement with temporality and underlying concepts of causality.