PHIL Seminar: “Can Artificial Agents Be Independent Enough to Be Moral Agents?”, Zach Gudmunsen, 3:30PM November 3 (EN)

Paper Presentation by Zach Gudmunsen (November 3, 15.30-17.20) (H-232)

Title: Can Artificial Agents Be Independent Enough to Be Moral Agents?

Abstract: All artificial systems, however they are made and whatever they do, have a designer somewhere. This includes machine learning systems, which, for all their complexity and adaptation to the environment, ultimately rely on humans for their parameters, data sources, data quality, and function. This has led to the argument that, because they are always dependent on humans, artificial systems cannot be moral agents. This is the ‘independence argument’. I assume that there is some truth to the argument and that traditional artificial systems need (and lack) some kind of ‘independence’ to be moral agents. I argue that the best explanation for why the independence argument works is that artificial systems lack ‘externalist autonomy’, a concept drawn from Alfred Mele. Under this explanation, ‘independence’ is found in the causal history of a system: a system is ‘independent’ when other agents have not unduly (causally) interfered with it in the past. With this in hand, I claim that the independence argument is more tractable for artificial systems than it initially seemed. Artificial systems produced through evolutionary techniques, such as artificial life, can have causal histories comparable to humans’, and are therefore likely to have enough externalist autonomy to avoid the independence argument.