Anne-Marie Fowler is giving her talk, Differentiation is Mechanics, Integration is Art: Particularity, Community and the Digital Mind, at the Centre for Ethics on November 25 at 4pm. This is part of the series Ethics of AI in Context: Emerging Scholars. Please register here.
Anne-Marie Fowler is a doctoral student in the Department for the Study of Religion, in collaboration with the Anne Tanenbaum Centre for Jewish Studies. Bringing her professional background in banking, philanthropy, social entrepreneurship and public policy to her research, Anne-Marie is investigating how alterations to parameters such as time, which are native to both secular and sacred systems design, enable the technologies upon which global debt systems operate. She is proposing alternative conceptual and temporal lenses through which to view the necessary task of AI ethics. Her PhD supervisor is Bob Gibbs.
DSR: The title of your talk, Differentiation is Mechanics, Integration is Art: Particularity, Community and the Digital Mind, is a mouthful! Can you give us the Coles Notes version of the focus of your discussion?
AMF: Well, the terms “differentiation” and “integration” come from calculus. But to put this less mathematically, I think an alternative basis for the question of ethical AI rests in the idea that integration, let’s call it the “whole,” bears something “extra,” or something not apparent in the sum of its differentiated “parts.” There is something indescribable in terms of its prior.
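One standard way to make the calculus image concrete, in textbook terms rather than her own: differentiation discards information that the differentiated “parts” alone can never give back,

\[
\frac{d}{dx}\bigl(F(x) + C\bigr) = F'(x) \quad \text{for every constant } C,
\]

so that recovering the whole from the derivative leaves an irreducible remainder,

\[
\int F'(x)\,dx = F(x) + C .
\]

The constant \(C\) is invisible in the parts, yet it belongs to any integrated whole; in that small sense the whole always bears something “extra” that the mechanics of differentiation cannot see.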
A correlation is present in the relationship between the whole and its parts, and this correlation generates something conceptually distinct, unique, and effectively “extra,” or “more.” For example, beginning from the whole of an infinite series, each number generated from the series is both itself and a member of the series; its identity is one of a relation that is creative of difference, exceeding that of itself alone. It must also be known in terms of the set to which it belongs, and in terms of the infinite possibility, i.e., the ellipsis and ever-opening question by which the nature of the set is depicted.
I am bringing this interpretive lens of “more,” derived from the work of Hermann Cohen, among others, to the question of AI ethics. How can I present alternative ways to look at the learning network’s task? What if the network’s task were not only about efficient prediction based upon combinations of what has already been given, but about the role of the network as a site of origin and indeterminacy? In other words, can a network compute the possible as opposed to the probable? Drawing from a Hebraic sense of time in which the anticipated future points to the present, thereby conditioning our ethical obligation to effect change in the present: can manipulations of the temporal parameter become a source of interruption that renders the foreseen in unforeseen terms, i.e., that renders the invariant of the ethical obligation in the context of the variable, even infinite, possibilities of its ongoing ethical task?
DSR: If you look to AI in terms of origin and indeterminacy, rather than, in your words, in terms of “efficient prediction,” how does that change how we understand AI?
AMF: The question of origin relates to the question of agency in that it relates to questions of causality. Is causality wholly efficient, in terms of recombinant action upon what is already given, or does it accommodate emergence, in terms of generating what is not yet known?
Clearly, the learning network has conventionally been understood in terms of calculative probability; it predicts from what is given. A question, then, would be whether the learning network could be a site of determining not only what is “true” or “false” but what is both or neither: that is, the possible and (as yet) unactualized uniqueness. Can a network compute the possible as opposed to the probable? It is an important question, as this possible, unactualized, unforeseen space is precisely where the task of ethics resides. Ethics emerges to meet the new dilemma, the unforeseen question that gives new content to old rules. We don’t have ethical debates about what is already obvious, but rather about what is possible, yet undecided.
So when we say ethical AI, we’re not talking about predetermined content or a list of instructions as imposed via an external or prime mover. Ethics both generates, and tasks itself in, the open question. Thus, ethics, grounded in logic, is about the capacity to produce, or generate, particular conceptual content in the first place. This production is the product; the question opens the “possibility of possibility.”
I think that this shift in focus involves a phase shift when it comes to the ethics of the learning network. AI ethics is often expressed in terms of correcting or controlling predictive bias at the delivery stage (i.e., post-error), and that work should certainly continue, but I am also considering the early stages of the network’s process as the site of ethical “formation.” For instance, how might the AI learn to interrupt (or at least internally converse with) its own learning process? I see this as valuable for evaluating AI decision making in cases of, for example, outliers or rare events (many of which seem a lot less rare in current times!). If a network is trained and “coded” to see a certain set of features as “normal,” it may ignore, or even fail to name, cases upon which it is unlearned, or with which it is unfamiliar. I am considering how to recast the network’s relationship to the unfamiliar, and I am considering the parameter of temporality as a way to apply this shift to the computational setting, to the definiteness of ones and the unlimited openness of the zero or nought.
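A minimal sketch of the failure mode she is describing, in Python; the data, names, and threshold here are illustrative assumptions, not anything from her research or talk. A model trained only on what it has been told is “normal” will force every input into a known category unless it is explicitly given a way to name the unfamiliar:

```python
import numpy as np

rng = np.random.default_rng(0)

# The only world the network has ever seen: "normal" training data.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
mean, std = train.mean(axis=0), train.std(axis=0)

def respond(x, threshold=4.0):
    """Toy decision rule: before predicting, check whether the input even
    resembles the training distribution. If it does not, refuse to force it
    into a familiar category and name it as unfamiliar instead."""
    z = np.abs((x - mean) / std)      # distance from learned "normal"
    if np.max(z) > threshold:         # far beyond anything seen in training
        return "unfamiliar: outside learned experience, do not predict"
    return "familiar: safe to predict on"

print(respond(rng.normal(0.0, 1.0, size=4)))       # a typical case
print(respond(np.array([0.1, 12.0, -0.3, 0.2])))   # a rare event / outlier
```

The point of the sketch is only that the refusal branch has to be designed in from the start; nothing in the trained decision rule produces it on its own, which is why she locates the ethical question in the early stages of the process rather than solely in post-error correction.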
Now one could trivially say that every combination and recombination the network performs is “new” because it occurs a nanosecond “later” or “earlier” than another. Its chronological placement is different, and its context is thereby different. So it is new. But this is not remotely sufficient as an answer, as it applies the chronological sense of succession, or shared human “public time,” to a network for which such framings are irrelevant. The human receiving the results of a research inquiry experiences those results in succession, in a manner that is not digitally native. Further, what is recognized as “discovery” effectively relies on the human perspective upon what measures, and thereby what constitutes, discovery. Can the digital mind conceive of unique discovery? What might that look like? Can we know?
DSR: You say that the “digital mind is not a human mind.” Do you believe that we fall into the trap of giving digital minds agency and human qualities? What are the consequences of doing so?
AMF: Actually, I come at that question from another direction. The digital mind is a computational space. It is a discrete space in which the fluid, inseparable quality of human sensory “experience” is inapplicable. What quality (and plural qualities) can we accord to the digital space, and what sort of values might emerge from it? In looking at this space, I am asking which aspects of the digital space might be open to indeterminacy.
While I officially came to this question during my past work in finance, I suppose it really happened much earlier. I remember the first time I encountered replicant Roy Batty’s iconic final speech in Blade Runner. (Be careful what movies you watch repeatedly as a teenager!) How did that spectacular moment happen? How did one who murdered with his hands become one who saved a life with those same hands, one who effectively opened all possibility in the face of temporal closing? This narrative sticks with me.
And that idea returns to me in a lot of current (and much less dramatic) examples on social media. There are a lot of fun pop culture memes, for instance, that introduce an invariant structure within and from which content is freed to vary particularly, and in a sense infinitely. In one sense this is a computational act, and in another it is an indeterminate and interruptive act, one of creativity and arguably even “newness.”
And this has still another layer, which both your question and the meme example allude to. In the programming task, the digital “mind” interacts with the human mind. The human mind views the digital mind as its instrument, assessing its usefulness in human terms. So a learning network is often conceived and judged as “a robot,” as a species of subhuman, and thereby judged on whether it can “behave” like an ideal human. But this is a stretch. The network is not a human subject; it does not experience human temporality or continuity. And while it is capable of learning and memory, it does not experience learning and memory as an embodied human does. Still, it is held to ideals and standards as if it could, as if its job were to be a scapegoat.

Anthropomorphizing the learning network is problematic for many reasons. Yes, a network trained upon biased data, and trained in comparison to “universal” benchmarks, is going to learn bias. This bias can and does lead to terrible decisions. But this is not a case of a biased network; a network is not a static thing. It is a case of a biased training process. Teaching a network how to be biased and then blaming it for being biased is not all that useful. Nor is only punishing and fixing it afterwards. What about the task and the process, considered from the initial stages? A network is a process and an activity. Its ethics, and its “agency,” are as well.
How might we recognize the logical and digital mind as other, in terms of collaborative capability and difference? Digital agency is not human agency. So how might we construct a lens by which to interpret digital agency apart from the familiar framework of the agentic subject? I’m asking questions like that.
DSR: What is your biggest concern as a researcher when it comes to AI?
AMF: I am concerned that we will continue to see the learning network in terms of the human errors that humans project upon it, instead of in terms of the learning network’s “own” lenses of necessity and possibility. The network is not a bad human. It is a network. It is not a being or consciousness into which we must only toss better content; it is a process, a “thinking” differently.
On a related note, I asked a question of a young speaker at a recent program. I asked whether it was indeed effective to look at AI regulation solely in terms of punitive, corrective, late-stage fixes. Would it be helpful to look not just at content fixes, but also at early-stage disruptions of how we assemble data, and of how the network learns with and from it? She replied that I sounded “a little bit naïve.” She said that I needed to understand that this was “a matter of control.” I replied that if AI is learning alongside us, rather than merely because of us, might we want to look at the ethical and the regulative in terms of disruption and interruption within origin-ary stages as well? Was “control” the wrong approach to what may amount to teaching the network a process of diversity in community, of the relatedness of the particular and the possible, albeit in the mathematical language of set membership? She did not seem to want to answer, or even to understand, that question. That worries me.