By Marcus Teo
The advent of Artificial Intelligence (AI) has taken the 21st century by storm, mechanising many processes once considered accessible only to human intelligence. Recent progress has produced advances such as self-driving cars, which draw AI into the realm of moral cognition: the AI of a self-driving car must weigh the lives of its passengers against those of pedestrians should a dilemma arise, and utilitarian principles are often applied in such instances. Given the exponential rate at which AI is growing, it seems reasonable to expect ever greater automation of moral decision-making. Accordingly, I intend to argue that moral intuition, as highlighted by Jonathan Haidt’s Social Intuitionist Approach (SIA), involves an immediacy in moral cognition that seems exclusive to humanity. Because of the role of moral intuition in our moral cognition, I postulate that it is metaphysically implausible, at least for the foreseeable future, to automate moral cognition.
For this essay, moral intuition shall refer to a judgment that appears in consciousness without the moral agent being aware of the process that produced it. This contrasts with moral judgment arrived at through deliberate reasoning. The importance of intuition in shaping moral cognition is widely recognised, and rightfully so – we cannot presume to accept ethical theories laden with intuitive discomforts; it is absurd to accept a theory demanding that moral agents act rationally contra their intuitions.
A flagship work in moral psychology highlighting the importance of moral intuition is Jonathan Haidt’s Social Intuitionist Approach. Specifically, Haidt draws attention to the fact that – contrary to our preference for rationality – human moral cognition is heavily dependent on our intuitions. He does this by presenting the following scenario:
Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love? (Haidt 1024)
Here, Haidt argues that we have no reason to think that this instance of incest was impermissible – rational worries such as inbreeding, damaged social relations, and legal concerns were accounted for within the parameters of the story. Yet there remains a gut instinct that the siblings should not have had sex. Haidt reported that the bottom-line response was almost always “I don’t know, I can’t explain it, I just know it’s wrong”, and argued that these individuals had been “morally dumbfounded”.
Herein lies the importance of moral intuition. Moral reasoning appears to be a process beyond “cold” rationalisation, such that Haidt’s SIA postulates intuition to be necessary to moral cognition. As the example above shows, with rationalisation alone a moral agent is likely to conclude that it is permissible for the siblings to make love – much to their discomfort, given the nagging intuition that there is something simply “icky” about the idea of incest. This is further supported by literature on the neurobiological bases of morality, wherein Suhler and Churchland highlight that morality is, psychologically, a product of affect, reward, and neuroendocrine processes that do not require mediation by rationality. Focusing on the descriptive claim that rationality is not the totality of human morality, we find ourselves conceding that intuition plays a significant role in all that is left.
Given this understanding of moral intuition, it seems beyond our ability to code moral intuition into AI. Doing so requires, first and foremost, an ability to replicate the human capacity for the nonconscious thought crucial to moral intuition. As postulated earlier, moral intuition involves a judgment arrived at by nonconscious thought – a prima facie judgment. Here, “prima facie” seems incompatible with AI: what is prima facie for a machine, when prima facie refers to a nonconscious knee-jerk reaction antecedent to rationalisation? In automating moral cognition, it is hard to imagine a moral machine with a moral intuition anything like ours. In the context of the siblings, a machine built for moral cognition would conclude that it is permissible for them to make love – there simply is no reason for the machine to think otherwise. Even if one imagines such a machine arguing for impermissibility in this context, its only justification for crying impermissibility would be a pre-loaded conception of the absolute impermissibility of incest. In all possible outcomes, then, we find ourselves distrusting the computer’s conclusions due either to 1) the lack of moral intuition, or 2) the lack of satisfactory reasoning. It thus seems that the foreseeable future is not one in which AI can adequately perform moral cognition.
A plausible counterargument to the above comes from a functionalist account of mental capacities. Here, the functionalist postulates that the essential feature of mental states is a set of causal relations mediating sensory inputs and behavioural outputs. In this context, the functionalist would agree that the phenomenology of moral intuition cannot be captured by AI. However, viewing moral cognition as a web of input-to-output causal relations, we can replace moral intuition with something that fulfils the same functional role. If this is satisfied, phenomenology would be functionally irrelevant – moral cognition, when functionally complete, is still moral cognition. A good candidate from the AI standpoint, then, is to revisit heuristics – fallible ‘rules of thumb’ employed by AI to distinguish promising moves from other possible moves. It is arguable that heuristics can serve as the functionalist substitute for the immediate process that aids the machine in forming its output.
Heuristics in AI, albeit impactful in allowing AI to derive solutions more quickly, face considerable issues in the face of moral cognition, mainly because they lack phenomenological character. In ethical considerations, the justifications of output states seem more important than the output states themselves; while we seek verdicts of permissibility, a significant portion of moral judgment lies in justifying those verdicts. Insofar as human moral agents are concerned, our moral judgments are demonstrably affected by our intuitions, and thereby so are our justifications, as presented above. To presume, then, that heuristics are functionally equivalent does little to justify a machine’s moral output.
To demonstrate, the moral machine would justify its verdict on the siblings by modus ponens:
- If an instance of sex violates none of rule1… rulen, it is permissible (if P, then Q);
- The siblings’ act violates none of the rules (P);
- Therefore, it is permissible for the siblings to have sex (Q).
Alternatively, should incest be added to the list of rules in premise 1, what the siblings did would come out impermissible.
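The rule-based check imagined here can be sketched in code. This is a minimal illustration of the heuristic-as-rules idea, not any real system’s ethics module: the rule names and scenario attributes are hypothetical inventions for the sake of the example.

```python
# Hypothetical sketch of a rule-based moral "heuristic": an act is
# permissible iff it violates none of the listed rules.
RULES = {
    "lacks_consent": lambda case: not case["consensual"],
    "risks_harm": lambda case: case["expected_harm"] > 0,
    "violates_law": lambda case: case["illegal"],
}

def judge(case, rules):
    """Return ('impermissible', violated_rules) if any rule fires, else ('permissible', [])."""
    violated = [name for name, test in rules.items() if test(case)]
    return ("impermissible", violated) if violated else ("permissible", [])

# The siblings' case as stipulated in Haidt's vignette: consensual,
# no expected harm, and (for illustration) no legal violation.
siblings = {"consensual": True, "expected_harm": 0, "illegal": False}
print(judge(siblings, RULES))  # the machine concludes it is permissible

# Adding a blanket incest rule flips the verdict, but only by fiat:
RULES["is_incest"] = lambda case: case.get("incest", False)
siblings["incest"] = True
print(judge(siblings, RULES))  # impermissible, but only because it was pre-loaded
```

The sketch makes the essay’s point concrete: nothing in the machine’s procedure resembles a nonconscious intuition; a verdict of impermissibility can only come from a rule stipulated in advance.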
It seems here that even though heuristics may be purported to be functionally identical to moral intuition, they lack the distinctively human phenomenology involved in such decision-making. Clearly, heuristics fail to capture the “icky” qualia found in Haidt’s studies that characterise our intuitions. With such a justification, neither a permissible nor an impermissible output sits well with us: if it was permissible for the siblings to have sex, this opposes our intuitions, highlighting the phenomenological difference that functionalism fails to capture; if the machine cries impermissibility, we may ask why incest is necessarily impermissible to begin with, leaving room for doubt about the machine’s moral capabilities. The functionalist move of employing heuristics as a parallel to intuition thus fails to vindicate AI’s abilities in moral cognition.
The present work set out to explore the plausibility of mechanising moral cognition in AI. Drawing on the SIA, I have elucidated the importance of intuition as a prima facie reaction to moral situations. Insofar as AI is concerned, it seems implausible to programme these necessarily human traits into a machine. Even if we appeal to heuristics as a functional replacement for intuition, this move is demonstrably problematic. For the foreseeable future, then, it seems that AI cannot have moral cognition.