By Ooi Tech Chye
In Moralizing Technology, Peter-Paul Verbeek argues that technology plays an active role in morality by shaping our moral decisions — it is part of the world in which our moral decisions take place. He further argues that this implies two levels of agency: on a basic level, the agency we commonly attribute to people in day-to-day decision-making, and on a higher level, a sort of meta-agency, which involves decisions about how we allow our agency to be affected. In this paper, I will attempt to demonstrate the relevance of meta-agency by showing how technology, combined with the prevalence of the scientific method, steers us towards a consequentialist view of the world.
In order to understand how we may tend towards a consequentialist mentality, it is necessary to first understand the key features of consequentialism. Consequentialism is the view that the consequences of an act are the only ethically relevant factors in determining whether the act is right or wrong. This is in contrast to deontological ethics, which determines the moral status of an act by its adherence to certain principles or rules. Consequentialism is not usually concerned with reasons for acting (as deontology might be); instead, it is concerned with the outcome of an act. The only exception might be the forms of consequentialism that focus on expected outcomes, as opposed to actual outcomes, but in both cases the prime concern is still the outcomes of acts.
In order for a consequentialist approach to make sense and be feasible, one must first be able to determine what the various outcomes of an act are with some degree of accuracy, and also be able to quantify those outcomes. The quantification of outcomes is necessary because consequentialism is concerned with either maximising some parameter or ensuring that a certain parameter is at some minimally desired value. For instance, in utilitarianism, a common form of consequentialism, utility, or pleasure, is to be maximised. Thus, a utilitarian determines the utility of each possible act’s outcomes and chooses the act whose outcome has the greatest utility, hence maximising utility. Regardless of whether this utilitarian is concerned with actual or expected outcomes, it would not be possible for him to make any sort of decision if he were unable to, at the very least, estimate the value of each outcome. The need for the ability to quantify outcomes is, in my opinion, the primary reason why science and technology have been able to effect a shift towards consequentialism.
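The utilitarian decision procedure described above reduces to a simple calculation: assign each act a set of possible outcomes, weight each outcome's utility by its probability, and pick the act with the highest score. A minimal sketch, with hypothetical acts and entirely made-up numbers chosen only to illustrate the arithmetic:

```python
# Each act maps to a list of (probability, utility) pairs.
# The acts and values here are hypothetical, purely for illustration.
acts = {
    "act_a": [(0.8, 10), (0.2, -5)],
    "act_b": [(0.5, 20), (0.5, -10)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one act's outcomes."""
    return sum(p * u for p, u in outcomes)

# The utilitarian of the expected-outcome variety chooses the act
# with the greatest expected utility.
best_act = max(acts, key=lambda a: expected_utility(acts[a]))
# act_a: 0.8*10 + 0.2*(-5) = 7;  act_b: 0.5*20 + 0.5*(-10) = 5
print(best_act)  # act_a
```

The point of the sketch is not that moral deliberation is this mechanical, but that the procedure only works at all once every outcome carries a number.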
Science and Technology
It is perhaps coincidental that utilitarianism was pioneered in the Enlightenment, but perhaps it was not. The Enlightenment was around the same time that science as a discipline became more systematic and focused, in part due to the preceding scientific revolution and Isaac Newton’s work on motion. According to Bristow’s summary of the Enlightenment, Newton’s work “encourages the conception of nature as a very complicated machine”. Specifically, Newton’s work showed that it was possible to predict natural phenomena with surprising accuracy and also that it was possible to quantify such phenomena.
The modern world takes these concepts to the extreme with the advent of modern computing. The success of science in the natural world has even led to the adoption of its methodology in other fields, such as economics. The use of computing to model outcomes and quantify them is prevalent in the modern world, which is obsessed with metrics. In governments and other major organisations, there is an emphasis on Key Performance Indicators (KPIs), metrics for determining the relative success of an organisation’s operations. Anyone who has filled out a customer satisfaction survey or something of the sort will be familiar with quantified rating scales such as “On a scale of 1 to 10, how happy are you with our service?”
Part of the reason for this obsession with metrics is that we now possess the ability to measure. In Moralizing Technology, Verbeek uses the example of obstetric ultrasound to demonstrate his point about technology and morality: the availability of obstetric ultrasound makes the use of the technology itself a moral decision, because even the decision not to use it despite its availability is a moral decision. Where previously the inability to determine the health of a foetus (before ultrasound was possible) by default made decisions regarding the health of that foetus non-decisions, ultrasound has brought those decisions into the realm of possibility and, furthermore, made them moral in nature (Verbeek, 2011). The ability to quantify things is in a similar circumstance. Quantifying things allows for greater precision, more objectivity and less ambiguity, so why wouldn’t we want to do it? In fact, not using quantifiable metrics may be seen as irresponsible or even negligent in some cases: imagine hospitals where healthcare professionals were not assessed on quantifiable metrics – how could we be sure that the doctors and nurses were even providing adequate care?
Before the rise of modern computing, and in particular portable computing, quantification of many things was difficult, cumbersome and infeasible, if not outright impossible. Take for example a metric such as well-being around the world, which is more or less analogous to “happiness”. In the 2015 edition of the World Happiness Report, a chapter was devoted to subjective well-being by gender and age around the world, with the data coming primarily from worldwide polls done by Gallup. Imagine a data collection project on such a scale without the aid of computers or the internet – it would be nearly impossible. The same goes for any sort of data collection on a reasonably large scale, such as the kind a government body might undertake, or even the kind done for school assignments. A government might look at “happiness” metrics in determining whether its policies were well-received or effective, or in deciding whether to implement a particular policy at all. A government introducing an unpopular but potentially beneficial policy needs to know the extent to which it is unpopular, which goes beyond just “people don’t like this policy”. A government needs metrics, such as “80% of people voted no on this policy”.
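A figure like “80% of people voted no” is trivially computable once the responses are collected in machine-readable form, which is precisely what modern computing makes cheap at national scale. A sketch with made-up poll responses:

```python
# Hypothetical poll responses on a proposed policy ("yes" = support).
responses = ["no", "no", "yes", "no", "no"]

# Share of "no" votes: the kind of figure behind a claim like
# "80% of people voted no on this policy".
share_no = responses.count("no") / len(responses)
print(f"{share_no:.0%} voted no")  # 80% voted no
```

The arithmetic is trivial; the technological achievement lies in collecting, storing and aggregating millions of such responses, which is what was infeasible before computers and the internet.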
Be it for governments or individuals, it is necessary to be able to reliably quantify expected outcomes in order to effectively apply consequentialism as an ethical theory. Without such metrics, consequentialism loses much of its scope because the value of outcomes becomes more indeterminate. The benefits of using quantifiable metrics are clear in most cases: they give us a straightforward assessment of a particular situation, and allow us to definitively determine success and failure. The moral dimension enters the frame when we are forced to consider whether consequentialism as an ethical theory is adequate. Our tendency to rely on metrics, statistics and quantifiable parameters may in fact be contributing to a consequentialist bias in moral decision-making.
Take for example the death penalty, which is still in use in countries like Singapore despite strong disapproval internationally. Singapore maintains the death penalty because it is said to be effective in deterring serious crimes; this is a clear example of a utilitarian calculation, in which the government, while acknowledging the severity of taking life, has weighed it against the potential benefits (of deterring crime) and deemed that maintaining the death penalty maximises utility. A deontologist (specifically a Kantian) might object to the death penalty as deterrence on the grounds that it treats the offender as a means (using him to deter others) and disregards his autonomy by not treating him as an end in himself (assuming that people sentenced to death usually have no wish to die). The death penalty continues to be intensely debated globally, but the tendency to “let results speak for themselves” by appealing to the low crime rates of countries with the death penalty encourages the very consequentialist thinking that leads to its adoption in the first place. In such a manner, one might wonder whether the opposing view has been thoroughly considered at all.
A Question of Chicken and Egg
It is worth considering whether it is a perspective mediated by science and technology that has led to a consequentialist bias, or whether a pre-existing consequentialist bias has led to the adoption of scientific methods and the technology that exists as an extension of those methods. I would argue that even if humans do have a pre-existing consequentialist bias, that bias in itself might not have been strong, because consequentialism as an ethical theory is a fairly modern concept. For thousands of years, humans functioned with concepts such as virtue ethics or deontological ethics. I argue that even if humans are inherently biased towards consequentialism (a big if), our heavy tendency towards consequentialism (if perhaps not pure consequentialism, given that its implications are not something people outside philosophy often think about) is the result of a feedback loop. We might have begun with a slight tendency towards consequentialist analysis, which is supported and encouraged by science and technology, which makes us more likely to be consequentialists, which encourages our support for science and technology, and so on.
The relevance of meta-agency, which is also a driving point of Verbeek’s book, is that people are not consciously aware of the influence that technology has on them in the first place. Verbeek argued that technology influences our morality more than we commonly believe, and without the awareness that it does so, we are unable to exercise our meta-agency in determining how we allow technology to shape our moral choices. Similarly, if science and technology are in fact steering us towards a consequentialist view, then as a society, and as individuals, we must first understand the extent to which this phenomenon is occurring, and exercise our meta-agency by determining whether we want to allow technology to shape our moral attitudes in this way.
The prevalence of technology and the scientific method in the modern world steers us towards a consequentialist world-view by making quantitative analysis of the world feasible and, in most cases, useful and efficient. Despite the concern that a consequentialist bias may have pre-existed in humans, I argue that it is still the adoption of the scientific method, enhanced by the possibilities that modern computing brings, that ultimately tilts us heavily in the direction of consequentialism. We must first be aware of the effect that science and technology have on our ethical attitudes before we can exercise our meta-agency to determine how they affect us, and whether we want them to affect us in this way.