Why write about the future? I’ve never seriously tried to predict the future, a fool’s game if there ever was one. Most science fiction writers are perfectly aware of the contingent nature of the future, and prefer to think about how new technology, and the new abilities it affords us, might alter our lives and habits and culture and institutions.
Today, 24/7 technology reporting offers us constant, hazy glimpses of possible futures. In one, we might downvote an obnoxious stranger at a glance with augmented reality glasses. In another, we can live, work, and sleep in an autonomous pod on wheels. The details don’t matter, like whether the pod is made by Google or VW or Ford – what matters is whether this vision provokes desire or distaste in us. And by ‘us’, I don’t mean humanity as a whole, but individuals, all of whom have some degree of choice about how they approach that future.
Some degree. One of the depressing realities of the 21st century is how we’ve become ensnared by global capitalism such that if you want to live, work, and socialise with your friends and family, you don’t have any choice about the technology you use. Sure, you can choose between Apple and Google, and Instagram and Snapchat, and Gmail and Outlook, but if you want a job, if you want to stay in touch with your friends and family, if you want to get invitations to birthday parties and weddings, you will use a smartphone, an instant messaging app, an email provider, all of which are made by the same three or four corporations.
Our seeming powerlessness runs head-on into the abuses of power by those very same corporations. Even if you are concerned about Facebook’s policies, what difference would it make if you deleted your account? Should you stop using Uber and use Lyft? Or not use ridesharing at all? Just how bad are we meant to feel about joining Amazon Prime and exploiting warehouse workers? If we have no choice over what technologies we adopt, and if those technologies exert more and more power over our lives, how can we hope our lives will be better tomorrow than they are today, other than hoping that corporations won’t “be evil”?
I don’t know why Prof. Shannon Vallor’s book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, attracted so little notice when it was published in 2016. Perhaps it’s because she counsels a middle path between starry-eyed Silicon Valley techno-utopianism and deeply conservative techno-pessimism. Perhaps her formidable academic credentials are seen by journalists as inferior to working at Google as a design ethicist for a few years. I really couldn’t say.
Regardless, Technology and the Virtues is the most useful, thorough, realistic, and hopeful book I’ve read on how we, as individuals and as a global species, should choose and use technology today and in the future. Vallor, a philosopher of technology at Santa Clara University, claims that today’s technologies are so powerful and pervasive that our decisions about how to live well in the 21st century are not simply moral choices, but that:
they are technomoral choices, for they depend on the evolving affordances [abilities] of the technological systems that we rely upon to support and mediate our lives in ways and to degrees never before witnessed.
a theory of what counts as a good life for human beings must include an explicit conception of how to live well with technologies, especially those which are still emerging and have yet to become settled, seamlessly embedded features of the human environment. Robotics and artificial intelligence, new social media and communications technologies, digital surveillance, and biomedical enhancement technologies are among those emerging innovations that will radically change the kinds of lives from which humans are able to choose in the 21st century and beyond. How can we choose wisely from the apparently endless options that emerging technologies offer? The choices we make will shape the future for our children, our societies, our species, and others who share our planet, in ways never before possible. Are we prepared to choose well?
This question involves the future, but what it really asks about is our readiness to make choices in the present.
Upon which principles should we make those choices?
Vallor argues that Kant’s categorical imperative – by which the principle you act on in a particular case should function as a universal rule for everyone to follow at all times (e.g. if I don’t think everyone should lie to spare themselves trouble, then I shouldn’t lie in any situation ever) – is unusable in a world where we cannot envision the possible futures with enough clarity to inform our decisions. For example, without knowing how cyborg enhancement technology would work in practice in the short and long term – because the technology has not yet been fully developed – we can’t say with confidence that we, and therefore everyone, should practise cyborg enhancement.
John Stuart Mill’s utilitarian ethics is unusable for the same reason: the potential of any technology is so opaque and uncertain that we can’t assign reliable probabilities to specific outcomes, making it incredibly difficult to decide whether any technology will ultimately maximise global happiness.
Rather than throwing our hands up and leaving matters to the market, Vallor turns to virtue ethics – that is,
a way of thinking about the good life as achievable through specific moral traits and capacities that humans can actively cultivate in themselves
Vallor’s mission is to adapt the classical virtue traditions of Aristotelian, Confucian, and Buddhist ethics and use them as the basis of new ‘technomoral’ virtues for 21st century life. These virtues, if cultivated, would help individuals choose which technologies to use and to develop. For Vallor, unlike many pessimists, holds out hope that humans can flourish with technology, rather than without or in spite of it.
(I know, ‘technomoral’ sounds super-hokey. I agree, but let’s not get hung up on it.)
By denying that fixed principles or rules can ever capture the ‘right action’ we should take in moral decisions, virtue ethics avoids the overly demanding requirements of Kantian and utilitarian ethics for individuals to impartially weigh up the competing interests of strangers and loved ones, not to mention the bizarre results that come from following rule-based systems too closely, like the blanket injunction against lying – even to the “inquiring murderer” at the door who wants to know if we’re sheltering the innocent he wishes to kill – or the seemingly consistent justification of killing a random scapegoat for the benefit of the many.
So how do virtue ethicists think we can choose right actions, if not through rules? By the cultivation of virtues that allow people to combine considerations of abstract moral principles – like those in Kantian and utilitarian ethics – with practical wisdom. This may sound like a circular argument, but in a nutshell, the point is that you aren’t likely to live a good life by blindly following moral rules provided by religious, political, or cultural authorities. You need to exercise judgement and care based on the circumstances.
What does any of this have to do with technology? Because technology is changing so rapidly (e.g. social media and elections), its powers have grown so vast (e.g. man-made climate change), and its ultimate effects become so unpredictable (e.g. artificial intelligence), it’s harder to defend a rule-based ethical approach. And while the media loves to pose black and white questions like “Is Twitter bad?”, Vallor suggests a more useful set of questions might be:
“How might interacting with social robots help, hurt, or change us?”; “What can tweeting do to, or for, our capacities to enjoy and benefit from information and discourse?”; “What would count as a ‘better,’ ‘enhanced’ human being?” It should be clear to the reader by now that these questions invite answers that address the nature of human flourishing, character, and excellence – precisely the subject matter of virtue ethics.
The first major part of Technology and the Virtues is given over to an exhaustive justification of adopting a ‘contemporary technosocial virtue ethic’ that can operate on a global level while catering to very different moral traditions and cultures, largely by identifying family resemblances between classical Aristotelian, Confucian, and Buddhist virtue ethics and using them as a foundation.
This was very hard going. Like a good PhD thesis, these chapters were perfectly comprehensible but mind-numbingly structured, delving through Aristotelian, Confucian, and Buddhist traditions again and again and again in an unchanging cycle. It makes for an excellent reference guide and it’s far more thorough and knowledgeable than any pop-science or philosophy book you’ll find on the bestsellers list, but it’s also much less accessible, which seems to have limited its audience and impact.
Eventually, scaffolding complete, we get to the part we’ve all been waiting for: the twelve technomoral virtues of honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom.
It’s crucial to note what Vallor doesn’t do. With regards to honesty, she doesn’t tell us that Edward Snowden is more or less honest than John Oliver or Bernie Sanders, but she does explain that:
…merely telling the truth, even reliably so, is never sufficient for the virtue of honesty, which requires that we tell the truth not only to the right people, at the right times and places, and to the right degree, but also knowingly, and for the right reasons. As philosopher Harry Frankfurt’s On Bullshit so concisely explains, honesty is not the same as mere true speech, which can be issued for any number of amoral or vicious purposes, including deliberate obfuscation.
With the rise of Wikileaks, doxxing, and ‘fake news’ – and with the new power of ‘citizen journalism’ to effect real justice – honesty is needed more than ever. It’s corny, but that doesn’t make it wrong.
Then there’s self-control, a virtue that can be eroded by compulsive game designs and never-ending notifications, and buttressed by Apple’s Screen Time and Google’s Digital Wellbeing, defined by Vallor as:
an exemplary ability in technosocial contexts to choose, and ideally desire for their own sakes, those goods and experiences that most contribute to contemporary and future human flourishing.
On humility, Vallor provides the clearest and most insightful summary of the problems with both techno-optimism (which I have been guilty of) and blind techno-pessimism:
Technomoral humility, like all virtues, is a mean between excess and a deficiency. The deficiency is blind techno-optimism, which uncritically assumes that any technosocial innovation is always and essentially good and justified; that unanticipated negative consequences can always be mitigated or reversed by ‘techno-fixes’ (more and better technology); and that the future of human flourishing is guaranteed by nothing more than the sheer force of our creative will to innovate. The other extreme, techno-pessimism, is equally blind and uncritical: it assumes that new technological developments generally lead to less ‘natural’ or even ‘inhuman’ ways of life (ignoring the central role of technique in human evolution), and that the risks to which they expose us are rarely justified by the potential gains. This attitude sells short our creative potential, our adaptability, and our capacity for prudential judgement. Humility, then, is the intermediate state: a reasoned, critical, but hopeful assessment of our abilities and creative powers, combined with a healthy respect for the unfathomed complexities of the environments in which our powers operate and to which we will always be vulnerable.
On courage, we receive a bracing warning:
…Mengzi [a Confucian philosopher] notes that a starving beggar will refuse food given with abuse. Why? Because his self-respect and dignity, already endangered by his low social status, in the end are more precious to him than mere survival. He can endure more hunger, even death, but he will not endure his own further debasement. How does this relate to moral courage?
In Mengzi’s view, the majority of human beings are numb to the threat to our own dignity posed by the ethically compromised ways in which we allow ourselves to live. We all share the beggar’s natural inclination to preserve his own dignity, but due to the relative comfort of our lives this inclination is no longer actively engaged, it is not part of our daily awareness. This failure to habitually care for our self-respect and moral dignity results in a lack of moral courage; when presented with a choice between giving up some material or social comfort to which we are accustomed, and surrendering even more of our moral respectability, our will to endure the former to save the latter is lacking. I expect that many readers will find Mengzi’s point no less resonant in our contemporary world.
We’re counselled to express empathy wisely, not blindly:
It makes a great difference to my character whether I am moved by empathic concern at the right time and places, by the right people, and with the right intensity. For example, if my brother is a malicious hacker facing harsh punishment for intentionally defrauding millions of vulnerable senior citizens, my own moral excellence in this situation requires my mental and emotional energies to be adequately attuned to and moved by the suffering of his victims, and not inappropriately consumed by concern for my brother’s troubles. On the other hand, to be wholly indifferent to my brother’s present and future suffering would also be vicious.
The chapter on care is particularly far-sighted with regards to the development of social robots and the general outsourcing of caring practices for the ill and elderly:
Consider how systems of social and economic privilege have long allowed individuals to divest themselves of the responsibility for caring practices by delegating these responsibilities to hired substitutes or, increasingly, by using technology to meet needs that previously could only be met by the active labor of human caregivers. On care ethicist Joan Tronto’s view, human beings who enjoy such privilege risk becoming less and less capable of competent care, less emotionally comfortable with close proximity to vulnerability and weakness, less attentive and responsive to need, and less responsible for themselves and others.
… It is hardly necessary to point out that less exhausted and stressed parents, fewer wounded medics, and fewer aid-workers taken hostage by militant warlords would all be good things. However, if we recall the view of care ethicists that we become moral selves largely by teaching ourselves to actively respond to and meet the needs of others, we see the attendant moral cost of a trend toward expanding technological surrogates for human caring. Without intimate and repeated exposure to our mutual dependence, vulnerability, weakness, concern, and gratitude for one another, it is unclear how we can cultivate our moral selves.
Vallor defines technomoral civility as:
a sincere disposition to live well with one’s fellow citizens of a globally networked information society: to collectively and wisely deliberate about matters of local, national, and global policy and political action; to communicate, entertain, and defend our distinct conceptions of the good life; and to work cooperatively toward those goods of technosocial life that we seek and expect to share with others.
with some very fine examples of the good habits of citizens, citing Sibyl Schwarzenbach, who believes “in a tolerant liberal society, civic virtue no longer entails a special interest in the private morality of others, but rather a narrower interest in their public moral character.” In other words, what you do in the privacy of your own home is your own business (providing it doesn’t harm anyone else) – but if you’re going to lie, cheat, and bully people, those with civic virtue must push back.
Flexibility is required for us to deal with “novel, unpredictable, frustrating, or unstable technosocial conditions”, and to work together as a culturally diverse species to combat global existential threats like climate change. That said, we shouldn’t be too flexible:
…the capacity for global technomoral agency is one plausible criterion for deciding which differences in cultural norms warrant mutual forbearance in the interests of global civility and flourishing, and which norms cannot safely be tolerated because they are objectively inimical to such agency and thus to the flourishing of our species and planet. For example, any local conception of ‘feminine virtue’ that is incompatible with the education or civic visibility of women is globally intolerable on this criterion.
I’ve already quoted too much, but there’s plenty more in the book.
Before her conclusion, Vallor is interested in three types of emerging technology and how they relate to her twelve technomoral virtues: Social Media, Surveillance and the Examined Life, and Robots at War and at Home.
These three chapters, especially the one on social media, are necessarily more dated and uneven than the previous ones, seeing as they relate to then-current (as of 2015-16) developments in technology. Her recommendations on the sensible use of social media will be familiar to anyone who’s read around the topic in the past couple of years, and some of her specific suggestions have already come to pass, such as Apple’s Screen Time and Google’s Digital Wellbeing features that help users limit the use of their devices, and hiding/muting features on Twitter and Facebook. Other “techno-fix” recommendations come across as under-researched or naive in their understanding of online publishing economics; we’ve already tried making people take quizzes to comment on news articles, and it just doesn’t work.
That said, I welcomed her defence of social media in general; her emphasis on developing not just individual but collective ways of seeking healthier digital norms is welcome; as is her dire warning to the current princes of Silicon Valley:
Too many inside the Silicon Valley ‘thought bubble’ are oblivious to the fact that ethics matters, not because academics or media critics say so, but because people will always want good lives, and will eventually turn on any technology or industry that is widely perceived as indifferent or destructive to that end. It only took a few decades for cigarette companies to go from corporate models of consumer loyalty and affection to being seen as merchants of addiction, sickness, and death, whose products are increasingly unwelcome in public or private spaces. Only extravagant hubris or magical thinking could make software industry leaders think they are shielded from a similar reversal of fortune.
And it’s when Vallor widens her gaze to the deep past that we understand the problems of our current time more clearly. We’ve always understood that civility is contextual in offline spaces, but now we need to understand the same is true for the online world:
A civil person will resist aggressive and unyielding impositions of her political worldview, but also resist the habit of polite docility that encourages silence on issues of critical public concern. Furthermore, civility is expressed differently at different times and places, in a situationally intelligent manner. What is virtuous civility at a wedding, where the mildly ignorant prejudices of one’s table companions are suffered in favor of social harmony (unless, perhaps, they are directed at other guests), may well be vicious if one tends to remain silent when those same common but harmful prejudices are voiced at town meetings or in classroom discussions.
… Neither Twitter nor the classroom are appropriate places for the kind of bland civility one might maintain at a wedding … Yet this does not mean that Twitter or academia should be ‘civility-free’ zones. Just as we would be right to deem a scholar civilly unfit for the academic profession if he or she were in the habit of using their fists at conferences, or shouting down students with profanities and threats, we must recognize that on Twitter one can behave in a sufficiently uncivil manner as to warrant having one’s tweets deleted or one’s account blocked.
Surveillance and the Examined Life begins by reviewing the debate between David Brin, author of The Transparent Society, and security expert Bruce Schneier. Briefly, Brin believes that ubiquitous surveillance will lead to a fairer and less corrupt society, while Schneier believes that in practice, surveillance is usually only for the least powerful – thus widening power differentials and corruption.
I was impressed by The Transparent Society as a student, but today it feels almost beside the point. We have ubiquitous surveillance now, via our email and phones and smart home devices. We’re drowning in data. Sure, there are stories about the rich and powerful being brought down by our new powers of citizen surveillance, but I would venture that in practice, state and corporate power extends much further, regulating and punishing the most minor of infractions. This is all to say that the debate on surveillance has moved on considerably since the book was published.
The section on the Quantified Self movement, however, is more current. While the fervour surrounding the movement has abated somewhat, its effects reverberate with every Fitbit notification and every Apple Watch’s exhortation to ‘complete your rings’, no doubt much to Vallor’s disapproval:
The obsessive quality of many Quantified Self habits evokes the philosopher Mengzi’s warning against hyperactive, overly self-conscious efforts at self-improvement, which he compared to the self-defeating habit of the farmer who tried to help his plants grow faster by pulling at their sprouts.
…The moral dimensions of the self are among the most difficult to translate into numbers. In contrast with variables such as calories, muscle mass, or sleep hours, the moral features of my person seem to be difficult to formalize into a tidy list of variables, much less variables that can be assigned discrete values.
…It is difficult to see how the habits of self-tracking promoted by the Quantified Self movement can coexist happily with the philosophy and spiritual habits of an examined life. After all, each practice demands a considerable investment of time and mental energy; can one imagine a Quantified Self devotee faithfully cataloguing and analyzing dozens of daily datapoints on their behavior and mental states, while also exercising the daily habits of narrative self-examination practiced by the likes of Marcus Aurelius or Emerson?
…The most accurate and comprehensive recording of your past and present states would not constitute an examined life, because a dataset is not a life at all. As Aristotle reminds us, my life includes my future, and thus the examined life is always a project, never an achievement.
A very good, if brief, section on nudging follows, including artificial intelligence-driven ‘moral agents’ (hey, I wrote a short story about them!), which Vallor believes might lead to citizens performing ‘good’ acts without understanding why those acts are good.
Robots at War and at Home covers well-worn territory in terms of the dangers of military robots numbing us to constant war. But as in the previous section of the book, I was genuinely moved by her passages on care:
Caring requires courage because care will likely bring precisely those pains and losses the carer fears most—grief, longing, anger, exhaustion. But when these pains are incorporated into lives sustained by loving and reciprocal relations of selfless service and empathic concern, our character is open to being shaped not only by fear and anxiety, but also by gratitude, love, hope, trust, humor, compassion, and mercy. Caring practices also foster fuller and more honest moral perspectives on the meaning and value of life itself, perspectives that acknowledge the finitude and fragility of our existence rather than hide it. Thus if I transfer my full obligation of care to a carebot, I have “done something” to care, but not in a manner that will sustain my self as a moral, caring being.
…Rather than inviting us to be ‘liberated’ from care, if carebots can provide limited but critical forms of support that draw us further into caregiving practices, able to feel more and give more, freed from the fear that we will be crushed by unbearable burdens, then the moral effect of carebots on human character and flourishing could be immensely positive. Carebots in such contexts can sustain rather than liberate human caregivers, saving them from the degradation of their own ideal ethical self. Certainly caregivers need support to come in many more forms than smart and shiny robots—they need better support from extended family, friends, employers, lawmakers, healthcare providers, and insurance companies. They need better and more affordable care facilities. They need more financial security. They need access to effective caregiver education, training, and support networks. But in more than a few practices and contexts, caregivers might get significant ‘moral support’ from carebots as well.
And then finally, we reach a chapter on human enhancement – that is, research into immortality and the radical altering of the biological nature of humanity. Once again, Vallor walks the tightrope between rejecting human enhancement out of a woolly respect for human ‘dignity’, and transhumanist boosterism that immortality and suchlike will obviously be better for human flourishing than what we have now, for largely unstated reasons:
Most bioconservatives articulate a concern for biological integrity and for human striving. However, the tension between them renders such accounts problematic, both conceptually and practically. If the human aspiration to cultivate ourselves is the root of our dignity, and if human enhancement can open up new paths of cultivation and higher states of cultivated excellence, then at least some imaginable enhancements could reinforce our dignity by removing biological obstacles to those higher states. Thus bioconservatism appears incoherent as long as these disparate moral intuitions are conflated and packaged together under the amorphous heading of ‘human dignity’.
…of all the difficulties transhumanism faces, the real problem is knowing what it is that we ought to wish for. It is not that transhumanists simply wish for the wrong things. Rather, the libertarian philosophies that pervade the transhumanist community seem to preclude them from wishing for any clear ends at all, only the widespread availability of certain technological means, to be used however free individuals and groups see fit.
…We are told [by Ramez Naam, a transhumanist] not merely that we will expand and splinter, but that we will “blossom”—this is a normative metaphor that presupposes flourishing in the achievement of some appropriate state or end. Certainly the metaphor is compatible with a wide range of such ends, but still there must be criteria for what qualifies as ‘blossoming’ and what does not. Not all change or growth in living systems amounts to flourishing. Some change is rot. Some growth is malignant. What justifies Naam’s faith that the free and undirected pursuit of enhancement is more likely to promote human ‘blossoming’ than the alternatives? As he goes on, the incoherence becomes even starker. In the same passage he adds that though our future descendants might be so different “in ways we cannot imagine” that we would fail to recognize them, we may be reassured that nevertheless they “will have the traits most dear to us.” What could possibly ensure that result, given what has otherwise been said?
A book as ambitious and wide-ranging as Technology and the Virtues can be difficult to fully grasp. There are so many interesting ideas and insights that you could spend months just exploring them in turn. But here’s what I took away, in a nutshell:
Don’t believe the pessimists. Don’t wallow in despair. Technology can lead to human flourishing, but it won’t do so through some ‘invisible moral hand’, nor will it get there by us simply caring more and voting the right political parties in. We’ll only get to that bright future by cultivating those virtues that we know have led to human flourishing in the past. You may disagree with some of Vallor’s choices of virtues, but she’s brought formidable reasoning to her side.
This is not a self-help book. Just how you cultivate those virtues is an exercise left to the reader. But it’s clear that those virtues, once cultivated, will help us answer those ‘technomoral’ questions we face every day, whether that’s to use Facebook, take an Uber, buy your partner a Fitbit, or invest in ‘carebots’.
So, why write about the future? Vallor turns to that most beloved story of Silicon Valley – Star Trek:
Star Trek speaks to some basic human needs: that there is a tomorrow—it’s not all going to be over with a big flash and a bomb; that the human race is improving; that we have things to be proud of as humans. In the original series, the humans of the 23rd century repeatedly reference a critical leap in moral development made by 21st–22nd century humans; a cultural transition that enabled their narrow escape from self-destruction in internecine wars fueled by new technoscientific powers. In the future Roddenberry envisioned, humanity passes through its Great Filter not by inventing warp drives and transporters, nor by enduring a global apocalypse that erases our weakened cultures and broken institutions, but by consciously cultivating the technomoral virtues needed to improve them: the self-control, courage, empathy, civility, perspective, magnanimity, and wisdom to make humanity worthy of its greatest technoscientific aspirations. Such a future has not been promised to us; but it is the only future worth wanting.