When I learned that David Chalmers had written a new book, Reality+: Virtual Worlds and the Problems of Philosophy, I couldn’t wait to read it. Later I had the opportunity to spread the word by interviewing him. Chalmers is a rare philosopher: a deep thinker who is passionate about the field and can write glorious, evocative prose.
Reality+ is long (over 500 pages) but lacks filler and isn’t repetitive. A general audience can devour the text without getting lost or overwhelmed. How does Reality+ remain accessible and engaging cover-to-cover when Chalmers never dumbs down the material? He makes the ideas shine, peppers the prose with pop culture references, and adds the garnish of cartoony illustrations.
In the months that have passed since I conducted the interview, I’ve been thinking about what Chalmers does well and whether it matters that he and I approach technology differently. I’ve concluded that our differences do matter. Before I explain why, here’s a synopsis of his book.
Adapting Patricia Churchland’s characterization of her work as “neurophilosophy,” Chalmers calls the approach guiding Reality+ “technophilosophy.” “Technophilosophy,” he writes, “is a combination of (1) asking philosophical questions about technology and (2) using technology to help answer traditional philosophical questions.” To pursue (1), Chalmers discusses well-known philosophical ideas about reality, knowledge, consciousness, and values. For example, he revisits Plato’s allegory of the cave, Descartes’ scepticism, the brain-in-a-vat hypothesis, the experience machine and Monochrome Mary thought experiments, extended mind theory, and many other positions covered in undergraduate courses. To pursue (2), Chalmers juxtaposes these seminal ideas with philosophical considerations that arise when we reflect on the nature of virtual worlds. He emphasizes what it’s like to embody digital avatars, explore highly immersive digital environments, and interact with digital objects and with digital entities controlled by both human and artificially intelligent minds.
Chalmers begins by reviewing the simulation argument – the position that, given all available evidence, it’s reasonable to believe we’re living in a computer simulation. Yes, we go through daily life thinking we’re real, other people are real, and the world itself is real. Nevertheless, future civilizations interested in studying their ancestors, or some other entity, may have created all of this. Chalmers concludes that “we cannot know” for sure and “should assign substantial probability” to the possibility. He further leverages the uncertainty as an intuition pump for analysing virtual reality – a domain which, he projects, eventually will become “indistinguishable from the nonvirtual world.” Arguing for the position of “virtual realism,” Chalmers contends that virtual reality is genuine reality, virtual objects are real digital objects, and relationships experienced in virtual reality are real, too.
Much of Reality+ revolves around Chalmers precisely explaining what makes something real. To do this, he provides detailed answers to five questions that constitute a “reality checklist”: “Does it really exist? Does it have causal powers? Is it independent of our minds? Is it as it seems? Is it a genuine X?” While answering these questions, Chalmers covers extensive philosophical ground, including whether we can lead the good life in a virtual world, whether a digital entity can attain consciousness as complex as ours, whether digital entities can deserve moral respect, whether virtual and augmented realities lead to a type of relativism, whether deceptive information in virtual and augmented realities poses a fundamental threat to democracy, whether theological considerations apply to simulations, and whether reality has a mathematical structure.
Those who believe real experiences must occur in person express the most defiant opposition to virtual realism. They’ll argue, for instance, that a virtual reality hike up an immersive, engaging digital version of Mount Everest doesn’t qualify as a trip to a real place. Crucially, though, the position runs deep and extends beyond virtual reality. Consider the following polemic about interacting on the videoconferencing platform Zoom.
Harvard professor Arthur Brooks finds Zoom dissatisfying. After listing several negative features related to the affordances of the technology, like Zoom fatigue and the muting of mirror neurons, he mounts what he calls a “philosophical objection”: “Virtual interaction is a simulation of real human life…. Just as I want to be real, I want you to be as well. I want you to be something more than a two-dimensional pixelated image, assembled from a series of ones and zeroes through cyberspace.” It’s easy to appreciate why Brooks prefers meeting “in person.” Still, his position is flawed. He conflates what’s real (and where real life occurs) with what’s desirable.
During the pandemic, I taught courses over Zoom to students I had never met face-to-face. Perhaps some of them found learning philosophy online less desirable than being educated in person because of the adverse affordances Brooks highlights. If so, Chalmers would appreciate their outlook. He readily concedes, “Zoom is convenient, but it has many limitations.” Still, even if some students found the classes suboptimal, they wouldn’t be justified in concluding the classes weren’t real. That’s because Chalmers, not Brooks, takes the right approach to judging reality.
For starters, students had shared experiences. When they discussed the material, their interactions had causal consequences – one person’s ideas impacted another’s mind. When they submitted papers, the quality determined what grades they received. And when the courses ended, the final grades were recorded on transcripts, an outcome that could impact future opportunities like internships and jobs. Thankfully, I received high course evaluations, and most students were satisfied. But even if I had tanked, our relationships and experiences wouldn’t have reflected a reality deficit.
Despite the numerous virtues of Reality+, Chalmers and I part ways in how we approach an important question: should practical, real-world constraints and problems influence how philosophers talk about possible uses of technology and possible futures where technology plays a significant role in people’s lives? When Chalmers discusses a particular technology, he often lists possible, hypothetical ways to use it without considering the actual ethical and political implications. For example, when entertaining the possibility that “within a decade or two, we may all use augmented reality,” he notes that augmented reality glasses could deploy “automated face recognition” to identify people. Yes, that’s possible. But it’s also highly undesirable – and that undesirability should influence how the technology is represented.
As Woodrow Hartzog and I have long argued, the actual (as opposed to hypothetical) affordances of facial recognition technology are so pernicious that it’s the perfect tool of oppression. Consequently, we believe facial recognition technology is so dangerous for society, particularly vulnerable and marginalized people, that the only appropriate governance response is to ban it. Even seemingly mundane commercial uses of facial recognition technology will only further entrench and normalize the technology to the benefit of over-zealous law enforcement and data-hungry technology companies.
Chalmers’s gee-whiz description of facial recognition technology risks adding to the normalization problem, particularly because he further wants to establish that augmented reality technologies can extend the mind (Chapter 16). From an abstract functionalist perspective, Chalmers is right. Facial recognition technology embedded in augmented reality glasses can expand our powers of perception and memory. Nevertheless, no matter how often Chalmers admits that “every technology has its downsides,” the very act of classifying one as mind-expanding endows it with positive normative associations. Language matters. After all, who would prefer a limited mind to an expanded one? Indeed, as Brett Frischmann and I argue in Re-Engineering Humanity, philosophers should treat ethical and political issues as primary when analysing extended minds, not as afterthoughts. Of course, Chalmers doesn’t want to see anybody harmed by the use of technology. Perhaps he implicitly imagines that everyone identified by a facial recognition device gave their consent. Unfortunately, given the collective harms involved, I don’t believe anyone can legitimately do so.
The facial recognition technology example speaks to a larger issue: the future of virtual reality. Chalmers acknowledges that corporate control might intensify and, as a result, problems we experience today, like loss of privacy, may worsen. But he also envisions other, sunnier possibilities and wants us to consider them too – like once-scarce material goods becoming widely distributed. Since no one knows how the future will unfold, this seems a fair point. Furthermore, since it’s hard to create a better future without conceiving of one, positive visions have their place.
But the amount of money companies like Meta are spending on virtual reality (and the so-called metaverse) is staggering. To write a book that says so much about virtual reality without emphasizing political economy creates a significant risk: the reader can reach the last page far too enamoured with the possibility that, in principle, one can flourish in virtual reality, rather than putting the book down duly concerned about the factors likely to determine whose interests virtual reality will serve.
Reality+: Virtual Worlds and the Problems of Philosophy, by David Chalmers (W.W. Norton/Allen Lane), $32.50/£25.
Evan Selinger is a professor of philosophy at Rochester Institute of Technology. His latest book, co-authored with Brett Frischmann, is Re-Engineering Humanity (Cambridge University Press, 2018). He is a scholar in residence at the Surveillance Technology Oversight Project (S.T.O.P.).