Zach Weinersmith's "Saturday Morning Breakfast Cereal"
The jumping-off point for discussion was an essay by Bernard Williams, "The Human Prejudice". Our organizer, Derwin, provided a link to the article, and another member posted a YouTube series of Williams lecturing on this topic.
There are a number of things going on in the article, all built around the thought experiment "what if we met aliens we found to be ethically superior?"
- Thinking about intelligent (and, importantly, intelligible) aliens allows us to think about personhood as distinct from biological humanity. As the article and group both discussed, humans live on a kind of "sentient plateau": due to the lack of other species with our language abilities, we've come to restrict moral personhood to our species alone.
- Williams is taking a serious run at Singer-style utilitarianism; a few of us (myself included) think he's gone too far and attacked a series of straw men (for example, the idea that utilitarians are so simply "anti-suffering" that, given the chance, they would indulge in presumably-painless mass killing to prevent future suffering).
- But the method of critique that Williams is using is interesting--he's using the idea of supremely ethical alien beings to take on the Ideal or Impartial Observer--a theoretical device used by (some?) moral objectivists to help anchor their arguments. "Ethically superior" aliens, Williams tries to show, could reduce the IO ad absurdum by forcing us to think about the actual choices an actual impartial, perfect moral reasoner might present us with or force upon us. We wound up particularly talking about Rawls and his "veil of ignorance", or Original Position, a persuasive but often-critiqued device from his "A Theory of Justice".
- Speciesism. It's a very real thing, and despite the rehabilitation Williams is sorta-kinda attempting, an indefensible one. The whole discussion here had me thinking less of alien rights than of animal rights; though not well adopted by humanity at large, that thinking is already well-formed. See Singer's "Expanding Circle" (and many others). Once one identifies that the salient features of moral worth are not membership in a biological category (more on that in a sec), but rather a list of capacities (the ability to feel pain/pleasure, a concept of self, project-making, etc.), then it's no leap at all to think about aliens, artificial intelligences, or non-human animals as at least potentially deserving of moral consideration.
- Vegetarian SF enthusiasts will probably find this pretty 101.
- Species-as-a-category. We didn't get into it very much, but any moral scheme predicated on biological species is already doomed; "species" is neither a stable nor an essential category, just a short-hand for "critters what look pretty similar and can breed with each other".
- The Personhood Plateau idea continues to appeal to me in a very science fictional, theoretical-evolutionary kind of way. Had things played out a little differently in the relatively recent (geological) past, we might very well have many species of Homo, with a range of different mental abilities, rather than a solitary Homo sapiens split so sharply from all other animals. That said, any discussion of human uniqueness is, I think, perforce blind to at least the other great apes; chimps, orangutans and gorillas all routinely exceed the "personhood" threshold we use to claim the moral worth of, for instance, young children or brain-damaged humans.
- The Great Chain of Being lives! I'm perhaps a little hyper-vigilant for this one (pet peeve/theory, particularly in SF-related topics), but it kind of blew me away how prevalent scala naturae-style thinking was at this discussion. Flags on the field include "higher and lower" animals, "more evolved", "de-evolved", and so forth. The biggest/weirdest thing for me is the idea of "ethically superior" aliens. We can posit more intelligent beings, that are better moral reasoners and act more consistently according to those principles, but without invoking theistic angels of some sort (against which we immediately level all atheist argumentation ever) I can't conceive of a being we would somehow automatically grant "superior" status in all matters ethical.
- Consequentialist ethics vs...everything else. I think this was the part that was bugging me the most, though I wasn't able to verbalize it at the time: almost all the discussion of these superior aliens being "morally" superior covertly relied on a consequentialist as opposed to a deontological ethic. If we grant that beings have rights of some kind, it may be true that there are "higher" rights--a healthy adult human has more rights than a permanently brain-damaged infant, or a dog--but there will probably remain side constraints on our behavior towards all rights-holders: the healthy adult human can't torture the dog for pleasure alone, for instance, I think we would mostly agree. Thinking this way, our alien guests might be better at helping us think through our rule- or rights-based decisions, but they would be unlikely to substantively trump or restructure our affairs.
- Don't want to get too much into arguments about consequentialism generally. On the one hand it's very often the clear prudent/pragmatic course; on the other, anything relying on ends to justify the act will need to set arbitrary limits around the "ends", as well as quantify moral worth (i.e., a pleasure/suffering "calculus"); a toy version of that problem is sketched just after this list. Moore & Gibbons's "Watchmen" is a nice genre example of this debate: Ozymandias asks Dr. Manhattan (who is basically an ethically superior alien/Impartial Observer) whether his (horrific) acts were justified "in the end", and Manhattan replies "In the end? Nothing ends, Adrian. Nothing ever ends." One of my favorite heavy philosophical punch-lines (although also note that Manhattan has pragmatically endorsed the action rather than seeking the act-based justice desired by the moral absolutist Rorschach).
- Even setting those kinds of arguments against consequentialism aside, an issue that kept coming up in some of Williams's thought experiments was the idea that the aliens could show us that, for instance, actions against humanity (reprogramming us, wiping us out) were justified by their analysis of what we would eventually do. Ignoring the justice or injustice of punishment for acts not yet committed (someone at group brought up "Minority Report"), there's also an issue here of confusing really intelligent prediction with actually seeing the future. Determinism does NOT entail fatalism; being shown an empirical prediction and then being walked through how it was arrived at CHANGES the equation, making that particular future more rather than less evitable (a tiny sketch of this feedback loop also follows the list).
The alien-contact scenarios from those thought experiments that we kept coming back to:
- The aliens want to move in with us in large numbers, they won't harm us, but are incredibly and inescapably disgusting.
- The aliens want to rule us and significantly change our nature, in order to bring about a utopia for us.
- The aliens want to exterminate us, in order to save some vast number of other peoples/planets from somehow being harmed or destroyed by humanity.
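Since I keep waving at that pleasure/suffering "calculus" without showing one, here's a minimal sketch of the "arbitrary limits" problem. Every number, the horizon, and the (unused by default) discount knob are made-up illustrations, not anything Williams or any actual utilitarian proposes:

```python
# A toy hedonic calculus. All numbers are hypothetical; the point is only
# that the verdict depends entirely on where you cut off the "ends".

def total_utility(per_year_utility, horizon_years, discount=1.0):
    """Sum (optionally discounted) utility over a chosen horizon."""
    return sum(per_year_utility(t) * discount ** t for t in range(horizon_years))

# An Ozymandias-style act: a huge immediate harm (-1000) buys a small
# ongoing benefit (+2/year), versus doing nothing at all.
def act(t): return -1000 if t == 0 else 2
def inaction(t): return 0

for horizon in (100, 1000):
    justified = total_utility(act, horizon) > total_utility(inaction, horizon)
    print(f"horizon of {horizon} years: act is {'justified' if justified else 'unjustified'}")

# horizon of 100 years: act is unjustified
# horizon of 1000 years: act is justified
# "Nothing ends, Adrian"--there is no non-arbitrary place to stop summing.
```

Pick a horizon (or a discount rate) and you've picked your answer, which is exactly the arbitrariness complaint.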
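And an equally minimal, equally hypothetical sketch of why disclosed prediction isn't prophecy: a predictor can model an agent perfectly, but handing the agent the forecast adds an input the model never included.

```python
# A self-defeating prophecy. The agent and its rule are hypothetical;
# the point is that disclosure changes the inputs.

def agent(disclosed_prediction=None):
    """Defects by default, but reforms when shown a credible forecast of that."""
    return "cooperate" if disclosed_prediction == "defect" else "defect"

forecast = agent()                             # model the agent in private: "defect"
actual = agent(disclosed_prediction=forecast)  # now show them the forecast
print(forecast, "->", actual)                  # defect -> cooperate

# Walking us through the prediction made that future *more* evitable,
# not less--determinism without fatalism.
```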
Really a great meeting, and while we did toss around some philosophers and concepts, I don't think any formal philosophy training would be needed to get something out of these Chicago Philosophy Meetups. I'll be sure to post any other related meetups in the future!
SF examples/questions I was ready to trot out for this meeting (Williams is pretty condescending and NOT well versed when it comes to SF) but didn't actually use:
- Asimov's notion of his robots as morally superior (better thinkers, better reasoners, better choice-makers).
- I think it was either Buckminster Fuller or Carl Sagan who said something to the effect that any civilization that travelled between stars must have harnessed great destructive energies without destroying themselves, and so must have gone through a kind of ethical/political evolution that should guarantee their peaceable nature. See also Edgar Pangborn's SF.
- Of course there's also a lot of SF going the other way, my favorite of which is Peter Watts's "Blindsight", where he puts forward the theory that "technology implies belligerence". While any realistic mass/energy accounting will reveal that Independence-Day-style resource stripping doesn't make sense, there are lots of scenarios where aliens might almost necessarily be hostile towards other life: preemptive self-defense (even incredibly tiny percentages of threat become actionable if considered over millions of years; see the back-of-the-envelope sketch after this list) or perhaps a different kind of resource utilization (conversion of the solar system to a Matrioshka brain by a post-singularity culture, for instance). The simplistic space-opera "evil space empire" model doesn't make much sense, but there's a ton of SF that considers reasonable conditions for interstellar aggression.
- We also didn't get into the whole question of whether truly ideal/impartial superintelligences could actually exist. It's a bit of a nonsensically Platonic idea, isn't it? I think that a modern, cognitive-science-compatible read might be that any intelligence, no matter how good/fast at certain kinds of computation, will still necessarily have interests, an evolutionary past that influences its value structure, a specific body that affects its senses and processing, and probably a complex internal structure that will prevent pure impartiality (particularly if one subscribes to a non-Cartesian theory of mind, i.e. if you've paid attention).
- Something that Williams was not interested in, but I was kind of hoping to get to, is the question of when and where meeting aliens could result in ethics that don't mesh cleanly, due for instance to deep-seated biological or cultural differences. Since ethics don't actually come from a pure bodiless place (or at least not all our ethical decisions do), there are a lot of ways in which biological/historical/cultural specificity affects us, mostly invisibly. Cherryh is of course my favorite example here, but SF is obviously rife with the "aliens and humans try to treat each other by the same code and it is disastrous" trope.
- As you may already know about me, I have a hugely conflicted relationship with C.S. Lewis. However, Weston's speech (and Ransom's translation) towards the end of "Out of the Silent Planet" is a remarkably concise and rightly-satirical exposition of speciesism (and indeed connects it tangibly to the clearly unethical racist/nationalist justifications behind imperialism generally). That might be worth copying out at length later. Lewis also addresses the notion of responsibility, stewardship, and ultimately uplift of animals in the sequel, "Perelandra".
- I'd also like to dig into the ethics of a few different weird areas much-explored in SF but starting to be real-world possibilities. Three examples off the top of my head:
- Uplifted animals. Perhaps merely through teaching, maybe through better "translation" of existing language or thought, more radically through technical/genetic modification, we can conceivably deal with animals as persons. See for instance David Brin's Uplift books (lots of other examples though, particularly with dogs, apes, dolphins--Swanwick, Kress, Tepper all spring to mind).
- AIs of all shapes and kinds and how we relate to them ethically. FAR too many examples here to survey them all. Ted Chiang and Greg Egan are probably my favorite people writing on this right now.
- Speciation of humans. Through cybernetic and genetic modification, we will see drastically different types of humans with many different abilities and faculties, affecting how we think about individual cases ethically. Not that we don't already; but purposeful modification ups the game quite a bit. Again, far too many writers to summarize. Peter Watts, Vernor Vinge, Charles Stross, Elizabeth Bear spring to mind, but there's tons.
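And the back-of-the-envelope arithmetic promised in the Blindsight bullet above. The per-year probability is a completely made-up placeholder, and the independence assumption is doing a lot of work, but the compounding is the point:

```python
# Cumulative chance of at least one hostile event, assuming (hypothetically)
# independent years with a constant per-year probability p:
#   P(at least once in N years) = 1 - (1 - p)**N

p = 1e-6  # a one-in-a-million annual chance a neighbor turns belligerent
for years in (1_000, 100_000, 1_000_000, 10_000_000):
    print(f"{years:>10,} years: {1 - (1 - p) ** years:.1%}")

#      1,000 years: 0.1%
#    100,000 years: 9.5%
#  1,000,000 years: 63.2%
# 10,000,000 years: 100.0%
```

On deep-time scales a negligible threat becomes a near-certainty, which is the whole preemptive-self-defense logic.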