Tuesday, December 16, 2014

Chicago Philosophy: Aliens and Ethics

[Image: Zach Weinersmith's "Saturday Morning Breakfast Cereal"]
Last Saturday the Chicago Philosophy Meetup hosted a really delightful session on "Aliens and Ethics". I'd wanted to put it on the upcoming events list, but the RSVP list filled to capacity almost immediately. It was definitely a philosophy discussion with a slight science fictional twist, as opposed to the other way around, but with a lot of genre relevance. I've consistently found SF to prepare me to think about philosophy and vice versa; there are some interesting parallels in their at-least-purported commitment to rationality and to using consistent but imaginative thinking to explore "what-ifs".
The jumping-off point for discussion was an essay by Bernard Williams, "The Human Prejudice". Our organizer, Derwin, provided a link to the article, and another member posted a YouTube series of Williams lecturing on this topic.

There are a number of things going on in the article, all built on the thought experiment "what if we met aliens we found to be ethically superior?"
  • Thinking about intelligent (and, importantly, intelligible) aliens allows us to think about personhood as distinct from biological humanity. As the article and group both discussed, humans live on a kind of "sentient plateau": due to the lack of other species with our language abilities, we've come to restrict moral personhood to our species alone.
  • Williams is taking a serious run at Singer-style utilitarianism; a few of us (myself included) think he's gone too far and constructed a series of straw men (for example, the idea that utilitarians are so simply "anti-suffering" that they would indulge in presumably-painless mass killing given the chance, to prevent future suffering).
  • But the method of critique that Williams is using is interesting--he's using the idea of supremely ethical alien beings to take on the idea of the Ideal or Impartial Observer--a theoretical device used by (some?) moral objectivists to help anchor their arguments. "Ethically superior" aliens, Williams tries to show, could serve as a reductio ad absurdum of the IO by forcing us to think about the actual choices an actual impartial, perfect moral reasoner could present us with. We wound up particularly talking about Rawls and his "veil of ignorance", or Original Position, a persuasive but often-critiqued element of his "A Theory of Justice".
I won't try to capture the whole of the discussion and argumentation--it was a jam-packed three hours of disputation and deliberation--but these are a few things that really stuck in my mind:
  • Speciesism. It's a very real thing, and despite the rehabilitation Williams is sorta-kinda attempting, an indefensible one. The whole discussion here had me thinking less of alien rights than of animal rights; though not well adopted by humanity at large, that thinking is already well-formed. See Singer's "Expanding Circle" (and many others). Once one identifies that the salient features of moral worth are not presence in a biological category (more on that in a sec), but rather a list of capacities (the ability to feel pain/pleasure, a concept of self, project-making, etc.), then it's no leap at all to think about aliens, artificial intelligences, or non-human animals as at least potentially deserving of moral consideration.
  • Vegetarian SF enthusiasts will probably find this pretty 101.
  • Species-as-a-category. We didn't get into it very much, but any moral scheme predicated on biological species is already doomed; "species" is neither a stable nor an essential category, just a short-hand for "critters what look pretty similar and can breed with each other".
  • The Personhood Plateau idea continues to appeal to me in a very science fictional, theoretical-evolutionary kind of way. Had things played out a little differently in the relatively recent (geological) past, we might very well have many species of Homo, with a range of different mental abilities, rather than a solitary Homo sapiens sapiens split so sharply from all other animals. That said, any discussion of human uniqueness must, I think, turn a blind eye to at least the other great apes; chimps, orangs, and gorillas all routinely exceed the "personhood" threshold we use to claim the moral worth of, for instance, young children or brain-damaged humans.
  • The Great Chain of Being lives! I'm perhaps a little hyper-vigilant for this one (pet peeve/theory, particularly in SF-related topics), but it kind of blew me away how prevalent scala naturae-style thinking was in this discussion. Flags on the field include "higher and lower" animals, "more evolved", "de-evolved", and so forth. The biggest/weirdest instance for me is the idea of "ethically superior" aliens. We can posit more intelligent beings that are better moral reasoners and act more consistently according to their own principles, but without invoking theistic angels of some sort (against which we immediately level all atheist argumentation ever) I can't conceive of a being we would somehow automatically grant "superior" status in all matters ethical.
  • Consequentialist ethics vs...everything else. I think this was the part that was bugging me the most, though I wasn't able to verbalize it at the time: almost all the discussion of these aliens being "morally" superior covertly relied on a consequentialist as opposed to a deontological ethic. If we grant that beings have rights of some kind, it may be true that there are "higher" rights (a healthy adult human has more rights than a permanently brain-damaged infant, or a dog), but there will probably remain side constraints on our behavior towards all rights-holders: the healthy adult can't torture the dog for pleasure alone, for instance, as I think we would mostly agree. Thinking this way, our alien guests might be better at helping us think through our rule- or rights-based decisions, but they would be unlikely to substantively trump or restructure our affairs.
  • I don't want to get too much into arguments about consequentialism generally. On the one hand it's very often the clear prudent/pragmatic course; on the other, anything relying on ends to justify the act will need to set arbitrary limits around the "ends", as well as quantify moral worth (i.e. a pleasure/suffering "calculus"). Moore & Gibbons's "Watchmen" is a nice genre example of this debate: Ozymandias asks Dr. Manhattan (who is basically an ethically superior alien/Impartial Observer) whether his (horrific) acts were justified "in the end", and Manhattan replies, "In the end? Nothing ends, Adrian. Nothing ever ends." One of my favorite heavy philosophical punch-lines (although note also that Manhattan has pragmatically endorsed the action rather than seeking the act-based justice desired by the moral absolutist Rorschach).
  • Even setting those kinds of arguments against consequentialism aside, an issue that kept coming up in some of Williams's thought experiments was the idea that the aliens could show us that, for instance, actions against humanity (reprogramming us, wiping us out) were justified due to their analysis of what we would eventually do. Ignoring the justice or injustice of punishment for acts not yet committed (someone at group brought up "Minority Report"), there's also an issue here of confusing really intelligent prediction with actually seeing the future. Determinism does NOT entail fatalism; being shown an empirical prediction and then being walked through how it was arrived at CHANGES the equation, making that particular future more rather than less evitable.
Derwin finished the session with a really fun project, in which we voted on whether to collaborate with or resist our ethically superior aliens in three scenarios from Williams:
  1. The aliens want to move in with us in large numbers; they won't harm us, but they are incredibly and inescapably disgusting.
  2. The aliens want to rule us and significantly change our nature, in order to bring about a utopia for us.
  3. The aliens want to exterminate us, in order to save some vast number of other peoples/planets from somehow being harmed or destroyed by humanity.
How people decided depended a ton on how we adjusted the dials on our intuition pump, as Dennett would have it. In case 1, we mostly decided to collaborate, with dissenters focusing either on the concept of harm or on practical/economic/immigration policies. In case 2, we mostly decided to resist, mostly on the principle of maintaining autonomy/legitimate government. In case 3, perhaps surprisingly, we were split right down the middle--I pointed out that I would "definitely fight to save Shakespeare and famine in Africa (in case 2), and I will definitely fight to wipe out humanity (in case 3)". Good times.

 Really a great meeting, and while we did toss around some philosophers and concepts, I don't think any formal philosophy training would be needed to get something out of these Chicago Philosophy Meetups. I'll be sure to post any other related meetups in the future!

SF examples/questions I was ready to trot out for this meeting (Williams is pretty condescending about, and NOT well versed in, SF examples) but didn't actually get to:
  • Asimov's notion of his robots as morally superior (better thinkers, better reasoners, better choice-makers).
  • I think it was either Buckminster Fuller or Carl Sagan who said something along the lines of: any civilization that travelled between stars must have harnessed great destructive energies without destroying itself, so it must have gone through a kind of ethical/political evolution that should guarantee its peaceable nature. See also Edgar Pangborn's SF.
  • Of course there's also a lot of SF going the other way, my favorite of which is Peter Watts's "Blindsight", which puts forward the theory that "technology implies belligerence". While any realistic mass/energy consideration will reveal that Independence-Day-style resource stripping doesn't make sense, there are lots of scenarios where aliens might almost necessarily be hostile towards other life: preemptive self-defense (even incredibly tiny percentages of threat become actionable if considered over millions of years--a quick back-of-the-envelope sketch of that compounding follows this list) or perhaps a different kind of resource utilization (conversion of a solar system to a Matrioshka brain by a post-singularity culture, for instance). While the simplistic space-opera "evil space empire" model doesn't make much sense, there's a ton of SF that considers reasonable conditions for interstellar aggression.
  • We also didn't get into the whole question of whether truly ideal/impartial superintelligences could actually exist. It's a bit of a nonsensically Platonic idea, isn't it? I think that a modern, cognitive-science-compatible read might be that any intelligence, no matter how good/fast at certain kinds of computation, will still necessarily have interests, an evolutionary past that influences its value structure, a specific body that affects its senses and processing, and probably a complex internal structure that will prevent pure impartiality (particularly if one subscribes to a non-Cartesian theory of mind, i.e. if you've paid attention).
  • Something Williams was not interested in, but I was kind of hoping to get to, is the question of when and where meeting aliens could result in ethics that don't mesh cleanly, due to deep-seated biological or cultural differences, for instance. Since ethics doesn't actually come from a pure, bodiless place--or at least not all of our ethical decisions do--there are a lot of ways in which biological/historical/cultural specificity affects us, mostly invisibly. Cherryh is of course my favorite example here, but SF is obviously rife with the "aliens and humans try to treat each other by the same code and it is disastrous" trope.
  • As you may already know about me, I have a hugely conflicted relationship with C.S. Lewis. However, Weston's speech (and Ransom's translation) towards the end of "Out of the Silent Planet" is a remarkably concise and rightly-satirical exposition of speciesism (and indeed connects it tangibly to the clearly unethical racist/nationalist justifications behind imperialism generally). That might be worth copying out at length later. Lewis also addresses the notion of responsibility, stewardship, and ultimately uplift of animals in the sequel, "Perelandra".
  • I'd also like to dig into the ethics of a few different weird areas much-explored in SF but starting to be real-world possibilities. Three examples off the top of my head:
    1. Uplifted animals. Perhaps merely through teaching, maybe through better "translation" of existing language or thought, more radically through technical/genetic modification, we can conceivably deal with animals as persons. See for instance David Brin's Uplift books (lots of other examples though, particularly with dogs, apes, dolphins--Swanwick, Kress, Tepper all spring to mind).
    2. AIs of all shapes and kinds and how we relate to them ethically. FAR too many examples here to survey them all. Ted Chiang and Greg Egan are probably my favorite people writing on this right now.
    3. Speciation of humans. Through cybernetic and genetic modification, we will see drastically different types of humans with many different abilities and faculties, affecting how we think about individual cases ethically. Not that we don't already; but purposeful modification ups the game quite a bit. Again, far too many writers to summarize. Peter Watts, Vernor Vinge, Charles Stross, Elizabeth Bear spring to mind, but there's tons.
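As promised above, here's a minimal back-of-the-envelope sketch of how "incredibly tiny percentages of threat" compound over deep time. This is my own illustration, not anything from Watts or Williams, and the one-in-a-million-per-century risk figure is purely invented:

    # How a tiny per-period threat compounds over deep time.
    # The per-century risk figure below is invented for illustration.
    def cumulative_risk(per_period_risk: float, periods: int) -> float:
        """Probability of at least one catastrophic event across all periods,
        assuming each period is independent and carries the same risk."""
        return 1.0 - (1.0 - per_period_risk) ** periods

    per_century = 1e-6               # one-in-a-million chance per century (assumed)
    centuries = 10_000_000 // 100    # ten million years, measured in centuries

    print(f"{cumulative_risk(per_century, centuries):.1%}")   # ~9.5%
    # Over a billion years the same tiny per-century risk exceeds 99.99%,
    # which is how negligible odds become actionable on cosmic timescales.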
If you're interested in reading/discussing philosophy, not always of such an explicitly science-fictional bent, the Chicago Philosophy Meetup seems like a really good group.
