Friday, October 17, 2003


Boy, don't I feel smart today.

We had a philosopher who gave a talk about Astrobiology. I'll try to summarize his main points, because I think they're interesting, and add some of my own.

1. LIFE is defined NOT by fundamental principles (which themselves must be defined) NOR by entropy arguments (which allow too much) but by minimal DARWINIAN NATURAL SELECTION. Anything that has INHERITANCE, VARIABILITY in inheritance, and FITNESS dependence on inherited traits is alive. This makes no assumptions about mechanisms, but it does necessitate some form of reproduction.
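
To make that definition concrete, here is a minimal sketch of my own (not the speaker's) of a system that would count as "alive" under point 1: a population of replicators with inheritance, heritable variation, and fitness that depends on the inherited trait. All names and parameters below are made up for illustration.

```python
# A toy population satisfying the three criteria from point 1:
# INHERITANCE, VARIABILITY in inheritance, and trait-dependent FITNESS.
import random

POP_SIZE = 100
MUTATION = 0.05      # spread of heritable variation per generation
GENERATIONS = 50
OPTIMUM = 1.0        # trait value favored by the "environment"

def fitness(trait):
    # Fitness depends on the inherited trait: closer to OPTIMUM is better.
    return 1.0 / (1.0 + abs(trait - OPTIMUM))

population = [random.uniform(-2.0, 2.0) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: individuals reproduce in proportion to their fitness...
    weights = [fitness(t) for t in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # ...and offspring INHERIT the parental trait with a little VARIATION.
    population = [p + random.gauss(0.0, MUTATION) for p in parents]

mean_trait = sum(population) / POP_SIZE
print(f"mean trait after {GENERATIONS} generations: {mean_trait:.2f}")
```

Run it and the mean trait drifts toward OPTIMUM. Nothing about the underlying mechanism (DNA, silicon, Martian chemistry) enters the definition; only the three properties matter.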

2. ETHICS are a set of rules that define how we treat SENTIENT beings. By definition, anything that is not sentient has no ethical value. However, non-sentient things may have indirect value through their relationship to us. For example, you don't shoot your neighbor's cat because it has an inherent right to live. Instead, you don't shoot your neighbor's cat because your neighbor places aesthetic value in that cat, and it is ethical to respect your neighbor's wishes. This applies to life on MARS. If we find life on Mars, we have to decide what to do with it. Initially, life on Mars has value to us because we can learn new information about life in the universe. However, what if a situation arises where we have to colonize Mars or die (Earth will explode or something)? We will have to make an ethical decision about whether Martian bacteria have an intrinsic right to life or whether we have the right to destroy them to terraform the planet. Since we are SENTIENT, the Martian bacteria are less important, so we colonize Mars instead of accepting certain death.

3. There are many problems with #2. US policy, especially the Endangered Species Act, suggests that our society thinks all life has a basic ETHICAL VALUE regardless of its benefit to us. Consider all the higher-order endangered species that would not affect us if they ceased to exist. Or consider endangered fish that stop us from building a dam that would benefit us immensely. This also brings up another problem: SENTIENCE is a threshold property, and it is an arbitrary, anthropomorphic one. Vegetarians and vegans think that being an ANIMAL is the threshold, not sentience.

4. Ethics are entirely SUBJECTIVE, arising from instincts and evolution. There are no UNIVERSAL ethical principles. We could come across a sentient alien race that has totally different ethics than us. The only way to get universal ethics is through EVOLUTIONARY CONVERGENCE, and we have no way of knowing right now whether that happens.

5. As humans, we are limited by our empirical knowledge. When we see a dog acting human, we have NO way of knowing whether that dog is making ethical decisions or not. There are two possibilities. First, the dog is not sentient at all, and we are interpreting its actions from an anthropomorphic point of view. Second, the dog is making mental DECISIONS (not instincts or learned behaviors) to act or react in a certain manner. Since we cannot make the distinction between these two possibilities, how can we determine whether the dog has ethical value (including the right to life)?

6. Humans are both RATIONAL and IRRATIONAL. Think about people who have irrational fears. There is no rational reason to be scared, but they are scared nonetheless. Just because we make an ethical decision about what is RIGHT or WRONG, that doesn't mean we will DESIRE to do the right thing. Suppose the principle in #2 is correct. Then the WRONG reason to be vegetarian or vegan is that we feel sorry for the animals. The RIGHT reason to be vegetarian or vegan is that high-density feedlots, overfishing, etc. are bad for the environment, and hence bad for us. The former is an IRRATIONAL reason, while the latter is the RATIONAL one. Omnivores, therefore, have an IRRATIONAL reason for eating meat: they just want to. Dostoyevsky loved talking about this kind of stuff. Of course this all assumes #2, and there are many other issues surrounding this question that I don't have time to discuss.

7. I will pose the earlier ethical question again: The Earth will blow up, and everything on it will die. However, we have just enough time to move our civilization (and even many Earth species) to Mars. We have discovered life on Mars. If we move to Mars, we will have to terraform it and will destroy all Martian life. Do we respect Martian life and die, or do we kill Martian life and survive? What is the ethical thing to do? What will humans ultimately do? Is there a difference between the two? Why?

8. Here's another ethical question: There is a stranded boat in the middle of the ocean. Inside are an 80-year-old woman, a 20-year-old man, and a young puppy. They have no food. YOU must decide which one will die to feed the others. There are really only three options: 1) the puppy dies because it isn't sentient; 2) the old woman dies because her life is basically over; 3) they all have an equal right to life, so you decide by chance. What do YOU choose? What does your choice say about your ethics? What is the most probable solution? Would (or should) the problem change if the puppy were replaced by a jar of bacteria?

9. I would also like to mention that after the talk, the speaker commented that I was pretty quiet the whole time, while everybody else was talking and making (mostly idiotic) comments about how this guy was totally wrong. He said my quietness probably means I had it all figured out. I liked hearing that. ;-)
