Topic: Androids and the Mind/Body Problem
Note: In order to fulfill this assignment you need to have read Hasker: ch 3. If you have not done so, please stop now and read that chapter.
Science fiction literature often raises philosophical issues and is a great source for philosophical speculation. This is especially true for the mind/body problem. For example, it is common in science fiction to encounter androids. An android is a robot that resembles a human being in appearance and behavior. Examples of androids in science fiction books, television programs, and films are numerous (Star Trek, Star Wars, Aliens, Terminator, A.I., I, Robot, etc.). In reality, many computer scientists are currently working in the area of "artificial intelligence," building machines that can "think for themselves." Many believe this is the first step toward creating the androids of the future, and that in time the distinction between man and machine will be practically erased. These scientists speculate that androids with super-computer brains will have thoughts, beliefs, feelings, and desires just like humans. Therefore, some argue, they will also have the same rights, responsibilities, and privileges that all humans have and should be treated as such. Do you see problems with this view of the future? Do you think machines can ever become persons?
In order to explore this question, let us consider an episode of the popular television series Star Trek: The Next Generation. It would be helpful if you could view this episode (perhaps you can rent it from your local video store or Netflix), but I have provided a synopsis so that you can complete this assignment without viewing the episode.
For your initial post: After reading the synopsis (or viewing the episode), write a substantive response (at least 350 words) and post it on the forum. Your initial post must address the first question below. You may address some of the other questions as well, but the bulk of your response should focus on the first question and on relating the story to Hasker, Ch 3.
• From your reading of Hasker, and using the categories he uses, what view of the mind/body problem do you think is exhibited by Picard? By Maddox? Support your answer.
• Maddox lists three criteria for a being to be sentient: intelligence, self-awareness and consciousness. Are these adequate? Can you think of other properties or characteristics a being needs to have in order to be considered a “person?” What might they be?
• Do you think that artificial intelligence to the level as it is presented in the story will someday be possible? Why or why not?
• Do you think Maddox is right when he claims that Picard is being “irrational and emotional” in his view of Data?
• Do you agree with the JAG officer's final ruling? Why or why not?
• If A.I. does become possible, will we have obligations to treat machines “ethically?”
Below are draft answers to some of the questions. Please integrate these answers with answers to the remaining questions, and reword them.
From your reading of Hasker, and using the categories he uses, what view of the mind/body problem do you think is exhibited by Picard? By Maddox? Support your answer.
Picard's stance is that his android friend, Data, is a person and should be able to make decisions like any other person (Synopsis). His apparent view of the mind/body problem is that the mind is not separate from the body; ultimately, his position is a very materialistic one. Picard regards Data as human in the way he thinks: to be human in thought, one need only have the same mental processes. Data, like humans, has only a brain, and that brain produces mental thought. Data's brain, like every human brain, is a kind of self-operating computer (Hasker, 70).
Maddox differs sharply from Picard in his view of the mind/body problem, and this is why there is an obvious clash between the two. Maddox holds that body and mind have a dualistic nature: we have both a brain and a mind (Hasker, 66). This is how we are made and formed. Maddox believes that man has this dualistic nature, while Data, being a machine, was created with only a brain, not a mind. Humans have souls; machines do not. That is Maddox's ultimate stance (Synopsis).
If A.I. does become possible, will we have obligations to treat machines "ethically?"
No. Ultimately, I don't think we will have any obligation to treat machines with the same respect and dignity that other humans deserve. Why? The machines we create are not made in the image of God; they do not have a spirit, a soul that can be condemned to hell or glorified in heaven. For the same reason that we as humans should not treat animals better than humans, we should not be obligated to treat our machine creations as persons.
Works Cited
Hasker, William. Metaphysics: Constructing a World View. Downers Grove, Ill.: InterVarsity, 1983. Print.
"The Measure of a Man." Star Trek: The Next Generation. 13 Feb. 1989. Television.
Due By (Pacific Time): 02/02/2016 11:00 am