The Harry Binswanger Letter

    • #98652

      These two movies, Blade Runner and Ex Machina, raise the issue of whether a manufactured conceptual entity has individual rights or is simply private property to be dealt with as the manufacturer sees fit. There was a time when I would have thought that this issue was absurdly hypothetical, but then I thought about how far we have progressed in less than two hundred years from Charles Babbage’s “difference engine” to today’s computer technology. What can/should you do when you have to deal with androids?

    • #107570

      Computers store and retrieve data. If you write on a blackboard, you do not say that the blackboard has “learned” anything. And the same goes for a card file, a microfilm reader, or a computer many orders of magnitude more powerful than today’s.

      An android, a human-shaped robot, is a machine and has no free will, no capacity for thought and no rights.

      The replicants in Blade Runner were very different. They were manufactured human beings. If it is possible to bypass conception, gestation, and birth and build a human being in a lab, then what you have is a human being with all the rights of one. Engineering them to have four-year lifespans, as they did in the movie, would be a heinous crime. If it were possible to create human beings like this, it would not create any moral gray area. However a person is conceived and born, he has rights if he has free will. The conceptual faculty is not separable from free will; it is another aspect of the same thing.

    • #107573

      Cedar B. has it exactly right about computers. But let’s go further than saying that a robot, whatever its shape, “has no free will, no capacity for thought . . .” It is not conscious at all. It does not sense, perceive, or feel pleasure or pain.

      There’s a tendency among Objectivists to forget that consciousness is much wider than just conceptual consciousness. Conceptual consciousness is an add-on to perceptual consciousness, which itself is a development from basic sentience on a pre-perceptual level (I hesitate to use the vexed term “sensation,” but you could use it in regard to organisms (worms?) that have a primitive brain but are not capable of perception).

    • #107579

      I have drawn a distinction between replicants and androids that the author of the book behind the movie didn’t. I haven’t read the book, so I’m responding to replicants as they were presented in the movie. My reasons for giving this much thought to the difference between a fictitious creature and an imaginary, yet clearly possible, machine may help clarify the issue.

      I believe there is a lab somewhere currently on the verge of being able to create a replacement lung with a combination of stem cells and 3D printing, or maybe they have already done this. So let’s say they can reproduce all of the body parts the same way, including the brain, assemble them into a functioning human being, and then teach him to function like an adult in less than the normal amount of time. The most wildly unrealistic part of this scenario is where I ask you to suppose that this process is more cost-effective than the normal way of breeding and raising a child, but if you can get past that one, the utility of the scenario in showing the difference between an android (a computer-driven animatronic mannequin) and a living thing will be apparent.

      The replicants in Blade Runner are more like the lab-created person I described than like robots. They are more like Frankenstein’s monster, though it’s not really fair to call either of them a monster, because neither had a choice in how it came into existence. If you build a person this way, the life you have given him is his; the effort and expense you put into his creation were your own choice and give you no claim against him. You would be responsible for his care until he can live independently. It would be morally no different from having a child. Philosophically, there would be nothing new under the sun if all of the many unrealistic suppositions underlying this scenario came true.

    • #107580

      HB, you wrote:

      Conceptual consciousness is an add-on to perceptual consciousness, which itself is a development from basic sentience on a pre-perceptual level (I hesitate to use the vexed term “sensation,” but you could use it in regard to organisms (worms?) that have a primitive brain but are not capable of perception).

      I’ve thought that sentience was used by Leftists to suggest reason (or something PC) in animals. A dictionary provided a chaos of definitions or mere word substitutions. Is it merely a synonym for consciousness? And isn’t sensation a type of consciousness, however non-essentially different from perception and conception?

    • #107581

      There was an episode of Star Trek: The Next Generation which addressed this issue. I don’t remember the details other than the basics of the plot. A scientist wanted to dismantle Data in order to learn how to create more Datas in the lab. A courtroom hearing was held to determine whether this was right or wrong. Jean-Luc Picard argued Data’s case. I remember being impressed by the writing and the presentation of the issue.

      (And in this case, there was another issue involved. Since Data was in Star Fleet, that is, under military command, under the military system of justice, could he be forced to obey an order to allow himself to be dismantled?)

      From Mr. Bristol’s post above, “An android, a human-shaped robot, is a machine and has no free will, no capacity for thought and no rights.”

      As I interpret Mr. Best’s original question, I think he is asking: ‘Since science seems to be able to build machines which mimic many human functions, what happens if it becomes possible to build a machine which seems to have free will, or at least mimics it closely? What are the moral and legal implications of creating a machine which seems to think?’

      I think his question is of real interest.

    • #107582

      Leaving aside the issue of whether an artificially created being (be it an “android”, a “replicant”, or some other type of entity) could/would be conscious, I too think the ethical question here is more interesting than its hypothetical status might suggest. The question cuts to the heart of what it means for a being to have rights, and requires one to seriously examine why self-interest demands respecting the rights of others.

      It seems to me that, from an egoist perspective, if we’re going to say an artificial being has rights, we have to be able to demonstrate that it’s in the self-interest of everyone (including the being’s creator) to respect those rights.

      Take the example from Ex Machina: an inventor creates an artificial “person” and essentially keeps it as a slave, using it for various household tasks. If the artificial “person” isn’t conscious, it’s obvious that creating such a being could be in the inventor’s self-interest. Why does that change if the creation is conscious, volitional, reasoning, etc.? Clearly it’s in the interest of the creation to be free, but how could you convince the inventor that he is harming himself by creating such a being and keeping it locked up?

      I think this is essentially the same question Peikoff addresses in the “Understanding Objectivism” lectures regarding why initiating force harms both the perpetrator and the victim:

      If we grasp this [the evil of the initiation of force] in principle, we do not have to say, “What about the perpetrator?” Phil said—and I’ve heard this time and again from people working with this issue—“It’s easy to see that force is bad for the victim, that it’s against his life. But it’s more difficult to prove that force is evil for the perpetrator.” And this is absolutely not true. Why not, just on the basis of the way we’ve set it up so far? Why can’t you say, “We still have the problem of the perpetrator”? If you grasp what it means to think in principle, and that ethics is an issue of principle, you cannot say, “I’ve established that this method is destructive of man’s means of survival. I know it’s bad for some people, but why not for others? I know it’s bad for the guy on one end of it, but why not for the guy on the other end?” If man has to live by principle, and if he has to survive by his mind, that, in itself, is sufficient to establish, just as much for the perpetrator as for the victim, that he is violating the essential principle. So it comes down to this—is the context really clear in your mind? Do you grasp the necessity of functioning by principle, coupled with the necessity for rationality to be the principle? If you grasp those two, then as soon as you see that force is a violation of rationality, it’s wiped out for everybody, the perpetrator or the victim; it’s no more difficult for one than for the other.

      (“Understanding Objectivism” book version, page 127)

      But I have to admit I’ve never found this argument very convincing. It seems to me that it’s missing a premise, something like “there is no conflict of interest among rational men”, which needs to be justified inductively, e.g. by examining the benefits one gains from living in a free society and trading with other free men. But how does that context apply to the case of an artificial being created specifically to serve the needs of the inventor? I’m not really sure.

    • #107584

      There is no new ethical question here. If you manufacture a person in your laboratory for the purpose of exploiting him and he escapes, a just government will recognize his rights and lock you up and throw away the key. It will do this whether you manufactured this person through the normal process of reproduction or by some other process that is unlikely ever to be possible.

      Aside from what a just government would do when it catches you, the reason not to be that guy is that you deny yourself all of the benefits of living in a value-for-value society if you try to benefit from other people by any means other than consent gained through an appeal to their reason. The only way to do that is to live in a slave society, or to try to get away with enslaving people in a free one; neither of these courses serves your self-interest.

      A conceptual being who, for whatever reason, appears not to be human has rights. A robot driven by a computer and made to look like a human being does not. Neither of these cases presents any kind of new ethical situation.

      Data’s positronic brain would have to be something fundamentally different from a computer to produce the results you see in the show, and if I were that judge, I would say that he has free will and therefore has the only vote as to whether he will be dismantled.

    • #106552

      I think it’s fairly straightforward: if it walks like a conceptual, conscious entity, and talks like a conceptual, conscious entity, then all the same arguments that derive morality for human beings apply. It just so happens that we’ve only had to apply this morality to biological humans in our history, but you could just as well apply it to robots or magic rocks or anything that has the same survival requirements in reality, i.e., surviving by its mind. I am 100% certain that we will have completely human robots in the next 50 years, and for all the same reasons we deal with people by trade and not force, we’ll want to do the same with robots that function like people.

    • #107591

      Cedar Bristol is right again. “If you manufacture a person in your laboratory for the purpose of exploiting him and he escapes, a just government will recognize his rights and lock you up and throw away the key.”

      My very first question to Ayn Rand was (wording approximate): If scientists in a laboratory created an entity that was like a human being in every respect, would it have rights?

      She responded (and here I’m more confident of the wording): “If it were like a human being in every respect, it would be a human being.”

      But, contra Chad Hansen’s post, this is most definitely not going to happen. The only way to make a human being in the lab is to grow one, and then it has to go through all the stages of physical and mental development. It can’t be assembled.

    • #107599

      I don’t think that the questions of how you would ‘grow’ or not grow an artificial person have any bearing on how one ought to treat one once it is created. If I were to build a machine that stamped out fully mature human beings that were 21 years old in every important mental respect (they were conscious, conceptual beings with the same kinds of mental faculties ordinary human beings have), why would it matter that they were made whole instead of grown? And if their bodies were made of silicon, metal, rubber, and a tiny nuclear reactor, why would that matter either? They could still be friends or enemies. They would still need to think and act to live. They would still be capable of being members of our social group, buying and selling with us, helping us, going to war with us, or whatever, just like ordinary people would. It may well be impossible to build people all at once. It may well be impossible to make a mechanical body with a human-like mind, but that’s not the question at hand. If it were possible (and it might be), then what conclusions would follow, and what details of the situation would be relevant?

    • #107603

      I entirely concur with HB and Cedar B. here, with regard to both the metaphysical and ethical/political issues.

      In response to Chad H., I would like to add that walking/talking “like” a certain kind of entity is not the same thing as “being” that kind of entity. The thought-experiment envisioned of non-biologically grown human-like entities is too different from our world to speculate much about. Should that become possible, however, it needs to be stressed that simulation (of behavior/function) is not the same as replication of being. I take it that this is much of the thrust behind HB’s and Cedar B.’s reasoning thus far. Thus, actual replicants could be persons, while robots would not be.
