I Love My Toaster

Flibbert blogs about the possibility of people in the future marrying robots. I had some thoughts on the topic that were far too long to post in his comments. Here they are.

I agree that only things with rights can enter contracts. Marriage, being a constellation of legal relationships akin to a contract, is properly available only to things with rights. One could not marry one's toaster. One's toaster is not a conditional, volitional consciousness: its existence is not an end in itself, it does not require sustained action to gain and keep values in order to continue to exist, and it is not required to make value judgments. It has no ethics, so rights are simply inapplicable to toasters.

There are two questions, then: What is a robot? And is it the sort of thing that can have rights? The first is a question of technology; the second, of philosophy.

I. Technology

Modern robots are purpose-built, computer-controlled machines. They are made of mechanical components. Flibbert says:

We're venturing now into the realm of science fiction, so I should predicate my comments by saying that I'm talking about purely mechanical beings and not biomechanical or cybernetic beings.

Using the word "beings" begs the question a little, but I think we all understand what he means. (I will use "things" instead.) Flibbert seems to suggest that a lack of biology is an essential component of the relevant definition of "robot." I disagree.

We are on the brink of major technological changes. We have already begun to blur the line between mechanics and biology. Biotechnology allows us to create "biological machines" to produce, for example, insulin. We are already building living organisms from scratch. It is only a matter of time before we develop the technology to build complex, multi-cellular things that carry out the same sorts of tasks that modern robots do.

Ultimately, all biological things, including humans, are just very complicated chemical machines. (I'm not suggesting that we're just flabby sacks of chemicals with delusions of grandeur. The complex chemical machine that is the human being is arranged such that it gives rise to a rational, volitional consciousness. How it does this is for science to discover. That it does this is the philosopher's only concern, and is pretty well self-evident, to boot.)

Alongside biology, we're beginning to investigate nanotechnology, which will blur the biological/mechanical distinction to the point of irrelevance. We will have nanobots building orgobots building mechabots. But ultimately, what a thing is made from is less important than what it can do.

Bot technology is currently driven in large part by reverse-engineering the human mind. There's nothing mystical about the human mind; it works somehow, and how it works is within our power to discover. We already have robots capable of sensation. As neuroscientists learn more about the structures of perception, bot builders will incorporate electromechanical, and eventually biological and nanotechnological functional equivalents of these structures into their creations. We are not very far off from a conscious robot - one that can perceive existence. (Consciousness is the faculty of perceiving existence. A robot capable of perception would be rightly called "conscious." Perception is an automatic process of integrating separate sensations into units. Current machines are able to simulate perception in limited contexts, but we wouldn't call them conscious until they are able to emulate perception in any context.) Once we learn how our own brains perform these integrations, there's nothing saying we couldn't build a machine capable of emulating the process.

Our ability to study the brain is expanding. Every day we learn more about it. Scanning technology continues to improve the resolution with which we can examine the brain. By the end of this decade, we will have developed electromechanical computers with the capacity to simulate the human brain; within twenty years, computers will be able to faithfully emulate it. These numbers are based on Ray Kurzweil's hypothesis about the effect of his Law of Accelerating Returns on technology. See Ray Kurzweil, The Singularity Is Near 35-203 (2005). Kurzweil does not account for the effects of philosophy on human events, however, so his predictions will be off if our current philosophical crisis is resolved poorly.

Whether we will know how the brain works by then (especially how the brain perceives) will determine whether we are technologically able to create a conscious robot. Eventually, we will be able to build an electromechanical (or biochemical, or nanotechnological, or some hybrid) analogue to the human brain that will be capable of consciousness. Once we can do that, I think it will be a much easier step to go from perceptual consciousness to conceptual. Truly thinking machines are technologically possible, and will be here sooner than we might expect.
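
For the curious, the arithmetic behind such predictions is just exponential extrapolation. Here is a toy sketch in Python; the specific figures (a brain-equivalent of about 10^16 calculations per second, about 10^11 per $1,000 in 2005, and a roughly annual doubling) are loose assumptions in the spirit of Kurzweil's estimates, not exact numbers from his book.

```python
import math

# Toy illustration of the Law of Accelerating Returns as simple
# exponential extrapolation. All three figures below are assumptions
# for illustration only, not Kurzweil's precise numbers.
brain_cps = 1e16      # assumed compute needed to emulate a brain (calc/sec)
cps_in_2005 = 1e11    # assumed compute per $1,000 in 2005
doubling_years = 1.0  # assumed doubling period of price-performance

years_needed = doubling_years * math.log2(brain_cps / cps_in_2005)
print(f"Brain-equivalent $1,000 of compute around {2005 + years_needed:.0f}")
# On these assumptions, around 2022. The philosophical point does not
# depend on the exact year, only on the growth being exponential.
```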

Ayn Rand speaks in terms of "organisms" in The Objectivist Ethics, in The Virtue of Selfishness 13, 16 (1961). I do not believe that this is essential to her argument. If "organism" means "something that is alive," then her argument does not necessitate biochemistry as a basis for that life. She appears to be presupposing and describing a living entity, not defining life. I think it is entirely technologically feasible for Man to create a thing that is able to acquire "the material or fuel which it needs from the outside, ... [and to] us[e] that fuel properly." We can create new organisms in a lab, after all. I don't see how the particular method (biochemical versus some other method) makes any difference.

Merely being capable of conception, however, is not sufficient for rights. Rights protect the freedom to act on one's own judgment; a consciousness with automatic conception would face no alternatives, and so would have no need of rights. Which leads us to:

II. Philosophy

There are some requirements which must be met in order for a thing to have rights. The thing must have these:

  • Life
  • Consciousness
  • Volition

See Ayn Rand, Atlas Shrugged 1012 (1957), reprinted in Ayn Rand, This Is John Galt Speaking, in For the New Intellectual 117, 121 (1961) (arguing that "[M]an," who has rights, "is a being of volitional consciousness"). I am of the mind that a robot (be it electromechanical, biochemical, or nanotechnological) capable of faithfully emulating the human mind would necessarily be possessed of consciousness and volition. If it weren't, it wouldn't be a faithful emulation. (I do not discuss consciousness or volition further in this post.) As discussed above, I see nothing about the human mind that is beyond the power of the human mind to learn and understand, and nothing standing in the way of developing a technology capable of emulating it.

But would such a robot be alive? This seems to be Flibbert's primary objection.

[Sufficiently advanced robots] may share a rational faculty, but they do not have biological needs, specifically, they cannot die.

You can turn a robot off and turn it back on at any time. Even if it somehow could not be turned off and back on, it would be possible to recreate a robot's "consciousness" if it were damaged. In effect, robots cannot die and as a result, it has no need for a right to life, liberty, or property.

But that suggests that death is a necessary part of life. I do not think that Objectivism necessarily takes this view of life.

Life is a kind of existence that is conditioned on self-generated, self-sustaining action. See Ayn Rand, Atlas Shrugged 1012-13 (1957), reprinted in Ayn Rand, This Is John Galt Speaking, in For the New Intellectual 117, 121 (1961). Modern robots are capable of self-generated action within the limits of their design. Because we have so far achieved only sensate robots, modern robots are capable only of self-generated sensory response. Many modern robots merely regurgitate preprogrammed responses to sensory stimuli. These are not self-generated actions. They are generated by the programmer. They only simulate self-generated action. But some robots, even very simple ones, can be said to be sensate. For example, this robot's self-balancing behavior is arguably a self-generated sensory response. It is at least no less sensate than a flower turning towards the sun. (But it is not alive, because this action, though self-generated, is not self-sustaining.) If we accept that one day robots will be capable of forming and dealing with concepts, then it is reasonable to accept that they will be capable of self-generated conceptual action as well.

But merely self-generated action, even self-generated conceptual action, would not make such a robot alive. The self-generated action must also be self-sustaining. It is interesting to note that we are also currently capable of constructing apparently self-sustaining robots. "Apparently," because their self-sustaining actions are not actually self-generated, but pre-programmed. The Roomba vacuum cleaner, for example, has a pre-programmed response to low batteries: it parks itself in its charger to refuel. Again, not alive, because the action is not self-generated.
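
To make the distinction concrete, here is a minimal, self-contained sketch of such a pre-programmed response. (A toy of my own devising, not iRobot's actual firmware.) Every "decision" below is fixed in advance by the programmer; the machine persists, but the sustaining rule originates outside it.

```python
# A toy vacuum with a canned low-battery response, in the spirit of
# the Roomba example. Nothing here is self-generated: the threshold,
# the refueling rule, and the cleaning behavior were all chosen by
# the programmer at design time.

class ToyVacuum:
    def __init__(self):
        self.battery = 1.0
        self.docked = False

    def step(self):
        """One tick of a fixed control loop written by a human."""
        if self.battery < 0.15:      # threshold fixed at design time
            self.docked = True       # the "decision" to refuel is canned
            self.battery = 1.0       # recharge
        else:
            self.docked = False
            self.battery -= 0.05     # cleaning drains the battery

bot = ToyVacuum()
for tick in range(40):
    bot.step()
# The bot never runs down, yet its persistence is the programmer's
# achievement, not its own: the sustaining action comes from without.
```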

If we were to create a robot that could build a copy of itself out of raw materials (as opposed to mere assembly from prefabricated parts; "raw" here means "metaphysically given"), I think it would be unquestionably alive. Unicellular critters are basically robots that build copies of themselves from raw materials. We can engineer custom-built unicellular critters (see above). Is this really any different from building a self-replicating biochemical robot? If not, then is that any different from building a self-replicating mechanical robot? Or a self-replicating nanobot? Or some hybrid? As I discussed above, I still think not.

The key to truly self-sustaining action is conditionality.

I propose that conditionality is to be construed broadly. If a thing will cease to exist in the absence of some volitional action, then its existence is conditional in the sense essential to life (and by extension, rights). I think many modern robots meet this definition of conditionality, but are nonetheless not alive because their existence is not conditioned on self-generated, self-sustaining action. It is conditioned on external actions. A modern robot will run only so long as there is a person to care for it. A modern robot must be kept in existence by something else acting upon it.

A modern robot is always conditional. It requires action to keep it in existence. Which is to say, it requires action in order to keep it being a robot, instead of a pile of immobile, insensate junk. But modern robots are not alive, because this action must come from without. Specifically, the sustaining action comes from an act of will by the creator, Man.

(This introduces a problem I will discuss in more detail further down the post: the problem of artifacts. A robot is an artifact, because it is purpose-built by Man.)

A robot is therefore conditional in two ways: conditional in its creation, and conditional in its continued operation. Currently, both are entirely dependent on external actions by Man. Clearly such things are not alive, and have no rights. But as I have attempted to show, the creation of a truly self-sustaining robot is possible. Such a thing would be alive. At least, until it died.

Flibbert suggests that a robot must be able to die in order to have rights. Death is the loss of a living organism's ability to take self-generated, self-sustaining action. It is the result of a failure to meet the conditions of existence. Man stops eating, Man dies. He has failed to meet a condition of his existence. Man ceases to be, and the now inanimate material of which he was composed becomes Corpse. Corpse is not alive and has no rights. A robot's existence is conditional in the same way. If it runs out of fuel (stops eating), it will be unable to move, and unable to sustain its own existence. In the sense relevant here, it will have died.

Death is one of the disadvantages of biochemical entities. The major advantage is that biochemical processes are relatively simple compared to mechanical analogues. Biological evolution is a very simple process for enabling incremental change in the absence of an integrating consciousness. It is very slow. A shorter individual lifespan helps evolution to occur a little more rapidly. We see this in short-lived species that can be observed over many generations in a laboratory.

Technological evolution, which has already surpassed biological evolution in the ways it has affected human life, is a much faster process, but is vastly more complex. For instance, it depends on cognitive processes. But if it is possible to build a cognitive mind out of parts that are not subject to the limitations of biochemical processes, then technological evolution could occur in the absence of any biochemical processes. The question is: would it?

If an entity cannot die, then it can have no ethics. Its existence is no longer conditional, so life is of no value to it. Ayn Rand presented this problem with the concept of an immortal, indestructible robot. See Ayn Rand, The Objectivist Ethics, in The Virtue of Selfishness 13, 16-17 (1961). Such a robot, she says, would have no ethics because its life would be unconditional. I believe that this is an ethical illustration, and should not be read to carry metaphysical implications. Only matter is absolutely indestructible, and everything, including men, is made of matter. So to suggest an indestructible robot is to suggest some peculiar immutable form of matter, a form that nonetheless gives rise to conceptual consciousness. I think this is a metaphysical inconsistency, so it appears Ayn Rand offered the example merely as an illustration that conditionality is a precondition of ethics. A real robot would not, strictly speaking, be immortal. Its form, which gives it the power of self-generated, goal-oriented action, could change and render the robot incapable of action. It could rust. Or it could be smashed. [Query: Is this similar to Aristotelian Hylomorphism?]

A thinking machine could be destroyed by outside forces. That doesn't make it alive, of course. What makes a thing alive is that its continued ability to act (and to think) hinges upon necessary, self-generated action. As I have already discussed, I do not think that capacity necessitates the incorporation of biochemical processes.

Also, immortality is to be distinguished from an indefinite lifespan. An immortal thing is not alive. But a definite lifespan (which is to say, a lifespan that will definitely not exceed a given span of time) does not appear necessary to ethics. As long as an entity can keep up the sustained action required to maintain its existence, it will be alive and possessed of rights. That such an entity is not subject to the peculiar limitations of biology does not seem particularly relevant.

All of technology (that is, the application of scientific knowledge to the task of living) has been an effort to stave off the limitations of biology. Technology also reduces the amount of individual effort required to stay alive. Dramatically. Compare the individual effort required to stay alive in Western Civilization today with the individual effort required to stay alive in 14th Century Europe; the difference is technology and the philosophy that allows it. As we move towards a more technologically-driven existence, the effects of the limitations of biology will continue to dwindle. If we eventually develop the technology to faithfully recreate the human mind in non-biological form, then humans could overcome the limits of biology completely, in favor of the much less ephemeral existence afforded to non-biological entities.

I find nothing philosophically offensive in the idea of an indefinite human lifespan. I think Objectivism holds up, even under the circumstances of non-inevitable death. Death would still be possible to humans with indefinite lifespans, and would be certain if the effort required to maintain life was withdrawn. Just because we might technologically be capable of reducing that effort to a theoretical minimum does not change the ethical calculus requiring life as the ultimate value.

By defining "life" in terms of what a thing does, rather than by the physical means by which it does it, Ayn Rand created an ethics that will not fail when man evolves from a biological entity into a technological one. When we create living "robots" in the image of our own consciousness, the Objectivist ethics will apply equally to them, despite their indefinite lifespans.

III. Concluding Notes

A. The Problem of Artifacts

If we accept that science and industry will provide the technology to faithfully emulate human consciousness, the only philosophical difference between such a piece of technology and a human being would be that the former is an artifact, while the latter is not. An artifact is an object created by Man for a particular purpose. An artifact requires an act of integration to create. Without that act of integration, the artifact would not exist. Man, however, required no act of integration to come into existence. He was not "intelligently designed." A robot would be.

Objectivism instructs that life is an end in itself. But artifacts are means to an end. Is it possible to create a living artifact? If one were to genetically engineer a docile, unintelligent but conceptual race of creatures, specifically for the purpose of performing menial labor, would they have rights? Are their lives ends in themselves? Which status (living or artifact) is more important? I don't think Ayn Rand expressly answered the question (because it never arose), but I think it is easy to conclude that being alive trumps being an artifact. The fact that life is an end in itself comes from the necessity for self-sustaining action, not from the absence of any other "greater purpose." [Query: Isn't this the fundamental difference between Objectivism and secular Humanism?]

B. Disturbing Potentials

i. Mind Control

If we develop the technology to understand and faithfully emulate the human mind, won't that also give us the technology to manipulate the mind on its most fundamental level? I don't think that's a valid argument against the technology, but clearly individual rights will remain of paramount importance in the future.

ii. Resurrection

The question of indefinite lifespan of a conscious being raises the issue of continuity of that consciousness across gaps in the ability to undertake self-generated, self-sustaining action. A person dies when his brain ceases to function, even for a moment. No person has ever been revived from brain death. If it became technologically possible to restart a stopped brain (a possibility more relevant to a non-biological brain), would the consciousness be the same? I think so, because consciousness is a function of matter and form, not of anything mystical. The disturbing potential here is that resurrection would not be a self-generated action. It would be the operation of external forces to maintain, or more accurately, to reinstate, life. I think even a resurrected consciousness would still qualify as alive, because its continued existence would still be dependent on self-generated action. But would the possibility of resurrection affect the ethical calculus by reducing the importance of life as a value? Even more disturbing, would the possibility of resurrection, presumably by another person, make the human race a collective one, where the existence of the individual depends strictly on the actions of others? This is not an avenue I intend to address with this post.

iii. Intelligent Design

Living, technological entities would necessarily be intelligently designed, at least in part. We are already intelligently designed to some extent, insofar as we have modified our existence through technological means. We are not the same creatures we would be if it weren't for technology. Fundamentally, however, the biological consciousness from which we will build our future selves, and which we will emulate with technology, was not intelligently designed. It was the product of biological evolution. When we shift from biological evolution to technological evolution, we will not abandon the products of biological evolution. Evolution will in fact continue; we will just be evolving faster, under our own power. Though we might improve on it drastically in the future, the nucleus of consciousness will always be an emulation of the biological origins of our future technological selves.

C. Final Words

In summary, I think Objectivism would extend rights to robots who met the definition of life and were possessed of volitional consciousness. I do not think the indefinite lifespan of such a creation would affect the conditionality of its life. While my toaster clearly does not meet the requirements, I do think that it is entirely within the ability of Man to create such a being, and that even while such creatures might be viewed as different from Man, they will nonetheless be possessed of the same individual rights. But as we develop the technology, we will change ourselves to be more like our creations. Whether there is ever a distinction between "us" and "them" seems to me irrelevant, because eventually, we will become them, and they us.

Tom G Varik