With Transcendence opening in movie theaters today, I thought it was a good time to continue my posting here of science fiction reviews I first published in the 1990s. This one is of Charles Platt's The Silicon Man, first published in 1991, brought out in paperback in 1993, and reviewed by me that year in a highly philosophic analysis in the Journal of Social and Evolutionary Systems.
In Mind at Large: Knowing in the Technological Age (1988,
p. 180), I wrote that "A flat denial forever and anon of AI
possibilities [humanlike intelligence] in nonliving circuits
amounts to ... an unbecoming protein chauvinism." What I had in
mind was the dogmatism of a position that says, just because the
only intelligence we know is protein-based, therefore all
intelligence must be so. I went on, however, to depict the
attempt at creating real intelligence in computer systems as
akin to putting "Descartes before the horse," by which I meant
that since intelligence is a property of life insofar as we
know, we're more likely to develop artificial intelligences out
of artificial living entities than from any non-living
artificial components.

Charles Platt's The Silicon Man, a science fiction
novel, takes up the challenge of AI from another angle.
Pointing out that something (like the mind) can be copied
without the copier fully understanding how the original entity
works ("You mean Gottbaum and his people copied my brain without
knowing how some of it works?" "Yes. By analogy, an audio
recorder can copy a piece of music without understanding harmony
and composition. All that matters is that the copy is
accurate," p. 147), Platt explores the implications of uploading
a person's mind into a central computer. Of course, the analogy
is imperfect -- music, regardless of its complexity and unlike
the mind, is not a self-regulating, generative system -- and the
scientific capacity to do this lies vastly beyond our current
grasp. But the lack of ipso facto impossibility in Platt's
scheme -- an impossibility that one could take refuge in only on
the basis of a protein chauvinism -- makes it and the book it is
in worthy of very serious philosophic contemplation.
The central philosophic issue it raises for me is, given
that a human intelligence could be copied into a computer whose
system could supply that intelligence with one hundred percent
accurate simulations of everything ranging from making love to
fine dining to evening breezes, what differences, if any, would be
worth claiming between this simulated existence and its original
"real" one? A related ethical issue is, given that such
differences are negligible, would termination of fleshly
existence in favor of silicon constitute murder if involuntary,
or suicide if voluntary?
Platt's book focuses more on the ethical issue, raising the
stakes by suggesting that a human intelligence in a computer
might even be an existence superior to the old-fashioned one
(for example, "infomorphs," intelligences in a computer, don't
age, p. 223).
But I find the ontological question more primary.
In several essays (1994a, 1994b, 1994c), I've delved into
questions of what can't be done in cyberspace, though from the
perspective of a flesh-and-blood body working in and through
cyberspace (as anyone connected to any computer network can now
do), rather than the intelligence in the body literally vacating
it in favor of a total intra-cyberspace existence. My point in
these essays is that in areas in which the body must be served
-- as in making love, leading to procreation, and eating for
nutrition -- whatever completely convincing alternative
cyberspace can provide is obviously not enough.
But what about the human intelligence totally within the
computer? Can it be fully served by its simulations, and if so,
what does this say about the relationship of human intelligence
and the external material universe from which it emerged? Here
we come upon the most fundamental questions about the nature of
reality and its relation to perception. We can start with the
tedious observation that, yes, we have no way of knowing whether
what we perceive in our external world is really there -- the
whole universe could be our dream -- but once we move beyond
this logically irrefutable but fruitless observation we're left
with a very profound distinction between reality perception and
simulated perception. The first is a relationship of perception
(and the perceiver) to something not of its own making (well
recognized by Kant's insistence that knowledge is a product both
of our internal cognitive processors and the external data they
work upon); the second smacks of Narcissus looking at endless
mirrored reflections of his own mind. And thus the second kind
of perception -- the perception of human intelligence wholly
internalized in cyberspace -- seems to return to the sterile
solipsism of "the world is my dream."
Platt is aware of this issue, having one of his
computer-internalized characters observe that "from the inside,
as infomorphs, we obviously can't alter the structure -- the
actual hardware -- of [our central computer]. That would be
like a tape recording trying to alter the structure of the tape
on which it was recorded" (p. 236). Actually, the analogy isn't
the best, since a tape recording with a very loud sound might in
principle cause a speaker to blow, which could in turn cause the
tape-turning mechanism to malfunction, which might in turn alter
the tape -- but it nonetheless suggests that even an existence
lived perceptually entirely inside cyberspace requires outside,
real "hands-on" ministration (if, say, the hardware of the
central system is in need of repair).

Near the end of the novel, though, Platt imagines the
growth of infomorphs in computer networks achieving such power
that they can in fact control physical events outside of their
systems. ("You can rent [a vehicle], pipe your mind into it,
and go wherever you want if you still need to interact with the
real world," p. 255.) And this confronts us with what might be
the most fundamental ontological question of all: If we can
indeed copy everything -- every aspect of an entity -- then is the
copy in any sense a copy, or is it better thought of as another
original?
Well, there is what I call in Mind at Large (pp.
149-150) the paradox of copying: the copy, to the degree that it
is a perfect copy, defeats itself because in so being a perfect
copy it transforms the original into a duplicate, and therein
the perfect copy is no longer a perfect copy (because it has
obliterated rather than preserved the uniqueness of the
original, and therein failed to copy a central aspect of the
original). A perfect, artificially constructed human
intelligence would inevitably have this effect on its natural
progenitors.
On the other hand, there seems room enough -- and need
enough -- for both of us in this universe.
I recommend Platt's book for stirring attention to such
issues. Like Isaac Asimov's robot series, it shows that, in
constructing our future, we need not only technology and
philosophy but their presentation in science fiction.
References

Levinson, P. (1988). Mind at Large: Knowing in the Technological Age. Greenwich, CT: JAI Press.

Levinson, P. (1994a). "Will the Delta Clipper Turn Deep Space Into Cyberspace?" Wired, February, p. 68.

Levinson, P. (1994b). "Picking Ripe: There Are Just Some Things You Can't Do In Cyberspace." Omni, August, p. 4.

Levinson, P. (1994c). "Entering Cyberspace: What To Embrace, What To Watch Out For." Journal of Social and Evolutionary Systems, 17(2), pp. 119-126.