8 THE EVOLUTION OF THE BRAIN

It is the business of the future to be dangerous. . . . The major advances in civilization are processes that all but wreck the societies in which they occur.
ALFRED NORTH WHITEHEAD, Adventures in Ideas

The voice of the intellect is a soft one, but it does not rest until it has gained a hearing. Ultimately, after endless rebuffs, it succeeds. This is one of the few points in which one may be optimistic about the future of mankind.
SIGMUND FREUD, The Future of an Illusion

The mind of man is capable of anything - because everything is in it, all the past as well as all the future.
JOSEPH CONRAD, Heart of Darkness
THE HUMAN BRAIN seems to be in a state of uneasy truce, with
occasional skirmishes and rare battles. The existence of brain
components with predispositions to certain behavior is not an
invitation to fatalism or despair: we have substantial control over
the relative importance of each component. Anatomy is not destiny,
but it is not irrelevant either. At least some mental illness can be
understood in terms of a conflict among the contending neural
parties.
The mutual repression among the components goes in many directions. We have discussed limbic and neocortical repression of the R-complex, but through society there may also be R-complex repression of the neocortex, and repression of one cerebral hemisphere by the other. In general, human societies are not innovative. They are hierarchical and ritualistic. Suggestions for change are greeted with suspicion: they imply an unpleasant future variation in ritual and hierarchy, an exchange of one set of rituals for another, or perhaps for a less structured society with fewer rituals.
And yet there are times when societies must change. “The
dogmas of the quiet past are inadequate for the stormy present” was
Abraham Lincoln’s description of this truth. Much of the difficulty
in attempting to restructure American and other societies arises
from this resistance by groups with vested interests in the status
quo. Significant change might require those who are now high in the
hierarchy to move downward many steps. This seems to them
undesirable and is resisted.
But some change, in fact some significant change, is apparent in Western society: certainly not enough, but more than in almost any other society. Older and more static cultures are much more resistant to change. In Colin Turnbull's book The Forest
People, there is a poignant description of a crippled Pygmy girl
who was provided by visiting anthropologists with a stunning
technological innovation, the crutch. Despite the fact that it
greatly eased the suffering of the little girl, the adults,
including her parents, showed no particular interest in this
invention.* There are many other cases of intolerance to novelty in
traditional societies; and diverse pertinent examples could be drawn
from the lives of such men as Leonardo, Galileo, Desiderius Erasmus,
Charles Darwin, or Sigmund Freud.
* In defense of the Pygmies, perhaps I should note that a friend of
mine who has spent time with them says that for such activities as
the patient stalking and hunting of mammals and fish they prepare
themselves through marijuana intoxication, which helps to make the
long waits, boring to anyone further evolved than a Komodo dragon,
at least moderately tolerable. Ganja is, he says, their only
cultivated crop. It would be wryly interesting if in human history
the cultivation of marijuana led generally to the invention of
agriculture, and thereby to civilization.
(The marijuana-intoxicated Pygmy, poised patiently for an hour with his fishing spear aloft, is earnestly burlesqued by the beer-sodden riflemen, protectively camouflaged in red plaid, who, stumbling through the nearby woods, terrorize American suburbs each Thanksgiving.)
The traditionalism of societies in a static state is generally
adaptive: the cultural forms have been evolved painfully over
many generations and are known to serve well. Like mutations,
any random change is apt to serve less well. But also like
mutations, changes are necessary if adaptation to new
environmental circumstances is to be achieved. The tension
between these two tendencies marks much of the political
conflict of our age. At a time characterized by a rapidly varying external physical and social environment, such as our own, accommodation to and acceptance of change is adaptive; in societies that inhabit static environments, it is not.
The
hunter/gatherer lifestyles have served mankind well for most of
our history, and I think there is unmistakable evidence that we
are in a way designed by evolution for such a culture; when we
abandon the hunter/gatherer life we abandon the childhood of
our species. Hunter/gatherer and high technology cultures are
both products of the neocortex. We are now irreversibly set upon the
latter path. But it will take some getting used to.

Britain has produced a range of remarkably gifted multidisciplinary scientists and scholars who are sometimes described as polymaths. The group has included, in recent times, Bertrand Russell, A. N. Whitehead, J. B. S. Haldane, J. D. Bernal, and Jacob Bronowski.
Russell commented
that the development of such gifted individuals required a childhood
period in which there was little or no pressure for conformity, a
time in which the child could develop and pursue his or her own
interests no matter how unusual or bizarre. Because of the strong pressures for social conformity exerted both by the government and by peer groups in the United States (and even more so in the Soviet Union, Japan and the People's Republic of China), I think such countries are producing proportionately fewer polymaths. I also think there is evidence that Britain is now in a steep decline in this respect.
Particularly today, when so many difficult and complex problems face
the human species, the development of broad and powerful thinking is
desperately needed. There should be a way, consistent with the
democratic ideals espoused by all of these countries, to encourage,
in a humane and caring context, the intellectual development of
especially promising youngsters. Instead we find, in the
instructional and examination systems of most of these countries, an
almost reptilian ritualization of the educational process. I
sometimes wonder whether the appeal of sex and aggression in
contemporary American television and film offerings reflects the
fact that the R-complex is well developed in all of us, while many
neocortical functions are, partly because of the repressive nature
of schools and societies, more rarely expressed, less familiar and
insufficiently treasured.
As a consequence of the enormous social and technological
changes of the last few centuries, the world is not working
well. We do not live in traditional and static societies. But our
governments, in resisting change, act as if we did. Unless we destroy ourselves utterly, the future belongs to those societies that, while not ignoring the reptilian and mammalian parts of our being, enable the characteristically human components of our nature to flourish; to those societies that encourage diversity rather than conformity; to those societies willing to invest resources in a variety of social, political, economic and cultural experiments, and prepared to sacrifice short-term advantage for long-term benefit; to those societies that treat new ideas as delicate, fragile and immensely valuable pathways to the future.
A better understanding of the brain may also one day bear on such
vexing social issues as the definition of death and the
acceptability of abortions. The current ethos in the West seems to
be that it is permissible in a good cause to kill nonhuman primates
and certainly other mammals; but it is impermissible (for
individuals) to kill human beings under similar circumstances. The
logical implication is that it is the characteristically human
qualities of the human brain that make the difference. In the same
way, if substantial parts of the neocortex are functioning, the
comatose patient can certainly be said to be alive in a human sense,
even if there is major impairment of other physical and neurological
functions.
On the other hand, a patient otherwise alive but
exhibiting no sign of neocortical activity (including the
neocortical activities in sleep) might, in a human sense, be
described as dead. In many such cases the neocortex has failed irreversibly but the limbic system, R-complex and lower brainstem are still operative, and such fundamental functions as respiration and blood circulation are unimpaired. I think more work is required on human brain physiology before a well-supported legal definition of death can be generally accepted, but the road to such a definition will very likely take us through considerations of the neocortex as opposed to the other components of the brain.
Similar ideas could help to resolve the great abortion debate flourishing in America in the late 1970s, a controversy marked on both sides by extreme vehemence and a denial of any merit
to opposing points of view. At one extreme is the position that
a woman has an innate right of “control of her own body,”
which encompasses, it is said, arranging for the death of a fetus
on a variety of grounds including psychological disinclination
and economic inability to raise a child.
At the other extreme is the assertion of a "right to life": the claim that the killing of even a zygote, a fertilized egg before the first embryonic division, is murder because the zygote has the "potential" to become a human
being. I realize that in an issue so emotionally charged any
proposed solution is unlikely to receive plaudits from the partisans
of either extreme, and sometimes our hearts and our heads lead us to
different conclusions. However, based on some of the ideas in
previous chapters of this book, I would like to offer at least an
attempt at a reasonable compromise.
There is no question that
legalized abortions avoid the tragedy and butchery of illegal and incompetent "back-alley" abortions, and that in a civilization whose
very continuance is threatened by the specter of uncontrolled
population growth, widely available medical abortions can serve an
important social need. But infanticide would solve both problems and
has been employed widely by many human communities, including
segments of the classical Greek civilization, which is so generally
considered the cultural antecedent of our own. And it is widely
practiced today: there are many parts of the world where one out of
every four newborn babies does not survive the first year of life.
Yet by our laws and mores, infanticide is murder beyond any
question. Since a baby born prematurely in the seventh month of
pregnancy is in no significant respect different from a fetus in utero in the seventh month, it must, it seems to me, follow that
abortion, at least in the last trimester, is very close to murder.
Objections that the fetus in the third trimester is still not
breathing seem specious: Is it permissible to commit infanticide
after birth if the umbilicus has not yet been severed, or if the
baby has not yet taken its first breath?
Likewise, if I am psychologically unprepared to live with a stranger (in army boot camp or a college dormitory, for example), I do not thereby have a right to kill him, and my annoyance at some of the uses of my tax money does not extend to exterminating the recipients of those taxes. The civil liberties point of view is often muddled in such debates. Why, it is sometimes asked, should the beliefs of others on this issue have to extend to me? But those who do not personally support the conventional prohibition against murder are nevertheless required by our society to abide by the criminal code.
On the opposite side of the discussion, the phrase “right to life”
is an excellent example of a “buzz word,” designed to inflame rather
than illuminate. There is no right to life in any society on Earth
today, nor has there been at any former time (with a few rare exceptions, such as among the Jains of India). We raise farm animals
for slaughter; destroy forests; pollute rivers and lakes until no
fish can live there; hunt deer and elk for sport, leopards for their
pelts, and whales for dog food; entwine dolphins, gasping and
writhing, in great tuna nets; and club seal pups to death for
“population management.”
All these beasts and vegetables are as
alive as we. What is protected in many human societies is not life,
but human life. And even with this protection, we wage “modern” wars
on civilian populations with a toll so terrible we are, most of us,
afraid to consider it very deeply. Often such mass murders are
justified by racial or nationalistic redefinitions of our opponents
as less than human.
In the same way, the argument about the “potential” to be human
seems to me particularly weak. Any human egg or sperm under
appropriate circumstances has the potential to become a human being.
Yet male masturbation and nocturnal emissions are generally
considered natural acts and not cause for murder indictments. In a
single ejaculation there are enough spermatozoa for the generation
of hundreds of millions of human beings. In addition, it is possible
that in the not-too-distant future we may be able to clone a whole human being from a single cell taken from essentially anywhere in the donor's body.
If so, any cell in my body has the potential to
become a human being if properly preserved until the time of a
practical cloning technology. Am I committing mass murder if I prick
my finger and lose a drop of blood? The issues are clearly complex.
The solution, equally clearly, must involve a compromise among a
number of cherished but conflicting values. The key practical
question is to determine when a fetus becomes human. This in turn rests on what we mean by human. Surely it is not the possession of a human shape, because an artifact of organic materials that resembled a human being but was constructed for the purpose would certainly not be considered human.
Likewise, an extraterrestrial intelligent being who did not
resemble human beings but who had ethical, intellectual and artistic
accomplishments exceeding our own should certainly fall within our
prohibitions against murder. It is not what we look like that
specifies our humanity, but what we are. The reason we prohibit the
killing of human beings must be because of some quality human beings
possess, a quality we especially prize, that few or no other
organisms on Earth enjoy. It cannot be the ability to feel pain or
deep emotions, because that surely extends to many of the animals we
gratuitously slaughter.
This essential human quality, I believe, can only be our
intelligence. If so, the particular sanctity of human life can be
identified with the development and functioning of the neocortex. We
cannot require its full development, because that does not occur
until many years after birth. But perhaps we might set the
transition to humanity at the time when neocortical activity begins,
as determined by electroencephalography of the fetus. Some insights
on when the brain develops a distinctly human character emerge from
the simplest embryological observations (see the figure on page
208).
Very little work has been done in this field to date, and it
seems to me that such investigations could play a major role in
achieving an acceptable compromise in the abortion debate.
Undoubtedly there would be variation from fetus to fetus in the time of onset of the first neocortical EEG signals, and a legal definition of the beginning of characteristically human life should be biased conservatively; that is, toward the youngest fetus that exhibits such activity. Perhaps the transition would fall toward the end of the first trimester or near the beginning of the second trimester of pregnancy.
(Here we are talking about what, in a rational society, should be prohibited by law: anyone who feels that abortion of a younger fetus might be murder should be under no legal obligation to perform or accept such an abortion.)
But a consistent application of these ideas must avoid human
chauvinism. If there are other organisms that share the
intelligence of a somewhat backward but fully developed human
being, they at least should be offered the same protection
against murder that we are willing to extend to human beings
late in their uterine existence. Since the evidence for intelligence
in dolphins, whales and apes is now at least moderately compelling,
any consistent moral posture on abortion should, I would think,
include firm strictures against at least the gratuitous slaughter of
these animals. But the ultimate key to the solution of the abortion
debate would seem to be the investigation of prepartum neocortical
activity.
And what of the future evolution of the human brain? There is a wide
and growing body of evidence that many forms of mental illness are
the result of chemical or wiring malfunctions in the brain. Since
many mental diseases have the same symptoms, they may arise from the
same malfunctions and should be accessible to the same cures.
The pioneering nineteenth-century British neurologist Hughlings Jackson remarked, "Find out about dreams and you will find out about insanity." Severely dream-deprived subjects often begin hallucinating in the daytime. Schizophrenia is often accompanied by nighttime sleep impairment, but whether as a cause or an effect is uncertain. One of the most striking aspects of schizophrenia is how unhappy and despairing its sufferers generally are. Might schizophrenia be what happens when the dragons are no longer safely chained at night; when they break the left-hemisphere shackles and burst forth in daylight? Other diseases perhaps result from an impairment of right-hemisphere function: obsessive-compulsives, for example, are very rarely found to make intuitive leaps.
In the middle 1960s Lester Grinspoon and his colleagues at
Harvard Medical School performed a set of controlled
experiments on the relative value of various therapeutic
techniques for treating schizophrenia. They are psychiatrists,
and if they had any bias it was toward the use of verbal rather
than pharmacological techniques. But they found to their surprise that the recently developed tranquilizer thioridazine (one of a group of approximately equally effective antipsychotic drugs known as phenothiazines) was far more effective in controlling if not curing the disease; in fact, they found that thioridazine alone was at least as effective, in the judgment of the patients, their relatives and the psychiatrists, as thioridazine plus psychotherapy. The integrity of the experimenters in the face of this unexpected finding is breathtaking.
(It is difficult to
imagine any experiment that would convince leading practitioners of
many political or religious philosophies of the superiority of a
competing doctrine.)
Recent research shows that endorphins, small protein molecules which
occur naturally in the brains of rats and other mammals, can induce
in these animals marked muscular rigidity and stupor reminiscent of
schizophrenic catatonia. The molecular or neurological cause of schizophrenia (which was once responsible for one out of ten hospital-bed occupancies in the United States) is still unknown; but it is not implausible that someday we will discover precisely which locale or set of neurochemicals in the brain determines this malfunction.
A curious question in medical ethics emerges from the experiments of
Grinspoon et al. The tranquilizers are now so effective in treating
schizophrenia that it is widely considered unethical to withhold
them from a patient. The implication is that the experiments showing
tranquilizers to be effective cannot be repeated. It is thought to
be an unnecessary cruelty to deny the patient the most successful
treatment for his condition. Consequently, there can no longer be a
control group of schizophrenics that is not given tranquilizers. If
critical experiments in the chemotherapy of brain malfunction can be
performed only once, they must be performed the first time very well
indeed.
An even more striking example of such chemotherapy is the use of lithium carbonate in the treatment of manic-depressives. The ingestion of carefully controlled doses of lithium, the lightest and simplest metal, produces startling improvements in this agonizing disease, again as reported both from the patients' perspective and from the perspective of others. Why so simple a therapy is so strikingly effective is unknown, but the answer most likely relates to the enzyme chemistry of the brain.
A very strange mental illness is Gilles de la Tourette’s disease
(named, as always, after the physician who first drew attention to
it, not after the most celebrated sufferer of the malady). One of
the many motor and speech disorders that are among the symptoms of
this disease is a remarkable compulsion to utter, in whatever language the patient is most fluent, an uninterrupted stream of obscenities and profanities. Physicians describe the identification
of this disease as “corridor diagnosis”: The patient can, with great
difficulty, control his compulsion for the length of a short medical
visit; as soon as the physician leaves the room for the corridor,
the scatologies overflow like the flood from a burst dam. There is a
place in the brain that makes “dirty” words (and apes may have it).
There are very few words that the right hemisphere can deal with
competently - not much more than hello, goodbye, and ... a few choice
obscenities. Perhaps Tourette’s disease affects the left hemisphere
only. The British anthropologist Bernard Campbell of Cambridge
University suggests that the limbic system is rather well integrated
with the right cerebral hemisphere, which, as we have seen, deals
much better with emotions than the left hemisphere does. Whatever
else they involve, obscenities carry with them strong emotions. Yet
Gilles de la Tourette’s disease, complex as it is, seems to be a
specific deficiency in a neuronal transmitter chemical, and appears
to be alleviated by carefully controlled doses of haloperidol.
Recent evidence indicates that such limbic hormones as ACTH and
vasopressin can greatly improve the ability of animals to retain and
recall memories. These and similar examples suggest, if not the ultimate perfectibility of the brain, at least prospects for its substantial improvement, perhaps through altering the abundance or controlling the production of small brain proteins. Such examples also greatly relieve the burden of guilt commonly experienced by sufferers from a mental disease, a burden rarely felt by victims of, say, measles.
The remarkable fissurization, convolutions and cortical folding of
the brain, as well as the fact that the brain fits so snugly into
the skull, are clear indications that packing more brain into the
present braincase is going to be difficult. Larger brains with larger skulls could not develop until very recently because of limits on the size of the pelvis and the birth canal. But the advent of Caesarean section, performed rarely two thousand years ago but much more commonly today, does permit larger brain volumes. Another possibility is a medical technology sufficiently advanced to permit full-term development of the fetus outside the uterus.
However, the rate of evolutionary change is so slow that none of the problems facing us today is likely to be overcome by significantly larger neocortices and consequent superior intelligence. Before such a time, but not in the immediate future, it may be possible, by brain surgery, to improve those components of the brain we consider worth improving and to inhibit further those components that may be responsible for some of the perils and contradictions facing mankind. But the complexity and redundancy of brain function make such a course of action impractical for the near future, even if it were socially desirable. We may be able to engineer genes before we are able to engineer brains.
It is sometimes suggested that such experiments may provide unscrupulous governments (and there are many of them) with tools to control their citizenry still further. For example, we can imagine a government that implants hundreds of tiny electrodes in the "pleasure" and "pain" centers of the brains of newborn children, electrodes capable of remote radio stimulation, perhaps at frequencies or with access codes known only to the government. When the child grows up, the government might stimulate his pleasure centers if he has performed, in work quota and ideology, an acceptable day's work; otherwise it might stimulate his pain centers.
This is a nightmarish vision, but I do not think it is an argument against experiments on electrical stimulation of the brain. It is, rather, an argument against letting the government control the hospitals. Any people that will permit its government to implant such electrodes has already lost the battle and may well deserve what it gets. As with all such technological nightmares, the principal task is to foresee what is possible; to educate ourselves in its uses and misuses; and to prevent its organizational, bureaucratic and governmental abuse.
There is already a range of psychotropic and mood-altering drugs which are, to varying degrees, dangerous or benign (ethyl alcohol is the most widely used and one of the most dangerous), and which appear to act on specific areas of the R-complex, limbic system and neocortex. If present trends continue, even without the encouragement of governments people will pursue the home-laboratory synthesis of, and self-experimentation with, such drugs, an activity that represents a small further step in our knowledge of the brain, its disorders and untapped potentials.
There is reason to think that many alkaloids and other drugs which
affect behavior work by being chemically similar to natural small
brain proteins, of which the endorphins are one example. Many of
these small proteins act on the limbic system and are concerned with
our emotional states. It is now possible to manufacture small
proteins made of any specified sequence of amino acids.
Thus, the
time may soon come when a great variety of molecules will be
synthesized capable of inducing human emotional states, including
extremely rare ones. For example, there is some evidence that atropine, one of the chief active ingredients in hemlock, foxglove, deadly nightshade and jimson weed, induces the illusion of flying; and indeed such plants seem to have been the principal constituents of unguents self-administered to the genital mucosa by witches in the Middle Ages, who, rather than actually flying as they boasted, were in fact atropine-tripping.
But a vivid hallucination of flying
is an extremely specific sensation to be conveyed by a relatively
simple molecule. Perhaps a range of small proteins will one day be synthesized that produce emotional states of a sort never before experienced by human beings. This is one of many potential near-term developments in brain chemistry which hold great promise both for good and for evil, depending on the wisdom of those who conduct, control and apply this research.
When I leave my office and get into my car, I find that, unless I
make a specific effort of will, I will drive myself home. When I
leave home and get into my car, unless I make a similar conscious
effort, there is a part of my brain that arranges events so that I
end up at my office. If I change my home or my office, after a short
period of learning, the new locales supplant the old ones, and
whatever brain mechanism controls such behavior has readily adapted
to the new coordinates.
This is very much like self-programming a part of the brain that works like a digital computer. The comparison is even more striking when we realize that epileptics suffering a psychomotor seizure often go through an exactly comparable set of activities, the only difference being perhaps that they run a few more red lights than I usually do, and that they have no conscious memory of having performed these actions once the seizure has subsided. Such automatism is a typical symptom of temporal-lobe epilepsy; it also characterizes my first half-hour after awakening.
Certainly not all of the brain works like a simple digital computer; the part that does the reprogramming, for example, is rather different. But there are enough similarities to suggest that a compatible working arrangement between electronic computers and at least some components of the brain, in an intimate neurophysiological association, can be constructively organized.
The Spanish neurophysiologist Jose Delgado has devised working
feedback loops between electrodes implanted in the brains of
chimpanzees and remote electronic computers. Communication between
brain and computer is accomplished through a radio link.
Miniaturization of electronic computers has now reached the stage where such feedback loops can be "hardwired" and do not require a radio link with a remote computer terminal. For example, it is entirely possible to devise a self-contained feedback loop in which the signs of an oncoming epileptic seizure are recognized and appropriate brain centers are automatically stimulated to forestall or ameliorate the attack. We are not yet at the stage where this is a reliable procedure, but the time when it will be does not seem very far off.
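The logic of such a loop is easy to sketch. What follows is a minimal illustration in Python of the detect-and-stimulate cycle, with an invented threshold rule and a placeholder stimulate() routine; it is not a model of Delgado's apparatus or of any clinical device.

    # Minimal sketch of the detect-and-stimulate cycle described above.
    # The threshold rule and stimulate() are invented placeholders, not a
    # model of any real neurological device.

    def seizure_warning(eeg_window, threshold=3.0):
        """Flag an oncoming seizure if recent mean EEG power is too high."""
        mean_power = sum(x * x for x in eeg_window) / len(eeg_window)
        return mean_power > threshold

    def stimulate():
        # A hardwired device would pulse the appropriate brain center here.
        print("stimulation pulse delivered")

    def control_loop(eeg_samples, window=8):
        for i in range(window, len(eeg_samples) + 1):
            if seizure_warning(eeg_samples[i - window:i]):
                stimulate()          # forestall or ameliorate the attack

    # Quiet activity followed by a burst of large oscillations.
    control_loop([0.1, -0.2, 0.1, 0.0, -0.1, 0.2, 0.1, -0.1,
                  2.5, -2.6, 2.7, -2.8, 2.9, -3.0, 3.1, -3.2])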
Perhaps some day it will be possible to add a variety of cognitive and intellectual prosthetic devices to the brain, a kind of eyeglasses for the mind. This would be in the spirit of the past accretionary evolution of the brain and is probably far more feasible than attempting to restructure the existing brain.
Perhaps one day we will have surgically implanted in our brains
small replaceable computer modules or radio terminals which will
provide us with a rapid and fluent knowledge of Basque, Urdu,
Amharic, Ainu, Albanian, Nu, Hopi, !Kung, or delphinese; or
numerical values of the incomplete gamma function and the
Tschebysheff polynomials; or the natural history of animal spoor; or
all legal precedents for the ownership of floating islands; or radio
telepathy connecting several human beings, at least temporarily, in
a form of symbiotic association previously unknown to our species.
But the real extensions of our brains, particularly for the uniquely
human aspects of the neocortex, are already in the course of being
accomplished. Some of them are so old we have forgotten that they
have occurred. Rich and unrepressive learning environments for
children represent a remarkably promising and successful educational
tool. Written language is a notable invention that is essentially a
simple machine for the storage and retrieval of quite complex
information.
The amount of information stored in a large library far
exceeds the amount of information in either the human genome or the
human brain. The information is certainly not stored as efficiently
as it is in biological systems, but it is still serviceably compact,
and the development of microfilm, microfiche and the like has
greatly improved the extrasomatic information storage capabilities
of mankind. The number of bits of information contained in human
libraries, works of art, and other cultural institutions would
provide a point in the chart on page 26 many factors of 10 beyond the right-hand edge of that figure.
Writing is, however, very simple machinery. Much more sophisticated and far more promising extrasomatic information storage and processing lies in the electronic computer. To give some idea of the level of development of such computers and their programs: they can today play perfect tic-tac-toe, world-class checkers and expert chess, and practice passable psychotherapy. A computer system now exists which will transcribe a live performance of a classical or other melody into conventional musical notation. The amount of information storage and the speed of information processing by computers are, of course, prodigious.
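"Perfect" tic-tac-toe, at least, is no mystery: the game tree is small enough for a program to examine every continuation. Here is a minimal minimax sketch in Python, an illustration of the principle rather than a reconstruction of any program of the period.

    # Exhaustive minimax for tic-tac-toe: with at most nine moves the whole
    # game tree can be searched, which is what "perfect play" amounts to.

    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
            (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) with scores from X's point of view."""
        w = winner(board)
        if w:
            return (1 if w == "X" else -1), None
        moves = [i for i, square in enumerate(board) if square == " "]
        if not moves:
            return 0, None                    # draw
        best = None
        for m in moves:
            board[m] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[m] = " "
            if (best is None
                    or (player == "X" and score > best[0])
                    or (player == "O" and score < best[0])):
                best = (score, m)
        return best

    # X has opened in a corner; a full search shows that only the center
    # (square 4) saves O, and that the game is then a draw (score 0).
    print(minimax(list("X" + " " * 8), "O"))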
A non-directive psychotherapy program developed by Joseph Weizenbaum of the Massachusetts Institute of Technology is a good example of the present serviceability of psychiatric computer programs (which are, incidentally, far from the most difficult or most elaborate programs in existence). After the psychotherapy program was developed, a human psychiatrist was brought in to converse in lowercase type (via a typewriter keyboard on an interactive console) with the program, which prints out in uppercase. The exchange was probably intended as heuristic, but it may just possibly have gotten out of hand, on the human side. The conversation begins with the human psychiatrist saying:
doctor, i have terrible news
YOU SAY YOU HAVE TERRIBLE NEWS
yes, there is a computer program at m.i.t. that'll ruin us
WHY DO YOU SAY THAT
i have seen it working
YOU SAY YOU HAVE SEEN IT WORKING
it's frightening - you must see it
YOU THINK I MUST SEE IT
you sound depressed to me
IN YOUR FANTASY PERHAPS WE SOUND DEPRESSED TO EACH OTHER
yes - perhaps
I UNDERSTAND
i do too
YOU SAY YOU DO TOO
we are both in the same boat with respect to that computer
LET'S CONTINUE TO TALK ABOUT YOU - SHALL WE
you are a comfort
WHAT MAKES YOU THINK I AM A COMFORT
at a time like this you want to talk about me
WHY DO YOU THINK I WANT TO TALK ABOUT YOU
because you share my feelings about that damned computer
WHAT DOES THAT SUGGEST TO YOU
that, of course, you hate computers too
YOU THINK I HATE COMPUTERS TOO
why not, they're hardly human, are they
PERHAPS THAT'S WHAT WE ARE HERE TO FIND OUT
no computer could converse as you can, for example
OH
i can't understand how anyone could be fooled by a mere machine
CAN'T YOU REALLY
no. after all, there is nothing creative about a computer program
YOU ARE BEING NEGATIVE
i am convinced i am correct
ARE YOU WORRIED TO HAVE BEEN CONVINCED YOU ARE CORRECT
no, not in the least, not at all. not even a little.
YOU ARE BEING NEGATIVE
you said you agreed with me, didn't you
IS THAT WHAT YOU HEARD ME SAY
why don't you try to dissuade me - computers diddle with numbers, that's all
YOU WISH ME TO TRY TO DISSUADE YOU
you can't
YOU THINK I CAN'T
for example, could a computer in a conversation like this raise a new topic
YOU SEEM TO HAVE AVOIDED SPEAKING OF YOUR PARENTS ALTOGETHER
precisely as you just did
YOU THINK I JUST DID
yes, and that's what shows the difference between a man and a computer
I SEE
The computer psychiatrist lacks sophistication, training,
sensitivity, human intuition; it is mechanical (of course!),
inflexible and relatively unresponsive to emotional nuances and
nonverbal cues. And yet it has produced a conversation more
intelligent than many. Its response on “a new topic” is stunning.
But that response is very likely only a fluke. The program is
undoubtedly designed to pay attention to words such as “mother,”
“father,” “parent,” and so on; after the computer’s clock has ticked
away so many minutes, if these words have not been introduced, the
program is designed to come up with,
“You seem to have avoided . . .”
Emerging at just the moment it did, the remark gives an eerie
impression of insight.
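The mechanism just described is simple enough to caricature in a few lines. The sketch below is a toy in the spirit of Weizenbaum's program, not the program itself: keyword-triggered templates, plus a timed fallback about "parents" of exactly the kind quoted above. The keywords and templates are invented for illustration, and the matching is deliberately crude (substring keywords, no pronoun swapping).

    # Toy non-directive therapist: keyword-triggered templates plus the
    # timed "parents" fallback described above. All rules are invented.

    RULES = [
        ("i have", "YOU SAY YOU HAVE {rest}"),
        ("i am",   "HOW LONG HAVE YOU BEEN {rest}"),
        ("you",    "WHY DO YOU THINK I {rest}"),
        ("no",     "YOU ARE BEING NEGATIVE"),
    ]
    FAMILY = ("mother", "father", "parent")

    def reply(text, turn, mentioned_family):
        text = text.lower()
        if turn >= 4 and not mentioned_family:
            return "YOU SEEM TO HAVE AVOIDED SPEAKING OF YOUR PARENTS ALTOGETHER"
        for key, template in RULES:
            if key in text:
                rest = text.split(key, 1)[1].strip().upper()
                return template.format(rest=rest)
        return "PLEASE GO ON"

    mentioned = False
    lines = ["doctor, i have terrible news",
             "no, you cannot help me",
             "i am convinced i am correct",
             "nothing can dissuade me",
             "computers diddle with numbers, that's all"]
    for turn, line in enumerate(lines):
        mentioned = mentioned or any(word in line for word in FAMILY)
        print("patient:", line)
        print("DOCTOR :", reply(line, turn, mentioned))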
But what is the game of psychotherapy if not a very complex, learned set of responses to human situations? Is not the psychiatrist also preprogrammed to give certain responses? Non-directive psychotherapy clearly requires only very simple computer programs, and the appearance of insight requires only slightly more sophisticated ones. I do not intend these remarks to disparage the psychiatric profession in any way, but rather to augur the coming of machine intelligence.
Computers are by no means yet at a high enough level of development to recommend the widespread use of computer psychotherapy. But it does not seem to me a forlorn hope that we may one day have extremely patient, widely available and, at least for certain problems, adequately competent computer therapists. Some programs already in existence are given high marks by patients because the therapist is perceived as unbiased and extremely generous with his or her or its time.
Computers are now being developed in the United States that will be able to detect and diagnose their own malfunctions. When systematic performance errors are found, the faulty components will be automatically bypassed or replaced. Internal consistency will be tested by repeated operation and through standard programs whose consequences are known independently; repair will be accomplished chiefly by redundant components. There are already in existence programs (in chess-playing computers, for example) capable of learning from experience and from other computers. As time goes on, the computer appears to become increasingly intelligent.
Once the programs are so
complex that their inventors cannot quickly predict all possible
responses, the machines will have the appearance of, if not
intelligence, at least free will. Even the computer on the Viking
Mars lander, which has a memory of only 18,000 words, is at this
point of complexity: we do not in all cases know what the computer
will do with a given command. If we knew, we would say it is “only”
or “merely” a computer. When we do not know, we begin to wonder if
it is truly intelligent.
The situation is very much like the
commentary that has echoed over the centuries after a famous animal
story told both by Plutarch and by Pliny: A dog, following the scent
of its master, was observed to come to a triple fork in the road. It
ran down the leftmost prong, sniffing; then stopped and returned to
follow the middle prong for a short distance, again sniffing and
then turning back. Finally, with no sniffing at all, it raced
joyously down the right-hand prong of the forked road.
Montaigne, commenting on this story, argued that it showed clear canine syllogistic reasoning: My master has gone down one of these roads. It is not the left-hand road; it is not the middle road; therefore it must be the right-hand road. There is no need for me to corroborate this conclusion by smell; the conclusion follows by straightforward logic.
The possibility that reasoning at all like this might exist in the
animals, although perhaps less clearly articulated, was troubling to
many, and long before Montaigne, St. Thomas Aquinas attempted
unsuccessfully to deal with the story. He cited it as a cautionary
example of how the appearance of intelligence can exist where no
intelligence is in fact present. Aquinas did not, however, offer a
satisfactory alternative explanation of the dog's behavior. In human split-brain patients, it is quite clear that fairly elaborate logical analysis can proceed in the absence of verbal competence.
We are at a similar point in the consideration of machine
intelligence. Machines are just passing over an important threshold:
the threshold at which, to some extent at least, they give an
unbiased human being the impression of intelligence.
Because of a kind of human chauvinism or anthropocentrism, many
humans are reluctant to admit this possibility. But I think it is
inevitable. To me it is not in the least demeaning that
consciousness and intelligence are the result of “mere” matter
sufficiently complexly arranged; on the contrary, it is an exalting
tribute to the subtlety of matter and the laws of Nature.
It by no means follows that computers will in the immediate future exhibit human creativity, subtlety, sensitivity or wisdom. A classic and probably apocryphal illustration is in the field of machine translation of human languages: text in one language, say English, is input, and text in another language, say Chinese, is output. After the completion of an advanced translation program, so the story goes, a delegation which included a U.S. senator was proudly taken through a demonstration of the computer system.
The senator was asked to
produce an English phrase for translation and promptly suggested,
“Out of sight, out of mind.” The machine dutifully whirred and
winked and generated a piece of paper on which were printed a few
Chinese characters. But the senator could not read Chinese. So, to
complete the test, the program was run in reverse, the Chinese
characters input and an English phrase output. The visitors crowded
around the new piece of paper, which to their initial puzzlement
read: “Invisible idiot.”
Existing programs are only marginally competent even on matters of this not very high degree of subtlety. It would be folly to entrust major decisions to computers at our present level of development: not because the computers are not intelligent to a degree, but because, in the case of most complex problems, they will not have been given all the relevant information. The reliance on computers in determining American policy and military actions during the Vietnam war is an excellent example of the flagrant misuse of these machines.
But in reasonably restricted
contexts the human use of artificial intelligence seems to be one of
the two practicable major advances in human intelligence available
in the near future. (The other is enrichment of the preschool and
school learning environments of children.)
Those who have not grown up with computers generally find them more frightening than those who have. The legendary manic computer biller who will not take no (or even yes) for an answer, and who can be satisfied only by receiving a check for zero dollars and zero cents, is not to be considered representative of the entire tribe; it is a feeble-minded computer to begin with, and its mistakes are those of its human programmers.
The growing use in North America of integrated circuits and small computers for aircraft safety, teaching machines, cardiac pacemakers, electronic games, smoke-actuated fire alarms and automated factories, to name only a few uses, has helped greatly to reduce the sense of strangeness with which so novel an invention is usually invested. There are some 200,000 digital computers in the world today; in another decade, there are likely to be tens of millions. In another generation, I think, computers will be treated as a perfectly natural, or at least commonplace, aspect of our lives.
Consider, for example, the development of small pocket computers. I have in my laboratory a desk-sized computer purchased with a research grant in the late 1960s for $4,900. I also have another product of the same manufacturer, a computer that fits into the palm of my hand, purchased in 1975. The new computer does everything the old computer did, including programming capability and several addressable memories. But it cost $145, and it is getting cheaper at a breathtaking rate.
That represents quite a spectacular advance, both in miniaturization and in cost reduction, in a period of six or seven years. In fact, the present limit on the size of hand-held computers is the requirement that the buttons be large enough for our somewhat gross and clumsy human fingers to press. Otherwise, such computers could easily be built no larger than my fingernail. Indeed, ENIAC, the first large electronic digital computer, constructed in 1946, contained 18,000 vacuum tubes and occupied a large room. The same computational ability resides today in a silicon-chip microcomputer the size of the smallest joint of my little finger.
The speed of transmission of information in the circuitry of such
computers is the velocity of light. Human neural transmission is one
million times slower. That in nonarithmetic operations the small and
slow human brain can still do so much better than the large and fast
electronic computer is an impressive tribute to how cleverly the
brain is packaged and programmed - features brought about, of course,
by natural selection. Those who possessed poorly programmed brains
eventually did not live long enough to reproduce.
Computer graphics have now reached a state of sophistication that permits important and novel kinds of learning experiences in the arts and sciences, and in both cerebral hemispheres. There are individuals, many of them analytically extremely gifted, who are impoverished in their ability to perceive and imagine spatial relations, particularly three-dimensional geometry. We now have computer programs that can gradually build up complex geometrical forms before our eyes and rotate them on a television screen connected to the computer.
At Cornell University, such a system has been designed by Donald Greenberg of the School of Architecture. With this system it is possible to draw a set of regularly spaced lines which the computer interprets as contour intervals. Then, by touching a light pen to any of a number of possible instructions on the screen, we command the construction of elaborate three-dimensional images which can be made larger or smaller, stretched in a given direction, rotated or joined to other objects, and which can have designated parts excised. (See figures on pp. 226-227.)
This is an extraordinary tool for improving our ability to visualize three-dimensional forms, a skill extremely useful in the graphic arts, in science and in technology. It also represents an excellent example of cooperation between the two cerebral hemispheres: the computer, which is a supreme construction of the left hemisphere, teaches us pattern recognition, which is a characteristic function of the right hemisphere.
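The arithmetic underlying such a display is compact. The following sketch, with a plain wireframe cube standing in for Greenberg's architectural contours, rotates a three-dimensional figure and projects it onto a flat "screen" with simple perspective; the viewer distance and step angle are arbitrary choices.

    # Rotate a three-dimensional wireframe and project it onto a flat
    # "screen" with simple perspective: the arithmetic at the heart of
    # such displays. A unit cube stands in for architectural contours.

    import math

    CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

    def rotate_y(point, angle):
        x, y, z = point
        c, s = math.cos(angle), math.sin(angle)
        return (c * x + s * z, y, -s * x + c * z)

    def perspective(point, viewer_distance=4.0):
        """Project a 3-D point onto the z = 0 picture plane."""
        x, y, z = point
        scale = viewer_distance / (viewer_distance - z)   # nearer looks bigger
        return (x * scale, y * scale)

    for step in range(3):                  # three frames of a slow rotation
        angle = step * math.pi / 6
        frame = [perspective(rotate_y(p, angle)) for p in CUBE]
        print("frame %d:" % step,
              [(round(u, 2), round(v, 2)) for u, v in frame])

The identical projection step, taken from any chosen vantage point, is also what solves the architect's rendering problem discussed below.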
There are other computer programs that exhibit two- and three-dimensional projections of four-dimensional objects. As the four-dimensional objects turn, or our perspective changes, not only do we see new parts of the four-dimensional objects; we also seem to see the synthesis and destruction of entire geometrical subunits. The effect is eerie and instructive, and it helps to make four-dimensional geometry much less mysterious; we are not nearly so baffled as I imagine a mythical two-dimensional creature would be on encountering the typical projection (two squares with the corners connected) of a three-dimensional cube on a flat surface.
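The four-dimensional programs are the same trick applied one dimension up: rotate the object, then project away one coordinate at a time. A sketch, with invented viewing distances:

    # Projecting a rotating four-dimensional hypercube down to the plane:
    # 4-D -> 3-D -> 2-D, each projection dropping one coordinate.

    import math
    from itertools import product

    TESSERACT = list(product((-1, 1), repeat=4))   # 16 vertices (x, y, z, w)

    def rotate_xw(p, angle):
        """Rotate in the x-w plane, a rotation with no 3-D counterpart."""
        x, y, z, w = p
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * w, y, z, s * x + c * w)

    def drop(p, distance=4.0):
        """Perspective-project away the last coordinate."""
        *rest, last = p
        scale = distance / (distance - last)
        return tuple(r * scale for r in rest)

    angle = math.pi / 5
    flat = [drop(drop(rotate_xw(v, angle))) for v in TESSERACT]
    print(flat[:4])        # the first few of the 16 projected vertices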
The classical artistic problem of perspective, the projection of three-dimensional objects onto two-dimensional canvases, is enormously clarified by computer graphics; the computer is obviously also a major tool in the quite practical problem of picturing an architect's design of a building, made in two dimensions, from all vantage points in three dimensions.
Computer graphics are now being extended into the area of play.
There is a popular game, sometimes called Pong, which simulates on a
television screen a perfectly elastic ball bouncing between two
surfaces. Each player is given a dial that permits him to intercept
the ball with a movable “racket.” Points are scored if the motion of
the ball is not intercepted by the racket. The game is very interesting. There is a clear learning experience involved which depends exclusively on Newton's second law for linear motion. As a result of playing Pong, the player can gain a deep intuitive understanding of the simplest Newtonian physics, a better understanding even than that provided by billiards, where the collisions are far from perfectly elastic and where the spinning of the pool balls interposes more complicated physics.
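The physics Pong embodies fits in a dozen lines: between collisions no force acts on the ball, so by Newton's second law there is no acceleration and the ball moves uniformly; at a wall the relevant velocity component simply reverses sign. A sketch of the idea, with arbitrary table dimensions, not the arcade game:

    # Between collisions no force acts on the ball, so Newton's second law
    # gives zero acceleration: uniform motion. At a wall the perpendicular
    # velocity component reverses sign in a perfectly elastic bounce.

    def step(x, y, vx, vy, dt=1.0, width=10.0, height=5.0):
        x, y = x + vx * dt, y + vy * dt          # uniform motion
        if not 0.0 <= x <= width:
            vx = -vx                             # elastic bounce
            x = min(max(x, 0.0), width)
        if not 0.0 <= y <= height:
            vy = -vy
            y = min(max(y, 0.0), height)
        return x, y, vx, vy

    state = (2.0, 1.0, 1.5, 1.0)
    for _ in range(8):
        state = step(*state)
        print("ball at (%.1f, %.1f)" % state[:2])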
This sort of information gathering is precisely what we call play.
And the important function of play is thus revealed: it permits us
to gain, without any particular future application in mind, a
holistic understanding of the world, which is both a complement of
and a preparation for later analytical activities. But computers
permit play in environments otherwise totally inaccessible to the
average student.
A still more interesting example is provided by the game Space War, whose development and delights have been chronicled by Stewart Brand. In Space War, each side controls one or more "space vehicles" which can fire missiles at the other. The motions of both the spacecraft and the missiles are governed by certain rules: for example, an inverse-square gravitational field set up by a nearby "planet." To destroy the spaceship of your opponent you must develop an understanding of Newtonian gravitation that is simultaneously intuitive and concrete. Those of us who do not frequently engage in interplanetary space flight do not readily evolve a right-hemisphere comprehension of Newtonian gravitation. Space War can fill that gap.
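The rule the Space War player internalizes can likewise be written out directly: an acceleration pointed at the "planet" and falling off as the square of the distance, applied in small time steps. A sketch with the gravitational parameter set to 1 for convenience; this is the physics, not the game:

    # The inverse-square gravitation of Space War, integrated stepwise.

    import math

    GM = 1.0                                     # gravitational parameter

    def gravity_step(x, y, vx, vy, dt=0.01):
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3  # magnitude GM/r^2, toward origin
        vx, vy = vx + ax * dt, vy + ay * dt      # F = ma, applied each step
        return x + vx * dt, y + vy * dt, vx, vy

    # Launched sideways at the circular-orbit speed v = sqrt(GM/r) for
    # r = 1, the craft sweeps around the planet instead of falling in.
    state = (1.0, 0.0, 0.0, 1.0)
    for i in range(700):
        state = gravity_step(*state)
        if i % 100 == 0:
            print("position (%.2f, %.2f)" % state[:2])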
The two games, Pong and Space War, suggest a gradual elaboration of computer graphics so that we gain an experiential and intuitive understanding of the laws of physics. The laws of physics are almost always stated in analytical and algebraic (that is to say, left-hemisphere) terms; for example, Newton's second law is written F = ma, and the inverse-square law of gravitation F = GMm/r². These analytical representations are extremely useful, and it is certainly interesting that the universe is made in such a way that the motion of objects can be described by such relatively simple laws. But these laws are nothing more than abstractions from experience.
Fundamentally they are mnemonic devices. They permit us to remember in a simple way a great range of cases that would individually be much more difficult to remember, at least in the sense of memory as understood by the left hemisphere. Computer graphics gives the prospective physical or biological scientist a wide range of experience with the cases his laws of nature summarize; but its most important function may be to permit those who are not scientists to grasp in an intuitive but nevertheless deep manner what the laws of nature are about.
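A single worked example shows the compression such laws achieve. Combining the two formulas just quoted fixes the speed of any circular orbit, a fact that would otherwise have to be memorized case by case:

    % Setting the gravitational force equal to the centripetal force
    % m v^2 / r required for circular motion:
    \frac{GMm}{r^{2}} = \frac{m v^{2}}{r}
    \qquad\Longrightarrow\qquad
    v = \sqrt{\frac{GM}{r}}
    % One line of algebra replaces a separate memorized speed for every
    % satellite, moon and planet: each case follows from M and r alone.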
There are many non-graphical interactive computer programs which are extremely powerful teaching tools. The programs can be devised by first-rate teachers, and the student has, in a curious sense, a much more personal, one-to-one relationship with the teacher than in the usual classroom setting; he may also go as slowly as he wishes without fear of embarrassment. Dartmouth College employs computer learning techniques in a very broad array of courses. For example, a student can gain a deep insight into the statistics of Mendelian genetics in an hour with the computer, rather than spend a year crossing fruit flies in the laboratory. Another student can examine the statistical likelihood of becoming pregnant were she to use various birth-control methods. (This program has built into it a one-in-ten-billion chance of a woman's becoming pregnant when strictly celibate, to allow for contingencies beyond present medical knowledge.)
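A sketch of the sort of exercise meant here, invented rather than Dartmouth's actual program: simulate a cross of two Aa hybrids and watch the classic 3:1 ratio of dominant to recessive traits emerge from the counts.

    # Simulated Mendelian cross of two Aa hybrids: each parent passes one
    # allele at random. A sketch in the spirit of the Dartmouth exercises.

    import random

    def offspring(parent1="Aa", parent2="Aa"):
        return random.choice(parent1) + random.choice(parent2)

    counts = {"dominant": 0, "recessive": 0}
    for _ in range(10000):
        genotype = offspring()
        if "A" in genotype:
            counts["dominant"] += 1     # AA, Aa and aA all show the trait
        else:
            counts["recessive"] += 1    # only aa does not
    print(counts)                       # roughly 7500 : 2500, the 3:1 ratio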
The computer terminal is a commonplace on the Dartmouth campus. A
very high proportion of Dartmouth undergraduates learn not only to
use such programs but also to write their own. Interaction with
computers is widely viewed as more like fun than like work, and many
colleges and universities are in the process of imitating and
extending Dartmouth's practice. Dartmouth's preeminence in this innovation is related to the fact that its president, John G. Kemeny, is a distinguished computer scientist and a co-inventor of the very simple computer language called BASIC.
The Lawrence Hall of Science is a kind of museum connected with the University of California at Berkeley. In its basement is a rather modest room filled with about a dozen inexpensive computer terminals, each hooked up to a time-sharing minicomputer system located elsewhere in the building. Reservations for access to these terminals are sold for a modest fee and may be made up to an hour in advance. The clientele is predominantly youngsters, and the youngest are surely less than ten years old. A very simple interactive program available there is the game Hangman. To play Hangman you type on a fairly ordinary typewriter keyboard the computer code "XEQ-$HANG." The computer then types out:

HANGMAN
CARE FOR THE RULES?

If you type "YES", the machine replies:

GUESS A LETTER IN THE WORD I'M THINKING OF. IF YOU ARE RIGHT, THEN I WILL TELL YOU. BUT IF YOU ARE WRONG (HA, HA) YOU WILL BE CLOSER (SNICKER, SNICKER) TO DEATH BY HANGING! THE WORD HAS EIGHT LETTERS. YOUR GUESS IS...
?
Let us say you type the response "E". The computer then types out where, if anywhere, the letter E appears in its word. If you guess wrong, the computer types out an engaging simulacrum (within the limitations of the characters available to it) of a human head. And in the usual manner of the game there is a race between the gradually emerging word and the gradually emerging form of a human being about to be hanged. In two games of Hangman I recently witnessed, the correct answers were "VARIABLE" and "THOUGHT". If you win the game, the program, true to its mustache-twirling villainy, types out a string of non-letter characters from the top row of the typewriter keyboard (used in comic books to indicate curses) and then prints:

RATS, YOU WIN. CARE FOR ANOTHER CHANCE TO DIE?
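The entire logic of the game is modest, which is part of its charm. A minimal sketch, not the Lawrence Hall program itself:

    # Hangman in a few lines: track the guessed letters and dole out one
    # piece of the gallows per miss.

    def hangman(secret="VARIABLE", max_misses=6):
        guessed, misses = set(), 0
        while misses < max_misses:
            shown = " ".join(c if c in guessed else "_" for c in secret)
            if "_" not in shown:
                return "RATS, YOU WIN"
            guess = input(shown + "   your guess? ").upper()[:1]
            guessed.add(guess)
            if guess not in secret:
                misses += 1          # another limb on the hanged man
        return "DEATH BY HANGING! the word was " + secret

    print(hangman())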
Other programs are more polite. For example, "XEQ-$KING" yields:

THIS IS THE ANCIENT KINGDOM OF SUMERIA, AND YOU ARE ITS VENERATED RULER. THE FATE OF SUMERIA'S ECONOMY AND OF YOUR LOYAL SUBJECTS IS ENTIRELY IN YOUR HANDS. YOUR MINISTER, HAMMURABI, WILL REPORT TO YOU EACH YEAR ON POPULATION AND ECONOMY. USING HIS INFORMATION YOU MUST LEARN TO ALLOCATE RESOURCES FOR YOUR KINGDOM WISELY. SOMEONE IS ENTERING YOUR COUNCIL CHAMBER...
Hammurabi then presents you with relevant statistics on the number
of acres owned by the city, how many bushels per acre were harvested
last year, how many were destroyed by rats, how many are now in
storage, what the present population is, how many people died of
starvation last year, and how many migrated to the city. He begs to
inform you of the current exchange rate of land for food and queries
how many acres you wish to buy. If you ask for too much, the program
prints:
HAMMURABI: PLEASE THINK AGAIN. YOU HAVE ONLY TWENTY-EIGHT HUNDRED BUSHELS IN STORE.
Hammurabi turns out to be an extremely patient and polite Grand
Vizier. As the years flicker by, you gain a powerful impression that
it may be very difficult, at least in certain market economies, to
increase both the population and landholdings of a state while
avoiding poverty and starvation.
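A single simulated year conveys the flavor of the bookkeeping behind Hammurabi's reports. All the constants below (grain prices, yields, rations) are invented stand-ins; the original game's rules differ in detail:

    # One year of a Hammurabi-style stewardship game, invented constants.

    import random

    def year(population, bushels, acres, buy_acres, fed, planted):
        price = random.randint(17, 26)            # bushels per acre of land
        cost = buy_acres * price + fed + planted // 2
        if cost > bushels:
            return None, ("HAMMURABI: PLEASE THINK AGAIN. "
                          f"YOU HAVE ONLY {bushels} BUSHELS IN STORE.")
        harvest = planted * random.randint(1, 5)  # the yield varies yearly
        rats = random.randint(0, harvest // 4)    # rats eat part of the store
        starved = max(0, population - fed // 20)  # 20 bushels feeds a person
        new_state = (population - starved,
                     bushels - cost + harvest - rats,
                     acres + buy_acres)
        return new_state, f"{starved} STARVED; HARVEST {harvest}; RATS ATE {rats}"

    state = (100, 2800, 1000)                     # people, bushels, acres
    state, report = year(*state, buy_acres=0, fed=2000, planted=1000)
    print(report)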
Among the many other programs available is one called Grand Prix Racing, which permits you to choose from among a range of opponents, running from a Model T Ford to a 1973 Ferrari. If your speed or acceleration is too low at appropriate places on the track, you lose; if too high, you crash. Since distances, velocities and accelerations must be given explicitly, there is no way to play this game without learning some physics. The array of possible courses of computer interactive learning is limited only by the ingenuity of the programmers, and that is a well that runs very deep.
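The physics lesson in the racing game is elementary kinematics: given the speed entering a braking zone, v² = u² + 2as decides between losing time and crashing. A sketch with invented numbers, for illustration only:

    # The speed you carry into a corner after braking over a distance s
    # follows from v^2 = u^2 + 2as. All numbers here are invented.

    def corner_outcome(speed, braking, distance, safe_speed):
        """Speed reached at the corner after braking over 'distance' metres."""
        v_squared = speed ** 2 - 2 * braking * distance
        v = max(0.0, v_squared) ** 0.5
        if v > safe_speed:
            return f"entered at {v:.0f} m/s: CRASH"
        return f"entered at {v:.0f} m/s: safely through"

    print(corner_outcome(speed=80, braking=8, distance=350, safe_speed=30))
    print(corner_outcome(speed=80, braking=8, distance=150, safe_speed=30))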
Since our society is so profoundly influenced by science and
technology, which the bulk of our citizens understand poorly or not
at all, the widespread availability in both schools and homes of
inexpensive interactive computer facilities could just possibly play
an important role in the continuance of our civilization.
The only objection I have ever heard to the widespread use of
pocket calculators and small computers is that, if introduced to
children too early, they preempt the learning of arithmetic,
trigonometry and other mathematical tasks that the machine is able
to perform faster and more accurately than the student. This debate
has occurred before.
In Plato's Phaedrus (the same Socratic dialogue I referred to earlier for its metaphor of chariot, charioteer and two horses) there is a lovely myth about the god Thoth, the Egyptian equivalent of
Prometheus. In the tongue of ancient Egypt, the phrase that
designates written language means literally “The Speech of the
Gods." Thoth is discussing his invention of writing* with Thamus (also called Ammon), a god-king who rebukes him in these words:
This discovery of yours will create forgetfulness in the learners’
souls, because they will not use their memories; they will trust to
the external written characters and not remember of themselves. The
specific which you have discovered is an aid not to memory, but to
reminiscence, and you give your disciples not truth, but only the
semblance of truth; they will be hearers of many things and will
have learned nothing; they will appear to be omniscient and will
generally know nothing; they will be tiresome company, having the
show of wisdom without its reality.
* According to the Roman historian Tacitus, the Egyptians claimed to
have taught the alphabet to the Phoenicians, “who, controlling the
seas, introduced it to Greece and were credited with inventing what
they had really borrowed.” According to legend, the alphabet arrived
in Greece with Cadmus, Prince of Tyre, seeking his sister, Europa,
who had been stolen away to the island of Crete by Zeus, king of the
gods, temporarily disguised as a bull. To protect Europa from those
who would steal her back to Phoenicia, Zeus ordered a bronze robot
made which, with clanking steps, patrolled Crete and turned back or
sank all approaching foreign vessels. Cadmus, however, was elsewhere, unsuccessfully seeking his sister in Greece, when a dragon devoured all his men; whereupon he slew the dragon and, in response to instructions from the goddess Athena, sowed the
dragon’s teeth in the furrows of a plowed field. Each tooth became a
warrior; and Cadmus and his men together founded Thebes, the first
civilized Greek city, bearing the same name as one of the two
capital cities of ancient Egypt. It is curious to find in the same
legendary account the invention of writing, the founding of Greek
civilization, the first known reference to artificial intelligence,
and the continuing warfare between humans and dragons.
I am sure there is some truth to Thamus’ complaint. In our modern
world, illiterates have a different sense of direction, a different
sense of self - reliance, and a different sense of reality. But before
the invention of writing, human knowledge was restricted to what one
person or a small group could remember. Occasionally, as with the
Vedas and the two great epic poems of Homer, a substantial body of
information could be preserved. But there were, so far as we know,
few Homers. After the invention of writing, it was possible to
collect, integrate and utilize the accumulated wisdom of all times
and peoples; humans were no longer restricted to what they and their
immediate acquaintances could remember. Literacy gives us access to
the greatest and most influential minds in history: Socrates or Newton, say, have had audiences vastly larger than the total number of people either met in his whole lifetime. The
repeated rendering of an oral tradition over many generations
inevitably leads to errors in transmission and the gradual loss of
the original content, a degradation of information that occurs far
more slowly with the successive reprinting of written accounts.
Books are readily stored. We can read them at our own pace
without disturbing others. We can go back to the hard parts, or
delight once again in the particularly enjoyable parts. They are
mass - produced at relatively low cost. And reading itself is an
amazing activity: You glance at a thin, flat object made from a
tree, as you are doing at this moment, and the voice of the
author begins to speak inside your head. (Hello!) The
improvement in human knowledge and survival potential
following the invention of writing was immense. (There was also an
improvement in self - reliance: It is possible to learn at least the
rudiments of an art or a science from a book and not be dependent on
the lucky accident that there is a nearby master craftsman to whom
we may apprentice ourselves.)
When all is said and done, the invention of writing must be reckoned
not only as a brilliant innovation but as a surpassing good for
humanity. And assuming that we survive long enough to use their
inventions wisely, I believe the same will be said of the modern
Thoths and Prometheuses who are today devising computers and
programs at the edge of machine intelligence. The next major
structural development in human intelligence is likely to be a
partnership between intelligent humans and intelligent machines.