Part 2
The Extended Mind
You are the world.
Krishnamurti
CHAPTER SIX
The Creative Observer
It is strange what clings to your mind from the flotsam and jetsam
of the everyday.
For Helmut Schmidt it was an article in,
of all places, Reader’s Digest. He’d read it as a 20-year-old
student in 1948, at the University of Cologne, after Germany had
just emerged from the Second World War.
It lodged in his memory for
nearly twenty years, surviving through two emigrations, from Germany
to America and from academia to industry - from a professorship at
the University of Cologne to a position as a research physicist at
Boeing Scientific Research Laboratories in Seattle, Washington.
Through all his changes of country and career, Schmidt pondered the
meaning of the article, as though something in him knew that it was
central to his life’s direction even before he was consciously aware
of it. Every so often he would engage in a bit more reflection, take
out the article in his mind’s eye and examine it in the light,
turning it this way and that, before filing it away again, a bit of
unfinished business he wasn’t yet sure how to tend to.1
The article had been nothing more than an abridged version of some
writing by the biologist and parapsychologist J. B. Rhine. It
concerned his famous experiments on precognition and extrasensory
perception, including the card tests which would later be used by
Edgar Mitchell in outer space. Rhine had conducted all of his
experiments under carefully controlled conditions and they had
yielded interesting results.2
The studies had shown that it was
possible for a person to transmit information about card symbols to
another or increase the odds of a certain number being rolled with a
set of dice.
Schmidt had been drawn to Rhine’s work for its implications in
physics.
Even as a student, Schmidt had had a contrary streak, which
rather liked testing the limits of science. In his private moments,
he regarded physics and many of the sciences, with their claim to
have explained many of the mysteries of the universe, as exceedingly
presumptuous. He’d been most interested in quantum physics, but he
found himself perversely drawn to those aspects of quantum theory
which presented the most potential problems.
What held the most fascination of all for Schmidt was the role of
the observer.3
One of the most mysterious aspects of quantum physics
is the so-called Copenhagen interpretation (so named because Niels
Bohr, one of the founding fathers of quantum physics, resided
there).
Bohr, who forcefully pushed through a variety of
interpretations in quantum physics without the benefit of a unified
underlying theory, set out various dictums about the behavior of
electrons as a result of the mathematical equations which are now
followed by workaday physicists all over the world.
Bohr (and Werner
Heisenberg) noted that, according to experiment, an electron is not
a precise entity, but exists as a potential, a superposition, or
sum, of all probabilities until we observe or measure it, at which
point the electron freezes into a particular state. Once we are
through looking or measuring, the electron dissolves back into the
ether of all possibilities.
Part of this interpretation is the notion of ‘complementarity’ -
that you can never know everything about a quantum entity such as an
electron at the same time. The classic example is position and
velocity; if you discover information about one aspect of it - where
it is, for instance - you cannot also determine exactly where it’s
going or at what speed.
Many of the architects of quantum theory had grappled with the
larger meaning of the results of their calculations and experiments,
making comparisons with metaphysical and Eastern philosophical
texts.4
But the rank and file of physicists in their wake complained
that the laws of the quantum world, while undoubtedly correct from a
mathematical point of view, beggared ordinary common sense. French
physicist and Nobel prize winner Louis de Broglie devised an
ingenious thought experiment, which carried quantum theory to its
logical conclusion.
On the basis of current quantum theory, you
could place an electron in a container in Paris, divide the
container in half, ship one half to Tokyo and the other to New York,
and, theoretically, the electron should still occupy both halves
until you peer inside, at which point a definite position in one
half or the other would finally be determined.5
What the Copenhagen interpretation suggested was that randomness is
a basic feature of nature. Physicists believe this is demonstrated
by another famous experiment involving light falling on a
semi-transparent mirror. When light falls on such a mirror, half of
it is reflected and the other half is transmitted through it.
However, when a single photon arrives at the mirror, it must go one
way or the other, but the way it will go - reflected or transmitted - cannot be predicted. As with any such
binary process, we have a 50-50 chance of guessing the eventual
route of the photon.6
On the subatomic level, there is no causal
mechanism in the universe.
If that were so, Schmidt wondered, how was it that some of Rhine’s
subjects were able to correctly guess cards and dice - implements,
like a photon, of random processes? If Rhine’s studies were correct,
something fundamental about quantum physics was wrong. So-called
random binary processes could be predicted, even influenced.
What appeared to put a halt to randomness was a living observer. One
of the fundamental laws of quantum physics says that an event in the
subatomic world exists in all possible states until the act of
observing or measuring it ‘freezes’ it, or pins it down, to a single
state. This process is technically known as the collapse of the wave
function, where ‘wave function’ means the state of all
possibilities.
In Schmidt’s mind, and the minds of many others, this
was where quantum theory, for all its mathematical perfection, fell
down. Although nothing existed in a single state independently of an
observer, you could describe what the observer sees, but not the
observer himself. You included the moment of observation in the
mathematics, but not the consciousness doing the observing. There
was no equation for an observer.7
There was also the ephemeral nature of it all. Physicists couldn’t
offer any real information about any given quantum particle. All
they could say with certainty was that when you took a certain
measurement at a certain point, this is what you would find. It was
like catching a butterfly on the wing. Classical physics didn’t have
to talk about an observer; according to Newton’s version of reality,
a chair or even a planet was sitting there, whether or not we were
looking at it. The world existed out there independently of us.
But in the strange twilight of the quantum world, you could only
determine incomplete aspects of subatomic reality with an observer
pinning down a single facet of the nature of an electron only at
that moment of observation, not for all time. According to the
mathematics, the quantum world was a perfect hermetic world of pure
potential, only made real - and, in a sense, less perfect - when
interrupted by an intruder.
It seems to be a truism of important shifts in thinking that many
minds begin to ask the same question at roughly the same time. In
the early 1960s, nearly twenty years after he’d first read Rhine’s
article, Schmidt, like Edgar Mitchell, Karl Pribram and the others,
was one of a growing number of scientists trying to get some measure
of the nature of human consciousness in the wake of the questions
posed by quantum physics and the observer effect. If the human
observer settled an electron into a set state, to what extent did he
or she influence reality on a large scale?
The observer effect
suggested that reality only emerged from a primordial soup like the
Zero Point Field with the involvement of living consciousness. The
logical conclusion was that the physical world only existed in its
concrete state while we were involved in it. Indeed, Schmidt
wondered, was it true that nothing existed independently of our
perception of it?
A few years after Schmidt was pondering all this, Mitchell would
head off to Stanford on the West Coast of the USA, gathering funding
for his own consciousness experiments with a number of gifted
psychics. For Mitchell, like Schmidt, the importance of Rhine’s
findings would be what they appeared to show about the nature of
reality. Both scientists wondered to what extent order in the
universe was related to the actions and intentions of human beings.
If consciousness itself created order - or indeed in some way
created the world - this suggested much more capacity in the human
being than was currently understood. It also suggested some
revolutionary notions about humans in relation to their world and
the relation between all living things. What Schmidt was also asking
was how far our bodies extended.
Did they end with what we always
thought of as our own isolated persona, or ‘extend out’ so that the
demarcation between us and our world was less clear-cut? Did living
consciousness possess some quantum-field-like properties, enabling
it to extend its influence out into the world? If so, was it
possible to do more than simply observe? How strong was our
influence?
It was only a small step in logic to conclude that in our
act of participation as an observer in the quantum world, we might
also be an influencer, a creator.8 Did we not only stop the
butterfly at a certain point in its flight, but also influence the
path it will take - nudging it in a particular direction?
A related quantum effect suggested by Rhine’s work was the
possibility of nonlocality, or action at a distance: the theory that
two subatomic particles once in close proximity seemingly
communicate over any distance after they are separated. If Rhine’s
ESP experiments were to be believed, action at a distance might also
be present in the world at large.
Schmidt was 37 before he finally got the opportunity to test out his
ideas, in 1965, during his tenure at Boeing. A tall, thin presence
with a pronounced, angular intensity, his hair heavily receded on
either side of an exaggerated widow’s peak, Schmidt was in the happy
circumstance of being employed to pursue pure research in the Boeing
laboratory, whether or not it was connected to aerospace
development. Boeing was in a lull in its fortunes. The aerospace
giant had come up with a supersonic transport but had shelved it, and hadn’t
yet invented the 747, so Schmidt had time on his hands.
An idea slowly began taking shape. The simplest way to test all
these ideas was to see if human consciousness could affect some sort
of probabilistic system, as Rhine had done. Rhine had used his
special cards for the ESP ‘forced choice’ guessing, or
‘precognition’, exercises and dice for ‘psychokinesis’ - tests of
whether mind could influence matter.
There were certain limitations
with both media.
You could never truly show that a toss of the dice
had been a random process affected by human consciousness, or that a
correct guess of the face of a card hadn’t been purely down to
chance. Cards might not be shuffled perfectly, a die might be shaped
or weighted to favor a certain number. The other problem was that
Rhine had recorded the results by hand, a process that could be
prone to human error. And finally, because they were done manually,
the experiments took a long time.
Schmidt believed he could contribute to Rhine’s work by mechanizing
the testing process. Because he was considering a quantum effect, it
made sense to build a machine whose randomness would be determined
by a quantum process. Schmidt had read about two Frenchmen, named
Remy Chauvin and Jean-Pierre Genthon, who’d conducted studies to see
if their test subjects could in some way change the decay rate of
radioactive materials, which would be recorded by a Geiger counter.9
Few processes are more random than radioactive atomic decay. One of
the axioms of quantum physics is that no one can predict exactly
when an atom will decay and an electron consequently be released. If
Schmidt made use of radioactive decay in the machine’s design, he
could produce what was almost a contradiction in terms: a precision
instrument built upon quantum mechanical uncertainty.
With machines using a quantum decay process, you’re dealing in the
realm of probability and fluidity - a machine governed by atomic
particles, in turn governed by the probabilistic universe of quantum
mechanics. This would be a machine whose output consisted of
perfectly random activity, which in physics is viewed as a state of
‘disorder’.
The Rhine studies in which participants had apparently
affected the roll of the dice suggested that some information
transfer or ordering mechanism was going on - what physicists like
to term ‘negative entropy’, or ‘negentropy’ for short - the move away from randomness, or disarray, to order.
If it could
be shown that participants in a study had altered some element of
the machine’s output, they’d have changed the probabilities of
events - that is, shifted the odds of something happening or altered
the tendency of a system to behave in a certain way.10
It was like
persuading a person at a crossroads, momentarily undecided about
taking a walk, to head down one road rather than another. They
would, in other words, have created order.
As most of his work had consisted of theoretical physics, Schmidt
needed to brush up on his electronics in order to construct his
machine. With the help of a technician, he produced a small,
rectangular box, slightly larger than a fat hardback book, with four
colored lights and buttons and a thick cable attached to another
machine punching coding holes in a stream of paper tape. Schmidt
dubbed the machine a ‘random number generator’, which he came to
refer to as an RNG. The RNG had the four colored lights on top of it -
red, yellow, green and blue - which would flash on randomly.
In the experiment, a participant would press a button under one of
the lights, which registered a prediction that the light above it
would light up.11
If you were correct, you’d score a hit. On top of
the device were two counters. One would count the number of ‘hits’ -
the times the participant could correctly guess which lamp would
light - and the other would count the number of trials.
Your success
rate would be staring at you as you continued with the experiment.
Schmidt had employed a small amount of the isotope strontium-90,
placed near an electron counter so that any electrons ejected from
the unstable, decaying atoms would be registered inside a
Geiger-Müller tube.
At the point where an electron was flung into
the tube - at a rate, on average, of 10 a second - it stopped a
high-speed counter breathlessly racing through numbers between one
and four at a million per second, and the number stopped at would
light the correspondingly numbered lamp. If his participants were
successful, it meant that they had somehow intuited the arrival time
of the next electron, resulting in the lighting of their designated
lamp.
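The logic of Schmidt's device can be sketched in simulation: decay events arrive at exponentially distributed intervals, and each arrival freezes a counter racing through 1 to 4 at a million steps per second. This is a minimal, illustrative model, not Schmidt's actual circuit; the function name and rate parameters are assumptions for the sketch.

```python
import random

def schmidt_rng_sample(counter_rate_hz=1_000_000, decay_rate_hz=10):
    """One sample from a simulated Schmidt-style quantum RNG.

    A counter cycles through 1..4 at counter_rate_hz; the next
    simulated decay arrives after an exponentially distributed wait
    (mean 1/decay_rate_hz seconds) and freezes the counter, picking
    which of the four lamps lights.
    """
    wait = random.expovariate(decay_rate_hz)   # seconds until next decay
    ticks = int(wait * counter_rate_hz)        # counter increments elapsed
    return (ticks % 4) + 1                     # frozen lamp number, 1..4

samples = [schmidt_rng_sample() for _ in range(10_000)]
# Left to chance, each lamp should light roughly a quarter of the time.
counts = {lamp: samples.count(lamp) for lamp in (1, 2, 3, 4)}
```

Because the decay wait spans tens of thousands of counter ticks, the counter's value modulo four is effectively uniform, which is what makes the instrument "random" in the quantum sense the text describes.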
If someone was just guessing, he’d have a 25 per cent chance of
getting the right results. Most of Schmidt’s first test subjects
scored no better than this, until he contacted a group of
professional psychics in Seattle and collected subjects who went on
to be successful. Thereafter, Schmidt was meticulous in his
recruitment of participants with an apparent psychic gift for
guessing correctly. The effects were likely to be so minuscule, he
figured, that he had to maximize his chances of success.
With his
first set of studies, Schmidt got 27 per cent - a result that may
appear insignificant, but which was enough of a deviation in
statistical terms for him to conclude that something interesting was
going on.12
Apparently, there’d been some connection between the mind of his
subjects and his machine. But what was it? Did his participants
foresee which lights would be lit? Or did they make a choice among
the colored lamps and somehow mentally ‘force’ that particular lamp
to light? Was the effect precognition or psychokinesis?
Schmidt decided to isolate these effects further by testing
psychokinesis.
What he had in mind was an electronic version of
Rhine’s dice studies. He went on to build another type of machine -
a twentieth-century version of the flip of a coin. This machine was
based on a binary system (a system with two choices: yes or no; on
or off; one or zero). It could electronically generate a random
sequence of ‘heads’ and ‘tails’ which were displayed by the movement
of a light in a circle of nine lamps.
One light was always lit. With
the top lamp lit at the start, for each generated head or tail the
light moved by one step in a clockwise or anticlockwise direction.
If ‘heads’ were tossed, the next light in clockwise order would
light. If ‘tails’, the next light in the anticlockwise direction
would light instead. Left to its own devices, the machine would take
a random walk around the circle of nine lights, with movements in
each direction roughly half the time.
After about two minutes and
128 moves, the run stopped and the numbers of generated heads and
tails were displayed. The full sequence of moves was also recorded
automatically on paper tape, with the number of heads or tails
indicated by counters.
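Left unattended, the nine-lamp machine performs a textbook random walk, and that baseline can be sketched directly. The code below is an illustrative simulation of the behavior described above, with assumed names; it is not Schmidt's hardware.

```python
import random

def random_walk_run(n_lamps=9, n_moves=128):
    """Simulate one unattended run of Schmidt's binary lamp machine.

    Each generated 'head' steps the lit lamp one place clockwise,
    each 'tail' one place anticlockwise, around a ring of n_lamps.
    """
    position = 0                   # top lamp lit at the start
    heads = tails = 0
    for _ in range(n_moves):
        if random.random() < 0.5:  # heads: step clockwise
            position = (position + 1) % n_lamps
            heads += 1
        else:                      # tails: step anticlockwise
            position = (position - 1) % n_lamps
            tails += 1
    return position, heads, tails

pos, heads, tails = random_walk_run()
```

Over many runs, heads and tails each come up about half the time, which is the 50-50 baseline the participants were asked to bias.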
Schmidt’s idea was to have his participants will the lights to take
more steps in a clockwise direction. What he was asking his
participants to do, on the most elementary level, was to get the
machine to produce more heads than tails.
In one study, Schmidt worked with two participants, an aggressive,
extroverted North American woman and a reserved male researcher in
parapsychology from South America. In preliminary tests, the North
American woman had scored consistently more heads than tails, while
the South American man had scored the reverse - more tails than
heads - even though he’d been trying for a greater number of heads.
During a larger test of more than 100 runs apiece, both kept to the
same scoring tendencies - the woman got more heads, the man more
tails. When the woman did her test, the light showed a preference
for clockwise motion
52.5 per cent of the time. But when the man concentrated, the
machine once again did the opposite of what he intended. In the end,
only 47.75 per cent of the lit lights moved in a clockwise
direction.
Schmidt knew he had come up with something important, even if he
couldn’t yet put his finger on how any known law of physics could
explain this. When he worked it out, the odds against such a large
disparity in the two scores occurring by chance were more than 10
million to one. That meant he’d have to conduct, on average, 10
million similar studies before such results would turn up by chance alone.13
Schmidt gathered together eighteen people, the most easily available
he could find. In their first studies, he found that, as with his
South American fellow, they seemed to have a reverse effect on the
machine. If they tried to make the machine move clockwise, it tended
to move in the other direction.
Schmidt was mainly interested in whether there was any effect at
all, no matter what the direction. He decided to see whether he
could set up an experiment to make it more likely that his subjects
got a negative score. If these participants ordinarily had a
negative effect, then he’d do his best to amplify it. He selected
only those participants who’d had a reverse effect on the machine.
He then created an experimental atmosphere that might encourage
failure. His participants were asked to conduct their test in a
small dark closet where they’d be huddled with the display panel.
Schmidt studiously avoided giving them the slightest bit of
encouragement. He even told them to expect that they were going to
fail.
Not surprisingly, the team had a significantly negative effect on
the RNG. More often than not, the machine moved in the direction
opposite to the one they’d intended. But the point was that the participants were having some
effect on the machine, even if it was a contrary one. Somehow,
they’d been able to shift the machines, ever so slightly, away from
their random activity; their results were 49.1 per cent against an
expected result of 50 per cent.
In statistical terms, this was a
result of major significance - odds of a thousand to one against its
having occurred by chance. Since none of his subjects knew how the RNG
worked, it was clear that whatever they were doing must have been
generated by some sort of human will.14
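Why a shift as small as 49.1 per cent can be "a thousand to one" comes down to the number of binary events involved. A standard normal-approximation z-score for an observed hit count against chance makes the point; the trial count below is hypothetical, since the text does not give Schmidt's exact totals.

```python
import math

def z_score(hits, n, p_chance=0.5):
    """Standard score of an observed hit count against a chance rate.

    Uses the normal approximation to the binomial: the expected
    number of hits is n * p_chance with standard deviation
    sqrt(n * p_chance * (1 - p_chance)).
    """
    expected = n * p_chance
    sd = math.sqrt(n * p_chance * (1 - p_chance))
    return (hits - expected) / sd

# Hypothetical illustration: at 50,000 binary events, a 49.1% hit
# rate is already about four standard deviations below chance.
n = 50_000                         # assumed trial count
z = z_score(int(n * 0.491), n)
```

The same 0.9 per cent shift would be statistically invisible in a few hundred trials; only the machine-scale volume of data turns it into a significant deviation.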
Schmidt carried on with similar studies for a number of years,
publishing in New Scientist and other journals, meeting with
like-minded people and achieving highly significant scores in his
studies - sometimes as high as 54 per cent against an expected
result of 50 per cent.15 By 1970, the year before Mitchell’s moon
walk, Boeing suffered a setback in profits and needed to cut back
sharply on staff.
Schmidt, along with hundreds of others, was one of
its casualties. Boeing had been such a key source of R&D jobs in the
area that without the aerospace giant, there was virtually no work
to be had. A sign at the border of Seattle read, ‘Will the last one
to leave Seattle please turn off the lights?’ Schmidt made his third
and final career move. He would continue on with his consciousness
research, a physicist among parapsychologists.
He relocated to
Durham, North Carolina, and sought work at Rhine’s laboratory, the
Foundation for Research on the Nature of Man, carrying on his RNG
research with Rhine himself.
A few years later, word of Schmidt’s machines filtered through to
Princeton University and came to the attention of a young university
student in the school of engineering. She was an undergraduate, a
sophomore, studying electrical engineering, and something about the
idea of mind being able to influence a machine held a certain
romantic appeal. In 1976, she decided to approach the dean of the
engineering school about the possibility of replicating Helmut
Schmidt’s RNG studies as a special project.16
Robert Jahn was a tolerant man. When campus unrest had erupted at
Princeton, as it did at most universities across America in response
to the escalation of the Vietnam War, Jahn, then a professor of
engineering, had found himself an unwitting apologist for high
technology, at a point when it was being blamed for America’s stark
polarization.
Jahn had argued persuasively to the Princeton student
body that technology actually offered the solution to this
divisiveness. His conciliatory line not only had settled down the
campus unrest but also had helped to create an accepting atmosphere
for students with technical interests at what was essentially a
liberal arts university. Jahn’s skill at diplomacy may have been one
reason he’d been asked to serve as dean in 1971.
Now his famous tolerance was being stretched nearly to its limit.
Jahn was an applied physicist who had invested his entire life in
the teaching and development of technology. All of his own degrees
came from Princeton, and his work in advanced space propulsion
systems and high temperature plasma dynamics had won him his current
distinguished position.
He’d returned to Princeton in the early 1960s with the mission of
introducing electric propulsion to the aeronautical engineering
department.
The project he was now being asked to supervise
essentially belonged to the category of psychic phenomena. Jahn
wasn’t convinced it was a viable topic, but the sophomore was a
brilliant student, already on a fast track through her program, and
he eventually relented.
He agreed to subsidize a summer
project for her out of his discretionary funds. Her task was to
research the existing scientific literature on RNG studies and other
forms of psychokinesis and to carry out a few preliminary
experiments. If she could convince Jahn that the field held some
credibility and, more importantly, could be approached from a
technical perspective, he told her, then he’d agree to supervise her
independent work.
Jahn tried to approach the topic as an open-minded scholar might.
Over the summer, his student would leave photocopies of technical
papers on his desk and even managed to coax him into accompanying
her to a meeting of the Parapsychological Association. He tried to
get a feel for the people involved in studying what had always been
dismissed as a fringe science.
Jahn rather hoped that the entire
subject would go away. Much as he was amused by the project,
particularly by the notion that he somehow might have the power to
influence all the complicated array of equipment around him, he knew
that this was something, in the long run, that might mean trouble
for him, particularly among his fellow faculty members.
How would he
ever explain it as a serious topic of study?
Jahn’s student kept returning with more convincing proof that this
phenomenon existed. There was no doubt that the people involved in
the studies and the research itself had a certain credibility. He
agreed to supervise a two-year project for her, and when she began
returning with her own successful results, he found himself making
suggestions and trying to refine the equipment.
By the second year of the student’s project, Jahn himself began
dabbling in his own RNG experiments. It was beginning to look as
though there might be something interesting here. The student
graduated and left her RNG work behind, an intriguing thought
experiment, and no more, the results of which had satisfied her
curiosity.
Now it was time to get serious and return to the more
traditional line she’d originally chosen for herself. She embarked
on what would turn out to be a lucrative career in conventional
computer science, leaving in her wake a body of tantalizing data and
also a bomb across Bob Jahn’s path that would change the course of
his life forever.
Jahn respected many of the investigators into consciousness
research, but privately he felt that they were going about it the
wrong way. Work like Rhine’s, no matter how scientific, tended to be
placed under the general umbrella of parapsychology, which was
largely dismissed by the scientific establishment as the province of
confidence tricksters and magicians. Clearly what was needed was a
highly sophisticated, solidly based research program, which would
give the studies a more temperate and scholarly framework. Jahn,
like Schmidt, realized the enormous implications of these
experiments.
Ever since Descartes had postulated that mind was
isolated and distinct from the body, all the various disciplines of
science had made a clear distinction between mind and matter.
The
experiments with Schmidt’s machines seemed to be suggesting that
this separation simply didn’t exist. The work that Jahn was about to
embark on represented far more than resolving the question of
whether human beings had the power to affect inanimate objects,
whether dice, spoons or microprocessors. This was study into the very
nature of reality and the nature of living consciousness. This was
science at its most wondrous and elemental.
Schmidt had taken great care to find special people with exceptional
abilities who might be able to get especially good results.
Schmidt’s was a protocol of the extraordinary - abnormal feats
performed by abnormal people with a peculiar gift. Jahn believed
that this approach further marginalized the topic. The more
interesting question, in his mind, was whether this was a capacity
present in every human being.
He also wondered what impact this might have on our everyday lives.
From his position as dean of an engineering school in the 1970s,
Jahn realized that the world stood poised on the brink of a major
computer revolution. Microprocessor technology was becoming
increasingly sensitive and vulnerable. If it were true that living
consciousness could influence such sensitive equipment, this in
itself would have a major impact on how the equipment operated.
The
tiniest disturbances in a quantum process could create significant
deviations from established behavior, the slightest movement sending
it soaring in a completely different direction.
Jahn knew that he was in a position to make a unique contribution.
If this research were grounded in traditional science backed by a
prestigious university, the entire topic might be aired in a more
scholarly way.
He made plans for setting up a small program, and gave it a neutral
name: Princeton Engineering Anomalies Research, which would
thereafter always be known as PEAR. Jahn also resolved to take a
low-key and lone-wolf approach by deliberately distancing himself
from the various parapsychological associations and studiously
avoiding any publicity.
Before long, private funding began rolling in, launching a precedent
that Jahn would follow thereafter of never taking a dime of the
University’s money for his PEAR work.
Largely because of Jahn’s
reputation, Princeton tolerated PEAR like a patient parent with a
precocious but unruly child. He was offered a tiny cluster of rooms
in the basement of the engineering school, which was to exist as its
own little universe within one of the more conservative disciplines
on this American Ivy League campus.
As Jahn began considering what he might need to get a program of
this size off the ground, he made contact with many of the other new
explorers in frontier physics and consciousness studies. In the
process, he met and hired Brenda Dunne, a developmental psychologist
at the University of Chicago, who had conducted and validated a
number of experiments in clairvoyance.
In Dunne, Jahn had deliberately chosen a counterpoint to himself,
which was obvious at first sight by their gaping physical
differences. Jahn was spare and gaunt, often neatly turned out in a
tidy checked shirt and casual trousers, the informal uniform of
conservative academia, and in both his manner and his erudite speech
gave off a sense of containment - never a superfluous word or
unnecessary gesture.
Dunne had the more effusive personal style. She
was often draped in flowing clothes, her immense mane of
salt-and-pepper hair hung loose or pony-tailed like a Native
American.
Although also a seasoned scientist, Dunne tended to lead
from the instinctive. Her job was to provide the more metaphysical
and subjective understanding of the material to bolster Jahn’s
largely analytical approach. He would design the machines; she would
design the look and feel of the experiments. He would represent
PEAR’s face to the world; she would represent a less formidable face
to its participants.
The first task, in Jahn’s mind, was to improve upon the RNG
technology. Jahn decided that his Random Event Generators, or REGs
(hard ‘G’), as they came to be called, should be driven by an
electronic noise source, rather than atomic decay. The random output
of these machines was controlled by something akin to the white
noise you hear when the dial of your radio is between stations - a
tiny roaring surf of free electrons.
This provided a mechanism to
send out a randomly alternating string of positive and negative
pulses.
The results were displayed on a computer screen and then
transmitted on-line to a data management system. A number of
failsafe features, such as voltage and thermal monitors, guarded
against tampering or breakdown, and they were checked religiously to
ensure that when not involved in experiments of volition, they were
producing each of their two possibilities, 1 or 0, more or less 50
per cent of the time.
All the hardware failsafe devices guaranteed that any deviation from
the normal 50-50 chance heads and tails would not be due to any
electronic glitches, but purely the result of some information or
influence acting upon it. Even the most minute effects could be
quickly quantified by the computer. Jahn also souped up the
hardware, getting it to work far faster.
By the time he was
finished, it occurred to him that in a single afternoon he could
collect more data than Rhine had amassed in his entire lifetime.
Dunne and Jahn also refined the scientific protocol. They decided
that all their REG studies should follow the same design: each
participant sitting in front of the machine would undergo three
tests of equal length. In the first, they would will the machine to
produce more 1s than 0s (or ‘HI’s, as PEAR researchers put it). In
the second, they would mentally direct the machine to produce more
0s than 1s (more ‘LO’s). In the third, they would attempt not to
influence the machine in any way.
This three-stage process was to
guard against any bias in the equipment. The machine would then
record the operator’s decisions virtually simultaneously.
When a participant pressed a button, he would set off a trial of 200
binary ‘hits’ of 1 or 0, lasting about one-fifth of a second, during
which time he would hold his mental intention (to produce, say, more
than the 100 ‘1’s expected by chance).
Usually the PEAR team would
ask each operator to carry out a run of 50 trials at one go, a
process that might only take half an hour but which would produce
10,000 hits of 1 or 0. Dunne and Jahn typically examined each
operator’s scores in blocks of 50 or 100 runs (2,500 to 5,000 trials,
or 500,000 to one million binary ‘hits’) - the minimum chunk of data,
they determined, for reliably pinpointing trends.17
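The trial arithmetic described above (200 binary hits per button press, 50 trials to a run) can be sketched in a few lines. The simulation below is purely illustrative: a software pseudo-random generator stands in for the REG's electronic noise source, and the names are my own.

```python
import random

random.seed(1)  # reproducible illustration only

BITS_PER_TRIAL = 200  # one button press yields 200 binary 'hits'
TRIALS_PER_RUN = 50   # a typical run carried out at one sitting

def trial_score():
    """Count of 1s in one trial; chance expectation is 100."""
    return sum(random.getrandbits(1) for _ in range(BITS_PER_TRIAL))

run = [trial_score() for _ in range(TRIALS_PER_RUN)]
total_hits = BITS_PER_TRIAL * TRIALS_PER_RUN
print(total_hits)  # 10000 binary hits per run, as described above
```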
From the outset it was clear that they needed a sophisticated method
of analyzing their results. Schmidt had simply counted up the number
of hits and compared them to chance. Jahn and Dunne decided to use a
tried-and-tested method in statistics called cumulative deviation,
which
the field
entailed continually adding up your deviation from the chance score
- 100 - for each trial and averaging it, and then plotting it on a graph.
The graph would show the mean, or average, and certain standard
deviations - margins where results deviate from the mean but are
still not considered significant.
In trials of 200 binary hits
occurring randomly, your machine should throw an average of 100
heads and 100 tails over time - so your bell curve will have 100 as
its mean, represented by a vertical line dropped from its highest
point. If you were to plot each result every time your
machine conducted a trial, you would have individual points on your
bell curve - 101, 103, 95, 104 - representing each score.
Because
any single effect is so tiny, it is difficult, plotted that way, to
see any overall trend. But if you keep adding up and averaging your
results and are having any effect, no matter how slight, your
scores will show a steadily increasing departure from
expectation.
Cumulative averaging shows off any deviation in bold
relief.18
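The cumulative-deviation method just described is simple to sketch. This is a hypothetical illustration with simulated trials, not PEAR's actual analysis code:

```python
import random

random.seed(0)  # reproducible illustration

def trial_score():
    """One simulated trial of 200 binary hits; chance mean is 100."""
    return sum(random.getrandbits(1) for _ in range(200))

scores = [trial_score() for _ in range(1000)]

# Running sum of each trial's deviation from the chance mean (100),
# averaged over the number of trials so far.
cumulative = []
running = 0.0
for n, score in enumerate(scores, start=1):
    running += score - 100
    cumulative.append(running / n)

# A truly random source hovers near zero; a persistent bias, however
# slight, shows up as a steady drift away from zero on the plot.
print(round(cumulative[-1], 3))
```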
It was also clear to Jahn and Dunne that they needed a vast amount
of data. Statistical glitches can occur even with a pool of data as
large as 25,000 trials. If you are looking at a binary chance event
like coin tossing, in statistical terms you should be throwing heads
or tails roughly half the time. Say you decided to toss a coin 200
times and came up with 102 heads. Given the small numbers involved,
your slight favouring of heads would still be considered
statistically well within the laws of chance.
But if you tossed that same coin 2 million times, and you came up
with 1,020,000 heads, this would suddenly represent a huge deviation
from chance. With tiny effects like the REG tests, it is not
individual or small clusters of studies but the combining of vast
amounts of data which ‘compounds’ to statistical significance, by
its increasing departure from expectation.19
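The coin-tossing arithmetic above can be checked directly. A standard score (z) measures how many standard deviations an observed head count lies from the fair-coin expectation; the same 51 per cent rate that is unremarkable over 200 tosses becomes an enormous deviation over 2 million:

```python
import math

def z_score(heads, tosses, p=0.5):
    """Standard score of a head count under a fair-coin (binomial) model."""
    mean = tosses * p
    sd = math.sqrt(tosses * p * (1 - p))
    return (heads - mean) / sd

print(round(z_score(102, 200), 2))              # ~0.28: well within chance
print(round(z_score(1_020_000, 2_000_000), 1))  # ~28.3: far beyond chance
```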
After their first 5,000 studies, Jahn and Dunne decided to pull up
the data and compute what was happening thus far. It was a Sunday
evening and they were at Bob Jahn’s house. They took their average
results for each operator and began plotting them on a graph, using
little red dots for any time their operators had attempted to
influence the machine to have a HI (heads) and little green dots for
the LO intentions (tails).
When they finished, they examined what they had. If there had been
no deviation from chance, the two bell curves would be sitting right
on top of the bell curve of chance, with 100 as the mean.
Their results were nothing like that. The two types of intention had
each gone in a different direction. The red bell curve, representing
the ‘HI’ intentions, had shifted to the right of the chance average,
and the green bell curve had shifted to the left. This was as
rigorous a scientific study as they come, and yet somehow their
participants - all ordinary people, no psychic superstars among them
- had been able to affect the random movement of machines simply by
an act of will.
Jahn looked up from the data, sat back in his chair and met Brenda’s
eye.
‘That’s very nice,’ he said.
Dunne stared at him in disbelief.
With scientific rigor and
technological precision they had just generated proof of ideas that
were formerly the province of mystical experience or the most
outlandish science fiction. They’d proved something revolutionary
about human consciousness. Maybe one day this work would herald a
refinement of quantum physics.
Indeed, what they had in their hands
was beyond current science - was perhaps the beginnings of a new
science.
‘What do you mean, “that’s very nice”?’ she replied. ‘This is
absolutely... incredible!’
Even Bob Jahn, in his cautious and deliberate manner, his dislike of
being immoderate or waving a fist in the air, had to admit, staring
at the graphs sprawled across his dining-room table, that there were
no words in his current scientific vocabulary to explain them.
It was Brenda who first suggested that they make the machines more
engaging and the environment more cosy in order to encourage the
‘resonance’ which appeared to be occurring between participants and
their machines.
Jahn began creating a host of ingenious random
mechanical, optical and electronic devices - a swinging pendulum; a
spouting water fountain; computer screens which switched attractive
images at random; a moveable REG which skittered randomly back and
forth across a table; and the jewel in the PEAR lab’s crown, a
random mechanical cascade.
At rest it appeared like a giant pinball
machine attached to the wall, a 6-by-10-foot framed set of 330 pegs.
When activated, nine thousand polystyrene balls tumbled over the
pegs in the span of only 12 minutes and stacked in one of nineteen
collection bins, eventually producing a configuration resembling a
bell-shaped curve. Brenda put a toy frog on the moveable REGs and
spent time selecting attractive computer images, so that
participants would be ‘rewarded’ if they chose a certain image by
seeing more of it.
They put up wood paneling. They
began a collection of teddy bears. They offered participants snacks
and breaks.
Year in and year out, Jahn and Dunne carried on the tedious process
of collecting a mountain of data - which would eventually turn into
the largest database ever assembled of studies into remote
intention. At various points, they would stop to analyze all they
had amassed thus far. In one 12-year period of nearly 2.5 million
trials, it turned out that 52 per cent of all the trials were in the
intended direction and nearly two-thirds of the ninety-one operators
had overall success in influencing the machines the way they’d
intended. This was true, no matter which type of machine was used.20
Nothing else - whether it was the way a participant looked at a
machine, the strength of their concentration, the lighting, the
background noise or even the presence of other people - seemed to
make any difference to the results. So long as the participant
willed the machine to register heads or tails, he or she had some
influence on it a significant percentage of the time.
The results with different individuals would vary (some would
produce more heads than tails, even when they had concentrated on
the exact opposite). Nevertheless, many operators had their own
‘signature’ outcome - Peter would tend to produce more heads than tails, and Paul vice
versa.21
Results also tended to be unique to the individual
operator, no matter what the machine. This indicated that the
process was universal, not one occurring with only certain
interactions or individuals.
In 1987, Roger Nelson of the PEAR team and Dean Radin, both doctors
of psychology, combined all the REG experiments - more than 800 -
that had been conducted up to that time.22
A pooling together of the
results of the individual studies of sixty-eight investigators,
including Schmidt and the PEAR team, showed that participants could
affect the machines to give the desired result about 51 per
cent of the time, against an expected result of 50 per cent. These
results were similar to those of two earlier reviews and an overview
of many of the experiments performed on dice.23
Schmidt’s results
remained the most dramatic: some of his studies had leapt to 54
per cent.24
Although 51 or 54 per cent doesn’t sound like much of an effect,
statistically speaking it’s a giant step. If you combine all the
studies into what is called a ‘meta-analysis’, as Radin and Nelson
did, the odds of this overall score occurring are a trillion to
one.25
In their meta-analysis, Radin and Nelson even took account of
the most frequent criticisms of the REG studies concerning
procedures, data or equipment by setting up sixteen criteria by
which to judge each experimenter’s overall data and then assigning
each experiment a quality score.26 A more recent meta-analysis of
the REG data from 1959 to 2000 showed a similar result.27
The US
National Research Council also concluded that the REG trials could
not be explained by chance.28
An effect size is a figure which reflects the actual size of change
or outcome in a study. It is arrived at by factoring in such
variables as the number of participants and the length of the test.
In some drug studies, it is arrived at by dividing the number of
people who have had a positive effect from the drug by the total
number of participants in the trial. The overall effect size of the
PEAR database was 0.2 per hour.29
Usually an effect size between 0.0
and 0.3 is considered small, a 0.3 to 0.6 effect size is medium and
anything above that is considered large. The PEAR effect sizes are
considered small and the overall REG studies, small to medium.
However, these effect sizes are far larger than those of many drugs
deemed to be highly successful in medicine.
Numerous studies have shown that propranolol and aspirin are highly
successful in reducing heart attacks. Aspirin in particular has been
hailed as a great white hope of heart disease prevention.
Nevertheless, large studies have shown that the effect sizes of
propranolol and aspirin are just 0.04 and 0.03, respectively - about ten
times smaller than the effect sizes of the PEAR data. One method of
determining the magnitude of effect sizes is to convert the figure
to the number of persons surviving in a sample of 100 people.
An
effect size of 0.03 in a medical life-or-death situation would mean
that three additional people out of one hundred survived, and an
effect size of 0.3 would mean that an additional thirty of one
hundred survived.30
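Under the rough conversion just described, the arithmetic is a one-liner. The function below is only a sketch of that rule of thumb (the name is my own), not a formal statistical conversion:

```python
def additional_survivors(effect_size, sample=100):
    """Extra survivors per `sample` people, per the rule of thumb above."""
    return round(effect_size * sample)

print(additional_survivors(0.03))  # 3  (aspirin-scale effect)
print(additional_survivors(0.3))   # 30 (close to PEAR's hourly effect)
```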
To give some hypothetical idea of the magnitude of the difference,
say that with a certain type of heart operation, thirty patients out
of a hundred usually survive. Now, say that patients undergoing this
operation are given a new drug with an effect size of 0.3 - close to
the size of the hourly PEAR effect. Offering the drug on top of the
operation would virtually double the survival rate.
An additional
effect size of 0.3 would turn a medical treatment that had been
life-saving less than half the time into one that worked in the
majority of cases.31
Other investigators using REG machines discovered that it was not
simply humans who had this influence over the physical world. Using
a variation of Jahn’s REG machines, a French scientist named René
Peoc’h also carried out an ingenious experiment with baby chicks.
As
soon as the chicks
hatched, a moveable REG was ‘imprinted’ on them as their ‘mother’.
The robot was then placed outside the chicks’ cage and allowed to
move about freely, as Peoc’h tracked its path.
After a time, the
evidence was clear - the robot was moving toward the chicks more
than it would do if it were wandering randomly. The desire of the
chicks to be near their mother was an ‘inferred intention’ that
appeared to be having an effect in drawing the machine nearer.32
Peoc’h carried out a similar study with baby rabbits. He placed a
bright light on the moveable REG that the baby rabbits found
abhorrent. When the data from the experiment were analyzed, it
appeared that the rabbits were successfully willing the machine to
stay away from them.
Jahn and Dunne began to formulate a theory. If reality resulted from
some elaborate interaction of consciousness with its environment,
then consciousness, like subatomic particles of matter, might also
be based on a system of probabilities.
One of the central tenets of
quantum physics, first proposed by Louis de Broglie, is that
subatomic entities can behave either as particles (precise things
with a set location in space) or waves (diffuse and unbounded
regions of influence which can flow through and interfere with other
waves). They began to chew over the idea that consciousness had a
similar duality. Each individual consciousness had its own
‘particulate’ separateness, but was also capable of ‘wave-like’
behavior, in which it could flow through any barriers or distance,
to exchange information and interact with the physical world.
At
certain times, subatomic consciousness would get in resonance with -
beat at the same frequency as - certain subatomic matter. In the
model they began to assemble, consciousness ‘atoms’ combined with
ordinary atoms - those, say, of the REG machine - and created a
‘consciousness molecule’ in which the whole was different from its
component parts.
The original atoms would each surrender their
individual identities to a single larger, more complex entity. On the
most basic level, their theory was saying, you and your REG machine
develop coherence.33
Certainly some of their results seemed to favor this interpretation.
Jahn and Dunne had wondered if the tiny effect they were observing
with individuals would get any larger if two or more people tried to
influence the machine in tandem. The PEAR lab ran a series of
studies using pairs of people, in which each pair was to act in
concert when attempting to influence the machines.
In 256,500 trials produced by fifteen pairs over forty-two
experimental series, many pairs also produced a ‘signature’ result,
which didn’t necessarily resemble the effect of either individual
alone.34
Being of the same sex tended to have a very slight negative
effect. These types of couples had a worse outcome than they
achieved individually; with eight pairs of operators the results
were the very opposite of what was intended. Couples of the opposite
sex, all of whom knew each other, had a powerful complementary
effect, producing more than three and a half times the effect of
individuals. However, ‘bonded’ pairs, those couples in a
relationship, had the most profound effect, which was nearly six
times as strong as that of single operators.35
If these effects depended upon some sort of resonance between the
two participating consciousnesses, it would make sense that stronger
effects would occur among those people sharing identities, such as
siblings, twins or couples in a relationship.36
Being close may
create coherence. Just as two waves in phase amplify a signal, it may
be that a bonded couple has an especially powerful resonance, which
would enhance their joint effect on the machine.
A few years later, Dunne analyzed the database to see if results
differed according to gender. When she divided results between men
and women, she found that men on the whole were better at getting
the machine to do what they wanted it to do, although their overall
effect was weaker than it was with women. Women, on the whole, had a
stronger effect on the machine, but not necessarily in the direction
they’d intended.37
After examining 270 databases produced by 135
operators in nine experiments between 1979 and 1993, Dunne found
that men had equal success in making the machine do what they
wanted, whether heads or tails (or HIs and LOs). Women, on the other
hand, were successful in influencing the machine to record heads (HIs),
but not tails (LOs). In fact, most of their attempts to get the
machine to do tails failed. Although the machine would vary from
chance, it would be in the very opposite direction of what they’d
intended.38
At times, women produced better results when they weren’t
concentrating strictly on the machine, but were doing other things
as well, whereas strict concentration seemed important for men’s
success.39
This may provide some subatomic evidence that women are
better at multitasking than men, while men are better at
concentrated focus. It may well be that in microscopic ways men have
a more direct impact on their world, while women’s effects are more
profound.
Then something happened which forced Jahn and Dunne to reconsider
their hypothesis about the nature of the effects they were
observing.
In 1992, PEAR had banded together with the University of
Giessen and the Freiburg Institute to create the Mind-Machine
Consortium. The consortium's first task was to replicate the
original PEAR data, which everyone assumed would proceed as a matter
of course. Once the results of all three laboratories were examined,
however, they looked, at first glance, like a failure - little better
than the 50-50 odds which occur by chance alone.40
When writing up the results, Jahn and Dunne noticed some odd
distortions in the data. Something interesting had occurred in the
secondary variables. In statistical graphs, you can show not only
what your average ought to be but also how far the deviations from
it ought to spread from your mean.
With the Mind-Machine data, the
mean was right where it would be with a chance result, but not much
else was. The size of the variation was too big, and the shape of
the bell curve was disproportionate. Overall, the distribution was
far more skewed than it would be if it were just a chance result.
Something strange was going on.
When Jahn and Dunne looked a little closer at the data, the most
obvious problem had to do with feedback.
Up until that time they’d
operated on the assumption that providing immediate feedback -
telling the operators how they were doing in influencing the machine
- and making an attractive display or a machine that people could
really engage with would crucially help to produce good results.
This would hook the operator into the process and help them to get
in ‘resonance’ with the device. For the mental world to interact
with the physical world, they’d thought, the interface - an
attractive display - was crucial in breaching that divide.
However, in the Consortium data, they realized that the operators
were doing just as well - or sometimes better - when they had no
feedback.
One of their other studies, called ArtREG, had also failed to get
significant overall results.41 They decided to examine that study a
bit more closely in light of the Mind-Machine Consortium results.
They’d used engaging images on a computer, which randomly switched
back and forth - in one case a Navajo sand painting switched with Anubis, the
ancient Egyptian judge of the dead. The idea was for their operators
to will the machine to show more of one than the other. The PEAR
team had assumed once again that an attractive image would act as a
carrot - you’d be ‘rewarded’ for your intention by seeing more of
the image you preferred.
Once they’d examined the data of the study in terms of yield by
picture, those images which had produced the most successful
outcomes all fell into a similar category: the archetypal, the
ritualistic or the religiously iconographic. This was the domain of
dreams, the unexpressed or unarticulated - images that, by their
very design, were intended to engage the unconscious.
If that were true, the intention was coming from deep in the
unconscious mind, and this may have been the cause of the effects.
Jahn and Dunne realized what was wrong with their assumptions. Using
devices to make the participant function on a conscious level might
be acting as a barrier. Instead of increasing conscious awareness
among their operators, they should be diminishing it.42
This realization caused them to refine their ideas about how the
effects they’d observed in their labs might occur. Jahn liked to
call it his ‘work in progress’. It appeared that the unconscious
mind somehow had the capability of communicating with the
subtangible physical world - the quantum world of all possibility.
This marriage of unformed mind and matter would then assemble itself
into something tangible in the manifest world.43
This model makes perfect sense if it also embraces theories of the
Zero Point Field and quantum biology proposed by Pribram, Popp and
the others. Both the unconscious mind - a world before thought and
conscious intention - and the ‘unconscious’ of matter - the Zero
Point Field - exist in a probabilistic state of all possibility.
The
subconscious mind is a pre-conceptual substrate from which concepts
emerge, and the Zero Point Field is a probabilistic substrate of the
physical world. It is mind and matter at their most fundamental. In
this subtangible dimension, possibly of a common origin, it would
make sense that there would be a greater likelihood of quantum
interaction.
At times, Jahn kicked around the most radical idea of all. When you
get down far enough into the quantum world, there may be no
distinction between the mental and the physical. There may be only
the concept. It might just be consciousness attempting to make sense
of a blizzard of information. There might not be two intangible
worlds.
There might be only one - The Field and the ability of
matter to organize itself coherently.44
As Pribram and Hameroff theorized, consciousness results from
super-radiance, a rippling cascade of subatomic coherence - when
individual quantum particles such as photons lose their
individuality and begin acting as a single unit, like an army
calling every soldier into line. Since every motion of every charged
particle of every biological process is mirrored in
the Zero Point Field, our coherence extends out in the world.
According to the laws of classical physics, particularly the law of
entropy, the movement of the inanimate world is always toward chaos
and disorder. However, the coherence of consciousness represents the
greatest form of order known to nature, and the PEAR studies suggest
that this order may help to shape and create order in the world.
When we wish for something or intend something, an act which
requires a great deal of unity of thought, our own coherence may be,
in a sense, infectious.
On the most profound level, the PEAR studies also suggest that
reality is created by each of us only by our attention. At the
lowest level of mind and matter, each of us creates the world.
The effects that Jahn had been able to record were almost
imperceptible. It was too early to know why. Either the machinery
was still too crude to pick up the effect or he was only picking up
a single signal, when the real effect occurs from an ocean of
signals - an interaction of all living things in the Zero Point
Field.
The difference between his own results and the higher ones
recorded by Schmidt suggested that this ability was spread across
the population, but that it was like artistic ability. Certain
individuals were more skillful at harnessing it.
Jahn had seen that this process had minute effects on probabilistic
processes, and that this might explain all the well-known stories
about people having positive or negative effects on machines - why,
on some bad days, computers, telephones and photocopiers
malfunction.
It might even explain the problems Benveniste had been
having with his robot.
It seemed that we had an ability to extend our own coherence out
into our environment. By a simple act of wishing, we could create
order. This represented an almost unimaginable amount of power. On
the crudest level, Jahn had proved that, at least on the subatomic
level, there was such as thing as mind over matter. But he’d
demonstrated something even more fundamental about the powerful
nature of human intention.
The REG data offered a tiny window into
the very essence of human creativity - its capacity to create, to
organize, even to heal.45 Jahn had his evidence that human
consciousness had the power to order random electronic devices.
The question now before him was what
else might be possible.