 
 
 
 
 
			CHAPTER FIVE 
 The entire point of the exercise, as far as Karl Lashley was concerned, had been to find where the engrams were - the precise location in the brain where memories were stored. 
 The name ‘engram’ had been coined decades earlier by the German zoologist Richard Semon, but it was Wilder Penfield who, from the late 1920s onward, thought he’d discovered that memories had an exact address in the brain. 
 Penfield had performed extraordinary research on epileptic patients with anaesthetized scalps while they were fully conscious, showing that if he stimulated certain parts of their brains with electrodes, specific scenes from their past could be evoked in living color and excruciating detail. 
 
 Even more amazingly, whenever he had stimulated the same spot in the brain (often unbeknownst to the patient) it seemed to elicit the same flashback, with the same level of detail. 
 
 Every last detail of our lives had been carefully encoded in specific spots in the brain, like guests at a restaurant placed at certain tables by a particularly exacting maitre d’. All we needed to find was who was sitting where - and, perhaps as a bonus, who the maitre d’ was. 
 Lashley had thought that he would be amplifying Penfield’s findings, when all he seemed to be doing was proving him wrong. He tended toward the hypercritical, and small wonder: it was as though his life’s entire oeuvre had a singularly negative purpose - to disprove all the work of his forebears. 
 
 The other gospel of the time that still held the scientific community in thrall, but which Lashley was busily disproving, was the notion that every psychological process had a measurable physical manifestation - the movement of a muscle, the secretion of a chemical. Once again, the brain was simply, fussily, the maitre d’. 
 Lashley didn’t employ aseptic technique, largely because it wasn’t considered necessary for rats. He was a crude and sloppy surgeon, by any medical standard, possibly deliberately so, sewing up wounds with a simple stitch - a perfect recipe for brain infection in larger mammals - but no cruder than most brain researchers of the day. After all, none of Ivan Pavlov’s dogs survived his brain surgery, all succumbing to brain abscesses or epilepsy.2 
 
 Lashley sought to deactivate certain portions of his rats’ brains to find which part held the precious key to specific memories. To accomplish this delicate task he chose as his surgical instrument his wife’s curling iron - a curling iron! - and simply burned off the part he wished to remove.3 
 
 Lashley became even more liberal with the curling iron, working his way from one part of the brain to the next, but still it didn’t seem to have any effect on the rats’ ability to remember. Even when he’d injured the vast majority of the brains of individual rats - and a curling iron caused much more damage to the brain than any clean surgical cut - their motor skills might be impaired, and they might stagger disjointedly along, but the rats always remembered the routine. 
 The rats had confirmed what he had long suspected. In his 1929 monograph Brain Mechanisms and Intelligence, a small work that had first gained him notoriety with its radical notions, Lashley had already elucidated his view that cortical function appeared to be equally potent everywhere.4 
 As he would later point out, the necessary conclusion from all his experimental work was inescapable: 
 
 When it came to cognition, for all intents and purposes, the brain was a mush.6 
 
 Pribram had bought Lashley’s monograph for ten cents secondhand, and when he first arrived in Florida, he hadn’t been shy about challenging it with the same fervor Lashley had reserved for many of his peers. Lashley had been stimulated by his bright upstart apprentice, whom he would eventually regard as the closest thing he ever had to a son. 
 His intention was to study the functions of the frontal cortex of monkeys, in an attempt to understand the effects of frontal lobotomies being performed on thousands of patients at the time. Teaching and carrying out research appealed to him far more than the lucrative life of a neurosurgeon; at one point some years later he would turn down a $100,000 salary at New York’s Mt Sinai for the relatively impoverished salary of a professor. 
 Like Edgar Mitchell, Pribram always thought of himself as an explorer, rather than a doctor or healer; as an eight-year-old he’d read over and over - at least a dozen times - the exploits of Admiral Byrd in flying over the North Pole. America itself represented a new frontier to conquer for the boy, who’d arrived at that age from Vienna. 
 Pribram was the son of a famous biologist who’d relocated his family to the US in 1927 because he’d felt that Europe, war-torn and impoverished after the First World War, was no place to raise a child. 
 
 As an adult, possibly because he’d been so slight of build and not really the stuff of hearty physical exploration (in later life he’d resemble an elfin version of Albert Einstein, with the same majestic drapery of white shoulder-length hair), Karl chose the human brain as his exploratory terrain. 
 
 He would set up his own experiments on monkeys and cats, painstakingly carrying out systems studies to work out what part of the brain does what. His laboratory was among the first to identify the location of cognitive processes, emotion and motivation, and he was extraordinarily successful. His experiments clearly showed that all these functions had a specific address in the brain - a finding that Lashley was hard-pressed to believe. 
 It was true that parts of the brain performed specific functions, but the actual processing of the information seemed to be carried out by something more basic than particular neurons - certainly something that was not particular to any group of cells. 
 
 For instance, storage appeared to be distributed throughout a specific region and sometimes beyond it. But through what mechanism was this possible? 
 If this were true, the electrical activity in the visual cortex should mirror precisely what is being viewed - and this is true to some extent at a very gross level. But in a number of experiments, Lashley had discovered that you could sever virtually all of a cat’s optic nerve without apparently interfering whatsoever with its ability to see what it was doing. 
 
 To his astonishment, the cat apparently continued to see every detail, since it was able to carry out complicated visual tasks. If there were something like an internal movie screen, it was as though the experimenters had just demolished all but a few inches of the projector, and yet all of the movie was as clear as it had been before.8 
 This result convinced Pribram that control was being formulated and sent down from higher areas in the brain to the more primary receiving stations. 
 
 This must mean that something far more complicated was happening than what was widely believed at the time - that we see and respond to outside stimuli through a simple tunnel flow of information, which flows in from our sense organs to the brain and out from the brain to our muscles and glands.9 
 In another study, this time of newborn cats, which had been given contact lenses with either vertical or horizontal stripes, Pribram’s associates found that the behavior of the horizontally oriented cats wasn’t markedly different from that of the vertically oriented ones, even though their brain cells were now oriented either horizontally or vertically. 
 This meant that perception couldn’t be occurring through line detection.10 
 
 His experiments and those of others like Lashley were at odds with many of the prevailing neural theories of perception. Pribram was convinced that no images were being projected internally and that there must be some other mechanism allowing us to perceive the world as we do.11 
 The problem was that the old notions about electrical ‘image’ formation in the brain - the supposed correspondence between images in the world and the brain’s electrical firing - had been disproved by Pribram, and his own monkey studies made him extremely dubious about the latest, most popular theory of perception - that we know the world through line detectors. 
 Just to focus on a face would require a huge new computation by the brain any time you moved a few inches away from it. 
 Hilgard kept pressing him. Pribram hadn’t a clue as to what kind of theory he could give his friend, and he kept racking his brain to offer up some positive angle. Then one of his colleagues chanced across an article in Scientific American by Sir John Eccles, the noted Australian physiologist, who postulated that imagination might have something to do with microwaves in the brain. 
 
 Just a week later, another article appeared, written by Emmett Leith, an engineer at the University of Michigan, about split laser beams and optical holography, a new technology.12 
 Lashley himself had formulated a theory of wave interference patterns in the brain but abandoned it because he couldn’t envision how they could be generated in the cortex.13 
 Eccles’ ideas appeared to solve that problem. Pribram now thought that the brain must somehow ‘read’ information by transforming ordinary images into wave interference patterns and then transforming them again into virtual images, just as a laser hologram does. The other mystery solved by the holographic metaphor would be memory. 
 
 Rather than being precisely located anywhere, memory would be distributed everywhere, so that each part contained the whole. 
 Gabor, the first engineer to win the Nobel prize in physics, had been working through the mathematics of light rays and wavelengths. In the process he’d discovered that if you split a light beam, photograph objects with it and store this information as wave interference patterns, you could get a better image of the whole than you could with the flat two dimensions you get by recording point-to-point intensity, the method used in ordinary photography. 
 For his mathematical calculations, Gabor had used a series of calculus equations called Fourier transforms, named after the French mathematician Joseph Fourier, who’d developed the technique early in the nineteenth century. Fourier first began work on his system of analysis, which has gone on to become an essential tool of modern-day mathematics and computing, when working out, at Napoleon’s request, the optimum interval between shots of a cannon so that the barrel wouldn’t overheat. 
 Fourier’s method was eventually found to be able to break down patterns of any complexity and describe them precisely in a mathematical language of relationships between waves. 
 Any optical image could be converted into the mathematical equivalent of interference patterns, the information that results when waves superimpose on each other. In this technique, you also transfer something that exists in time and space into ‘the spectral domain’ - a kind of timeless, spaceless shorthand for the relationship between waves, measured as energy. 
 
 The other neat trick of the equations is that you can also use them in reverse, to take these components representing the interactions of waves - their frequency, amplitude and phase - and use them to reconstruct any image.14 
 
 There were numerous fine points to be worked out in the laboratory; the theory wasn’t complete. But they were convinced of one thing: perception occurred as a result of a complex reading and transforming of information at a different level of reality. 
 
 They are then reunited and captured on a piece of photographic film. The result on the plate - which represents the interference pattern of these waves - resembles nothing more than a set of squiggles or concentric circles. 
 The mechanism by which this works has to do with the properties of waves that enable them to encode information, and also with the special quality of a laser beam, which casts a pure light of only a single wavelength, acting as a perfect source for creating interference patterns. When your split beams both arrive at the photographic plate, one half provides the pattern of the light source while the other picks up the configuration of the teacup, and the two interfere. By shining the same type of light source on the film, you pick up the image that has been imprinted. 
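
 A toy simulation can make this recording-and-replay step concrete (a hedged sketch only: it uses a lensless Fourier-hologram arrangement, with an invented object and an idealized point reference standing in for the split beams, not the actual optical setup described here):

```python
import numpy as np

N = 128
scene = np.zeros((N, N))
scene[40:56, 40:48] = 1.0      # the object - our stand-in for the teacup
scene[100, 100] = 30.0         # the reference beam, idealized as a bright point

# The film records only intensity: the interference pattern of the two waves.
hologram = np.abs(np.fft.fft2(scene)) ** 2   # squiggles, not a picture

# "Shining the light back through" the record: transform the pattern again.
replay = np.abs(np.fft.ifft2(hologram))
# Inside the replay field sits a displaced copy of the object, recovered
# purely from the recorded interference pattern.
```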
 
 The other strange property of holography is that each tiny portion of the encoded information contains the whole of the image, so that if you chopped up your photographic plate into tiny pieces, and shone a laser beam on any one of them, you would get a full image of the teacup. 
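
 Continuing the same toy simulation (again just an illustrative sketch, not the original experiments), you can mimic chopping up the plate by throwing away all but one small patch of the recorded pattern; a complete, if blurrier, image still comes back:

```python
import numpy as np

N = 128
scene = np.zeros((N, N))
scene[40:56, 40:48] = 1.0                     # the object
scene[100, 100] = 30.0                        # the point reference
hologram = np.abs(np.fft.fft2(scene)) ** 2    # the full interference record

# Keep only one small "shard" of the plate and discard the rest.
shard = np.zeros_like(hologram)
shard[:32, :32] = hologram[:32, :32]

replay = np.abs(np.fft.ifft2(shard))
# A whole - though blurred and dimmer - copy of the object still appears:
# every region of the plate carries information about the entire scene.
```
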
 Here was the unique ability of quantum waves to store vast quantities of information in a totality and in three dimensions, and the possibility that our brains could read this information and from it create the world. Here, finally, was a mechanical device that seemed to replicate the way the brain actually worked: how images were formed, how they were stored and how they could be recalled or associated with something else. 
 
 Most important, it gave a clue to the biggest mystery of all for Pribram: how you could have localized tasks in the brain but process or store them throughout the larger whole. In a sense, holography is just convenient shorthand for wave interference - the language of The Field. 
 
 Pribram and some of his colleagues went on to develop his hypothesis with a mathematical model demonstrating that this same mathematics also describes the processes of the human brain. He had come up with something so radical that it was almost unthinkable - a hot, living thing like the brain functioned according to the weird world of quantum theory. 
 
 We perceive an object by ‘resonating’ with it, getting ‘in synch’ with it. To know the world is literally to be on its wavelength. 
 
 This information is then picked up by the ordinary electrochemical circuits of the brain, just as the vibrations of the strings eventually resonate through the entire piano. 
 This would mean that the art of seeing is one of transforming. In a sense, in the act of observation, we are transforming the timeless, spaceless world of interference patterns into the concrete and discrete world of space and time - the world of the very apple you see in front of you. We create space and time on the surface of our retinas. 
 
 As with a hologram, the lens of the eye picks up certain interference patterns and then converts them into three-dimensional images. It requires this type of virtual projection for you to reach out and touch an apple where it really is, not in some place inside your head. If we are projecting images all the time out in space, our image of the world is actually a virtual creation. 
 These neurons send information about these frequencies to another set of neurons. The second set of neurons makes a Fourier translation of these resonances and sends the resulting information to a third set of neurons, which then begins to construct a pattern that eventually will make up the virtual image you create of the apple out in space, on top of the fruit bowl.17 
 
 This three-fold process makes it far easier for the brain to correlate separate images - which is easily achieved when you are dealing with wave interference shorthand but extremely awkward with an actual real-life image. 
 Storing memory in wave interference patterns is remarkably efficient, and would account for the vastness of human memory. Waves can hold unimaginable quantities of data - far more than the 280 quintillion (280,000,000,000,000,000,000) bits of information which supposedly constitute the average human memory accumulated through an average lifespan.18 
 It’s been said that with holographic wave-interference patterns, all of the US Library of Congress, which contains virtually every book ever published in English, would fit onto a large sugar cube.19 
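
 To get a feel for the size of that figure (a quick unit conversion of the number quoted above, nothing more - it says nothing about whether the estimate itself is right):

```python
# 280 quintillion bits, as quoted above
bits = 280 * 10**18
num_bytes = bits / 8                  # 3.5e19 bytes
exabytes = num_bytes / 10**18         # 35 exabytes
print(f"{num_bytes:.1e} bytes, or about {exabytes:.0f} exabytes")
```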
 
 The holographic model would also account for the instant recall of memory, often as a three-dimensional image. 
 If Pribram were right, then some of the salamander’s brain could be removed, or reshuffled, and it shouldn’t affect its ordinary function. 
 But Pietsch was certain that Pribram was wrong and he was fierce in his determination to prove it so. In more than 700 experiments, Pietsch cut out scores of salamander brains. 
 Before putting them back in, he began tampering with them. In successive experiments he reversed, cut out, sliced away, shuffled and even sausage-ground his test subjects’ brains. But no matter how brutally mangled or diminished in size, once whatever was left of a brain was returned to its owner and the salamander had recovered, it went back to normal behavior. 
 
 From being a complete skeptic, Pietsch became a convert to Pribram’s view that memory is distributed throughout the brain.20 
 Russell and Karen DeValois converted simple plaid and checkerboard patterns into Fourier waves and discovered that the brain cells of cats and monkeys responded not to the patterns themselves but to the interference patterns of their component waves. 
 Countless studies, elaborated on by the DeValois team in their book Spatial Vision,21 show that numerous cells in the visual system are tuned to certain frequencies. 
 Other studies by Fergus Campbell of Cambridge University in England, as well as by a number of other laboratories, also showed that the cerebral cortex of humans may be tuned to specific frequencies.22 
 
 This would explain how we can recognize things as being the same, even when they are vastly different sizes. 
 Russell DeValois and his colleagues also showed that the receptive fields in the neurons of the cortex were tuned to a very small range of frequencies.25 In his studies of both cats and humans, Campbell at Cambridge also demonstrated that neurons in the brain responded to a limited band of frequencies.26 
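 A crude way to see what being 'tuned to a band of frequencies' means (a purely illustrative model - the windowed-sinusoid filter and the grating stimuli below are my own assumptions, not the DeValois or Campbell protocols) is to probe a single frequency-selective model cell with gratings of different spatial frequencies:

```python
import numpy as np

size = 64
x = np.arange(size) - size // 2

# A model "cortical cell": a windowed sinusoid tuned to one spatial frequency.
preferred = 0.125                                  # cycles per pixel
envelope = np.exp(-x**2 / (2 * 8.0**2))
cell = envelope * np.cos(2 * np.pi * preferred * x)

# Probe it with sinusoidal gratings of various spatial frequencies.
for freq in (0.03, 0.06, 0.125, 0.25, 0.5):
    grating = np.cos(2 * np.pi * freq * x)
    response = abs(np.dot(cell, grating))          # simple linear response
    print(f"{freq:5.3f} cycles/pixel -> response {response:6.2f}")
# The response peaks at the preferred frequency and falls away on either side.
```
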
 At one point, Pribram came across the work of the Russian Nikolai Bernstein. 
 Bernstein had made films of human subjects dressed entirely in black costumes on which white tapes and dots had been placed to mark the limbs - not unlike the classic Halloween skeleton costume. The participants were asked to dance against a black background while being filmed. When the film was processed, all that could be seen was a series of white dots moving in a continuous pattern in a wave form. Bernstein analyzed the waves. 
 To his astonishment, all the rhythmic movements could be represented as Fourier trigonometric sums - so faithfully that he found he could predict the next movements of his dancers. 
 The fact that movement could somehow be represented formally in terms of Fourier equations made Pribram realize that the brain’s conversations with the body might also be occurring in the form of waves and patterns, rather than as images.28 
 The brain somehow had the capacity to analyze movement, break it down into wave frequencies and transmit this wave-pattern shorthand to the rest of the body. This information, transmitted nonlocally, to many parts at once, would explain how we can fairly easily manage complicated global tasks involving multiple body parts, such as riding a bicycle or roller skating. It also accounts for how we can easily imitate some task. 
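
 The idea can be sketched in a few lines (an invented trajectory of my own, not Bernstein's data): fit a short Fourier trigonometric sum to one stretch of a rhythmic movement, then use the same sum to predict the stretch that follows:

```python
import numpy as np

# An invented rhythmic trajectory, e.g. the height of a dancer's wrist over time.
t = np.arange(0, 4.0, 0.01)        # four seconds sampled at 100 Hz
motion = np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.sin(2 * np.pi * 3.0 * t + 0.5)

# Fit a Fourier trigonometric sum (harmonics of a 1 Hz base) to the first two seconds.
observed = t < 2.0
columns = [np.ones_like(t)]
for k in range(1, 6):
    columns += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
design = np.column_stack(columns)
coeffs, *_ = np.linalg.lstsq(design[observed], motion[observed], rcond=None)

# Evaluate the fitted sum over the *next* two seconds: a prediction of future movement.
predicted = design[~observed] @ coeffs
error = np.max(np.abs(predicted - motion[~observed]))
print(f"worst prediction error over the unseen two seconds: {error:.2e}")
```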
 
 Pribram also came across evidence that our other senses - smell, taste and hearing - operate by analyzing frequencies.29 
 It then occurred to him that the one area of the brain where wave-interference patterns might be created was not in any particular cell, but in the spaces between them. 
 At the end of every neuron - the brain’s basic working unit - are synapses, where chemical charges build up, eventually triggering electrical firing across these spaces to other neurons. In the same spaces, dendrites - tiny filaments of nerve endings wafting back and forth, like shafts of wheat in a slow breeze - communicate with other neurons, sending out and receiving their own electrical wave impulses. 
 
 These ‘slow-wave potentials’, as they are called, flow through the glia, or glue, surrounding neurons, gently touching or even colliding with other waves. It was at this busy juncture - a ceaseless scramble of electromagnetic communications between synapses and dendrites - that wave frequencies were most likely to be picked up and analyzed, and holographic images formed, since these criss-crossing wave patterns create hundreds and thousands of wave-interference patterns all the time. 
 
 When we perceive something, it’s not due to the activity of neurons themselves but to certain patches of dendrites distributed around the brain, which, like a radio receiver, are set to resonate only at certain frequencies. It is like having a vast number of piano strings all over your head, only some of which would vibrate as a particular note is played. 
 
 His most important support came from an unlikely source: a German trying to make a medical diagnostic machine work better. 
 Kepler famously claimed in his book Harmonice Mundi that people on earth could hear the music of the stars. 
 
 At the time, Kepler’s contemporaries thought him crazy. It was four hundred years before a pair of American scientists showed that there is indeed a music of the heavens. In 1993, Hulse and Taylor landed the Nobel prize for discovering the first binary pulsar - stars which send out electromagnetic waves in pulses. Some of the most sensitive equipment in the world, the giant radio telescope at Arecibo, Puerto Rico, picks up evidence of their existence through radio waves. 
 
 Without having read Gabor, he’d worked out his own holographic theory from mathematical first principles. He’d consulted his own mathematics books to no avail, but after looking up what had been done in optical theory, he came across Gabor’s work. 
 Schempp began thinking that the same principles of wave holography might apply to magnetic resonance imaging (MRI), a medical tool for examining the soft tissues of the body that was then still in its infancy. But when he inquired about it, he soon realized that the people who’d developed and were running the machines had little idea how MRI worked. 
 
 The technology was so primitive that it was simply being used intuitively. Patients would have to sit still for four hours or more while pictures were slowly taken, by what means nobody was exactly sure. Walter was utterly dissatisfied with MRI technology as it then stood and realized that it was a relatively simple prospect to make sharper images. 
 
 He accepted a place offered at Johns Hopkins Medical School in Baltimore, Maryland, which has the best outpatient radiology department in the USA, and later trained at Massachusetts General Hospital, which is affiliated with MIT. After a fellowship in radiology in Zurich, Walter was finally able to return to Germany, where he now had the appropriate qualifications to officially lay hands on the machine. 
 To do so, you need to be able to find the nuclei of the water molecules scattered throughout the brain. Because protons spin like little magnets, locating them is often most simply accomplished by applying a magnetic field. This causes the spin to accelerate, eventually to the point where the nuclei behave like microscopic gyroscopes spinning out of control. 
 
 All this molecular manipulation makes the water molecules that much more conspicuous, enabling the MRI machine to locate them and ultimately to extract an image of the brain’s soft tissues. 
 
 Through the use of Fourier transforms applied to many slices of the body, this information is combined and eventually turned into an optical picture. 
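
 In highly simplified form (a hedged sketch of the reconstruction step only: the scanner effectively measures spatial-frequency data for each slice, and a Fourier transform turns that data into a picture; the disc-shaped 'tissue' below is invented):

```python
import numpy as np

# An invented slice of tissue: a bright disc on a dark background.
N = 128
y, x = np.mgrid[:N, :N]
true_slice = ((x - N / 2)**2 + (y - N / 2)**2 < 30**2).astype(float)

# What the scanner effectively records for this slice: its spatial-frequency
# content, simulated here by simply Fourier-transforming the slice.
frequency_data = np.fft.fft2(true_slice)

# Reconstruction: an inverse Fourier transform turns the frequency data
# back into an optical picture of the slice.
reconstructed = np.fft.ifft2(frequency_data).real
print(np.allclose(reconstructed, true_slice))   # True in this idealized case
```
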
 His improvements cut down the time required for a patient to sit still from 4 hours to 20 minutes. But he began to wonder whether the mathematics and theory of how this machine worked could be applied to biological systems. 
 He had called his theory ‘quantum holography’, because what he’d really discovered was that all sorts of information about objects, including their three-dimensional shape, is carried in the quantum fluctuations of the Zero Point Field, and that this information can be recovered and reassembled into a three-dimensional image. 
 Schempp had discovered, as Puthoff had predicted, that the Zero Point Field was a vast memory store. 
 
 Through Fourier transformation, MRI machines could take information encoded in the Zero Point Field and turn it into images. The real question he was posing went far beyond whether he could create a sharper image in MRI. What he was really trying to find out was whether his mathematical equations held the key to the human brain. 
 
 The problem was that the theory was abstract and general, and needed more mathematical rooting to make it concrete. In the early 1990s, he received a call from Walter Schempp, whose work threw a life jacket to his theory. It grounded his own work in something tidy and mathematical. 
 He also had, as Peter saw it, a machine which worked according to this process. 
 
 Like Pribram’s model of the brain, Schempp’s MRI machine underwent a staged process, combining wave-interference information taken from different views of the body and then eventually transforming it into a virtual image. MRI was experimental verification that Peter’s own quantum mechanical theory actually worked. 
 They spent several excited lunches comparing notes and decided that all three of them needed to collaborate. Walter would also correspond with Pribram, trading information. What they all discovered was something that Pribram’s work had always hinted at: perception occurred at a much more fundamental level of matter - the netherworld of the quantum particle. 
 
 We didn’t see objects per se, but only their quantum information, and out of that we constructed our image of the world. Perceiving the world was a matter of tuning into the Zero Point Field. 
 It fascinated him that gases with such disparate chemistry as nitrous oxide (N2O), ether (CH3CH2OCH2CH3), halothane (CF3CHClBr), chloroform (CHCl3) and isoflurane (CHF2OCHClCF3) could all bring about loss of consciousness.32 
 
 It must have something to do with some property besides chemistry. Hameroff guessed that general anesthetics must interfere with the electrical activity within the microtubules, and that interrupting this activity would turn off consciousness. If this were the case, then the reverse should also be true: the electrical activity of the microtubules composing the insides of dendrites and neurons in the brain must somehow be at the heart of consciousness. 
 Thirteen strands of tubulin wrap around the hollow core in a spiral, and all the microtubules in a cell radiate outward from the center to the cell membrane, like the spokes of a cartwheel. 
 
 We know that these little honeycomb structures act as tracks in transporting various products along cells, particularly in nerve cells, and they are vital for pulling apart chromosomes during cell division. We also know that most microtubules are constantly remaking themselves, assembling and disassembling, like an endless set of Lego. 
 Kunio Yasue, a quantum physicist from Kyoto, Japan, had worked out mathematical formulations to help understand the neural microprocess. Like Pribram’s, his equations showed that brain processes occurred at the quantum level, and that the dendritic networks in the brain were operating in tandem through quantum coherence. 
 The equations developed in quantum physics precisely described this cooperative interaction.35 Independently of Hameroff, Yasue and his colleague Mari Jibu, of the Department of Anesthesiology, Okayama University, in Japan, had also theorized that the quantum messaging of the brain must take place through vibrational fields, along the microtubules of cells.36 
 Others had theorized that the basis of all the brain’s functions had to do with the interaction between brain physiology and the Zero Point Field.37 
 
 An Italian physicist, Ezio Insinna of the Bioelectronics Research Association, in his own experimental work with microtubules, discovered that these structures had a signaling mechanism, thought to be associated with the transfer of electrons.38 
 
 According to their theory, microtubules and the membranes of dendrites represented the Internet of the body. Every neuron of the brain could log on at the same time and speak to every other neuron simultaneously via the quantum processes within. 
 
 Photons can penetrate the core of the microtubule and communicate with other photons throughout the body, causing collective cooperation of subatomic particles in microtubules throughout the brain. If this is the case, it would account for unity of thought and consciousness - the fact that we don’t think of loads of disparate things at once.40 
 This would provide an explanation for the near-instantaneous operation of our brains, which takes place within one ten-thousandth to one-thousandth of a second - requiring that information be transmitted at 100-1000 meters per second, a speed that exceeds the capabilities of any known connections between axons or dendrites in neurons. 
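
 The quoted speed follows from a simple distance-over-time estimate (my own back-of-envelope check, assuming signals must cross roughly ten centimeters of brain - a figure not given in the text):

```python
# Rough check of the quoted transmission speed
distance_m = 0.1                       # assumed span of the brain, ~10 cm
for window_s in (1e-3, 1e-4):          # one-thousandth to one ten-thousandth of a second
    print(f"{window_s:.0e} s  ->  {distance_m / window_s:.0f} m/s")
# prints 100 m/s and 1000 m/s - the range quoted above
```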
 
 Superradiance along the light pipes could also account for a phenomenon that has long been observed - the tendency of EEG patterns in the brain to become synchronized.41 
 But some of the water molecules in brain cells are coherent, the Italian team discovered, and this coherence extends as far as 3 nanometers or more outside the cell’s cytoskeleton. Since this is the case, it is overwhelmingly likely that the water inside the microtubules is also ordered. This offered indirect evidence that some sort of quantum process, creating quantum coherence, was occurring inside.43 
 
 They’d also shown that this focusing of waves would produce beams 15 nanometers in diameter - precisely the size of the microtubule’s inner core.44 
 The universe was a vast dynamic cobweb of energy exchange, with a basic substructure containing all possible versions of all possible forms of matter. Nature was not blind and mechanistic, but open-ended, intelligent and purposeful, making use of a cohesive learning feedback process of information being fed back and forth between organisms and their environment. 
 
 Its unifying mechanism was not a fortunate mistake but information which had been encoded and transmitted everywhere at once.46 
 After Pribram’s discoveries, a number of scientists, including systems theorist Ervin Laszlo, would go on to argue that the brain is simply the retrieval and read-out mechanism of the ultimate storage medium - The Field.47 
 Pribram’s associates from Japan would hypothesize that what we think of as memory is simply a coherent emission of signals from the Zero Point Field, and that longer memories are a structured grouping of this wave information.48 
 
 If this were true, it would explain why one tiny association often triggers a riot of sights, sounds and smells. It would also explain why, with long-term memory in particular, recall is instantaneous and doesn’t require any scanning mechanism to sift through years and years of memory. 
 
 Lashley’s rats with the fried brains were able to conjure up their run in its entirety because the memory of it was never burned away in the first place. Whatever reception mechanism was left in the brain - and as Pribram had demonstrated, it was distributed all over the brain - was tuning back into the memory through The Field. 
 It hinted at human capabilities for knowledge and communication far deeper and more extended than we presently understand. It also blurred the boundary lines of our individuality - our very sense of separateness. 
 If living things boil down to charged particles interacting with a field and sending out and receiving quantum information, where did we end and the rest of the world begin? 
 
 Indeed, there was no more ‘out there’ if we and the rest of the world were so intrinsically interconnected. 
 The idea of a system of exchanged and patterned energy and its memory and recall in the Zero Point Field hinted at all manner of possibility for human beings and their relation to their world. 
 Modern physicists had set mankind back for many decades. In ignoring the effect of the Zero Point Field, they’d eliminated the possibility of interconnectedness and obscured a scientific explanation for many kinds of miracles. 
 What they’d been doing, in renormalizing their equations, was a little like subtracting out God. 
 