
	by Jon Rappoport
	August 27, 2013
	
from JonRappoport Website
	
 
	
	 
	
	
	If you’ve ever studied infomercials, you know the whole business is based on 
	back-end sales. It’s not the product you buy for $19.95, it’s the products 
	they can hook you into after you spend the $19.95.
	
	So it is with 
Google Glass. It’s all about the apps that’ll be attached.
	
Glass gives the wearer shorthand reality as he taps in. That’s what it’s 
	for. The user is “on the go.” If he’s driving his Lexus and suddenly thinks 
	about Plato, he’s not going to download the full text of The Republic to 
	mull while he’s crashing into big trucks on the Jersey Turnpike. He’s going 
	to take a shorthand summary. A few lines.
	
	People want boiled-down info while they’re on the move. Reduction. The 
	“essentials.”
	
	This is perfectly in line with the codes of the culture. Ads, quick-hitter 
	seminars, headlines, two-sentence summaries, ratings for products, news with 
	no context. Stripped-down, reduced.
	
	Well, here is a look into right now. A student at Stanford is developing a 
Google Glass app that “reads other people.”
	
	
	From SFGate, 8/26,
	
		
		“Google Glass being designed to read 
		emotions”: “The [emotion-recognition] tools can analyze facial 
		expressions and vocal patterns for signs of specific emotions: 
		Happiness, sadness, anger, frustration, and more.”
	
	
This is the work of Catalin Voss, an 18-year-old student at Stanford, and 
his start-up company, Sension.
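
The quoted piece doesn’t say how the classifier works under the hood, so take 
this as nothing more than a rough, hypothetical sketch of the general shape of 
such a tool: measure a few facial features, then collapse them into one of a 
handful of words. The feature names, thresholds, and labels below are all 
invented for illustration.

# Hypothetical sketch only: the feature names, thresholds, and labels are
# invented for illustration. They do not describe Sension's actual method.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    mouth_curvature: float  # > 0 is roughly "smiling", < 0 roughly "frowning"
    brow_lowering: float    # 0..1, how far the brows are pulled down
    eye_openness: float     # 0..1, how wide open the eyes are

def label_emotion(face: FaceFeatures) -> str:
    """Collapse a face into one of a handful of words. That is the whole 'analysis'."""
    if face.mouth_curvature > 0.3:
        return "happy"
    if face.brow_lowering > 0.6 and face.mouth_curvature < 0:
        return "angry"
    if face.mouth_curvature < -0.3 and face.eye_openness < 0.4:
        return "sad"
    if face.brow_lowering > 0.4:
        return "frustrated"
    return "neutral"

if __name__ == "__main__":
    guy_across_the_table = FaceFeatures(mouth_curvature=0.0,
                                        brow_lowering=0.1,
                                        eye_openness=0.5)
    print(label_emotion(guy_across_the_table))  # prints "neutral"
    guy_across_the_table.mouth_curvature = 0.5  # now he smiles
    print(label_emotion(guy_across_the_table))  # prints "happy"

Whatever the real model looks like inside, the output is the same kind of 
thing: one word, floating in front of your eye.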
	
So you’re wearing Google Glass at a meeting, and it checks out the guy across 
the table, who has an empty expression on his mug. Above your right eye, 
you see the word “neutral.” Now he smiles, and the word “happy” appears.
	
	I kid you not. This information is supposed to guide you in your 
	communication. 
	
	 
	
	The number of things that can go wrong? Count the ways, if 
	you’re able. I’m personally looking forward to that guy across the table 
	saying, 
	
		
	“Hey, you, schmuck with the Glass, what is your app saying about me 
	now? Angry?” 
	
	
	That should certainly enhance the communication.
	
Or a husband, just back from his 12-mile morning bike ride, enters his Palo 
Alto home, wearing Glass, of course, and, as he looks at his wife, who is 
sitting at the kitchen table reading a book, he sees the word “sad” appear 
above his eye. 
	
		
		“Honey,” he says, recalling the skills he 
	picked up in a 26-minute webinar, “have you been pursuing a negative 
		line of thinking?”
	
	
	She slowly gazes up at the goggle-eyed monster 
	in his spandex and grasshopper helmet, rises from her chair and tosses a 
	plate of hot eggs in his face. 
	
	 
	
	YouTube, please!
	
	But wait. There’s more. 
	
	 
	
	The Glass app is also being heralded as a step 
forward in “machine-human relationships.” With voice-recognition services like 
	Google Now and Siri, when computers and human users talk to each other, the 
	computers will be able to respond not only to the content of the user’s 
	words, but also to his tone, his feelings.
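
The article doesn’t describe how such a response loop would work; the crudest 
possible version is easy to sketch, though: take the transcribed words, boil 
the audio down to a one-word “tone,” and branch the canned reply on both. The 
thresholds and replies below are purely hypothetical, invented for 
illustration.

# Hypothetical sketch: the tone thresholds and canned replies are invented
# for illustration. This is not Google Now's or Siri's actual pipeline.

def label_tone(pitch_hz: float, loudness_db: float) -> str:
    """Reduce the vocal delivery to a single word."""
    if loudness_db > 70 and pitch_hz > 220:
        return "upset"
    if loudness_db < 50 and pitch_hz < 120:
        return "sad"
    return "neutral"

def reply(transcript: str, pitch_hz: float, loudness_db: float) -> str:
    """Branch the assistant's answer on the words and on the tone label."""
    tone = label_tone(pitch_hz, loudness_db)
    if "weather" in transcript.lower():
        answer = "It's 72 and sunny."
    else:
        answer = "Here's what I found."
    if tone == "upset":
        return "You sound upset. " + answer
    if tone == "sad":
        return "Sorry you're having a rough day. " + answer
    return answer

if __name__ == "__main__":
    print(reply("what's the weather", pitch_hz=240.0, loudness_db=75.0))
    # prints: You sound upset. It's 72 and sunny.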
	
	This should be a real marvel. As you’ve no doubt already realized, the 
	emotion-recognition tool is all about reduction. It shrinks human feelings 
	to simplistic labels. Therefore, what machines say back to humans will be 
	something to behold.
	
	Machine version of NLP, anyone? I’m predicting a surge in destroyed 
	computers.
	
	The astonishing thing about this new app is that many tech people are so 
on board with it. In other words, they believe that human feelings can be 
	broken down and worked with on an androidal basis, with no loss incurred. 
	These people are already boiled down, cartoonized.
	
	You think you’ve observed predictive programming in movies? That’s nothing. 
	The use of apps like this one will help bring about a greater willingness on 
	the part of humans to reduce their own thoughts and feelings to… FIT THE 
	SPECS OF THE MACHINES AND THE SOFTWARE.
	
	Count on it.
	
	This isn’t really about machines acting more like humans. It’s about humans 
	acting like machines.
	
	The potential range of human emotions is extraordinary. Our language, when 
	used with imagination, actually extends that range. It’s something called 
	art.
	
	The counter-trend is in gear. No matter how subtle the emotion-recognition 
	algorithms become, there will always be a wide, wide gap between what they 
	produce and the expression of humans.
	
	The most profound kind of mind control seeks to eliminate that gap by 
	encouraging us to mimic technology. That means people will think and feel 
	less, and what they think and feel will mean less.
	
	The machines won’t say, 
	
		
		“I’m sorry, I can’t identify that emotion, 
		it’s too complex.” 
	
	
	They’ll say “sad” or “happy” or “upset” or 
	whatever they have to say to give the appearance that they’re on top of the 
	human condition.
	
	Eventually, significant numbers of people will tailor their self-awareness 
	to what the machines point to, name, label, declare.
	
	Thus, inventing reality.
	
	The wolf becomes a lamb, the lamb becomes a flea.
	
	And peace prevails. You can wear it and see with it.
	
	Eventually, realizing that Glass is too obvious and obnoxious and bulky, 
	companies will develop something they might call Third Eye, a chip the size 
	of half a grain of rice, made flat, and inserted under the skin of the 
	forehead.
	
	Perfect. Invisible. Of course, cops will have them. And talk to them.
	
		
		“I’m parked at the corner of Wilshire and 
		Westwood. Suspicious male standing outside the Harmon Building.”
		
		“I see him. Searching relevant data.”
	
	
	Which means any past arrests, race, conditions 
	noted in his medical records, tax status, questionable statements he’s made 
	in public or private, significant known associates, group affiliations, etc. 
	And present state of mind...
	
	The cop: 
	
		
		“Recommendation?”
		
		“Passive-aggressive, right now he’s peaking at 3.2 on the Hoover Bipolar 
		scale. Bring subject into custody for general questioning.”
		
		“Will do.”
	
	
	No one will wonder why, because such analysis 
	resonates with the vastly reduced general perception of what reality is all 
	about.
	
	People mimic how machines see them and adjust their human thinking 
	accordingly.
	
	Hand and glove, key and lock. Wonderful.
	
	As the cop is transporting the suspect to the station, Third Eye intercedes:
	
	
		
		“Sorry, Officer Crane, it took me a minute 
		to dig further. Suspect is business associate of REDACTED. This is a 
		catch and release. Repeat, catch and release. Printing out four 
		backstage passes to Third Memorial Rolling Stones concert at the 
		Hollywood Bowl. Apologize profusely, give subject the tickets, and 
		release him immediately.”
		
		“I copy.”
		
		“This arrest and attendant communication is being deleted…now.”