Google is ready to put its artificial intelligence platform, its "virtual brain" technology, to better use soon.
The technology has already taught itself to recognize human faces and cat faces, and is now gearing up to take on the difficult task of speech recognition and more. Google's "virtual brain" gets its name because it is based on the idea of a neural network in the human brain.
In that respect, each piece of the "virtual brain" can interact with and influence the others in order to build a self-teaching system.
Who said those cat videos were worthless?
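To make that "pieces influencing each other" idea a bit more concrete, here is a toy sketch of a self-teaching network in Python. It is purely illustrative, with made-up data and a tiny two-layer network; Google's actual system used vastly larger networks trained across thousands of machines.

```python
import numpy as np

# Toy two-layer neural network: each unit's output feeds the next
# layer, so the pieces "interact and influence each other." This is
# an illustrative sketch, not Google's actual architecture.

rng = np.random.default_rng(0)

# Tiny made-up dataset: 4 input patterns, binary labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights connecting the layers
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

for step in range(5000):
    # Forward pass: signals flow through the interacting units
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: the error nudges every connection, so the
    # network gradually "teaches itself" the mapping
    delta_out = (out - y) * out * (1 - out)
    dW2 = h.T @ delta_out
    dW1 = X.T @ (delta_out @ W2.T * h * (1 - h))
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

print(np.round(out, 2))  # predictions should approach the labels
```

The key point is that no one hand-codes the rules: the weights adjust themselves from examples, which is how the system could learn to spot cat faces without being told what a cat is.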
The technology has also started getting folded into other Google products.
Apparently, the tech has been built into Android's speech recognition system, and according to Vincent Vanhoucke, one of the leads of Google's speech recognition work, it has already improved recognition accuracy by 20-25%.
Android 4+ users may have noticed the tech at work in the real-time dictation feature. Any time you see a word pop up and then correct itself based on context, that's the Google brain at work. The neural network only works in US English right now, but should expand to other languages eventually. The technology is also likely to be pushed into the self-driving car project soon enough, and it has the potential to improve search-by-image and much more. Essentially, the neural network could make all Google products better in one way or another, because the technology excels at learning context and adjusting for behavior.
This means that if you use voice actions to search for "Oklahoma City Thunder" but the last word is garbled, the system can check against your pooled Google data and work out that you likely said "Thunder" based on your search history. From the sound of it, this technology either already is, or will soon be, a huge part of the intelligent push system that makes Google Now such an interesting platform.
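A rough sketch of how that kind of correction could work: the recognizer produces several candidate words for the garbled segment, each with an acoustic score, and a prior built from the user's search history re-ranks them. The function name, scores, and `history_weight` blend below are all hypothetical stand-ins, not Google's actual pipeline.

```python
from collections import Counter

def rescore(candidates, history, history_weight=0.5):
    """Blend acoustic scores with a search-history prior (illustrative)."""
    counts = Counter(history)
    total = sum(counts.values()) or 1
    ranked = []
    for word, acoustic_score in candidates:
        prior = counts[word] / total  # how often the user searched this word
        score = (1 - history_weight) * acoustic_score + history_weight * prior
        ranked.append((score, word))
    return max(ranked)[1]

# "Oklahoma City ???" -- the last word came through garbled
candidates = [("thunder", 0.40), ("under", 0.45), ("wonder", 0.15)]
history = ["thunder", "nba", "thunder", "okc", "thunder"]

print(rescore(candidates, history))  # -> "thunder"
```

Even though "under" scored higher acoustically in this toy example, the history prior tips the decision to "thunder," which matches the behavior the article describes.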
This technology appears to be the secret sauce that allows Google to surface surprisingly relevant connections from the huge amounts of data the company has on its users' habits, searches, interests, and location.
Using pattern and sound recognition, the neural network could get closer to understanding the context surrounding a central target, for example by scanning the background of an image and comparing it against existing geotagged photos to work out where the picture was taken.
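In the simplest form, that amounts to a nearest-neighbor lookup: represent each photo as a feature vector and borrow the location of the closest geotagged match. The feature vectors and geotags below are made-up stand-ins; a real system would extract features with a trained network over enormous image collections.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# (feature vector, geotag) pairs for already-located photos (fabricated)
geotagged = [
    ([0.9, 0.1, 0.3], "Paris"),
    ([0.2, 0.8, 0.5], "Tokyo"),
    ([0.4, 0.4, 0.9], "New York"),
]

def locate(photo_features):
    # Guess the location by borrowing the nearest neighbor's geotag
    return max(geotagged, key=lambda item: cosine(photo_features, item[0]))[1]

print(locate([0.85, 0.15, 0.35]))  # nearest match -> "Paris"
```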
Of course, Google is quick to point out that this is still just the first step toward true artificial intelligence.
So the neural network can find specific visual data faster than humans, match shapes and patterns, and take on jobs that would be incredibly tedious and boring for a person. But it can't draw on the outside world or reason out the why or how of a thing.
But the question still stands: when the day comes that Google or some other company creates a true artificial intelligence, will you be there with pitchforks and torches, or with an offering of peace for our new Cylon overlords?