From the Wired website
Even so, Urs Hölzle hedged his bet by not resigning from his university post but instead taking a year-long leave.
As Google’s czar of infrastructure, Hölzle oversaw the growth of its network operations from a few cages in a San Jose co-location center to a massive internet power; a 2010 study by Arbor Networks concluded that if Google were an ISP it would be the second largest in the world (the largest is Level 3, which serves over 2,700 major corporations in 450 markets over 100,000 fiber miles).
Google treats its infrastructure like a state secret, so Hölzle rarely speaks about it in public.
Today is one of those rare days: at the Open Networking Summit in Santa Clara, California, Hölzle is announcing that Google has essentially remade a major part of its massive internal network, providing the company with a bonanza in savings and efficiency. Google has done this by brashly adopting a new and radical open-source technology called OpenFlow.
In this case, Google has used its software expertise to overturn the current networking paradigm. If any company has the potential to change the networking game, it is Google.
The company has essentially two huge networks: the user-facing network that connects people around the world to its services, and an internal backbone that links its data centers to one another.
It makes sense to bifurcate the information that way because the data flow in each case has different characteristics and demands.
The user network has a smooth flow, generally adopting a diurnal pattern as users in a geographic region work and sleep.
The user network also has to meet higher performance standards, as users will get impatient (or leave!) if services are slow. In the user-facing network you also need every packet to arrive intact - customers would be pretty unhappy if a key sentence in a document or e-mail was dropped.
The internal backbone, in contrast, has wild swings in demand - it is “bursty” rather than steady.
Google is in control of scheduling internal traffic, but it faces difficulties in traffic engineering. Often Google has to move many petabytes of data (indexes of the entire web, millions of backup copies of user Gmail) from one place to another.
When Google updates or creates a new service, it wants it available worldwide in a timely fashion - and it wants to be able to predict accurately how long the process will take.
But Google found an answer in OpenFlow, an open-source system jointly devised by scientists at Stanford and the University of California at Berkeley.
Adopting an approach known as Software Defined Networking (SDN), OpenFlow gives network operators a dramatically increased level of control by separating the two functions of networking equipment: packet switching and management.
OpenFlow moves the control functions to servers, allowing for more complexity, efficiency and flexibility.
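To make that division of labor concrete, here is a minimal, purely illustrative sketch in Python - the class names, addresses and port numbers are invented for this example and are not taken from the OpenFlow protocol or from Google’s systems. The switches do nothing but match packets against rules they have been given; a controller running on an ordinary server decides, from its global view, what those rules should be.

```python
# Toy sketch of the SDN split (hypothetical names, not real OpenFlow messages):
# switches hold simple flow tables; a central controller programs them.

class Switch:
    """Data plane: forwards packets by looking up pre-installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return f"{self.name}: no rule for {packet['dst']}, ask the controller"
        return f"{self.name}: packet to {packet['dst']} sent out port {port}"


class Controller:
    """Control plane: has a global view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, routes):
        # routes maps (switch name, destination) -> output port, computed
        # centrally from the controller's view of the whole network.
        for (switch_name, dst), port in routes.items():
            self.switches[switch_name].install_flow(dst, port)


if __name__ == "__main__":
    switches = {"s1": Switch("s1"), "s2": Switch("s2")}
    controller = Controller(switches)
    controller.push_policy({("s1", "10.0.0.2"): 3, ("s2", "10.0.0.2"): 1})
    print(switches["s1"].forward({"dst": "10.0.0.2"}))
```

In a real OpenFlow deployment the controller programs the flow tables of hardware switches over a standard protocol, which is what allows the control logic to live in software on ordinary servers.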
Google became one of several organizations to sign on to the Open Networking Foundation, which is devoted to promoting OpenFlow. (Other members include Yahoo, Microsoft, Facebook, Verizon and Deutsche Telekom, and an innovative startup called Nicira.)
But none of the partners so far have announced
any implementation as extensive as Google’s.
Think of traditional networking as a fleet of taxis in which each driver - like each router - picks a route on its own, knowing only the local traffic conditions. In short, the taxi driver will get you there, but you don’t want to bet the house on your exact arrival time.
A software-defined network, by contrast, works like a fleet run by a central dispatcher: such a system doesn’t need independent taxi drivers, because the system knows where the quickest routes are and which streets are blocked, and can set an ideal route from the outset.
The system knows all the conditions and can institute a more sophisticated set of rules that determines how the taxis proceed, and can even figure out whether some taxis should stay in their garages while fire trucks pass.
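As a rough sketch of what that dispatcher does, the snippet below picks a best route over a made-up street map using Dijkstra’s algorithm; an SDN controller runs this kind of computation over its global view of links and their current load. The street names and delay values here are invented for the example.

```python
import heapq

# Hypothetical city graph: the dispatcher (controller) sees every street and
# its current delay, so it can choose the whole route up front instead of
# letting each driver (router) guess hop by hop.
streets = {
    "garage":    {"1st_ave": 4, "2nd_ave": 2},
    "1st_ave":   {"market_st": 5},
    "2nd_ave":   {"market_st": 1, "1st_ave": 1},
    "market_st": {"airport": 3},
    "airport":   {},
}

def best_route(graph, start, goal):
    """Dijkstra's algorithm over the global view: returns (total delay, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node == goal:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (delay + cost, nxt, path + [nxt]))
    return float("inf"), []

print(best_route(streets, "garage", "airport"))
# -> (6, ['garage', '2nd_ave', 'market_st', 'airport'])
```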
The routers Google built to accommodate OpenFlow on what it is calling “the G-Scale Network” probably did not mark the company’s first effort in making such devices.
(One former Google employee has told Wired’s Cade Metz that the company was designing its own equipment as early as 2005. Google hasn’t confirmed this, but its job postings in the field over the past few years have provided plenty of evidence of such activities.)
With SDN, though, Google absolutely had to go its own way in that regard.
The process was conducted, naturally, with stealth - even the academics who were Google’s closest collaborators in hammering out the OpenFlow standards weren’t briefed on the extent of the implementation.
In early 2010, Google established its first SDN links among its triangle of data centers in North Carolina, South Carolina
and Georgia. Then it began replacing the old internal network with G-Scale
machines and software - a tricky process since everything had to be done
without disrupting normal business operations.
By early this year, all of Google’s internal
network was running on OpenFlow.
The payoff is that Google can now run this internal network at close to full utilization. In other words, all the lanes in Google’s humongous internal data highway can be occupied, with information moving at top speed.
The industry considers thirty or forty percent
utilization a reasonable payload - so this implementation is like boosting
network capacity two or three times. (This doesn’t apply to the user-facing
network, of course.)
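For the arithmetic behind “two or three times”: if a conventionally run link sits at the industry-typical thirty or forty percent utilization and the same link can instead be driven toward full utilization, the usable capacity grows by roughly the ratio of the two figures. A tiny sketch, using only the numbers quoted above:

```python
# Usable-capacity multiplier when moving from typical utilization toward ~100%.
# The 30% and 40% figures come from the article; the exact gain depends on how
# close to full utilization the network actually runs.
for typical_utilization in (0.30, 0.40):
    gain = 1.0 / typical_utilization
    print(f"{typical_utilization:.0%} utilization -> about {gain:.1f}x more usable capacity")
```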
To Hölzle, this news is all about the new paradigm.
He does acknowledge that challenges still remain in the shift to SDN, but thinks they are all surmountable.
If SDN is widely adopted across the industry, that’s great for Google, because virtually anything that happens to make the internet run more efficiently is a boon for the company. As for Cisco and Juniper, he hopes that as more big operations seek to adopt OpenFlow, those networking manufacturers will design equipment that supports it.
If so, Hölzle says, Google will probably be a customer.
For proof, big players in networking can now look to Google.
The search giant claims that it’s already reaping benefits from its bet on the new revolution in networking. Big time.