C&B Notes

Data in the Deep

Microsoft recently made a splash with Project Natick, a nascent program that is submerging data centers in the ocean to capture a series of important benefits.

Project Natick had no shortage of hurdles to overcome.  The first, of course, was keeping the inside of its big steel container dry.  Another was figuring out the best way to use the surrounding seawater to cool the servers inside.  And finally there was the matter of how to deal with the barnacles and other forms of sea life that would inevitably cover a submerged vessel, a phenomenon that should be familiar to anyone who has ever kept a boat in the water for an extended period.  Clingy crustaceans and such would be a challenge because they could interfere with the transfer of heat from the servers to the surrounding water.  These issues daunted us at first, but we solved them one by one, often drawing on time-tested solutions from the marine industry.

But why go to all this trouble?  Sure, cooling computers with seawater would lower the air-conditioning bill and could improve operations in other ways, too, but submerging a data center comes with some obvious costs and inconveniences.  Does trying to put thousands of computers beneath the sea really make sense?  We think it does, for several reasons.  For one, it would offer a company like ours the ability to quickly target capacity where and when it is needed.  Corporate planners would be freed from the burden of having to build these facilities in anticipation of demand, long before they are actually required.  For an industry that spends billions of dollars a year constructing ever-increasing numbers of data centers, quick response time could provide enormous cost savings.

The reason underwater data centers could be built more quickly than land-based ones is easy enough to understand.  Today, the construction of each such installation is unique. The equipment might be the same, but building codes, taxes, climate, workforce, electricity supply, and network connectivity are different everywhere.  And those variables affect how long construction takes.  We also observe their effects in the performance of our facilities, where otherwise identical equipment exhibits different levels of reliability depending on where it is located.  As we see it, a Natick site would be made up of a collection of “pods” — steel cylinders that would each contain possibly several thousand servers.  Together they’d make up an underwater data center, which would be located within a few kilometers of the coast and placed between 50 and 200 meters below the surface.  The pods could either float above the seabed at some intermediate depth, moored by cables to the ocean floor, or they could rest on the seabed itself.
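
To make that architecture concrete, below is a minimal sketch of how such a site might be described in code.  The pod count and server count are hypothetical; only the depth range (50 to 200 meters), the near-coast placement, and the two mooring options come from the description above.

```python
from dataclasses import dataclass, field
from typing import List, Literal

# Illustrative data model only: field names and example values are assumptions
# based on the ranges described in the article, not Microsoft's actual design.

@dataclass
class Pod:
    servers: int                          # "possibly several thousand servers" per pod
    depth_m: float                        # placement depth below the surface
    mooring: Literal["moored", "seabed"]  # cabled above the seabed, or resting on it

    def __post_init__(self):
        if not 50 <= self.depth_m <= 200:
            raise ValueError("the article describes depths of 50 to 200 meters")

@dataclass
class NatickSite:
    coast_offset_km: float                # "within a few kilometers of the coast"
    pods: List[Pod] = field(default_factory=list)

    @property
    def total_servers(self) -> int:
        return sum(p.servers for p in self.pods)

# Example: a small site of three pods resting on the seabed at 100 meters.
site = NatickSite(coast_offset_km=3, pods=[Pod(2000, 100, "seabed") for _ in range(3)])
print(site.total_servers)  # 6000
```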

Once we deploy a data-center pod, it would stay in place until it’s time to retire the set of servers it contains.  Or perhaps market conditions would change, and we’d decide to move it somewhere else. This is a true “lights out” environment, meaning that the system’s managers would work remotely, with no one to fix things or change out parts for the operational life of the pod.  Now imagine applying just-in-time manufacturing to this concept.  The pods could be constructed in a factory, provisioned with servers, and made ready to ship anywhere in the world.  Unlike the case on land, the ocean provides a very uniform environment wherever you are.  So no customization of the pods would be needed, and we could install them quickly anywhere that computing capacity was in short supply, incrementally increasing the size of an underwater installation to meet capacity requirements as they grew.  Our goal for Natick is to be able to get data centers up and running, at coastal sites anywhere in the world, within 90 days from the decision to deploy.

Most new data centers are built in locations where electricity is inexpensive, the climate is reasonably cool, the land is cheap, and the facility doesn’t impose on the people living nearby.  The problem with this approach is that it often puts data centers far from population centers, which limits how fast the servers can respond to requests.  For interactive experiences online, these delays can be problematic.  We want Web pages to load quickly and video games such as Minecraft or Halo to be snappy and lag free.  In years to come, there will be more and more interaction-rich applications, including those enabled by Microsoft HoloLens and other mixed reality/virtual reality technologies.  So what you really want is for the servers to be close to the people they serve, something that rarely happens today.  It’s perhaps a surprising fact that almost half the world’s population lives within 100 kilometers of the sea.  So placing data centers just offshore near coastal cities would put them much closer to customers than is the norm today.
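
A rough back-of-the-envelope calculation shows why that proximity matters.  Light in optical fiber travels at roughly 200,000 kilometers per second (about two-thirds of its speed in a vacuum); that figure is a standard approximation rather than something from the article, and real latencies are higher because routes are indirect and switches add queuing delay.  A minimal sketch:

```python
# Lower bound on round-trip propagation delay versus distance to the data center.
# Assumes a straight fiber path at ~200,000 km/s; real-world latency is higher.

SPEED_IN_FIBER_KM_PER_S = 200_000  # approx. two-thirds the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation time over a straight fiber path, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

for d in (100, 500, 2000):
    print(f"{d:>5} km away -> at least {round_trip_ms(d):.1f} ms round trip")
# 100 km -> ~1 ms, 500 km -> ~5 ms, 2000 km -> ~20 ms
```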

If that isn’t reason enough, consider the savings in cooling costs.  Historically, such facilities have used mechanical cooling — think home air-conditioning on steroids.  This equipment typically keeps temperatures between 18 and 27 °C, but the amount of electricity consumed for cooling is sometimes almost as much as that used by the computers themselves.  More recently, many data-center operators have moved to free-air cooling, which means that rather than chilling the air mechanically, they simply use outside air.  This is far cheaper, with a cooling overhead of just 10 to 30 percent, but it means the computers are subject to outside air temperatures, which can get quite warm in some locations.  It also often means putting the centers at high latitudes, far from population centers.  What’s more, these facilities can consume a lot of water.  That’s because they often use evaporation to cool the air somewhat before blowing it over the servers.  This can be a problem in areas subject to droughts, such as California, or where a growing population depletes the local aquifers, as is happening in many developing countries.  Even if water is abundant, adding water to the air makes the electronic equipment more prone to corrosion.
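
To put those overheads in perspective, the sketch below compares total facility power for a fixed computing load under each cooling scheme.  The 1 MW IT load is an arbitrary illustrative assumption; the overhead percentages are the ones quoted above.

```python
# Rough comparison of facility power for a fixed IT load under different cooling
# overheads. "Overhead" here means cooling energy as a fraction of the energy used
# by the computers themselves; ~100% for mechanical cooling and 10-30% for free-air
# cooling come from the article, while the 1 MW IT load is an illustrative assumption.

IT_LOAD_MW = 1.0

scenarios = {
    "mechanical cooling (~100% overhead)": 1.00,
    "free-air cooling, best case (10%)":   0.10,
    "free-air cooling, worst case (30%)":  0.30,
}

for name, overhead in scenarios.items():
    total = IT_LOAD_MW * (1 + overhead)
    print(f"{name}: {total:.2f} MW total for {IT_LOAD_MW:.1f} MW of servers")
```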

Our Natick architecture sidesteps all these problems.  The interior of the data-center pod consists of standard computer racks with attached heat exchangers, which transfer the heat from the air to some liquid, likely ordinary water.  That liquid is then pumped to heat exchangers on the outside of the pod, which in turn transfer the heat to the surrounding ocean.  The cooled transfer liquid then returns to the internal heat exchangers to repeat the cycle.  Of course, the colder the surrounding ocean, the better this scheme will work.  To get access to chilly seawater even during the summer or in the tropics, you need only put the pods sufficiently deep.  For example, at 200 meters’ depth off the east coast of Florida, the water remains below 15 °C all year round.
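
The closed loop described above can be sized with the standard heat-transfer relation Q = ṁ · c_p · ΔT.  Below is a minimal sketch, assuming a hypothetical 250 kW pod heat load and a 10 °C allowed coolant temperature rise; neither figure comes from the article.

```python
# Sizing sketch for the closed cooling loop, using Q = m_dot * c_p * delta_T.
# The 250 kW pod heat load and 10 degC coolant temperature rise are hypothetical
# assumptions for illustration; 4186 J/(kg*degC) is the usual specific heat of water.

CP_WATER = 4186.0  # J per kg per degC

def coolant_flow_kg_per_s(heat_load_w: float, delta_t_c: float) -> float:
    """Mass flow of water needed to absorb heat_load_w with a delta_t_c temperature rise."""
    return heat_load_w / (CP_WATER * delta_t_c)

heat_load_w = 250_000  # assumed pod heat load: 250 kW
delta_t_c = 10.0       # assumed coolant temperature rise through the racks

flow = coolant_flow_kg_per_s(heat_load_w, delta_t_c)
print(f"~{flow:.1f} kg/s (roughly {flow:.1f} L/s) of water to carry 250 kW at a 10 degC rise")
# roughly 6 kg/s
```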


Referenced In This Post