If you’re in the business of managing and setting up servers in big data centers, you’re probably looking ahead to the second half of this year. Before the year is out, the first servers running on microprocessors based on the ARM chip architecture will come to market.

This development may amount to a big challenge to chip giant Intel, because it will be the first time in years that the companies that build and buy servers have a real choice in the kind of processor inside.

If you bought a server in 2013, chances are it had an Intel chip in it. Market research firm IDC estimated that of the 2.3 million servers sold in the second quarter of last year, 2.2 million of them were based on x86 chips, and most of those were Intel’s.


That gives Intel a lot of market leverage when it comes to setting prices on those chips. And, as Patrick Moorhead, head of Austin-based Moor Insights and Strategy, explained in a recent interview, there’s a growing desire among companies that build and buy servers for a new alternative to Intel.

Until recently, that alternative was Opteron, an x86 server chip from Intel’s longtime rival, Advanced Micro Devices (AMD), which at one time competed well but in recent years has failed to keep pace.

“There is a vacuum in the marketplace left by the downward trajectory of Opteron, and it happened much faster than anyone was prepared for,” Moorhead said. “The companies that make servers feel that they had it good when they had a choice, and they’re eager to have one again.”

A quick refresher: The main reason that ARM chips dominate the mobile phone business is that they’re designed to consume power efficiently in order to preserve battery life.

Companies buy licenses for the basic designs from ARM in order to build their own chips. Those licensees constitute a who’s who of the smartphone world: Apple’s A7 chip used in the iPhone and iPad is based on ARM designs; same goes for Qualcomm’s Snapdragon chips. Nvidia’s processors aimed at notebooks and tablets are likewise built on ARM designs.

So far, Intel has failed to get any meaningful traction in smartphones with its own line of low-power mobile chips, known as Atom, though it keeps trying.

That same power-sipping capability is what makes ARM chips attractive for servers. As data centers operated by companies like Google, Facebook, Amazon and many others pack thousands of machines into ever more dense spaces, the cost of power to keep them running has quickly risen to the top of the list of things those companies worry about. Add to that Intel’s ability to charge relatively high prices for Xeon chips, and the potential appeal only grows.
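
To make that concrete, here is the kind of back-of-envelope math a data-center operator runs. Every figure below, from the electricity price to the wattage of each chip class and the size of the fleet, is an illustrative assumption rather than a vendor number:

    # Rough annual electricity cost of the CPUs in a server fleet.
    # All inputs are illustrative assumptions, not measured figures.
    KWH_PRICE = 0.10          # assumed electricity price, $/kWh
    PUE = 1.5                 # assumed cooling/overhead multiplier (PUE)
    HOURS_PER_YEAR = 24 * 365

    def annual_cpu_power_cost(chip_watts, num_servers):
        """Yearly electricity bill for the CPUs alone, cooling included."""
        kwh = chip_watts * num_servers * HOURS_PER_YEAR / 1000.0
        return kwh * PUE * KWH_PRICE

    # Hypothetical fleet of 10,000 machines: a ~95 W Xeon-class part
    # versus a ~20 W low-power ARM-class part.
    print(annual_cpu_power_cost(95, 10000))   # ~ $1,248,000 per year
    print(annual_cpu_power_cost(20, 10000))   # ~   $263,000 per year

At that scale, a difference of tens of watts per socket turns into seven figures a year, which is why the power bill has climbed to the top of the worry list.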

The one thing that ARM chips lacked until recently is a 64-bit core design. Without it, the chips can’t address the amount of memory typically required in a server. Intel and AMD have offered 64-bit chips for about a decade, while ARM didn’t release its first 64-bit design until 2011; the first one to wind up in a smartphone was Apple’s A7, in the latest iPhones and iPads.
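
The memory ceiling is simple arithmetic: a 32-bit core can directly address at most 2^32 bytes, which is less RAM than a single modern server typically carries. A quick sketch:

    # Why 32-bit addressing is a dead end for servers: the directly
    # addressable space tops out at 4 GiB.
    print(2**32 / 2**30)   # 4.0 -> at most 4 GiB for a 32-bit core
    # 64-bit designs lift the ceiling by orders of magnitude; in practice
    # most implementations expose a 48-bit virtual address space:
    print(2**48 / 2**40)   # 256.0 -> 256 TiB of virtual address space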

Both Hewlett-Packard and Dell have plans on the table to sell servers based on ARM chips. HP has Project Moonshot, the line of tiny, customizable servers it debuted last year. HP will probably be first out of the gate with an ARM-based server this year, Moorhead said, adding that Dell, which demonstrated its first ARM-based servers in the fall, will likely follow soon after.

Last month, Google was reported to be interested in designing its own line of ARM chips for the custom servers it builds for its data centers. It wouldn’t be a small undertaking. Depending on what type of license Google might get from ARM, Moorhead estimates that it could easily cost Google $1 billion or more to design its own custom chip.

“They’d be creating everything from scratch, and then they’d have to take out licenses on parts of the chip design they don’t have rights to,” he said. “It would get really expensive rather quickly.” It’s also possible, Moorhead said, that Google has started the rumor in order to put pressure on Intel for more favorable pricing. “It may just amount to a negotiating tactic. … I just don’t see Google taking on that much risk.”

Meanwhile, several companies are in various stages of building ARM chips they would sell to hardware manufacturers. Calxeda, an Austin-based startup that had raised more than $90 million in venture capital funding, had been developing ARM server chips and had collaborated on server designs with as many as seven server companies, including HP. It ceased operations last month after failing to raise more money from investors.

AppliedMicro, a chipmaker based in Sunnyvale, Calif., has designed an ARM-based server chip called X-Gene. Moorhead said it is the one most likely to land in commercially available servers during the second half of this year. Server chips from Broadcom and Cavium will likely follow in 2015.

Then there are some dark horses in the picture. Nvidia, which was among the first to bring ARM chips to Windows-based notebooks, has a secretive development project known as Project Denver, which is oddly enough based in Portland, Ore. Moorhead thinks a server chip could emerge from that effort, but not initially. “Nvidia hasn’t come right out and said it will build a server chip. But it hasn’t said it won’t either,” he said. “It has certainly hired a lot of people who have a history of designing server chips at companies like Intel, AMD and HP.”

Other companies said to be exploring ARM-based server chips are Samsung, Qualcomm and AMD. While Samsung and Qualcomm are both powerhouses in the design of chips for phones, neither has ever built a general-purpose microprocessor. AMD, which has a long history of building x86 chips for PCs and servers, has that expertise.

“If AMD were to take out the relevant ARM licenses, it could potentially be a very potent force,” Moorhead said. “AMD knows how to build processors, and it has the respect of HP and Dell and all the other vendors.”

And if ARM chips eventually become a serious competitive threat, Intel is prepared to respond. Its low-power Atom chips, first created for phones, tablets and small notebooks, have already been adapted for servers. Atom sells for less than Xeon and was built with low power consumption in mind.

Additionally, Intel has created new versions of its Xeon server chips, due later this year, that include networking features. Moorhead compares the strategy to one Intel followed in the mid-1990s with chips for personal computers.

“They had Celeron on the low end and Pentium on the high end,” Moorhead said. “They’re basically going to try and repeat the strategy with servers.”

Beyond that, Intel could easily consider building specialized versions of its server chips to suit the particular needs of its biggest customers, like Google or Facebook, he said. “If they think the risk is credible, and the opportunity makes sense, Intel will give its customers whatever they want.”



10 comments
ElianGonzalez

I would never have pegged Patrick Moorhead as having tats.

aerialspin

It's still relatively early for ARM servers to take over the world. You missed a key point about HP Moonshot servers containing Intel Atoms. With servers, reliable software is crucial, and Linux support for ARM processors in a server environment is really just getting started.


Building a mobile OS on a single ARM processor is not quite the same as developing a stable, rock-solid server OS on a cluster of ARM processors. Mobile devices crash periodically from software updates and bad apps; a server can't do that. It is hard to develop a reliable distributed software system for a cluster of CPUs, and much easier and cheaper to develop for a single Intel server that can do the same amount of work.

ARM processors are cheaper per unit, of course, but not by a lot once you compare processing power with Intel's, especially given Intel's fab-process lead. Haswell and Broadwell have significantly cut into ARM's performance-per-watt advantage.


ARM servers will become a reality, but probably not as big an industry sea change as some expect, as Calxeda discovered.

bjr

The bottleneck in server architecture has nothing to do with the instruction set of the CPU; it's the RAM interface. In order to build chips with large numbers of cores, it doesn't matter whether they are Atoms or ARMs: you have to dramatically increase the bandwidth to the DRAMs. You can do that in two ways: put the CPU and the RAM on the same chip, or use very high-speed serial interfaces to the DRAMs.

The reason you haven't seen the former is that logic processes (used to build CPUs) and memory processes (used to build DRAMs) have historically been very different, which doesn't mean you couldn't build a CPU/RAM chip if you wanted to. Serial-interface DRAMs have been built and used in servers, but that approach hasn't been pushed to the point it needs to reach to make an order-of-magnitude difference. The reason is that serial interfaces consume a lot of power; depending on frequency, it can be as much as a watt per pin. The power consumption of any chip with a large number of CPUs will be dominated by the DRAM and communication interfaces; the CPUs themselves are of secondary importance, so ARM vs. x86 doesn't really matter.

There are reports that Intel has a 64-core Atom chip in the works (it could be more cores), and of similar efforts by ARM server chip vendors. When they are announced, the figure to look at to determine how well they will work is not the aggregate instructions per second; it's the total DRAM bandwidth. If the DRAM bandwidth isn't significantly higher than the current crop of Xeon chips, those chips will fail in the marketplace; if the bandwidth is 10x the current bandwidth, you should see widespread adoption in cloud-type applications.
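
To put rough numbers on that point, here is a sketch of the pin-and-power arithmetic; the 10 Gb/s link rate is an assumption for illustration, and the watt-per-pin figure is the upper bound cited above:

    # Sketch of the serial-DRAM-interface tradeoff described above.
    GBITS_PER_PIN = 10        # assumed serial link rate, Gb/s per pin
    WATTS_PER_PIN = 1.0       # upper-bound power per pin from the comment

    def serial_interface(target_gb_per_sec):
        """Pins and watts needed to hit a DRAM bandwidth target."""
        pins = target_gb_per_sec * 8 / GBITS_PER_PIN
        return pins, pins * WATTS_PER_PIN

    # Xeons of this era deliver on the order of 60 GB/s of memory
    # bandwidth; the comment's bar for a many-core chip is roughly 10x.
    print(serial_interface(60))    # (48.0, 48.0)   -> 48 pins, ~48 W
    print(serial_interface(600))   # (480.0, 480.0) -> 480 pins, ~480 W

At a watt per pin, the interface needed for a 10x bandwidth jump would dwarf the power budget of the cores themselves, which is exactly the comment's point: next to the memory system, the instruction set is a rounding error.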


Luc@Mop

BTW, just a little explanation of RISC (Reduced Instruction Set Computer) versus CISC (Complex Instruction Set Computer).

A classic (CISC) computer might use around 250 different instructions (just as an example).

The clock cycle needed to run the most complex of those instructions might be 50 ns (an example from 25 years ago).

As a matter of fact, these complex instructions were not used very much, let us say < 5% of the time.

By removing these complex instructions and letting the equivalent operations run over 2 or 3 cycles, the time the gate-level switching needs within one cycle can be cut, to say 30 ns (versus 50 ns).

The instruction set can shrink from 250 down to 150-200 instructions (for example).

One of the first such systems was the IBM 6150, followed by the IBM RS/6000 family running AIX (IBM's undercover UNIX).
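
Running the comment's own example numbers through the math shows why the trade works. A small sketch:

    # The comment's RISC-vs-CISC arithmetic, using its example figures.
    CISC_CYCLE_NS = 50        # cycle sized for the most complex instruction
    RISC_CYCLE_NS = 30        # shorter cycle once complex ops are removed
    COMPLEX_SHARE = 0.05      # fraction of executed instructions that were complex
    CYCLES_FOR_COMPLEX = 3    # each complex op becomes ~2-3 simple ones

    cisc_avg = CISC_CYCLE_NS  # every instruction fits in one 50 ns cycle
    risc_avg = ((1 - COMPLEX_SHARE) * RISC_CYCLE_NS
                + COMPLEX_SHARE * CYCLES_FOR_COMPLEX * RISC_CYCLE_NS)

    print(cisc_avg)   # 50   -> 50 ns per average instruction
    print(risc_avg)   # 33.0 -> 33 ns: faster despite the extra operations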

dbrianf

There are other non-x86 vendors out there: Oracle and IBM to name two.  Intel's manufacturing advantage beats them down with better perf/watt/$.  Nothing above changes that dynamic.

Luc@Mop

We are focusing too much on Windows!

ARM is a RISC technology, and RISC designs were originally aimed mainly at UNIX operating systems.

As more and more servers move to Linux, this should also be taken into account!

getwired

Interesting read. Even if such server chips arrive anytime soon, it'll be some time before we see them in production, as the operating systems and server applications broadly deployed today are designed for x64 and x86. While the Windows client now runs on ARM systems, the server platform and all of Microsoft's own server applications are deeply embedded in x64. Any transition from x64 to ARM in the general server market will take a great deal of time.

znmeb

@dbrianf In the cosmic scheme of things, Oracle (SPARC) and IBM (Power and s390) aren't growing. Sure, they have to keep up on the hardware end and keep the software maintained, but the growth is in ARM, GPUs and, to a lesser extent, x86.


If I were Intel, I'd be really worried because the only solid server monopoly they have is Windows and whatever name VMware ESX is going by these days. Once you let Linux, Xen and KVM loose, ARM is as good as x86_64.

znmeb

@getwired Ah, but so many servers run Linux, and Linux is well-established on ARM. Really, though, it comes down to the ratio between floating-point speed and power consumption. "Pure servers" that just shuffle bits around aren't where the big bucks are. The big bucks are in high-performance computing / 'big data' / modeling and simulation: floating-point-intensive number crunching.


Back in the late 1980s - early 1990s, Intel managed to dominate high-performance computing with the x86 CISC architecture over its RISC rivals of the day, even Intel's own IA64. Don't get me wrong - with the right kind of capital investment from major companies, ARM can challenge Intel in HPC. But it's going to take that kind of investment to really challenge Intel.

znmeb

@getwired P.S.: I am getting an Atom-powered phone, supposedly towards the end of the month - a Geeksphone 'Revolution' running either Android or Firefox OS. I can't wait to see what I can squeeze out of the browser on a 1.6 GHz dual-core Atom. ;-)
