Every time I hear about green computing I feel like there is a gap — an enormous gap. The same thing is true in most conservation efforts I witness:

I tend to argue points where I believe my position is right and the alternative is wrong. In this unusual case, I find that the alternative is right, just not "right enough." We could do better. We should do better. Think complete.

Green Computing through Hardware Optimization

So much focus is placed on making equipment (processors, RAM, storage) more energy efficient that people are losing sight of the bigger picture. Energy-efficient equipment is certainly one piece of the puzzle. Unfortunately, too many people see that one piece as a fait accompli in their energy conservation efforts.

At OmniTI we're always careful about fully understanding the power profile of the hardware we install. We are conservative and look for the most power efficient machines we can find that still meet our architectural requirements (which can vary wildly from component to component). Everyone should do this. IBM and HP and Intel are all telling you that you should do it and that they can help. Do it. Let them. But please, don't stop there.

Green Computing through Virtualization

The next step that is popular in the effort to save your wallet (and the planet) is consolidation. This is the philosophy that one of today's machines is powerful enough to accomplish the goals of many of yesteryear's machines. So, virtualize! Take the old machines, turn them into virtual servers, and run them on one machine today. Virtualization (of one type or another) has many advantages, including ease of management, simpler disaster recovery, flexibility in technology selection, shorter provisioning times, and the opportunity for consolidation.

Many of our engineers run VirtualBox or VMware to quickly launch the platform of their choice. They are allocated one machine each, so they only have the opportunity to use a certain number of watts. Virtualization makes their job a bit faster and a bit easier, despite the user experience being ever-so-slightly slower than running native. This use of virtualization does not reduce energy consumption in any significant way, though it does increase individual productivity.

We have development environments, managed by the operations team here, that must resemble (as closely as is economically feasible) the production environment to which they deploy. We have many of these; they are all distinct, but none is heavily loaded, which makes consolidation a feasible approach. Our actual situation is that we have to operate 40 isolated development environments. We do this on…

  • Two $2300 1U machines
  • Solaris Containers (Zones) as the lightweight virtualization technology
  • at about a 200W run rate each (roughly 400W total), which results in about 3.5 megawatt-hours per year

If you consider the naive alternative implementation:

  • 40 1U machines
  • at about a 180W run rate each, which results in about 63.1 megawatt-hours per year.
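
For anyone who wants to check the arithmetic, here is a minimal sketch (in Python, purely for illustration) of the annual-energy math behind the two options, assuming roughly 8,760 hours in a year and the run rates quoted above:

  HOURS_PER_YEAR = 24 * 365  # 8,760 hours

  def annual_mwh(machines, watts_each):
      """Annual energy, in megawatt-hours, for `machines` boxes drawing `watts_each` watts."""
      return machines * watts_each * HOURS_PER_YEAR / 1_000_000

  zones = annual_mwh(machines=2, watts_each=200)    # ~3.5 MWh/year
  naive = annual_mwh(machines=40, watts_each=180)   # ~63.1 MWh/year
  print(f"savings: {naive - zones:.1f} MWh/year")   # ~59.6 MWh/year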

We realize a savings of 59.6 megawatt-hours per year. Wow! Now, that is an utterly naive method. Instead, let's look at a popular method like VMware ESX:

To run 40 VMware instances…

  • I need some substantially bigger hardware at 2GB of RAM per instance (Solaris Containers and other similar technologies have some memory-sharing efficiencies).
  • We only have 40 instances here, so going the blade center route seems less compelling.
  • An IBM x3650 should be able to manage about six instances (these aren't at peak load and can afford some occasional performance degradation).
  • Seven of these at 230W each burn about 14.1 megawatt-hours per year.
  • This assumes you use local storage. If you need a SAN, you'll have to add that into the power profile too.
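
The same sketch, extended to the ESX option using the host count and wattage estimated above (again, illustrative Python, not a sizing tool):

  import math

  HOURS_PER_YEAR = 24 * 365

  hosts = math.ceil(40 / 6)                            # ~6 instances per x3650 -> 7 hosts
  esx_mwh = hosts * 230 * HOURS_PER_YEAR / 1_000_000   # ~14.1 MWh/year
  zones_mwh = 2 * 200 * HOURS_PER_YEAR / 1_000_000     # ~3.5 MWh/year, from earlier
  print(f"ESX: {esx_mwh:.1f} MWh/year vs. Zones: {zones_mwh:.1f} MWh/year")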

One can say they burn 14 megawatt-hours per year instead of 63! But to me, burning 3.5 MWh is even better. Now, for those financially responsible types, I've only spoken to recurring operational costs. If you run the numbers on the initial capital investment, you'll see an even more significant savings by simply choosing the right tool for the job (between $80k and $100k by our internal calculations).

This isn't to say that you should never use VMware or a similar heavyweight virtualization technology. Those technologies afford you specific advantages (like the ability to run entirely different operating systems in each instance). You could also consider something slightly lighter-weight like Xen. But if you find that your virtualization requirements on Solaris will fit the Containers model (or that your Linux needs would be satisfied by OpenVZ), you stand to gain a lot. We only have 40 instances, and the choice saved us more than 10 megawatt-hours per year over the next best virtualization solution. Imagine if you had 1000.

These concepts are not likely to be foreign to any reader. Most people have considered virtualization approaches along with hardware replacement to reduce energy costs. But please, don't stop there.

Green Computing through Performance Optimization

When I look to virtualization technologies for consolidation, there is one requirement: a single machine must have enough horsepower to power more than a single virtual instance. At OmniTI we deal with some large Internet architectures that serve millions upon millions of people. The bottom line is, I can completely saturate any piece of hardware you give me. There is no opportunity for consolidation in many of these architectures. The awful thing is that I see people choose hardware that is more energy efficient and simply leave it at that. The logical conclusion everyone has arrived at is: "if I can get the same CPU cycles and I/O operations for fewer watts, I win!" Yes, you win. No, this is not the conclusion of anything. It is the beginning. I hope your ultimate goal is not to spend CPU cycles; it is to service users. The obvious progression from here is: "if I can serve the same number of users with fewer CPU cycles and I/O operations, I win!" Now we're getting somewhere. That statement starts with the end in mind. This is the land of performance optimization.
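
To make that shift in metrics concrete, here is a hypothetical sketch; every number in it is invented for illustration. The point is simply to measure users served per kilowatt-hour rather than raw capacity per watt, and to watch what a software-level speedup does to that number:

  def users_per_kwh(users_per_second, watts):
      """Users serviced per kilowatt-hour drawn by the serving hardware."""
      return (users_per_second * 3600) / (watts / 1000.0)

  # Same hypothetical box either way; only the software changes.
  baseline  = users_per_kwh(users_per_second=500, watts=400)
  optimized = users_per_kwh(users_per_second=1000, watts=400)   # software made 2x faster
  print(f"{optimized / baseline:.1f}x more users for the same energy")   # 2.0x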

I usually try to explain concepts through metaphors and analogies, but this multi-resolution efficiency concept was a hard one to translate. So hard, in fact, that I'm at a loss. Those who know me well will say: "Theo without a clever analogy at hand?! That's like Denis Leary without a vulgar rant." Alas, I'll just give some examples.

  • We increased both the functionality and the performance of core XSLT technologies for Friendster and enabled them to increase system performance by a factor of more than 2.5. That translates to 60% less hardware or 2.5 times as many users (the arithmetic is sketched after this list). Armed with that, they chose to enter China.
  • We developed a purpose-built content publishing system for National Geographic Magazine and were able to deploy an infrastructure of less than half the size (less than half the power) of the leading competitive offering. This architecture was able to sustain several prolonged front-page exposures on msn.com — delivering, at peak, as many as 3000 new visitors per second.
  • We developed the Message Systems MTA that helps the largest of the large ISPs handle incoming mail volume with as much as 80% reduction in infrastructure when replacing competing commercial incumbents and as much as 95% reduction when replacing open source incumbents.
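
The hardware arithmetic behind that first example is worth spelling out; this one-liner assumes load scales roughly linearly with capacity:

  speedup = 2.5                          # measured performance gain
  hardware_reduction = 1 - 1 / speedup   # 0.6 -> 60% less hardware for the same load
  extra_capacity = speedup               # or 2.5x as many users on the same hardware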

The goal is to get where you are going while spending less. Less of what? Less money, less power, less heat, less CPU cycles, less, less, less. Less of everything. Not only is it better for our planet, it's simply cheaper. Don't excessively or wastefully use resources. Be responsible: conserve.