John the plumber?

March 20, 2009

After Cisco’s UCS announcement, the media fight with HP is heating up; it is getting an election-time feel. HP asked in a PR piece that was mentioned here whether you would “let a plumber build your house”, meaning whether a mere networking company could be responsible for the whole data center. So John Chambers is now the plumber?

Greg Ferro pointed out that it is really not HP who strikes back, but Cisco. Over the last years, HP ProCurve switches have gained market share. Personally, I have seen many brand-new VMware environments that were sold by an HP team and included HP blades and HP ProCurve switches.

In money terms, Cisco has more to lose if HP challenges them in the switch market. Leaving all questions of technology and competence aside, switches are still a high-margin business for Cisco, while servers are low-margin for everyone (though Sun may not have heard the news). A typical Cisco 48-port Gb switch starts around $6k; a similar HP ProCurve switch costs around $3k. In both cases we are talking about highly commoditized technology. A price premium of this order of magnitude just does not fly in the server market.

But Cisco has more to win, too, simply because the server market is a lot larger. To make servers profitable enough by Cisco standards, they have to completely change the market structure. And UCS clearly is designed to do just that. Cisco is not really entering the server market; they are trying to supersede it with a new “unified compute market”.

If you think Cisco cannot move into new markets, read what was written when Cisco started to push their voice products. “They have to sell to a different set of people”, “Voice is different” and “Lucent, Nortel and Alcatel have been doing voice for 100 years”. Have you looked at Lucent/Alcatel and Nortel stock prices recently? The real question will not be how different the markets and technologies are, but how well HP and IBM execute – and they are a completely different calibre than the traditional phone people (with Lucent-bred CEO Fiorina out of the picture, that is).

In Hoff’s metaphor, Cisco is Brock Lesnar, the wrestling star who moved to Mixed Martial Arts (MMA). First he lost because he was only a superb wrestler (i.e. plumber), but as he picked up the other fighting styles he won the UFC (Ultimate Fighting Championship) title. A nice comparison that makes a very valid point, as Cisco has proven their ability to learn new tricks. But I think Cisco is not trying to become UFC champion (i.e. beat HP and IBM at their game); they are trying to be the next UFC. It is about starting a new, bigger league of their own.


What are the other networking vendors doing about virtualization?

February 14, 2009

This is only partly a rhetorical question; I actually would like to understand better what they are doing. But impressions go a long way. Everybody in the virtualization space talks about Cisco when talking about networking (and there has been a lot of talk in the last year). At VMworld in September, Cisco was all over the place (with Nexus 1000V and vFrame, for example), but the other networking players were noticeably absent. OK, there was Check Point with a pretty impressive announcement of a virtual version of their firewall technology, but I was actively looking for the others and there was not much to be found. So I am trying to keep track. This post starts with Juniper; the next one will be on Force10 (for no other reason than the fact that both had announcements this week).

In December, Juniper came out with a VMware Implementation Guide, seven months after Cisco came out with their guide (jointly published with VMware). But then, Juniper only started to ship their first switches around that time. This week Juniper appeared prominently in an IBM cloud computing announcement (sorry for throwing virtualization and cloud together here without further explanation, but I think they should be thrown together). An interesting announcement – as far as IBM was concerned. Juniper featured in the context of hybrid clouds (connecting private and public clouds). Extremely interesting from a networking perspective, and much closer to Juniper’s routing roots, since their solution seems to be nothing but old-fashioned MPLS. Probably not different from MPLS solutions that Cisco could provide, but IBM is a strong partner, and the link-up is another indication that IBM and the other large data center players are looking for alternatives to Cisco; they can neither be happy about nor surprised by Cisco’s recent server announcements.

The verdict: Juniper is behind Cisco on many fronts when it comes to VMware-style virtualization, but mostly because they are new to the switch business. The speed at which they are catching up is fairly impressive.

Minor confusion about the release date of Cisco’s Nexus 1000V virtual switch

September 23, 2008

Colin McNamara’s blog is usually excellent, which makes it all the more annoying that his post titled “Cisco releases Nexus 1000v virtual switch for VMware” created a lot of confusion by not distinguishing between the terms “announces” and “releases”, which mean entirely different things in marketing speak …

Just for the record, Cisco’s announcement states: “The Cisco Nexus 1000V distributed virtual software switch with VN-Link capabilities supported in a VMware Infrastructure environment is expected to be generally available to customers in the first half of 2009.” Most observers agree that this means the release date will actually be June 30, 2009 at the earliest. Btw, the “V” in “1000V” is capitalized.

If you follow the discussion, it is, of course, impossible to release an ESX-integrated switch until VMware releases the next version of their virtual infrastructure. The current VMware version just does not have the hooks to plug in a switch that replaces the built-in vSwitch.

Except for the word “releases” in the title, Colin’s post is highly recommended reading. What I like best is his analysis of the lack of people who can do both networking and virtualization, and the passing remarks about all you have to do to get network configurations right in the virtual server world. And, most of all, the post is fun to read!


It’s really hybrid virtualization security

September 11, 2008

Finally I have some time to write about The Four Horsemen of the Apocalypse, the BlackHat version of Chris Hoff’s work in progress by the same name. Since I have not actually heard the talk, I am only relying on the published presentation, which gives me a lot of creative freedom …

First, this is probably the best overall tour of security in virtualized environments I have seen. Obviously it is not a technical paper but rather a (very necessary) propaganda instrument. The “Guidelines” at the end (pages 159-175 of the version linked above) are a nice hands-on summary of where we should go. That the propaganda is necessary is confirmed on a daily basis by conversations with VMware users. It is not uncommon to operate thousands of VMs in a single LAN without any separation.

Based on my fairly large sample, it basically boils down to whether networking and security specialists are involved in setting up the virtualized environment. More often than not, neither specialty is on board.

What I liked best in the technical area is the classification of security approaches (on pages 78-118, my numbering):

  1. No security (the dominant reality, see above)
  2. External security
  3. Virtual security appliances (VSAs)
  4. APIs (basically what will be in VMsafe)

This is mostly a tour that clarifies what can be done, spiced with a heavy dose of VSA-skepticism. Given that I (among other things) build VSAs for a living, it is a bit surprising that I mostly agree with him: VSAs actually have a fairly limited scope – and a number of problems.

My take is that we will run hybrid environments that combine all of the above for quite a while, with (2) the most important for now and (4) catching up (as long as VMsafe is not released, there are not many API-based options). (1) is, of course, unsatisfying but a pretty dominant reality, and (3) is really only completing the picture (the heavily touted cases such as securing VM-to-VM traffic are mostly of theoretical interest).

No VMotion around virtual or physical firewalls, please

August 12, 2008

Chris Hoff’s BlackHat presentation titled “The Four Horsemen of the Virtualization Apocalypse” was described here by Ellen Messmer, much to Chris’ dislike. Her spin may have been slightly too negative, but in any case she reports interesting points, among them Chris’ comment that it just won’t work if you VMotion a virtual firewall (VFW). While Chris is right in general, moving a VFW will in fact work in some simple corner cases, basically when both locations are indistinguishable in networking terms (same subnet, same VLANs, no NAT on the VFW itself etc.). So you can VMotion a firewall, but just a little bit … if you are not so lucky, all hell breaks loose.
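To make the corner case concrete, here is a minimal Python sketch of the equivalence test; the host records and field names are invented for illustration, and a real check would have to query vCenter and the ESX hosts:

```python
# Sketch: the "safe corner case" test for VMotioning a virtual firewall.
# Both locations must be indistinguishable in networking terms.

def same_network_context(src, dst):
    """Same subnet, same VLANs, and no NAT on the VFW itself."""
    return (src["subnet"] == dst["subnet"]
            and set(src["vlans"]) == set(dst["vlans"])
            and not src["vfw_does_nat"])

host_a = {"subnet": "10.1.0.0/24", "vlans": [10, 20], "vfw_does_nat": False}
host_b = {"subnet": "10.1.0.0/24", "vlans": [20, 10], "vfw_does_nat": False}
host_c = {"subnet": "10.2.0.0/24", "vlans": [10, 20], "vfw_does_nat": False}

print(same_network_context(host_a, host_b))  # True: VMotion happens to work
print(same_network_context(host_a, host_c))  # False: all hell breaks loose
```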

What’s more worrying is that all the same problems will happen if you VMotion any VM in the presence of any firewall (virtual or physical): either you VMotion between two locations that are identical in terms of routing, VLANs etc., or you are in trouble. Any relative motion between firewall and VM will create the same troubles.

The really revealing bit about Chris’ comment is that most VMware deployments are still so simple that all VMotion happens in those corner cases. So don’t get too close to your firewall when you do VMotion.

PS: I have, of course, my own agenda here.

Virtual machines and the virtual DMZ

June 5, 2008

An article by Edward Haletky made me think about ESX and the DMZ in general. The schematic picture is simple: services that need to be accessible from the Internet are in the DMZ (the demilitarized zone between the internal enterprise network and the Internet, in case you wonder), all others are in the internal network. In between, we place a 3-port firewall, one port for the Internet, one for the DMZ, and one for the internal network.

In reality it’s, of course, never that simple. Let’s ignore all the complexities of real-life physical networks for a moment and think about virtualization in the most simplistic case imaginable. We run public web servers, mail servers, databases, and application servers inside VMs on ESX servers. Obviously public web and mail servers must be in the DMZ; databases must not. So what do we do? One option I have seen is separate ESX servers. Not pretty, because you tie down VMs to a small set of hosts in the DMZ. And for the service console, we have to set up a separate network anyway, because we do not want it in the DMZ. The second option is dedicated DMZ NICs. Somewhat better than option 1, it means that certain network interface cards are connected to the DMZ; we can share VM hosts between DMZ and internal guests and do the separation on the virtual network (on vSwitches or VLANs in the case of VMware ESX). Still fairly inflexible.

The case for the virtual alternative to options 1 and 2 is pretty straightforward. Going virtual means combining vSwitches, VLANs and virtual firewalls to establish a virtual DMZ (VDMZ). Putting a VM in a VDMZ is a clear and simple concept: it means putting the VM on a VLAN that is connected to the DMZ and shielding it with a virtual firewall inside the ESX server.

Dedicating physical NICs for the DMZ is wasteful, both in terms of the cost of the NICs and the lost flexibility. Either I have to dedicate two NICs (assuming that I need redundancy) for the DMZ on every single server, or I have to limit the servers that can host DMZ VMs – which is awfully close to option 1.
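A quick back-of-the-envelope illustration of the NIC waste; all figures are assumptions for illustration, not quotes from any price list:

```python
# Cost of dedicating physical DMZ NICs (option 2), redundant pair per host.
servers = 40            # ESX hosts that should be able to run DMZ VMs
nics_per_server = 2     # dedicated, redundant DMZ pair on each host
cost_per_nic = 100      # rough assumed price of a Gb NIC, in dollars

print(servers * nics_per_server * cost_per_nic)  # 8000 dollars of dedicated hardware
```

And that is before counting the switch ports those NICs consume, or the flexibility lost by pinning DMZ capacity to specific hosts.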

Talking to many VMware users, there are still some concerns to overcome. Virtual DMZs are sometimes perceived to be less secure. I cannot share the sentiment, having seen too many misconfigured physical firewalls and too many untraceable wires connecting segments that should not be connected, but in the end, the practices from physical networks will carry over. As mentioned in the beginning, real-life physical networks consist of multiple DMZs, mostly separated by VLANs. So it’s certain that the much more virtualization-minded VMware crowd will go for virtual DMZs, too.

Two Quick Takes on the Business Case for Virtualization

May 23, 2008

CFOs love virtualization! “Replace ten servers with one” – how easy can a business case be? CFOs have seen too many IT business cases that do not make any sense; they appreciate one that can be summarized in five words.

But virtualization is really not primarily about hardware cost. Looking at VMware’s standard business case, two things stand out. On the benefits side, it is the huge impact of improved availability. With virtualized servers, you do not have to bring down the application to replace a fan in the server; you move the application to another server (with VMotion, availability remains at 100%). Without virtualization, you have to schedule downtime and find out who the users are in order to inform them. Big impact, but most would consider it “funny money”: unlike the hard green dollars of a server that is not purchased, the savings may never make it to the bottom line.

The second big impact factor is on the cost side. The biggest driver is not the pricey VMware licensing, but setup and management. It’s interesting not only because it’s a big dollar amount, but also because it has a large fixed-cost component. Getting a virtualized environment up and running and operating it costs money. And running a pool of 10 ESX servers is not significantly cheaper than running a pool of 50.

Digging deeper into VMware’s standard business case is very instructive. I really like a sensitivity analysis. The point is to find out which of the dozens of parameters that go into the case have the biggest impact on what I care about (typically the return on investment, ROI). So I individually change every input parameter by 10% and see how it affects ROI. Some parameters are more “sensitive” than others; you guessed correctly, that’s why it’s called sensitivity analysis. But that is for the next post …
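The mechanics, at least, fit in a few lines of Python. The business-case model below is a toy stand-in, not VMware’s actual spreadsheet, and all parameter names and figures are invented for illustration:

```python
# Sensitivity analysis: nudge each input parameter by 10% and see
# how much the ROI moves relative to the base case.

def roi(params):
    savings = (params["servers_removed"] * params["cost_per_server"]
               + params["admin_hours_saved"] * params["hourly_rate"])
    investment = params["license_cost"] + params["setup_cost"]
    return (savings - investment) / investment

base = {
    "servers_removed": 40, "cost_per_server": 3000,
    "admin_hours_saved": 500, "hourly_rate": 80,
    "license_cost": 50000, "setup_cost": 30000,
}

base_roi = roi(base)
for name in base:
    tweaked = dict(base, **{name: base[name] * 1.1})
    print(f"{name:>18}: ROI moves by {roi(tweaked) - base_roi:+.3f}")
```

The parameters whose 10% nudge moves ROI the most are the “sensitive” ones worth scrutinizing.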

IDC Virtualization Forum, Part III

May 9, 2008

Part I of this post was about the virtualization market, part II about technical vision, part III is about an upcoming product. HP presented Insight Dynamics as their multi-hypervisor virtualization solution. A vendor-specific management solution is not inherently exciting; what makes it interesting is how HP tries to counter the threat that VMware might further commoditize server hardware.

In terms of features, many startups have more to offer, but this is about the fight over control of the data center. Here are the differentiators that I could see.

  1. Multi-hypervisor support: HP-ID manages VMware, Xen and Hyper-V. Today it is only VMware that really matters, but the differentiation vs. VMware is evident. No differentiation vs. most startups.
  2. Hybrid management: HP-ID manages both VMs and physical servers and can handle V2P and P2V migration on the fly. This is actually fairly unique, and it created a huge draw among attendees of the forum.
  3. Deep hardware support: HP-ID supports HP blades and various server models at a level that vendor-neutral products will never reach.

From a technology perspective it’s nothing too great, but operations people loved it for good reasons. While nobody is actually using Xen or Hyper-V, at least the latter is considered pretty much unavoidable. The HP lunch table on hybrid management was clearly beating all other topics. Next to nobody even dreams of a fully virtualized environment, and there is clearly unmet demand for management software that crosses between virtual and physical. Practitioners uniformly praise the role of blade technology for dynamic data centers. One friendly user spoke about a standard rack that had only 12 cables running to the LAN and SAN switches; with standard 2U rack mounts the equivalent would be 120 cables. A factor of ten is always cool, and cabling is extremely manual and can be error-prone depending on your setup.

The bottom line: while technically a mostly unimpressive, hardware-specific solution, HP-ID addresses the practical pain points.

IDC Virtualization Forum, Part II

April 30, 2008

As mentioned in the previous post, Simon Crosby, the CTO of Citrix’s server business, was highlight number two of the IDC Forum in San Francisco. Not only because he was the keynote speaker (which in itself is interesting, because it means that Citrix is spending fairly heavily on sponsoring an event that for years used to be driven almost entirely by VMware). Of course he was as entertaining as he usually is and frequently jabbed at VMware. But what was really interesting was the vision for application virtualization (AV).

Virtualization as practiced today reduces the number of physical servers to maintain, but the number of operating system images remains the same, and they are the real cost driver. AV combines applications and operating systems on the fly. Your word processing program is never statically installed as a copy on your desktop machine but is merged into the OS when the user demands it (the merge happens somewhere in a data center, details were not provided). Therefore, the number of different OS + application combinations does not explode and there is only one OS image to maintain.

The basic economics of linear vs. combinatorial complexity is very compelling. It will be easy to write the business case, but whether that business case beats a SaaS story is a whole different question. For now, the real issue will be timing. Sorry, but what he had to say did not seem very real. Then again, after talking to people at the show, everyone was using VMware and thought AV was very interesting …
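To put the linear-vs-combinatorial point in numbers, a back-of-the-envelope sketch; the image counts are invented for illustration:

```python
# With static installs, every OS + application combination is an image
# to maintain; with application virtualization, OS images and application
# packages are maintained separately and merged on demand.

os_images = 3        # e.g. two Windows versions and one Linux build
applications = 50    # separately packaged applications

static_combinations = os_images * applications   # one image per OS/app pair
av_artifacts = os_images + applications          # merged on the fly

print(static_combinations)  # 150 images to patch and test
print(av_artifacts)         # 53 artifacts to maintain
```

The gap only widens if you count combinations of multiple applications per OS image, which is the realistic case.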

As always, I care about the networking angle. The way I understand it, the delivery mechanism for AV across the network is basically Citrix, and every mouse click will go over the network. The Ajax model of Google Apps and many other SaaS approaches, by contrast, has clear advantages in terms of responsiveness, since mouse clicks are handled inside the browser. AV is better at delivering standard desktop applications, and there is a lot of commercial potential for Citrix in the AV story, given their longstanding relationship with Microsoft.

My to-do is to spend some time understanding the implications for network virtualization. Also, there is highlight number three to follow: HP’s new virtualization management software.

Highlights from the IDC Virtualization Forum in San Francisco

April 11, 2008

The Virtualization Forum really had three highlights for me.

Number one was John Humphreys’ introduction. He is IDC’s primary virtualization analyst and gave a summary of the state of the art.

  • His magic number for the penetration of virtualization in the data center is 20%. This seems high compared to the 5% I have heard from Gartner. My read is that IDC talks about a percentage of the servers run by enterprises that use virtualization, while Gartner’s number is a percentage of all servers.
  • Mid-sized companies are really in the lead. Large enterprises have to overcome too many bureaucratic hurdles and for small companies the benefits are not tangible enough.
  • Disaster Recovery is a big driver, particularly for mid-sized players who do not have a secondary data center. For them, virtualization is the first step toward full blown DR capabilities.

He had lots of other interesting things to say, but these stick out because of the networking angle that is so dear to my heart. And there is one angle for each of John’s three topics mentioned above:

  • The 20% penetration confirms my own anecdotal evidence that virtualization deployments are about to hit the point where networking becomes a problem. Even at a decent-sized enterprise, 20% means a few hundred servers at most. That’s a number that can still be handled on one LAN. If you follow Cisco’s recommendation, 100 would be the limit, but I commonly see 200 on one LAN (fewer if we are talking all Windows, since Windows is a lot more chatty than Linux or Solaris).
  • The networking angle of John’s insight regarding mid-sized enterprises is this: if you have 20 or 30 routers, switches or firewalls, your environment is still pretty stable. I know plenty of mid-sized IT environments where people basically set up their routers and switches and only touch them again when something breaks. It’s only firewalls and VPNs that require constant attention. This changes dramatically when VMware comes in. If you virtualize a single rack that has one or two switches, you end up with at least one or two virtual switches per VMware hypervisor. So (a) you have more switches in that one rack than in your whole organization, and (b) since the whole point of virtualization is flexibility, the network changes all the time.
  • The disaster recovery experience is confirmed by every single virtualization environment I have seen. It is unavoidable that once virtualization is up and running, somebody will say “Wow, now we can replicate the whole setup easily wherever we have a bunch of ESX servers” (here in Silicon Valley the sentence typically includes some casual statistics about earthquake probability). The networking angle comes into play when the setup is actually replicated: the ESX servers are easily copied, everything is up and running in minutes and, oops, our database server is directly connected to the Internet … The one thing that is not easy to copy is the static configuration of switches, routers and firewalls.
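The switch-count explosion from the second point can be put in numbers with a quick sketch; the host and switch counts are illustrative assumptions:

```python
# Virtualize a single rack and count switches.
physical_switches_in_rack = 2
hypervisors_in_rack = 16          # e.g. 16 ESX hosts in the rack
vswitches_per_hypervisor = 2      # at least one or two per host

virtual_switches = hypervisors_in_rack * vswitches_per_hypervisor
print(virtual_switches)  # 32 virtual switches in one rack
```

That single rack now holds more switches than the 20 or 30 routers, switches and firewalls of the whole mid-sized organization it sits in.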

I will follow up with highlights number two (the always entertaining Simon Crosby of XenSource and now Citrix) and number three (a new software product presented by HP) in my next posts.