Today I attended the Emulex launch of their OneConnect Universal Converged Network Adapter (UCNA) at SNW Europe, and I liked what I saw.
Most will know by now that I’m all for converged Ethernet unified fabrics. Still, I’m well aware that some of the vendor offerings are very “Generation 1” and have that version 1.0 feel to them. Today’s launch from Emulex honestly didn’t have that feel to it. In fact, the products shipping around CEE seem to be getting more mature by the day. So let me take a few minutes to discuss some of the product highlights…
At a high level, the OneConnect UCNA is a high-performance, single-chip CEE adapter.
Let’s qualify that statement…
High Performance: Sure, anybody can call their products, especially 10Gbps Ethernet adapters, “high performance”. However, Emulex can back the claim up: this product provides hardware offloads for TCP/IP, iSCSI and FCoE. This is marketed under the name vEngine (add a “v” to any technology and it automatically sounds cooler).
Basically this adapter is a workhorse. It does the protocol-related work for you, freeing up your CPU for other tasks. This makes it truly high performance, not just for FCoE but also for iSCSI and TCP/IP. This is great – especially in Hypervisor estates.
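To make that “protocol-related work” concrete, here’s a little sketch of my own (nothing to do with Emulex’s silicon) of the RFC 1071 Internet checksum – the one’s-complement arithmetic a software TCP/IP stack has to grind through for every single packet, and exactly the kind of per-packet work an offload engine moves into the ASIC:

```python
# Illustrative only: the RFC 1071 Internet checksum, computed in software.
# Without offload, the host CPU does this (plus segmentation, reassembly,
# etc.) for every packet; an offload engine does it in hardware instead.

def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words (RFC 1071)."""
    if len(data) % 2:               # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:              # fold the carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF          # one's complement of the folded sum

segment = b"abcdef"                 # stand-in for a packet payload
csum = internet_checksum(segment)
# A receiver verifies by checksumming data + checksum: the result is 0.
assert internet_checksum(segment + csum.to_bytes(2, "big")) == 0
```

Trivial in isolation – but multiply it by every segment on a saturated 10Gbps link and you can see why doing it in silicon matters.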
Single Chip: A single ASIC on the card does all of the above – no need for a dedicated ASIC per protocol. The ASIC also reinforces the high-performance claim, as ASICs are still very much the way to go for high-performance I/O adapters – so-called merchant silicon doesn’t quite cut the mustard.
At the time of writing, this is the only adapter on the market with all of the above hardware offloads on a single chip. It’s only fair to mention that the FCoE functionality will be released later this year – but as it’s already late October, that’s soon.
Pay-As-You-Go and future proofing
The above is all good, but a bit overkill if you don’t need it all just yet. Well… you can buy the adapter as a plain 10Gbps Ethernet adapter, without any of the additional hardware offloads, at the price point of a 10Gbps Ethernet adapter. Then in the future, if/when you require iSCSI offload or FCoE, you can easily unlock these features with a license.
Sounds almost perfect for the kind of staged deployments of CEE and FCoE that I think a lot of companies will adopt – buy future-proofed hardware now that keeps your options open. Deploy the UCNA as a 10Gbps Ethernet adapter today and be in a position to press ahead with FCoE in the future without having to rip and replace your I/O adapters! What’s not to like?
Remember that we already do this with our switches and storage arrays, and it works well. Why not apply the same model to I/O adapters?
This all assumes, of course, that the progressive licensing doesn’t make the overall cost more expensive! Judging from my tweets at SNW Europe today, this is the biggest concern among fellow tweeters.
Oh, and you can run software-initiated iSCSI over the adapter without licensing the iSCSI offload…
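For instance, with the common open-iscsi software initiator on Linux, a few lines of `/etc/iscsi/iscsid.conf` are all it takes to treat the card as a plain NIC – the credentials below are made-up placeholders, purely for illustration:

```
# /etc/iscsi/iscsid.conf -- open-iscsi software initiator settings.
# Values below are illustrative placeholders, not recommendations.

# Log back in to discovered targets automatically at boot
node.startup = automatic

# Authenticate sessions with CHAP (example credentials)
node.session.auth.authmethod = CHAP
node.session.auth.username = my-initiator
node.session.auth.password = my-secret
```

No offload license involved – the initiator runs entirely in software over the 10Gbps Ethernet port.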
All-in-all it seems very flexible to me.
NOTE: Of particular interest to me was the fact that the core feature set, and the base cost, of this adapter is 10Gbps Ethernet. This is very interesting when you consider that Emulex is traditionally a Fibre Channel company. Clearly Emulex are moving with the market here, recognising Ethernet as the dominant technology and building on it. Emulex also have people on IEEE 802.1 task groups such as DCB. Now that’s what I call not betting against Ethernet.
A must for the Virtual Data Center
In my last post regarding CEE I talked about the importance of CEE and its associated speeds and intelligence in the light of advances being made in Hypervisor based environments.
I would add to that – features and functions such as those on the Emulex OneConnect UCNA are equally vital…
As the number of VMs per CPU core climbs past 10, the I/O and networking components need to keep pace. What is the point of CPU technologies enabling huge numbers of VMs per core if your I/O subsystems can’t keep up!
I think most people can accept the need for FCoE offloads, as this is the norm in FC environments – and FCoE is Fibre Channel, just over a different Layer 2. But offloads for iSCSI also become more and more important for iSCSI shops as they deploy more and more VMs per physical server.
RuptureMonkey opinion: Protocol offloads are absolutely vital to Virtual Data Centers.
There is no doubt that the I/O world is changing in front of our very eyes. Be careful not to blink as you might miss something important. Technologies such as CEE and protocol offloads (as well as many others) are key.
Other related stuff
Also mentioned in the launch were technology agreements with IBM regarding 10Gbps Ethernet NICs and 16Gbps native FC HBAs, both for IBM Power Systems – the 16Gbps FC design win being an industry first! Technology is marching on, and Emulex are certainly up at the front.
This is on top of the recent announcement around the IBM Virtual Fabric for BladeCenter – where Emulex, BLADE Network Technologies and IBM have collaborated to bring to market a very good blade-based I/O solution, comparable to, and possibly superior to, HP Virtual Connect Flex-10. I saw IBM Virtual Fabric demonstrated in a VMware environment today and it looks like a great technology. The guys were also kind enough to pull a blade out and let me see inside it.
Finally, Emulex also announced OneCommand Manager, which replaces HBAnyware – but I haven’t seen it yet, so I can’t comment.
Random info: Apparently World of Warcraft has ~15,000 blade servers. Cool!