CEE the future

By Nigel Poulton | October 23, 2009

Wherever you look these days there’s no shortage of talk about FCoE. However, I sometimes think a little too much attention is given to FCoE itself, and people overlook the underlying DCB/CEE Enhanced Ethernet. In my opinion, this is where much of the real work and enabling is happening. So let’s spend a couple of minutes talking about CEE…
 

NOTE: I’m using the term CEE. I should probably use DCB, but I’m not. For disambiguation of the interchangeable terms CEE, DCB and DCE, see here.
 
Half baked

First up, and in the spirit of honesty, I have to mention that the DCB/CEE-related standards are not yet fully baked. Formal ratification is expected in early-to-mid 2010. However, enough is defined for major vendors to be shipping the technology. And more than merely shipping it, some are betting large chunks of their business on it.
 
 
CEE 10 second overview

Really quickly, CEE is currently 10Gbps full-duplex lossless Ethernet with dynamic prioritisation/bandwidth allocation and some other goodies.

Specifically –

  • Priority-based Flow Control (PFC, 802.1Qbb)
  • Enhanced Transmission Selection (ETS, 802.1Qaz)
  • Congestion Notification (802.1Qau)

If you already know the above technologies, you will know how compelling they are. If you don’t know them, the sketch below might help, but trust me, they are good.
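
For the “lossless” part, here’s a minimal Python sketch of the idea behind Priority-based Flow Control. It’s a toy model of my own, not anything lifted from the 802.1Qbb spec or a vendor implementation, but it shows the key point: the pause is signalled per priority, so throttling a congested storage class doesn’t stop the LAN traffic sharing the same wire.

```python
# Toy model of Priority-based Flow Control (802.1Qbb). Purely illustrative,
# not from the spec or any vendor's code. The point: PAUSE is signalled per
# priority, so a congested storage class holds back while LAN traffic on the
# same wire carries on, and nothing gets dropped.

PAUSE_THRESHOLD = 6   # queued frames before the receiver signals PAUSE
QUEUE_DEPTH = 10      # receive buffer per priority; the threshold leaves
                      # headroom for frames already in flight when PAUSE goes out

class PriorityQueue:
    def __init__(self):
        self.queued = 0
        self.paused = False   # has the receiver asked this priority to pause?
        self.dropped = 0      # what plain (non-PFC) Ethernet would have lost

    def receive(self, frames):
        self.queued += frames
        if self.queued > QUEUE_DEPTH:                  # buffer overflow
            self.dropped += self.queued - QUEUE_DEPTH
            self.queued = QUEUE_DEPTH
        self.paused = self.queued >= PAUSE_THRESHOLD

    def drain(self, frames):
        self.queued = max(0, self.queued - frames)
        self.paused = self.queued >= PAUSE_THRESHOLD

def transmit(queues, offered):
    """Sender honours the per-priority pause state before putting frames on the wire."""
    for prio, frames in offered.items():
        if not queues[prio].paused:    # only the paused priority holds back
            queues[prio].receive(frames)

# "FCoE" has a slow consumer; "LAN" drains as fast as it arrives.
queues = {"LAN": PriorityQueue(), "FCoE": PriorityQueue()}
for _ in range(6):
    transmit(queues, {"LAN": 2, "FCoE": 4})
    queues["LAN"].drain(2)     # LAN keeps up
    queues["FCoE"].drain(1)    # storage receiver falls behind, PAUSE kicks in

for prio, q in queues.items():
    print(f"{prio}: queued={q.queued} paused={q.paused} dropped={q.dropped}")
```

Run it and both classes finish with dropped=0; take the pause check out of transmit() and the dropped counter starts climbing, which is exactly what Fibre Channel traffic can’t tolerate.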
 
 
The Grand Key

CEE is a key technology of the future. For starters, it is absolutely key to FCoE. In fact, without CEE there would be no FCoE, or at least no compelling FCoE: Fibre Channel assumes a lossless transport, and plain Ethernet happily drops frames under congestion. But CEE is not just for FCoE; its features and benefits can and will increasingly be leveraged by many other areas of IT infrastructure. iSCSI as well as NFS over UDP are just a couple that are commonly talked about.

Point being, FCoE is only one of the many new options made possible by CEE.
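
If you haven’t dug into FCoE itself yet, here’s a quick toy sketch of the encapsulation idea. The field layout is heavily simplified (the real encapsulation, defined in FC-BB-5, adds a version field, reserved bytes and SOF/EOF delimiters), but it shows why CEE matters so much: you are putting whole Fibre Channel frames straight into Ethernet frames, and FC assumes the transport underneath it will not drop them.

```python
# Toy sketch of FCoE encapsulation: a complete Fibre Channel frame carried
# inside an Ethernet frame. Heavily simplified, for illustration only.

import struct

FCOE_ETHERTYPE = 0x8906        # Ethertype assigned to FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame in a (simplified) FCoE Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # The real encapsulation also carries an FCoE version field, reserved
    # bytes and SOF/EOF delimiters around the FC frame; omitted here.
    return eth_header + fc_frame

# An FC frame tops out at a little over 2KB including headers, so the Ethernet
# side needs "baby jumbo" frames of roughly 2.5KB, and it must not drop them.
fake_fc_frame = bytes(2148)    # placeholder payload, not a real FC frame
wire_frame = encapsulate(bytes.fromhex("0efc00000001"),   # example FCoE-style MAC
                         bytes.fromhex("001122334455"),
                         fake_fc_frame)
print(f"{len(wire_frame)} bytes on the wire")
```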
 
 
Smarter AND faster

Yes, there is non-CEE 10Gbps Ethernet. For the purposes of this post I’m going to refer to it as “Dumb 10Gbps Ethernet”. This Dumb 10Gbps Ethernet is 10Gbps Ethernet without the goodness of CEE that we just mentioned above. Sure, it’s cheaper than the CEE version. But we all know that cheaper usually isn’t better. Pay peanuts, get monkeys. Trust me, in the long run you will want CEE…

Not convinced?  Let me draw a parallel that I hope will help create an “Aha” moment –
 

Just like processor technology, Ethernet can and will get faster and faster and faster. But, as with processor technology, there comes an inflection point where getting faster and faster, but not smarter and smarter, becomes almost pointless. The change in focus needs to be towards smarter and not just faster.

Ask yourself the following question – what are the real game changers and compelling aspects of Intel’s current raft of “Nehalem” processors? Is it the GHz? Or is it the built-in virtualisation intelligence and all the associated benefits (I’m thinking Intel VT…)?

The same goes for Ethernet: speed is not everything, intelligence matters too. CEE brings both speed and intelligence to the network.

Ask hypervisor architects and admins if they would like to go back to the old days of non-hypervisor-aware CPUs. I’m pretty sure they will tell you where to go. The same goes, and will go, for people deploying and using CEE. Once bitten, forever smitten.
 

RupturedMonkey statement: “CEE brings intelligence to the network.”

 
 
Hubs versus Switches

In my opinion, the whole Dumb 10Gbps Ethernet versus CEE thing smacks of the old Hubs vs. Switches debate of days gone by.

Is there anybody wishing they still deployed hubs at the centre of their networks?  No.

Point being… CEE is here and it’s changing the game.
 
 
Cable once and most of most

All of a sudden the possibility of cabling once is a reality. CEE has the capability to run most, if not all, of most companies’ network traffic over a single cable. But more than that – over a single PCIe card and to a single edge switch port. Who wouldn’t want that?

But there’s even more – it will simplify changes and network management, as well as bring down the cost of power, cooling and all of that jazz too!
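
To put some (entirely made-up) numbers on the cable-once idea, here’s a small Python sketch of how ETS-style bandwidth sharing plays out on a single converged 10Gbps port: each traffic class is guaranteed a minimum slice, and anything a class isn’t using gets handed to the classes that want more. The class names and percentages are illustrative examples of mine, not a recommended profile.

```python
# Illustrative ETS-style (802.1Qaz) bandwidth sharing on one converged 10Gbps link.
# Class names and percentages are made-up examples for this sketch.

LINK_GBPS = 10.0

ets_shares = {"FCoE storage": 50, "LAN": 30, "Management": 20}     # guaranteed minimums (%)
demand     = {"FCoE storage": 3.0, "LAN": 6.0, "Management": 0.5}  # current offered load (Gbps)

def allocate(shares, demand, link=LINK_GBPS):
    """Give each class its guaranteed share, then redistribute unused capacity
    to classes that still have unmet demand, in proportion to their shares."""
    alloc = {cls: min(demand[cls], link * pct / 100.0) for cls, pct in shares.items()}
    hungry = {cls: demand[cls] - alloc[cls] for cls in shares if demand[cls] > alloc[cls]}
    spare = link - sum(alloc.values())
    while spare > 1e-9 and hungry:
        weight = sum(shares[c] for c in hungry)
        for cls in list(hungry):
            extra = min(hungry[cls], spare * shares[cls] / weight)
            alloc[cls] += extra
            hungry[cls] -= extra
            if hungry[cls] <= 1e-9:
                del hungry[cls]
        spare = link - sum(alloc.values())
    return alloc

for cls, gbps in allocate(ets_shares, demand).items():
    print(f"{cls:14s} guaranteed {ets_shares[cls]:2d}% -> gets {gbps:.1f} Gbps")
```

The upshot is that storage gets a guaranteed floor whenever it bursts, but while it is quiet the LAN class happily borrows the spare capacity. That is what makes carrying everything down one cable, one CNA and one edge switch port palatable.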
 
 
Unabridged opinions and conclusions

So if you didn’t know previously, you know now – I’m liking the look of this whole converged network / unified fabric thing. I see CEE as an essential building block of every modern Data Centre!

So with that, let me finish with a word or two to the naysayers and FUD-slingers –
 

Do yourselves a favour and don’t waste your time and energy trying to slow the unstoppable forward march of technology. Believe me, it will roll right over you like a steamroller over a cockroach, a rat, or something equally tiny and insignificant 😛 Where would Intel, AMD or any of the major server vendors be if they had resisted the march of VMware? After all, they would all ship more units if every server image required dedicated hardware… Short answer is, they would be in a world of hurt.

RupturedMonkey advice: Don’t get left behind… or rolled over by a steamroller.

It’s a brave new world out there, and we as IT pros, as well as the enabling technologies, need to move with the times. CEE is doing just that – moving things forward.

By all means, choose to limp forward with bog-standard 10Gbps Ethernet sans the goodies of DCB/CEE. If you do, then good luck to you, but don’t look on enviously as the rest of us race forward into the light. 😉

Nigel

Please feel free to share your thoughts below. You can also follow me on Twitter @nigelpoulton.  I only talk about storage and technology and the conversation is often very interesting.

7 thoughts on “CEE the future”

  1. Jon Toor

    Nigel,
    I totally agree with your premise.
    "FCoE vs whatever" is not the right debate because it does not get at the key questions. As you point out,
    "The change in focus needs to be towards smarter and not just faster."
    To me, that means discussing the management capabilities of the different solutions, and how they enable smarter, more agile control over the infrastructure.
    Debating the merits of a protocol is fine, but it doesn’t answer fundamental questions such as:

    Is there a single point of mgmt for I/O across all servers?
    How easy is it to deploy and manage redundant I/O?
    Can I quickly migrate I/O to another server if needed?
    How do I converge connectivity yet maintain physical isolation for the networks that require that?
    Is QoS simple to establish (and then to maintain as my environment changes)?

    These questions are simple, but they’re the kinds of questions that impact day-to-day operations.
    And they get at your point. Solutions need to be smarter, not just faster. Because in the end, "smarter" is what makes it easier to be more efficient. Which, after all, is the whole point of virtualization, right?

  2. Pingback: blogs.rupturedmonkey.com » Blog Archive » Emulex UCNA at SNW Europe

  3. stephen2615

    I just discovered that HP don’t support Boot From SAN with FCoE. Anyone know when that’s likely? It makes no difference which switch technology you use (Brocade or Cisco).

  4. Nigel Poulton

    stephen

    what HP configuration do you have in mind?  Is this using VC Flex-10 on c7000 BladeSystem?

    I know HP appear to be taking the "steady as she goes" approach with FCoE.

    If you lay out your config and what your gripe is, I will pitch in with thoughts and don’t mind pinging people at HP for opinions.

  5. Pingback: blogs.rupturedmonkey.com » Blog Archive

  6. stephen2615

    Hi Nigel,
    Sorry for not getting back, but I missed your reply. We have lots of C7000s running normal stuff, mostly SAN pass-through to edge switches. Some are running NPIV and some are real core/edge. Some enclosures have 9124e’s.
     
    Our network admin won’t consider VC for some reason he refuses to give. Hmmm… I could give him something to consider…
     
    HP have made it absolutely clear (so far) that FCoE is not part of the blade infrastructure unless you put a CNA into a blade slot that can take an HBA. I want our server ops people to come up with an architecture that I can agree on, but as per normal, no decision is the best one, as you can’t make the wrong decision.
     
    Things like virtualised blades in the enclosure get messy in terms of which mezz slots to use for what. Mezz slots 2 and 3 are the best (in 680s), but our people insist that 10GbE is the way forward and I have to use slot 1 due to port mapping. Blades and idiots can be a bad mix.
     
    Anyway, it’s all hearsay until someone makes a decision on BFS. If we go BFS, the SAN will go ballistic and I can’t even consider the size. I am getting a bit keen on VC now that they have 24-port 8Gbps FC.

  7. Wooch

    Nigel, interesting stuff as always. But your advocacy for CEE raises a question: if CEE is the wave of the future, might it become a replacement for PCIe?
    It's fast, easily managed, relatively cheap, etc…

    How about a PCIe vs CEE knock-down here? Which is really better?
