FCoE: Cables and the likes….

By | September 24, 2009

Ewan Leith (@ewantoo) asked if I could whip something up re cables and infrastructure required for FCoE (DCE/CEE).  So here goes. 

I think there has been a lively discussion on FCoE over the last 24-48 hours and it would be good to keep it going… Before I start I’d be interested in any feedback, updates, comments and the likes – I haven’t had a chance to properly research this and my schedule for the next few days makes it difficult.

Anyway… in order to accommodate the enhancements and improvements that come as part of what I have been calling “Enhanced Ethernet” (DCE/CEE), as well as facilitating the goal of consolidation to a single unified fabric, there are obviously some physical infrastructure changes required.  In the next few paragraphs we will discuss some of them.
 

NOTE:  When I say Enhanced Ethernet I’m referring to 10GigE, lossless, low latency, ETS, congestion notification as seen in DCE and CEE…

First up, cables!

Unfortunately our existing install base of Cat5 and Cat6 unshielded twisted pair and the likes, for the most part, does not meet the demands of the unified fabric.  It doesn’t cut it when it comes to latency, bit error rate etc.  In order to deploy and use Enhanced Ethernet, and therefore FCoE, we need to lay shiny new cables.  No problem though, that’s cheap and easy, right?

<cough cough>

In order to keep with some of the major goals of convergence (driving down costs and power consumption), we need a cable and transceiver combination with low power demands and a good price point.  The predominant combination is a passive or active twinaxial copper cable with SFP+ transceivers – sometimes referred to as “SFP+ Copper”.

Twinaxial cables, usually referred to as Twinax or twin-ax, are named as such because they have a dual core (two inner conductors).  The specification generally allows for cable runs of up to ~10 metres, although some kit combinations may support longer or shorter runs.  Twinax copper cables are typically ~6mm in diameter.

Twinax can transmit at 10Gbps full duplex (half duplex is not specified or supported in the IEEE DCB standards for 10GigE) over distances of up to 10 metres.  Although at first glance this is a relatively short distance, it is actually fairly well suited to Data Centre environments, which are typically short range high speed networks (remember we are not running these cables to workstations).  Such run lengths are especially suited for routing within a single cabinet or adjacent cabinets, such as from a server to an access layer switch mounted in the top of the rack. 

If longer runs are required then fibre can be used but at a higher cost. 

Twinax copper has been rated with a bit error rate in the region of 10⁻¹⁸, making it ideal for FCoE.

Bit error rate:  or BER for short, is the number of erroneous bits received divided by the total number of bits transmitted.

SFP+, or to give its full name Small Form-factor Pluggable Plus, is an extension of the popular SFP standard seen in Fibre Channel SANs and legacy Gigabit Ethernet.  The design of SFP+ is relatively simple for a transceiver. 

In saying the design is simple, I refer to the fact that much of the signal processing circuitry and logic, often found in the transceiver module (such as with XFP), is removed from the transceiver and relocated to the switch and CNA (Converged Network Adapter).  This allows SFP+ to be smaller and cheaper in comparison to the more complex XFP, allowing for higher density switches.

As can be seen by the two diagrams below, SFP+ modules exist for both copper and glass and can transmit at 10Gbps.

SFP+ Optical transceiver

SFP+ copper transceiver with casing removed

Obviously, transferring this logic (silicon) from the transceiver to the switch and CNA may merely shift the cost from the transceiver to the switch.  While this may be the case, it is generally agreed that such logic is better placed in the switch and CNA, and this usually allows for cheaper overall manufacturing costs.

Backplanes

While on the topic of cables, copper and glass…..  As well as standards for carrying 10GigE over copper and fibre cables, the IEEE has also defined standards for backplane implementations.  One such commonly implemented standard is 10GBASE-KR.  This is commonly seen in blade servers as well as routers and switches and utilises a single lane running at a baud rate of 10Gbps, sometimes referred to as 10Gbps Backplane Ethernet.  It supports distances of up to 1m on copper based circuit boards which is plenty of distance for intra-chassis communications.

Other Infrastructure Requirements

At this point I suppose I should mention CNAs.  However, I can’t realistically talk about CNAs without talking about things like NPIV and SR-IOV, each of which is a topic and a half in and of itself.  So, for now I’ll skip over CNAs and say a quick word or two about FCoE capable switches.

FCoE capable Switches

A ton of new Ethernet standards, a new ULP, new network adapters and new cables inevitably lead to one thing… new switches.

Fortunately FCoE and the aforementioned enhancements are evolutionary.  By that I mean that implementing them in your Data Centre does not have to cause huge upheaval.  Some upheaval, yes, but there is no requirement for large scale rip-and-replace.  In fact FCoE and her attendant technologies will happily sit side-by-side with the likes of native Fibre Channel and 1Gbps Ethernet, and at many levels the adjacent technologies will not even bat an eyelid.

Starting at the Edge

In order to simplify and expedite the adoption of Enhanced Ethernet and especially FCoE, some switch vendors provide edge switches at the access layer that allow companies to start deploying CNAs, at the edge of the network, and have them feed in to existing FC and Ethernet backbones via edge switches. 

For example, a company may deploy a new blade farm fully equipped with FCoE capable 10GigE Converged Network Adapters.  These CNA ports can be connected to the network via edge switches that support GigE, 10GigE, FCoE and FC.  This allows the blades to connect via 10GigE Enhanced Ethernet and then branch out to the existing network core via the 1Gbps Ethernet ports and to the FC SAN via the native FC ports.

These switches tend to be 2U pizza box style switches that are deployed within the server rack.  For some, they are a good place to start, especially in situations where ripping out your existing core and replacing it with shiny new 10GigE Enhanced Ethernet kit is not an option (i.e. just about everywhere).

Also at the Core

Some switch vendors also offer ultra-high performance, ultra-scalable 10GigE Enhanced Ethernet FCoE aware switches for the network core.  These are typically modular blade based switches supporting multiple interface types and scaling to hundreds of ports.  Interface types include 10GigE Enhanced Ethernet, 1Gbps Ethernet, 4Gbps FC and 8Gbps FC.  These switches represent the next generation of data centre switches and a huge move toward the virtual data centre and I/O consolidation.

As always, comments and thoughts welcome.

Nigel

You can follow me on Twitter @nigelpoulton – I only talk about storage and virtualisation.

I'm a freelance consultant and can be contacted at nigel at storage-strategist dot com

9 thoughts on “FCoE: Cables and the likes….”

  1. Matt Simmons

    I’ve said for a long time that it didn’t matter whether the winner ended up being FC or FCoE (or even iSCSI), in a couple of generations, we’re all going to be running on optical cables. The theoretical throughput is (AFAIK) still undecided, but it’s well above even the Tb/s that Don Lee requested from the Ethernet Alliance.

  2. Stuart Miniman

    Don’t dismiss Cat 6 cabling for 10Gb.  10GBase-T solutions aren’t available for FCoE yet, but are showing up for other 10Gb solutions and should be an option in the future.  I just did an update on my blog site which I hope adds to the discussion.

  3. Julie Herd Goodman

    Nigel, nice job on this overview.  It’s quick enough for a blog post, but with enough details to start to really understand what’s going on.  I’m looking forward to seeing what you write covering CNAs, because that is where I start to lose my grip on what’s happening and where with FCoE.

  4. Amnon

    Very nice summary on the physical layer of FCoE. Just wanted to highlight that for the twinax cables there are 2 main variants – passive cables and active cables. Both options right now are limited to 5m. The capability to use twinax (which is far cheaper right now compared to optical transceiver) depends on both the switch and the CNA capabilities. Be sure to validate before you make your decision what is supported.

  5. Nigel Poulton

    Thanks all for the comments and additions to the discussion

    Julie,

    Thanks for stopping by.  I will put something together on CNAs, although it will be hard to give the detail as well as keep it short enough for a blog post.

    Nigel

  6. Kash Shaikh

    All,
    Good discussion, if you’d like to hear from someone who is using FCoE in real life production environment, pls join us for a live Internet TV broadcast featuring our special guest Derek Masseth, Sr Director of IT at the University of Arizona..
    We introduced Nexus 5000 in March 2008. Nexus 5000 was the industry’s first FCoE switch. We have now shipped Nexus 5000 to more than 1000 customers. 35% of the systems were shipped with FCoE licenses.
    Derek is one of our Nexus customers who is taking advantage of the benefits offered by FCoE at the server access layer.
    We will also be sharing Cisco’s incremental approach to FCoE…
    When: Tuesday, September 29, 2009, 10:00-11:00 a.m. PDT
    Where: The broadcast can be accessed at the URL below. No registration required.
    http://tools.cisco.com/cmn/jsp/index.jsp?id=90342

  7. Pingback: About Restore » Blog Archive » Pondering Fibre Channel over Ethernet

  8. Etherealmind

    The single biggest driver for Twinax cable is the low power. Because the signal is a radio wave (not an electrical pulse) the physical port consumes less than 4W. An equivalent 10GbE port needs 40W or more to keep the signal envelope in good condition. So 100 ports, plus the switching fabric and supporting silicon, will need something like 6KVA, while twinax would use less than one. A data centre with 2500 10GbE ports will use one twelfth of the power for the networking compared to BaseT. Game over.

  9. Thomas

    Can these cables (Twinax) be used on storage arrays? To be specific, can these be used on Clariion CX4s?
    Thanks.
    Thomas Pinto
