Xsigo – Try it out, I dare you!

By Nigel Poulton | November 16, 2009

OK, if you don’t already know Xsigo Systems, and what they do, then you are seriously missing out!

The way that I see it, Xsigo (pronounced “see-go”) are of particular interest for two reasons –

Firstly, they are playing in the steaming hotbed that is the I/O consolidation and virtualisation space.  This area of Data Center computing is probably experiencing its biggest period of change and upheaval since the birth of Local Area Networking.  Also, the I/O subsystems of servers and blades are becoming increasingly important in modern data centers.

In a nutshell, just about everything is changing in the Data Center I/O space!

RupturedMonkey advice to vendors: Now is not a time to stand still or try and defend your traditional core competencies. Move with the times or risk falling behind!

RupturedMonkey advice to consultants and architects: Now is not a good time to take a professional snooze. If you do, you might find that you don't recognise the world you wake up to. Stay awake!

The second reason Xsigo Systems are of interest is that they have an absolutely kick-ass product – the Xsigo VP780 I/O Director.  So let's talk about it…

The Xsigo VP780 I/O Director

Before digging into the specs and architecture, I should point out that the VP780 I/O Director is the only offering from Xsigo!  As well as being the only product they currently offer, it is pitched squarely at enterprise customers.  I suppose one of their responses to that would be that it allows them to be laser focussed, but my initial reaction was that this makes them a bit of a one-trick pony…  Compare their I/O consolidation portfolio to the likes of Brocade, and especially Cisco, and you will see what I mean.

During their presentation at the recent GestaltIT Tech Field Day they did say that they are working on similar but scaled-down offerings for the SMB, but have nothing to announce at the moment.

However, despite being the only noteworthy member of the Xsigo family, the VP780 is no wimp!  On the contrary, in a one-on-one it would probably fancy its chances against any of its competitors.  I certainly wouldn’t bet against it from a technology point of view! 

Specs and Techs

The VP780 is a 4RU, high-speed, low-latency, 780Gbps I/O consolidation platform that was over two and a half years in the making.  It provides PXE boot and boot from SAN across 20Gbps connections to your servers, and makes “cable once and do the rest in software” a reality!

Below is a picture of the front panel of the one on display at GestaltIT Tech Field Day –

Xsigo VP780 front view

From a high level architecture point of view the VP780 has –

  • Server-side connectivity via 20Gbps Infiniband XFP ports
  • Network-side connectivity via 15 hot-plug slots that can be loaded with 1Gbps Ethernet, 10Gbps Ethernet and 4Gbps FC.
  • Passive midplane
  • High-speed low-latency internal switching fabric

Hopefully the scribble below will be helpful as I attempt to dig deeper and explain some of the main components. Later in the week I will record a whiteboard session and upload it as a complementary post.

Xsigo scribble

 

What does it do – in a nutshell

In a nutshell, the Xsigo I/O Director does for I/O what VMware does for CPUs.  Only it has the added benefit of removing the physical limits of the server chassis: instead of installing your NICs and HBAs in your servers and then carving them into virtual adapters that can only be used by that server, you install them in the Xsigo VP780 chassis, where they can be carved up and dynamically allocated to any connected server.  In doing this, you are effectively moving the edge of the network out of the server, enabling servers to be entirely stateless from an I/O perspective. Cool.
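
To make that resource model concrete, here is a minimal sketch in plain Python (the class and method names are entirely mine, not Xsigo's management API): physical NIC and HBA modules live in the I/O Director chassis as a shared pool, and virtual adapters are carved from them and handed to, or moved between, whichever connected servers need them.

```python
# Hypothetical model of the "carve once, allocate anywhere" idea described above.
# None of these names come from Xsigo's software; they just illustrate the concept.

class IODirector:
    def __init__(self):
        self.physical_cards = []   # NIC/HBA modules installed in the chassis
        self.assignments = {}      # virtual adapter -> physical card + server

    def install_card(self, card_id, kind):
        """Add a physical NIC or HBA module to the shared pool."""
        self.physical_cards.append({"id": card_id, "kind": kind})

    def carve(self, card_id, virtual_id, server):
        """Carve a vNIC/vHBA from a physical card and hand it to a server.
        No physical change on the server is needed - only the host-side
        drivers are told a new device exists."""
        self.assignments[virtual_id] = {"card": card_id, "server": server}

    def reassign(self, virtual_id, new_server):
        """Move a virtual adapter to a different connected server."""
        self.assignments[virtual_id]["server"] = new_server


director = IODirector()
director.install_card("eth-slot-1", "10GbE")
director.install_card("fc-slot-2", "4Gb FC")

# The same physical cards back virtual adapters on different servers.
director.carve("eth-slot-1", "vnic1", server="esx-host-01")
director.carve("fc-slot-2", "vhba1", server="esx-host-01")
director.carve("eth-slot-1", "vnic2", server="esx-host-02")

# Later, vnic2 can be handed to another server without touching any hardware.
director.reassign("vnic2", "esx-host-03")
```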

Connections to your servers… the hardware stuff

The VP780 has 24 x 20Gbps Infiniband ports for connections to your servers (server-side in the above diagram). They utilise copper or optical CX4 cables and XFP interfaces, and are terminated at the server side on Host Channel Adapters (HCAs in Infiniband parlance – and yes, each connected server needs an HCA). These HCAs are not used directly by the OS; instead, host-side drivers work together with the Xsigo I/O Director to ensure that the appropriate vNIC and vHBA devices are available to the OS.  Of critical importance is that these vNIC and vHBA devices work exactly like physical NICs and HBAs, and the OS is none the wiser.

Some quick comments on these physical aspects –

1. The 20Gbps XFP interfaces are not hot swappable and not upgradeable to 40Gbps QDR Infiniband.  These Infiniband HCA and switch ports are more energy efficient, lower-latency and higher-throughput than their 10GigE counterparts.  They also support longer distances over copper, meaning copper is more of an option than it is for 10GigE, which currently has practical limits of around 5-10 metres.

2. As these are CX4 Infiniband connections, you will need Infiniband Host Channel Adapters (HCA) in your servers.

3. XFP and copper CX4 are more power hungry than the SFP+ and copper commonly used with 10Gbps CEE. However, XFP optical is not more power hungry than SFP+ optical.

XFP back of server back to Xsigo

On point 1 from above – this should not be seen as a major issue. 20Gbps to your servers is lightning fast by today's server standards, and with the current wave of PCIe 2.0, you're unlikely to be pushing beyond 20Gbps anyway. 40Gbps models are planned, as well as faster HCA cards, although this will be down to Mellanox, as the silicon is sourced from them.

Also, in reality, how many people are racing to crack open servers and blades to upgrade I/O cards? Most people seem to opt for newer servers and blades when higher-bandwidth I/O is required – maybe Nehalem-EX…  20Gbps is more than fast enough for the vast majority of today's applications and servers.
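
For the bandwidth sceptics, a quick back-of-the-envelope PCIe calculation (my numbers and simplifications, not Xsigo's) shows why a 20Gbps pipe per server sits comfortably within a PCIe 2.0 x8 slot, while a PCIe 1.x slot would actually be the bottleneck:

```python
# Rough PCIe throughput arithmetic (my own back-of-the-envelope numbers).
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so roughly
# 4 Gbit/s of usable bandwidth per lane; PCIe 1.x is half that.

def pcie_bandwidth_gbps(lanes, gen):
    per_lane = {1: 2.0, 2: 4.0}[gen]   # usable Gbit/s per lane after 8b/10b
    return lanes * per_lane

hca_link = 20.0  # Gbit/s - the 20Gbps (DDR) Infiniband link to the VP780

for gen in (1, 2):
    slot = pcie_bandwidth_gbps(lanes=8, gen=gen)
    print(f"PCIe {gen}.x x8 slot: ~{slot:.0f} Gbit/s "
          f"({'enough' if slot >= hca_link else 'a bottleneck'} for a {hca_link:.0f} Gbit/s HCA)")

# PCIe 1.x x8 slot: ~16 Gbit/s (a bottleneck for a 20 Gbit/s HCA)
# PCIe 2.x x8 slot: ~32 Gbit/s (enough for a 20 Gbit/s HCA)
```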

On point 2. Don’t be put off by the word Infiniband!

FUD watch: Infiniband is not a disease, nor is it dead!  It is actually a rock-solid, ultra-high-performance, low-latency channel interconnect designed for data center use and high performance computing. In fact, many of the world's fastest supercomputers use and are built around Infiniband.  So it's definitely alive and well.

You don’t have to learn a boat-load of new Infiniband skills. You will run a copper CX4 cable from the HCAs in your servers to the Xsigo director and that is about as much Infiniband as you will ever see or need to configure in the solution. The rest is normal Ethernet and FC.

 

Connections to your servers… the clever stuff

So, if all of this talk about Infiniband hasn’t scared you off, well done.

The Xsigo VP780 I/O Director allows you to carve its NIC and HBA resources into virtual NICs (vNICs) and virtual HBAs (vHBAs). Each of these vNICs and vHBAs acts exactly like a normal physical NIC or HBA, and thanks to some clever work in the server-side drivers, Operating Systems (ESX, Windows, Linux etc) see them just as they would physical NICs and HBAs.

Another thing not to be underestimated is the fact that the physical NIC and HBA hardware sits outside of the physical server or blade chassis.  This enables the physical servers to be stateless from an I/O point of view, and for virtual resources to be moved from physical server to physical server with great ease.  The VP780 owns the server profiles, which include MAC addresses and WWPNs etc, and allows up to 16 vHBAs and 32 vNICs to be assigned to a single physical server, all of which can be created and deployed in literally seconds with no reboots.   Ideal for VMware and the c c cl cll cllllll clllllll cloud!?  I think that's the first time I've said the “c” word in a blog.

In my opinion, the previous paragraph is of huge importance.  If you don’t think this is huge, then I suggest that you re-read it and take a minute or two to think about it.  This is flexibility like no other solution I know of.
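
If it helps, this is how I picture the server profile idea – a hypothetical sketch only, not Xsigo's real data model: the identities (MACs, WWPNs) and the per-server limits (up to 16 vHBAs and 32 vNICs) live in a profile held by the I/O Director, so re-homing a workload onto another physical server is just a matter of re-binding the profile.

```python
# Hypothetical sketch of a server profile as described above - the MACs and
# WWPNs belong to the profile held in the I/O Director, not to the server,
# so the profile can be re-pointed at a different physical server.
# All identifiers and addresses below are made up for illustration.

MAX_VNICS = 32   # per-server limits quoted in the post
MAX_VHBAS = 16

class ServerProfile:
    def __init__(self, name):
        self.name = name
        self.vnics = []          # list of (name, mac)
        self.vhbas = []          # list of (name, wwpn)
        self.bound_to = None     # physical server currently using this profile

    def add_vnic(self, name, mac):
        if len(self.vnics) >= MAX_VNICS:
            raise ValueError("vNIC limit reached for this profile")
        self.vnics.append((name, mac))

    def add_vhba(self, name, wwpn):
        if len(self.vhbas) >= MAX_VHBAS:
            raise ValueError("vHBA limit reached for this profile")
        self.vhbas.append((name, wwpn))

    def bind(self, physical_server):
        """Attach the profile (and all its identities) to a physical server."""
        self.bound_to = physical_server


profile = ServerProfile("oracle-prod-01")
profile.add_vnic("vnic0", "00:13:97:aa:bb:01")
profile.add_vhba("vhba0", "50:01:39:70:00:aa:bb:01")

profile.bind("blade-07")      # identities follow the profile, not the tin
profile.bind("blade-12")      # failed blade? re-bind - LAN/SAN see the same MAC/WWPN
```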

Is it just me, or does this look and feel very much like MR-IOV (PCI-SIG Multi-Root I/O Virtualisation)? 

Does anybody else do anything like this?

NOTE: I’ll post on this in the near future, but I personally think that MR-IOV has huge potential to rock the I/O consolidation world, and I’m not alone in thinking that! However, there is a case for Infiniband being a better Rack Area Networking (RAN?) interconnect than PCIe. One for a future post if people are interested.

Connecting to existing backbones

On the network side of the VP780, there are 15 slots that can be populated with various modules.  Currently 3 module types are available –

  1. 1 x 10Gbps Ethernet module
  2. 10 x 1Gbps Ethernet module
  3. 2 x 4Gbps HBA module

This gives you traditional LAN and SAN connectivity, with FCoE and iSCSI offload on the roadmap.  So, connecting to your existing LAN and SAN is “as easy as organising a tweet-up at TechFieldDay” 😉  Your upstream LAN and SAN are oblivious to the fact that the I/O is not initiated at the server chassis and just hum away as normal (NPIV is implemented on the SAN side of the HBAs).  The diagram below shows native FC connections coming out of the network side of the VP780 at the demo lab at VMware.

Back of rack

At the moment, the VP780 has no support for FCoE.  Not a huge drawback, as the standard and shipping products are still young; however, if they drag their heels over this they will fall behind in an important new and emerging market.  Something like the Emulex UCNA, with its 10Gbps Ethernet, FCoE and iSCSI offload all on a single module, would be the cherry on the cake for this.

FUD Watch:  Be careful to note that the VP780 is not a switch.  True, it can switch frames between servers without passing traffic to the upstream network switch, but it does not HAVE to. It can forward the frames to the upstream switch if that switch supports hairpin switching.  So it does not have to alter existing network management models. However, there are several scenarios, such as HPC or RDMA, where switching locally over the internal IB fabric is beneficial for performance reasons. Choice is a good thing!
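
To illustrate the choice being described, here is a rough sketch (my own simplification, not Xsigo logic) of the forwarding decision: vNICs terminating on different network-side ports are always switched by the upstream Ethernet switch, while vNIC-to-vNIC traffic on the same port is either forwarded locally on the I/O module or hairpinned via an upstream switch that supports it.

```python
# Simplified model of the forwarding choices discussed above (my wording,
# not Xsigo's): where does a frame between two vNICs actually get switched?

def switched_where(src_port, dst_port, hairpin_enabled):
    """src_port/dst_port are the network-side module ports the vNICs terminate on."""
    if src_port != dst_port:
        # Different uplink ports: the upstream Ethernet switch does the work.
        return "external switch"
    if hairpin_enabled:
        # Same port, but the upstream switch supports hairpin (reflective) forwarding.
        return "external switch (hairpin)"
    # Same port, no hairpin support upstream: forwarded on the I/O module itself.
    return "local I/O module"

print(switched_where("eth-slot-1/p1", "eth-slot-3/p1", hairpin_enabled=False))  # external switch
print(switched_where("eth-slot-1/p1", "eth-slot-1/p1", hairpin_enabled=False))  # local I/O module
print(switched_where("eth-slot-1/p1", "eth-slot-1/p1", hairpin_enabled=True))   # external switch (hairpin)
```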

Nice Management GUI

While visiting Xsigo at the GestaltIT Tech Field Day I got my hands on some Xsigo kit, including the management interface.  I was able to present vNICs and vHBAs to ESX servers and have them picked up and recognised on the fly by virtual machines. I was also able to play a little with some simple QoS features – increasing and decreasing bandwidth is very simple and also dynamic.  All simple stuff, and it worked a treat.  Oh, and it has a CLI.
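
To give a flavour of the sort of thing the management interface exposes (purely illustrative Python, not Xsigo's CLI or API), think of each vNIC carrying a committed and a peak rate that can be adjusted on the fly:

```python
# Illustrative only - not Xsigo's CLI or API. Each vNIC can carry QoS
# settings (committed and peak rate), and changing them is a live,
# dynamic operation rather than a reboot-and-recable exercise.

class VNicQos:
    def __init__(self, name, cir_mbps, pir_mbps):
        self.name = name
        self.cir_mbps = cir_mbps   # committed information rate
        self.pir_mbps = pir_mbps   # peak information rate

    def resize(self, cir_mbps, pir_mbps):
        """Adjust bandwidth on the fly - the host keeps using the same vNIC."""
        if cir_mbps > pir_mbps:
            raise ValueError("committed rate cannot exceed peak rate")
        self.cir_mbps, self.pir_mbps = cir_mbps, pir_mbps


vmotion = VNicQos("vnic-vmotion", cir_mbps=1000, pir_mbps=2000)
vmotion.resize(cir_mbps=2000, pir_mbps=4000)   # give vMotion more headroom, no reboot
```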

There is a ton more I could say, but this is already pretty long so I’ll wrap up with some final thoughts…..

Conclusion

The VP780 does some interesting stuff and in some respects is ahead of the curve. For instance –

  • Performance to the server is 20Gbps over Infiniband
  • Flexibility. Removing the physical NICs and HBAs from the server or blade chassis makes this a hugely flexible solution.
  • The number of vNIC and vHBA devices that can be carved per physical card and presented to each host is higher than most of the competition can manage. E.g. HP VC Flex-10 and IBM Virtual Fabric can only create 4 virtual functions per port. This offers superior utilisation as a result.

As always though there is no perfect solution.  There is currently no FCoE, and both the Infiniband and the 10Gbps Ethernet options use XFP transceivers which are not as good as SFP+ when it comes to the likes of size, cost and power consumption.

There is also the fact that Xsigo are a relatively small and new company pitching to the enterprise.  From a technology point of view they are brilliant, but one has to wonder whether they will still be around in 10 years' time to support and develop their products.

However, when all is said and done, I really like what they are offering. My final question for Camden Ford after his presentation was “can I have one for my garage?”  Says it all really.

If you are looking in to I/O consolidation and virtualization then you should definitely at least check out Xsigo… unless you’re too chicken!

Thoughts and comments welcome.

Nigel

You can follow me on Twitter where I talk about storage technologies (@nigelpoulton)

I am also available for hire as a freelance consultant.

23 thoughts on “Xsigo – Try it out, I dare you!”

  1. Brad Hedlund

    Nigel,
    Was hoping you could clarify something. In this post you say the I/O Director is not a switch and will not change the network management model. However, the words “Internal switching fabric” are written on the whiteboard pic. I'm confused. Does the I/O Director switch traffic between two servers connected to the same I/O Director?

    Thanks,
    Brad

  2. Ilja Coolen

    Nigel, as always a very good post.  To check if I understood correctly: there are no fabric services running in the Xsigo box, and vHBAs can be presented out to the customer fabrics without affecting the customer networks. Is there support for Cisco VSANs and/or multiple separate fabrics? Can SAN traffic be isolated from a vHBA to customer SAN ports? I would at this point expect limitations on the Infiniband port mapping to a customer SAN port. I don't think you can map one vHBA on an IB port to an FC network port while mapping another vHBA on the same IB port to another FC port attached to another physically separated fabric. But this could also be down to my limited knowledge of the Xsigo product (which obviously will change over the next few weeks).

    Thanks,
    Ilja

  3. Cam Ford

    Hi Nigel, thanks for the great post.  Seems there are a few questions above, so let me jump in and answer.

    1st. Ilja, let me clarify our implementation of the vHBA a bit.  Our FC IO module actually has a Qlogic HBA on it.  So, from the SAN perspective, we are just another Qlogic NPIV device logging in to the fabric.  We are an N-Port and do not participate in the FC fabric services.  As such, we support 64 NPIV logins per port (per the Qlogic QLE2462 ASIC we use)… so the SAN switch (either Brocade or Cisco MDS) just sees the Xsigo IO Director as if it were a server.  Behind the Qlogic HBA ASIC, we provide a Xsigo custom ASIC that creates a virtual HBA corresponding to each NPIV WWN.  Each vHBA can be assigned QoS dedicated bandwidth and can be assigned to any server attached on the IB fabric.  Therefore any server can have its own vHBA and can share the physical Qlogic ASIC from a bandwidth perspective.  Since each vHBA has its own WWN, the downstream switches can zone as usual with no knowledge that the HBA is actually shared.

    2nd. Brad – the Xsigo IO Director has an integrated IB fabric that is used to allow any server to connect to any IO module.  The IO Director can actually provide server-to-server switching in two different ways.  1st, we can switch vNIC to vNIC, allowing servers to talk to each other if configured to do so.  Each vNIC can have its own QoS (guaranteed bandwidth CIR/PIR for both ingress and egress), ACLs, VLAN tagging, etc. just like any other NIC.  vNICs can pass traffic if they are configured to do so on the same Ethernet segment (we are a layer 2 device only).  Because an Infiniband fabric is used, you can also use the traditional IB drivers to do IPoIB, SRP, RDMA, etc.  The IB fabric switching latency is as low as 150ns per hop, so it works particularly well for things like vMotion, Fault Tolerance, database cluster communications, and a host of other native RDMA-oriented applications.  If you do not need this capability, it can be left off and all management is done over vNICs and vHBAs as in your traditional network.

    3rd. CX4, XFP, SFP, and SFP+ – Nigel made some inaccurate statements above about the Xsigo IO Director (but only minor ones 🙂 ).  We use CX4 copper and optical active cables for connectivity from the Xsigo IO Director to the server HCA.  As for power, the CX4 transceivers are a bit higher power than the SFP+ copper transceivers, but not higher than the optical.  Also, the IB HCAs and switch ports are much lower power than equivalent 10GbE NICs and 10GbE switch ports… so the combination of IB HCAs, CX4 cables, and IB switch ports is significantly lower power than the 10GbE equivalent… not to mention lower latency and higher throughput… oh yeah, and a lot lower cost.  Copper SFP+ cables are slightly higher priced than the equivalent CX4, but can only go 5 metres.  That means you have to go to optical quicker.  The Cisco list price of an SFP+ optical transceiver is $1795 (that is for one end… you need two to make a cable).  CX4 optical is <$1000 list for the entire active optical cable.  QDR (40Gb) IB is already out and uses QSFP transceivers… and so will 40GbE in a few years… so IB is actually paving the way for 40GbE.  It is no surprise that Mellanox was the 1st to announce a 40GbE NIC… since it is really the same ASIC as their 40Gb IB adapter… same QSFP interface… in theory, you can load an IB driver, an Ethernet driver, or an RDMA over Ethernet driver and have channel IO and/or Ethernet… now that is true convergence.

    I would be happy to answer any other questions.

  4. Nigel Poulton

    Hi Brad,

    My bad with the lack of clarity. My understanding is that the internal fabric is Infiniband and provides connectivity that allows any Infiniband port on the server side to "bind" (for want of a better term) to any of the network-side module ports.

    I asked the question during the briefing about hairpin turns within physical adapters, or even within the Xsigo box itself, and for the life of me I can't find the answer written down. I remember talking to Ed Saipetch about it afterwards and debating the pros and cons of switching in the adapter etc (a bit like a VEB in an adapter). I will ping Jon Toor and see if he can clarify.

    Ilja,

    I should have started this post with "I'm not an authority on this…"  😉  On the network-facing side the Xsigo does not run any fabric services. Imagine if it did, from an interop perspective :-S Bad enough trying to get BRCD and CSCO to play together.  NPIV is implemented on the network side of the port and as such is like any other NPIV implementation.

    As for the server-side creation of vHBAs, I will have to fire the question over to Xsigo.  They must do some form of PCIe virtual function but I'm not sure how (PCI-SIG IOV etc…) or whose ASIC does it. To my knowledge none of the HBA vendors have an ASIC that can create as many vHBA or vNIC ports as the Xsigo does. So I'm stumped on that one :-S

    Thanks for jumping in with the great questions, let me see if I can get some better answers.

  5. Steven Ruby

    So here goes my question for the day. If it takes something like 1.5GHz of CPU to drive 2Gbps (that math is probably way off), how does it make sense $$-wise to implement a 10Gb/s fabric if the hosts can't even drive that much I/O? Am I missing something, or are all these new I/O vendors just giving away their stuff so the cost is out of play?

  6. Nigel Poulton

    Hi Steven,

    I'd take one for free if they were giving them away 😉  Seriously though, I know where you are coming from…

    During the Xsigo presentation they had a slide up showing ~20Gbps of combined FC and Ethernet traffic from a single host running off Nehalem processors.  I don't remember the specifics, but they seemed sure you could drive 20Gbps on certain configs.  If I still have the slide deck I will check it out later.

    It was probably a tailored config, but I imagine that with Nehalem-EX and its octo-core architecture and more VMs per core… real-world deployments will get there soon enough.

  7. Ariel Cohen

    Ilja,
    The FC ports on the Xsigo FC I/O modules are NPIV ports – these are HBA ports, not fabric ports, and no FC fabric services are needed there. Multiple vHBAs can be assigned to each FC port and they will have their own WWNs. These vHBAs can be assigned to the same server or different servers.
    A server can also have vHBAs on different I/O module ports connected to the same or different SANs. It's very easy to achieve SAN isolation – simply connect different SANs to different Xsigo FC ports, and assign vHBAs on these ports to servers as needed. No VSANs are needed for this – that's one of the advantages of using an I/O Director. If you want to use VSANs to achieve isolation on the same FC fabric, you can still do that, but the I/O Director also gives you the ability to connect dynamically to different isolated FC fabrics without the need to configure VSANs anywhere.
     

  8. Ariel Cohen

    Nigel,
     
    The switching fabric of the I/O Director is indeed used for communication between the servers and the I/O modules such as the vNIC and vHBA modules. That is the primary role of the switching fabric.
     
    There are users who are interested in running various standard high-performance cluster communication protocols over this fabric to benefit from the low latency, high throughput, CPU offload, and RDMA capabilities that it provides. Such applications use the fabric for server-to-server communication, but that is an extra capability of the I/O Director which is unrelated to I/O virtualization, and users can ignore it if they don’t have a desire to use it.
     
    The typical usage model is for server-to-server communication to take place over vNICs. When the vNICs terminate on different Xsigo I/O module ports, traffic between them will be switched by the external Ethernet switches. For vNICs terminating on the same port, there are two options. Currently, most Ethernet switches don’t perform hairpin switching, so vNIC-to-vNIC traffic on the same port is forwarded on the vNIC I/O module itself. The I/O module is also capable of sending these packets to an external switch for hairpin switching, so that can be supported as well when it’s needed.
     

  9. Ariel Cohen

    Steven,
     
    Xsigo demonstrated 20Gbps of combined Ethernet and Fibre Channel traffic on a single server running vSphere at VMworld this year. The server had dual 4-core Nehalem CPUs.
     
    Modern servers can handle a lot of I/O – memory bandwidth and I/O hub bandwidth have been increased significantly, and the number of cores is increasing – 8-core Nehalems are expected soon, so you’ll have 16 cores in a dual CPU machine, or 32 cores in a quad.
     
    As consolidation ratios go up to make use of all this processing capacity, the I/O capacity and connectivity per server will need to be scaled up as well. Stuffing servers with many I/O controllers is obviously not an attractive solution. The Xsigo I/O Director addresses this.
     
     

  10. Nigel Poulton

    Cam/Ariel,

    Thanks for pitching in with the corrections and clarifications. I didn't see your replies earlier as they were placed in the pending queue (along with Rodos') because you were all first-time commenters.

    Cam, I hope you don't mind but I've removed your email address, the reason being that if people ask questions on here and you respond on here, then everyone gets to read them and benefit from your responses.

    Rodos,

    As you well know, I'm no geek… it's just that too much hanging out with you lot has rubbed off on me. I did learn from you though – the V-Max theory with beer mats video was posted on Vimeo and I edited it on my old MacBook 😉

    Nigel

  11. Brad Hedlund

    Great. So based on Ariel’s answer the I/O Director IS a *switch* and DOES change the network management model. That clarifies. Thank you.

    Cheers,
    Brad

  12. Ilja Coolen

    Thank you all for your responses.
    Very helpful and enlightening. But one question goes unanswered.
    Maybe I wasn’t clear enough (likely).

    vHBA and NPIV make sense to me. Now, I imagine there are restrictions as to how flexible NPIV exposure to one of the connected SAN fabrics is.
    I’ll try to clarify my question by using some example ports.

    If I were to connect IB port 1 to a host and create 4 vHBAs, I would obviously get 4 NPIV ports ready to be exposed to connected FC ports. Can these 4 NPIV ports be exposed to, let's say, 4 different FC ports and thereby 4 different fabrics? Or is there a fixed technical boundary where all NPIV ports on an IB port have to be assigned to the same FC port?

    Sorry, another question just popped up. Might be a stupid one, but I'll post it anyway.
    The vHBA appears like any regular HBA to the OS, for example VMware. Within VMware I can also create virtual HBAs. Knowing that it would make no sense for real-world applications, would it technically be possible to stack the VMware virtual HBAs on top of a Xsigo vHBA?

    Thanks again you guys, your elaborate and quick responses are impressive.

    Ilja

  13. Ariel Cohen

    No, Brad. That is not what I said. In fact, there is no change to the network management model.
     
    As to being a switch – you can perform the Ethernet switching in your favorite switches which are external to the I/O Director, and connected to the I/O Director Ethernet vNIC ports. Even the vNIC-to-vNIC switching on the same port can be done externally if desired when using switches which support hairpin forwarding.
     

  14. Brad Hedlund

    Ariel, you said:
    “vNIC-to-vNIC traffic on the same port is forwarded on the vNIC I/O module itself.” That makes the “I/O Director” a network switch.

    You also said:
    “When the vNICs terminate on different Xsigo I/O module ports, traffic between them will be switched by the external Ethernet switches”

    This means that some traffic might be handled by the Ethernet network and some might not – it all depends on how the I/O Director is configured. How is that not changing the network management model, other than completely confusing it?

    Cheers,
    Brad

  15. Ariel Cohen

    Brad,
     
    I already said twice that even vNIC-to-vNIC switching on the same port can be performed externally if desired and if the external switch supports it.
     
    I don’t see any "confusion" when it’s performed on the vNIC module in the case where the vNICs terminate on the same port. It’s standard practice for virtualized NICs regardless of whether they are on a virtual switch within a hypervisor, or a hardware virtualized NIC within the server (e.g. an SR-IOV NIC), or an externally virtualized hardware NIC (e.g. Xsigo). Again, cases where users want to do even this externally can be addressed with hairpin forwarding.
     
    An I/O Director is a system providing shared virtualized I/O controllers to servers. Its primary function is very different from that of a switch regardless of whether the I/O Director is configured to perform some switching between vNICs or none at all.
     

  16. Etherealmind

    Looks like not everyone understands that Infiniband allows direct memory access between blades/bricks. That is, using RDMA an application can write data directly to a memory location on another machine, with nanoseconds of delay. The concept of sending data between servers using Ethernet is quaint and slow by comparison.

  17. Amnon

    Nigel,
    Great post that gives some background to the Xsigo architecture. Some follow-on questions: Are the HBA and Ethernet modules standard off-the-shelf (I assume they are), and are they hot-pluggable? For the HBA/Ethernet controllers – can any HBA be used, or does Xsigo limit you to specific vendors? Can the customer use an 8G FC HBA from Qlogic/Emulex?
    Thanks,
    Amnon

  18. Cam Ford

    Brad,
    You are absolutely correct.  Xsigo does change the way the datacenter is managed.  We greatly simplify the functions that are being done today by allowing the server guys to dynamically create vNICs and vHBAs on live servers anytime they want… and manage those resources in a really easy way.
    No longer will a server admin be held hostage by a networking group that takes weeks to months to pull a new cable to a server.  Same goes for storage.
    Equally as important is how we allow the DC architecture to be managed in the same logical topology that it is today.  The Xsigo approach to IO virtualization does not force the merging of storage network management and data network management.  We do not require VSANs and VLANs to overlay in the same switch.  We simply extend the IO bus from the server into an aggregation device (VP780 IO Director) where those resources can be virtualized, shared, and centrally managed.  NICs are still NICs, HBAs are still HBAs, etc… only everything is virtual and managed electronically instead of physically.  SAN management is fully independent from network management, etc… although the management is actually done on the same box.
    One more important point.  In a Xsigo environment, IT shops can virtualize their IO across HP, IBM, Dell, Hitachi, Fujitsu, SuperMicro, Verari, and virtually anyone else's servers… in both rack and blade configurations.  No one is locked into a single-vendor solution.  And while some folks have prematurely declared IB, FC, ATM, FICON, and numerous other technologies dead… I will happily let you know that Xsigo has Cisco IB switches and Cisco IB HCAs happily running in our certification environment alongside everyone else's gear in the same virtual IO environment.

  19. Cam Ford

    Ethernet and RDMA – Brad, thanks for reminding me to comment.
    For those of you interested in RDMA capabilities, the Infiniband Trade Association (IBTA) is currently in the process of standardizing RDMA on Ethernet, called RoCEE (pronounced "Rocky"… not my favorite…), which stands for RDMA over Converged Enhanced Ethernet.  Much like its little brother, FCoE, this standard will directly map RDMA transport frames onto Ethernet and leverage the emerging CEE capabilities for reliable physical-layer transport.
    This technology already works today on existing Mellanox HCAs.  Some of you may know that Mellanox has an ASIC (ConnectX) that runs both 10GbE and 10, 20, and 40Gb IB on the same chip.  It also does RDMA over Ethernet (demoed at Interop and SC09, but not yet released).
    It's not so surprising then that Mellanox was the 1st company to announce a 40GbE NIC… as it is basically the same 40Gb IB HCA that also supports Ethernet framing… although I believe it is a stepping of the ConnectX ASIC (called ConnectX-2).  The really cool part is that both 40Gb IB and 40GbE are going to use QSFP optics… so the only difference between an IB port and an Ethernet port is the firmware running on the adapter… now this is true convergence.
    Here is the best part… all IB host drivers are OPEN SOURCE… let me say that again… OPEN SOURCE.  Once you get RDMA on Ethernet, you have full access to OPEN SOURCE drivers for RDMA, MPI, SRP, iSER, IP, etc. and you can map just about any driver for any application on top of it.  Imagine… no more vendor-specific hardware drivers for networking, storage, cluster protocols, etc…

  20. Pingback: Some Xsigo Links | 2vcps and a Truck

  21. Pingback: Understanding XSigo Architecture – by Nigel poulton | Art of IT Infrastructures
