Rack Area Networking: IOV

By Nigel Poulton | December 7, 2009

One of the key technologies or principles in Rack Area Networking (RAN) is I/O Virtualisation (IOV).  In fact, IOV is about to rock the world of physical server and Hypervisor design.

If you work deploying VMware, Hyper-V, XenServer and the like, or if you have anything to do with the so-called Virtual Data Centre, then you need to be all over IOV.

This is the second post in my mini-series on RAN and IOV.  In this particular post I'm going to talk about the concept of virtual adapters – virtual NICs and virtual HBAs.

The vNIC and the vHBA

The concept is simple: Take a physical NIC, perform some magic on it, and make it appear to the OS as multiple NICs.  Same goes for HBAs.

The diagram below shows a single physical NIC carved into 4 virtual NICs (vNICs) and a single HBA carved into 4 virtual HBAs (vHBAs).

IOV-1

The benefits of such technologies should be obvious – higher utilisation, fewer physical NICs, fewer cables, and fewer edge switch ports, just to name a few.

Another added benefit is flexibility.  Assume you have a 10Gbps NIC in a server which you have carved into 2 vNICs.  That server now has a requirement for an additional NIC.  You no longer have to power down the server, open it up, install a new physical card and then wait for new cables to be laid.  Instead, you can simply create a new vNIC from the already installed physical NIC and have it dynamically discovered and initialised by the OS.  All done in software – no cracking the server open and no waiting for cables!  Talk about reducing the time taken to implement a change, not to mention reducing the risk (there is always added risk when opening up servers and messing around in the floor void).
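To make the "all done in software" point concrete, here is a minimal sketch of how a vNIC can be created on the fly on a current Linux host with an SR-IOV capable NIC. It uses the standard Linux sysfs interface (sriov_totalvfs / sriov_numvfs) purely as an illustration – the interface name and VF count are assumptions, and vendor IOV solutions expose equivalent functionality through their own management tools.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: carve vNICs (SR-IOV virtual functions) out of a
physical NIC on a running Linux host, with no reboot and no new cabling."""

from pathlib import Path

def create_vnics(pf_name: str, num_vfs: int) -> None:
    """Create `num_vfs` virtual NICs from physical NIC `pf_name`."""
    device = Path(f"/sys/class/net/{pf_name}/device")

    # Ask the adapter how many virtual functions it can support.
    total = int((device / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{pf_name} supports at most {total} VFs")

    # Writing the count creates the VFs; the kernel enumerates them as new
    # PCIe functions and the OS discovers them like ordinary NICs.
    (device / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    create_vnics("eth0", 2)  # example only: carve eth0 into 2 vNICs
```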

The CNA

In the above diagram we labelled the vNIC solution as “Good”.  If we swap out that IOV-capable NIC and replace it with a CNA (Converged Network Adapter) that can act as both a NIC and an HBA, then we suddenly have the ability to carve vNICs and vHBAs from a single physical adapter.  The diagram below has been expanded to now include a CNA-based solution.  The CNA solution is labelled as “Better”.

IOV-2

NOTE: I should point out that in most IOV solutions most of the legwork is done in hardware.  The vNIC and vHBA devices are created in hardware, and most modern CNAs also provide protocol offloads.

Single Root

The above approach – creating multiple virtual adapters from a single physical adapter located within a single server – falls under the category of Single Root (SR).  Single Root is another way of saying single server (a single PCIe root complex).  Single Root approaches are limited to presenting their virtual adapters to a single PCIe root complex – that is, to operating systems executing within a single physical server.

While talking about Single Root technologies I need to mention SR-IOV.  SR-IOV is a semi-open PCI-SIG standard for SR-style I/O Virtualisation.  As with all standards, it will take time to take off and become widely deployed, and it is open to implementation interpretation (some vendors may implement SR-IOV slightly differently to others).

True PCI-SIG SR-IOV requires the following components to be SR-IOV aware in order to support it –

  • BIOS
  • OS/Hypervisor
  • Physical I/O Adapter
  • Driver

Changes to the above components are required because SR-IOV changes the architecture and model for PCIe adapters.  It introduces the concept of Virtual Functions (VFs), which look and feel like a normal physical I/O adapter.  However, a VF is a lightweight version of a physical I/O adapter and inherits some configuration options from its parent physical I/O adapter (the Physical Function).  As a result, vNICs and vHBAs are enumerated on a server's PCIe device tree as VFs, and the BIOS, OS and driver must understand this.
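To illustrate what "enumerated on a server's PCIe device tree as VFs" looks like in practice, here is a small sketch that lists the VFs hanging off a Physical Function using the Linux sysfs layout (the virtfn* symlinks). The PCI address used is a made-up example, not taken from any particular adapter.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: show how SR-IOV Virtual Functions appear as
lightweight PCIe functions under their parent Physical Function."""

from pathlib import Path

def list_virtual_functions(pf_addr: str) -> list:
    """Return the PCI addresses of the VFs carved from the Physical
    Function at `pf_addr` (e.g. '0000:03:00.0')."""
    pf = Path("/sys/bus/pci/devices") / pf_addr
    # Each VF is exposed via a virtfnN symlink pointing at its own PCIe
    # function; the VF in turn carries a physfn link back to its parent.
    return sorted(link.resolve().name for link in pf.glob("virtfn*"))

if __name__ == "__main__":
    for vf in list_virtual_functions("0000:03:00.0"):  # hypothetical address
        print(f"VF on the PCIe device tree: {vf}")
```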

Citrix recently demoed XenServer working with SR-IOV NICs and Intel VT-d technology.

While SR-IOV is a great technology and is destined to play a role in driving IOV forward, it is very early days for the technology, and many of the currently shipping IOV technologies use proprietary techniques rather than PCI-SIG SR-IOV.  One example is the Virtual Fabric for IBM BladeCenter solution mentioned in the note below.

NOTE: While the Virtual Fabric for IBM BladeCenter solution is not currently SR-IOV, the chip that powers the Emulex CNA that sits at the heart of the solution is SR-IOV capable….. it is just waiting for the other components (BIOS, OS, drivers…) to catch up.

Good, better, BEST!

So far we have talked about SR style solutions where the vNIC and vHBA devices are only available to Operating Systems executing on the same physical server that the adapter is installed in.  While these technologies are all good and a step in the right direction, there exists a superior solution – Multi Root (MR).

Taking IOV to the next step involves removing the physical I/O adapters from the physical server chassis and re-housing them in an external device that I am generically referring to as the I/O Aggregator.

The diagram below has been expanded to include an example I/O Aggregator approach.

IOV-3

Such technologies can be referred to as Multi Root (MR).

There are already Multi Root I/O Aggregator style solutions shipping from the likes of NextIO, VirtenSys and Xsigo – all are delivering next generation IOV benefits today!

Of the currently available solutions, these MR technologies offer the greatest levels of virtualisation and flexibility, and for me they represent the future.  By removing the I/O adapter from within the physical confines of the server chassis, you enable any vNIC or vHBA to be assigned to any server.  Your physical server becomes entirely stateless from an I/O perspective!
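Purely to illustrate the "any vNIC or vHBA to any server" idea, here is a toy model of an I/O Aggregator – not any vendor's actual API – showing how the virtual adapters and their identities live outside the server, so re-assigning one is just a bookkeeping change in the aggregator.

```python
"""Conceptual sketch only (no real product API): an I/O Aggregator owns the
physical adapters and hands any vNIC/vHBA to any attached server, leaving
the servers themselves stateless from an I/O perspective."""

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VirtualAdapter:
    kind: str                        # "vNIC" or "vHBA"
    identity: str                    # e.g. a MAC address or WWPN
    assigned_to: Optional[str] = None

@dataclass
class IOAggregator:
    adapters: List[VirtualAdapter] = field(default_factory=list)

    def assign(self, identity: str, server: str) -> None:
        """Re-home a virtual adapter to any server in the rack; no server
        chassis is opened and no cables are moved."""
        for adapter in self.adapters:
            if adapter.identity == identity:
                adapter.assigned_to = server
                return
        raise KeyError(identity)

agg = IOAggregator([VirtualAdapter("vNIC", "00:1B:21:AA:00:01"),
                    VirtualAdapter("vHBA", "50:06:0B:00:00:C2:62:00")])
agg.assign("00:1B:21:AA:00:01", "Server1")
agg.assign("00:1B:21:AA:00:01", "Server5")  # later, give the same vNIC to Server5
```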

I used to be excited about LOM style CNA implementations…. until I discovered I/O Aggregators.

NOTE: PCI-SIG also have a specification for MR-IOV.  However, I do not know of anybody deploying it at the moment.

Moving Home

Opinion time here, but the way I see it, the I/O adapter is folding its underwear and packing its bags, ready to ship out of the server chassis into a bigger, better and more comfortable new home – the I/O Aggregator.

PCIe adapters in servers…… don’t be so yesterday 😉

 

Would love to know what anybody else thinks.

Nigel

You can follow me on Twitter. I’m @nigelpoulton and I only talk about technology.

I am also available as an independent freelance consultant and can be contacted via the Contact Me page.

Other RAN and IOV related posts –

IOV and introducing hairpin turns

Introducing the RAN concept

26 thoughts on “Rack Area Networking: IOV”

  1. Brad Hedlund

    Nigel,
    If the goal is to "enable any vNIC or vHBA to be assigned to any server" … well, that can be done in a "LOM style CNA implementation".  Example: Cisco UCS.
    Nice work, as always.

    Cheers,
    Brad
     

  2. Nigel Poulton Post author

    Hi Brad,

    I have too many grey areas when it comes to UCS. I was under the impression that UCS with Palo was a single root implementation. Am I wrong?

    Can you shed any more light on how UCS does any vNIC or vHBA to any server?

    Cheers,

    Nigel

  3. Brad Hedlund

     

    Nigel,
    In Cisco UCS the vNIC and vHBA are defined in what is called a "Service Profile" at the UCS Fabric Interconnect (the "aggregator" all the chassis connect to).  The Service Profile contains all the configuration and identity elements of a server, not just I/O. For example the Service Profile also defines things such as BIOS and Firmware versions, boot order, UUID, and more, <but I digress>.
    Now that the vNIC and vHBA are defined in a Service Profile at the Fabric Interconnect, all of the settings in that Service Profile, including the vNIC and vHBA, can be loaded onto *any blade*.  The chosen blade is programmed with all of the settings (including I/O).  When the Service Profile is associated to a blade it will power on, boot up, and the OS running on the blade will see the defined vNICs and vHBAs as if they were physical adapters.
    Cheers,
    Brad

  4. Nigel Poulton Post author

    Thanks for the info Brad.

    I'm cool with service profiles and am impressed with how comprehensive they are with UCS (totally stateless server).

    From what you say, and the conversation we had on Twitter yesterday, it appears that two vNICs from the same physical NIC cannot be shared out to two separate servers (obviously that would be very confusing considering the physical I/O adapters in UCS are "in" the server chassis).

    I'm not having a pop or saying this is an inferior solution – just trying to add clarity to my UCS understanding.

    Nigel

  5. Cam Ford

    Hi Nigel,

    Great explanation of the different approaches to IO Virtualization.

    This is a great debate….and it will be interesting to see how the world shakes out. I think the real key is to take a look at the work flow model, the management model, and at the end of the day whether you want to buy into a single vendor stack….or an open systems approach.

    The UCS approach requires significant changes to the edge of the network to make this all work. First you need a really advanced CNA that includes full SR-IOV capabilities and a lot more (as evidenced by the 18W power spec on the Palo adapter). It requires a boatload of new standards, including the base CEE/DCB standards, but also new standards such as VN-Link Tags or VEPA to handle the switching aspects of virtualized ports. Now that you have a pile of new features and new capabilities…..you need a lot of interoperability testing to make sure that all these pieces work together (or you buy a single vendor end-to-end solution and pray that it works as advertised). Then you get to the management layer…..how do I migrate my vNICs and vHBAs from server to server? My BIOS is expecting one thing….now it has been removed and shows up in another system's BIOS. Another layer of complexity. Lastly, the mapping of the ports down into the aggregation layer for unified management is very complex. I will use UCS as an example, as they are the first to head down this path (….that is, when Palo actually ships). If you look at the management model, it is a very complex set of configuration steps to configure every detail of how this complex system works. There is also a very complex array of failure scenarios and reconfiguration steps required to manage…..so it will take some time to shake out. On top of all of the new technology and new standards…there are the fundamental scaling issues that will inevitably require new standards such as TRILL and/or L2MP to scale out the fabric topology of CEE fabrics and allow them to work in mesh and multi-path configurations. Ultimately, these fabric issues have been resolved already in both FC and IB fabrics, so it is likely that these standards will also be leveraged to scale this out.

    This model is much simplified in the approach that you describe above for MR-IOV or Remote IO. You can still get all of the benefits of CNAs…..but in a shared IO environment, you get the added benefit of easy upgrades and an easy way to add in new technology……for example, Gen 1 CNAs don't have much in the way of SR-IOV, but Gen 2 CNAs will. Gen 3 CNAs will likely include some additional switching and packet manipulation capabilities, and Gen 4 may include RDMA and some other stuff. In the Remote IO model, not only do you get all of the advantages of SR-IOV, but you can spread the cost of HW across many servers…..and you can upgrade technology easily for all servers in the environment.

    Lastly, the management model stays consistent….Ethernet is Ethernet, FC is FC, iSCSI is iSCSI, CNAs are CNAs….but they are all much easier to manage because they are all now centralized…..instead of managing all of the edge devices on a server by server basis.

    Just my $0.02

  6. Nigel Poulton Post author

    Cam,

    Thanks for dropping by and putting in your $0.02 worth. As yet I don't know enough about the Cisco UCS solution to say a lot on the topic, at least not without risk of making myself look a fool 😉

    However, I really like the Remote IO (as you refer to it) approach taken by Xsigo and others. Makes sense to me and so simple.

    The larger companies and major server players like HP and Cisco have enough muscle and marketing to get their products in without always having the best technologies. They can also play a little unfair if they wish (semi-closed standards). Whereas companies like Xsigo have to have awesome technology in order to make any headway. You have a great solution in the VP780, but you need to keep it ahead of the curve and, even more importantly, raise awareness. I'm not kidding you when I say 99% of people I speak to have never heard of you (Xsigo).

    You have a tough job but have a product that gives you a chance. Although I have to admit that you are an easy target with your IB component and must get sick of the FUD 😉

    Nigel

  7. Rob Ellison

    Hi Nigel, good post but surprised you have left the Cisco UCS out after yesterday's conversations on Twitter.
    Just reading the trail in the comments here I'm pretty confused as to why you would ever need vNICs from the same physical NIC to appear on different servers. However, I do think I know what you are getting at..
    If you think of the UCS as a whole system, the NIC that is connected to the outside world (in the fabric extender) CAN in effect have virtual NICs that appear on different physical servers. So at that ingress point to the UCS system, that 10Gb port can switch through to any physical server via the vNICs that the Palo adapters provide.
    From Brad's comments yesterday it appears that a major advantage with the Palo adapter is that it fakes being a normal physical NIC or HBA, so the OS on the physical server hardware doesn't have to worry about IOV. Obviously adding new NICs and HBAs (to the physical server) would probably require a reboot, but how often would you do that anyway? The vNICs in vSphere would use the Palo vNICs as their 'physical' ports, and those vNICs (the vSphere ones) would be able to migrate between physical servers using the normal VMware vMotion etc.
    All very impressive stuff!

  8. Nigel Poulton Post author

    Rob,

    I actually posted this article the day before I had the twitter conversation with Scott (@scott_lowe) and Brad (@bradhedlund), and in fact I wrote it a few weeks ago when I was still blogging at RupturedMonkey – one of the diagrams above still has the Monkey logo on it! 

    But I think Brad sums it up well in his comment above.

    I personally need more Cisco UCS knowledge before I post technical material concerning it – I need to work on that somehow :-S

    As for having vNICs from the same physical NIC appearing on different servers – this only makes sense if the physical NICs are housed away from the physical server chassis in an I/O Aggregator, as in the Xsigo and VirtenSys type solutions (Diagram 3 above).

    It doesn't make sense with a hardware architecture like UCS.  But I *think* I see where UCS adds similar value via the Fabric Extender (or is it the Fabric Interconnect??).  Hmmm, there's that confusion I have again 😀

    Thanks for your comments though!

    Nigel

  9. Brad Hedlund

    Nigel,
    Let's for a moment step away from the tree and look at the forest.  With MR-IOV, Server1 can use an adapter that actually belongs in Server5, for example.  OK — now let's ask ourselves the tough and critically important question:  What problem is that solving? How and why would a customer actually use that capability?
    However, if a vNIC or vHBA could be defined in a central management tool and quickly provisioned to either Server1 or Server5 — now THAT is a capability that solves a huge provisioning challenge facing data centers today.  And that provisioning capability is exactly what I described above with UCS using standard Ethernet adapters (CNA) in the server.  A special niche kind of I/O fabric adapter and "I/O Aggregator" with MR-IOV or Infiniband was not needed at all.
    Why worry about whether your ISV (e.g. Oracle) or OSM (e.g. EMC) supports your "I/O fabric" when Ethernet (CNA) based servers are clearly a widely accepted and supported solution with all the major ISVs and OSMs?
    Furthermore, why address just the I/O piece of the provisioning equation when you can apply the same concepts described above to the entire suite of server settings — creating a "Virtual Bare Metal Server" — not just "Virtual I/O"?
    Cheers,
    Brad 

  10. Craig Thompson

    Nigel – Great set of posts on IOV and rack-level networking. At Aprius, as with Xsigo, Virtensys, NextIO and others, we agree that a virtualized 'pool' of data and storage network resources should be provisioned and shared by the entire rack of servers. This allows you to provide these network interfaces and services as virtual functions (better matched to VMs) 'on-demand', and provides the ability to flex up and down as VM workloads change or move around.  And the aim is not to complicate equipment choices and infrastructure designs, but to simplify them by providing a transparent gateway to existing as well as new network and storage infrastructure (not just FC and E) using open market cards and drivers.

    Craig Thompson, VP product marketing for Aprius (http://www.aprius.com)

  11. Nigel Poulton Post author

    Brad,

    Looking at the forest, rather than the tree…. I agree that IF the adapter physically resides in the server chassis then sharing vNICs with other servers is not great.

    But I think that moving the adapter physically outside of the server (tree) into an external device allows the adapters to see the forest rather than just the tree that they are installed in 😛

    MR style solutions only make sense if the adapter is not glued on to the motherboard of a server. And that is the future – IMHO

    Great discussion as always!

    Nigel

  12. Nigel Poulton Post author

    Craig,

    Thanks for stopping by.

    I've been intending to get around to looking at Aprius – too much to do and not enough time…

    First impression on Aprius – who decided to use the term "Gateway" in your product name!?  Call me old fashioned, but the word Gateway when used in a networking context does not conjure warm fuzzy feelings.

    Nigel

  13. Bob Napaa

    Hi Nigel
     
    As usual, a great post on I/O Virtualization. I love the tagline at the end.
     
    Let me shed some light on the Virtensys IOV solution and bring Brad Hedlund up to speed on the capabilities of our IOV switches (we use the term IOV switches rather than IO aggregators as they actually switch IO).
     
    The Virtensys VIO 4000 series uses off-the-shelf standard Ethernet NICs, CNAs, FC HBAs and SAS/SATA RAID adapters. These are I/O adapters available from companies like Intel, QLogic, LSI and others and are shipping in volume today. The Virtensys solution doesn't require the I/O adapters to support the PCI-SIG SR-IOV or MR-IOV standards, and it will work with such I/O adapters as well.
     
    The I/O adapters are consolidated inside the VIO 4000 series (which uses PCIe as the interconnect to the servers).  The VIO 4000 switches then virtualize each of the adapters and present a virtual copy (vNIC, vCNA, vHBA and vRAID) of the same adapter to each server. These virtual adapters are provisioned in a "central management console and quickly provisioned to either Server1 or Server5 — Now THAT is a capability that solves a huge provisioning challenge facing data centers today" <<as Brad says>>.  The Virtensys I/O virtualization switches have the additional capability to virtualize the RAID adapters, thus allowing servers to share local disks within the switches. The servers become true compute and memory nodes without any I/O or disks.
     
    The Virtensys solution is also vendor independent and works with servers from any vendors as compared to other more proprietary and “single-vendor” solutions.
     
    Regards
     
    Bob

  14. Brad Hedlund

    Nigel,
    I think we agree the core problem we are trying to address here is I/O provisioning (tree), and to a larger degree server provisioning (forest).
     
    With that in mind, having the "adapter physically located outside of the server" is really an obsolete and incomplete approach to the provisioning challenge.  It's obsolete because this solution tries to work around a problem that no longer exists … the lack of integration between network and compute.  The network switch can now quickly provision server I/O using standard Ethernet adapters *located in the server*, example Cisco UCS.
     
    The fact is, the "location of the adapter" is irrelevant to the I/O provisioning process.
     
    With that in mind, why bother with niche I/O cards and "I/O aggregators" to build an "I/O fabric" just to solve a provisioning problem that is already solved with a standard Ethernet fabric? (rhetorical question) 🙂
     
    Cheers,
    Brad 

  15. Nigel Poulton Post author

    Brad,

    The way I see it, both (Cisco UCS and the approach of having the adapter outside of the server) address the same issue. Just one does it better than the other 😀

    It is also possible to improve on something where a specific *problem* might not exist. The way I see it, the external adapter via an IO Aggregator is the more flexible solution. You don't need an IO adapter on every blade server you deploy. The industry is trending toward reducing servers to pure compute and having I/O etc outside of the server chassis, IMO.

    It's only a matter of time before Cisco either buys up one of these companies or comes to market with a similar solution themselves. Same goes for HP and others.

    Oh, and it's either FUD or a misunderstanding to suggest all of the IO Aggregator style solutions use "niche I/O cards" – not true for every vendor. And I don't think the Aggregators are niche either. They plug smoothly into almost any environment.

    Nigel

  16. Cam Ford

    Let me throw a very simple example into the mix here. Cisco touts that UCS is 40G ready or 40G capable. Browsing the Palo adapter specs, I noticed that they are using a PCIe Gen2 x16 interconnect for the Palo adapter. This suggests that it would be easy to add 40Gb capability into the existing UCS system when a 40Gb Palo adapter is available (Brad, when can we expect to see this?). In order to upgrade to the new 40G adapter, you have to pull every server, add a new mezzanine card to each server, upgrade your Fabric Extender switches, then upgrade your Fabric Interconnect…..essentially a fork-lift upgrade.

    In the remote IO model, all you need to do is add a new adapter into the IO Aggregator…then ALL servers can see the 40G adapter and share it….no upgrades to the server, no upgrades to the aggregator….just a new IO card. Not only is this more flexible and more cost effective…..but you can also continue to use your existing legacy interfaces as well (FC, 1GbE, 10GbE, etc.).

    Let's be clear……Cisco, Brocade, Emulex, and QLogic all hate the IO aggregator model as they make their money on expensive and complex CNAs that cannot be shared…..so it is clear they will continue to search for justification for that model.

    Let's also look to the past a bit. Server virtualization and sharing resources is not new….just like most of the things we do in commodity PCs, we are just reinventing what Mainframes have done for almost 40 years. For those who remember that world, IBM and others used to sell a separate IO processor chassis that was very flexible and could share a multitude of IO resources amongst Mainframe processor resources…..they dubbed it Channel IO or Channelized IO.

    VMware is providing Mainframe like virtualization to commodity servers, and the Remote IO model….whatever marketing term you give it…..is the next generation of IO processor. I am afraid that given Cisco’s lack of experience in the Compute industry, they have yet to learn many of the lessons that the rest of us learned many years ago……but they will learn…..just give them a few years and I am sure they will figure it out.

  17. Brad Hedlund

    Nigel,

    Just my personal opinion here, but it's highly unlikely Cisco would acquire any of these players. Why not? Simply because these products do not fill any gap in the current Cisco data center offering. In fact, these products only address I/O provisioning – whereas the Cisco offering provides total server provisioning, not just the I/O part.

    Cheers,
    Brad

  18. Nigel Poulton Post author

    Brad,

    Sure these products only address I/O provisioning. But they potentially do it better and with more flexibility than the way Cisco currently do it with UCS.

    And when these companies either get acquired by, or win major design wins with, the likes of HP and IBM, then I'm sure Cisco will come to market with something similar. Either that, or Cisco take notice now and get there before HP, IBM and the likes. Believe me, Flex-10 is a dog in comparison to the competition, so it is either due a refresh or will be replaced with one of these solutions….

    Obviously on forums like this there is a need to defend what the company currently does. But defending what you already do/already have is not a winning formula – moving with the times and implementing the best technologies is a winning formula. I expect (and hope) that Cisco is all over stuff like this behind the scenes. Obviously you/they can't come out and announce stuff like that here 😀

    I'm sure you're biting your tongue 😀

    Nigel

  19. Brad Hedlund

    Nigel,
     
    I'm not biting my tongue at all.  The current Cisco UCS offering is the new thinking here that effectively obsoletes these I/O only solutions.  These players are further recognition of the market for changing the way customers provision services in the data center.  Cisco responded to this market by creating a tight integration of the server, network, and virtualization platform with UCS.  
     
    Players like Virtensys and Xsigo, on the other hand, cannot provide the complete server + network integration, so they responded with an integration of just I/O + network, and they ask the customer to manage the rest of the server separately.  This results in a disjointed service provisioning process with two or more different management tools that are not in lock step with one another.
     
    One simple example: You may be able to move a vNIC and vHBA from Server1 to Server5, but did you also move the UUID and BIOS boot settings? You won't need to worry about that with UCS, but this could become a real problem with the other I/O only based offerings.
     
    This is exactly why I say Cisco will likely never acquire these companies (in my opinion), because Cisco has already leapfrogged their provisioning capabilities with UCS.
     
     
    Cheers,
    Brad

  20. Nigel Poulton Post author

    😀

    Yep, right now they only do the I/O portion of provisioning. BUT……. the more complete integration will come as part of any of the following –

    1. Design wins and integration with OEM Management software
    2. When they are acquired and integrated by server vendors
    3. When Cisco provide hooks into UCSM
    4. When Cisco do their own version (and obviously integrate with UCSM)

    So I think we agree that the power of the Xsigo or VirtenSys solution combined with the power of Cisco UCSM gives us the best solution 😛

    Only playing – Have a good weekend.

    Nigel

  21. Cam Ford

    Nigel, Brad is absolutely correct. Customers do want an integrated management platform for their virtual resources. Xsigo integrates its IO Virtualization management platform (XMS) into Virtual Center as a plug-in, so customers can manage their virtual Servers and Virtual IO resources all from a single management console.

    This allows users to select best in class servers, networking, and storage platforms just like they always have….and centrally manage all of their virtual resources……while monitoring all of their physical hardware with their existing SNMP-based frameworks.

    Unfortunately, Cisco's UCS solution doesn't quite get that customers want choice, and they force you to purchase an almost end-to-end solution….and you are forced to manage your Cisco gear one way and your other servers another way. With Xsigo, we can work across existing servers, both rack and blade….across vendor, and across OS and hypervisor. Customers want choice…..while large system vendors want lock-in (Cisco, HP, IBM, etc.).

  22. Cam Ford

    Hi Nigel, Brad is absolutely correct.  Customers do want an integrated management platform for their virtual resources.  That is exactly why Xsigo has integrated its Virtual IO management framework into VMware's vCenter for a single console to manage all of your virtual resources.  Unfortunately, large system vendors like Cisco, HP, IBM, etc. really want to lock you into their vertical solutions…..whereas customers really want choice.  Their "single management platform" locks you into managing your Cisco environment one way….and the rest of your servers a different way (HP, Dell, IBM, etc.).  With Xsigo, we are platform independent, OS independent, hypervisor independent, and vendor independent……so you can run us on existing rack or blade servers from any vendor…..and still manage your virtual server and virtual IO resources from a single pane of glass.  Also, most people do their hardware monitoring and management through existing SNMP-based frameworks anyway….and will need to continue doing that.

  23. Cam Ford

    PS…..Xsigo can move the boot profile and boot device for a Server WITHOUT changing the BIOS….that is the beauty of the "remote IO" model……and BTW, our SAN boot actually works 🙂

  24. Frank DAgostino

    This is a great discussion on technologies between now and the rainbow.  The rainbow is never reached, of course, as it keeps moving as we get closer.  I think the topics being addressed are somewhat forecasting the future of compute, IO and memory.  The thought I have is that if memory, CPU, and IO are completely distilled, then the topic of MR-IOV is much more relevant, as the technical issues (addressing, bus number, blah) become more relevant when the IO hardware can be associated with any CPU.  Brad and I share an affinity to UCS, but the key is that UCS today provides an SR-IOV implementation that provides a greater number of interfaces that can be provisioned than "at least" the Flex-10 solution mentioned previously.  I do not know the total number of VF functions being provided by SR-IOV capable solutions, but UCS specifically provides up to 128 (some used for control channels), they can be FC or Ethernet, and they can be for virtualized or bare-metal deployments.
    The key to understand, however, is that the SR-IOV mezzanine in UCS has a specific hardware relationship to the CPU and memory on the PCB.  The construct of a server can move anywhere within the management domain, or be exported via its XML profile to another domain, and can use any physical hardware based on its profile.  If my understanding is correct, the MR-IOV implementation removes the mezzanine/hardware attachment from the CPU/memory on the blade, and that is not provided in UCS today, as it is an SR-IOV implementation.  Any logical adapter can be provided on a blade within the UCS construct today, but it is applied to the mezzanine associated with the compute hardware.
    Great blog by the way –

  25. Nigel Poulton Post author

    Hi Frank,

    Thanks for stopping by and glad you like the blog.

    I personally try not to argue against UCS as I do not know the technology that well – I wish I did.  But I know one thing for sure – I like it more than I like Flex-10 😉

    Thanks for the info on the UCS SR-IOV implementation. 

    To me – and this is just my opinion – it makes more sense to disaggregate the I/O from the compute. 

    I will probably say more on Cisco once I get my hands on some UCS, but for now, I'm liking a lot of what I've seen and am told is coming down the pipeline from the likes of Xsigo, NextIO and VirtenSys.

    Nigel

  26. Pingback: physical server - StartTags.com
