
Update on FCoE: The Current State of Real World Deployments

FCoE has been out in the marketplace for approximately two years now, and I thought it would be good to discuss what we’re seeing in real-world deployments.

Background

For those not familiar with Fibre Channel over Ethernet (FCoE), it is being hailed as a key new technology and a first step toward consolidating Fibre Channel storage networks and Ethernet data networks. This has several benefits, including simplified network management, elimination of redundant cabling and switches, and reduced power and cooling requirements. Performance over the Ethernet network is similar to that of a traditional Fibre Channel network because the 10Gb connection is “lossless”. Essentially, FCoE encapsulates FC frames in Ethernet frames and carries them over Ethernet links instead of Fibre Channel links. Underneath it all, it is still Fibre Channel, and storage management is handled in much the same way as with traditional FC interfaces.
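For a rough picture of what that encapsulation looks like on the wire (layout simplified; FCoE uses Ethertype 0x8906, and a companion protocol, FIP, handles discovery and fabric login):

  | Ethernet header (Ethertype 0x8906) | FCoE header | SOF | FC frame: FC header + payload + CRC | EOF | Ethernet FCS |

The full FC frame, CRC and all, rides inside the Ethernet frame untouched, which is why the switch and the array still see ordinary Fibre Channel once the outer header is stripped.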

Adoption

Across the RoundTower customer base in the Ohio Valley, adoption is still relatively low. I would attribute this to the fact that many customers in the Ohio Valley have found that traditional 1GbE iSCSI bandwidth suffices for their environment. They never had a need to implement Fibre Channel, so there is little need to move to an FCoE environment. The most common FCoE switch is the Nexus 5000, and although some customers may not implement FCoE, we are seeing significant adoption of the Nexus line, with the 5000 often being used as a straight 10GbE switch. Even for medium-sized businesses that haven’t yet seen a need to adopt 10GbE, the drive to virtualize more will require greater aggregate network bandwidth at the ESX server, making 10GbE a legitimate play. In that case, the customer can simply continue to run iSCSI or NFS over the 10GbE connection without implementing FCoE.

NFS and iSCSI are great, but there’s no getting away from the fact that they depend on TCP retransmission mechanics. This is a problem in larger environments, and it is why Fibre Channel has remained a very viable technology. The higher you go in the network protocol stack, the longer it takes to detect and recover from problems; detecting a lost connection can take seconds, and often many tens of seconds. EMC, NetApp, and VMware recommend that timeouts for NFS and iSCSI datastores be set to at least 60 seconds. FCoE, by contrast, expects transmission-loss handling to be done at the Ethernet layer: lossless congestion handling for drops and the usual CRC mechanisms for line errors. That puts link-state sensitivity in the millisecond or even microsecond range. This difference is ultimately why iSCSI didn’t displace Fibre Channel in larger environments.
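To put those timeouts in context: on a VMware host, the relevant settings are the NFS heartbeat advanced options and the guest OS disk timeout. The commands below are only a sketch with placeholder values; the actual numbers should come from your array vendor’s host attach guide.

  # view / adjust NFS heartbeat behavior on an ESX host (placeholder values)
  esxcfg-advcfg -g /NFS/HeartbeatTimeout
  esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
  # Windows guests typically have their disk timeout raised to 60 seconds:
  #   HKLM\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue = 60 (decimal)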

Until recently, storage arrays did not support native FCoE connectivity. NetApp was first to market with FCoE support, though there were some caveats and the technology was “Gen 1”, which most folks prefer to avoid in production environments. Native FCoE attach also did not support multi-hop environments. FCoE has now been ratified as a standard, some of the minor “gotchas” have been taken care of with firmware updates, and EMC has also released UltraFlex modules for the CX/NS line that allow you to natively attach your array to an FCoE-enabled switch. These capabilities will most certainly accelerate FCoE deployment.
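As a rough sketch of what native FCoE attach looks like on the switch side (a Nexus 5000 running NX-OS; the VLAN, VSAN, and interface numbers here are arbitrary examples, not a recommendation):

  feature fcoe
  ! create a VSAN and map an FCoE VLAN to it
  vsan database
    vsan 10
  vlan 100
    fcoe vsan 10
  ! 10GbE port facing the array (or host) CNA
  interface Ethernet1/5
    switchport mode trunk
    switchport trunk allowed vlan 1,100
  ! virtual Fibre Channel interface bound to that port, placed in the VSAN
  interface vfc5
    bind interface Ethernet1/5
    no shutdown
  vsan database
    vsan 10 interface vfc5

From there, zoning and LUN masking look exactly like they do on a traditional FC fabric.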

At the host level, early versions of the Converged Network Adapter (CNA) were actually two separate chipsets on a single PCI card, a duct-tape-and-baling-wire way to get host support for FCoE to market quickly. Now, Gen2 CNAs based on a single chipset are hitting the market. FCoE on the motherboard is also coming in the not-too-distant future, and these developments will further accelerate FCoE adoption.

Recommendations

The best use case for FCoE is still customers who are building a completely new data center or refreshing their entire data center network. I would go so far as to say it is a no-brainer to deploy 10GbE infrastructure in these situations. For customers whose storage bandwidth needs exceed 60MB/sec, it will most certainly make sense to leverage FCoE functionality, and with a 10GbE infrastructure already in place, the uplift to implement FCoE should be relatively minimal. One important caveat to consider before implementing a converged infrastructure is to have organizational discussions about management responsibility for the switch infrastructure. This particularly applies to environments where the network team is separate from the storage team. Policies and procedures will have to be put in place for one group to manage the device, or ACLs and a rights-delegation structure will need to be created that allow the LAN team to manage LAN traffic and the storage team to manage SAN traffic over the same wire (a sketch of what that delegation can look like follows below).
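For the delegation piece, NX-OS role-based access control is one way to draw that line. The snippet below is a minimal sketch; the role name is made up, and the available feature names vary by platform and release (check “show role feature” for the actual list):

  role name san-team
    description storage team manages the SAN side of the converged switch
    rule 1 permit read
    rule 2 permit read-write feature vsan
    rule 3 permit read-write feature zone
  username storage01 password <choose-a-password> role san-team

The LAN team keeps the built-in network-admin role, while members of san-team get read access everywhere but can only change the SAN-related features.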

The above option is a great use case, but it still involves a fair number of pieces and parts, even if it is streamlined compared to an environment where LAN and SAN are completely separate. Another use case for implementing FCoE today that is incredibly simple and streamlined is to make it part of a server refresh. The Cisco UCS B-series blade chassis offers some impressive advantages over other blade options, and FCoE is built right in, which makes the management and cabling setup of Cisco UCS much cleaner than other blade chassis options. With FCoE part of the UCS chassis right out of the box, there are relatively few infrastructure changes required in the environment, management is handled from the same GUI as the blade chassis, and there is no cabling to do other than perhaps adding an FC uplink to an existing FC SAN environment if one exists.
