James Morle's Blog
Sane SAN 2010: Fibre Channel – Ready, Aim, Fire
Posted on 10:45 am September 30, 2010 by James Morle
In my last blog entry I alluded to perhaps not being all that happy about Fibre Channel. Well, it's true. I have been having a love/hate relationship with Fibre Channel for the last ten years or so, and we have now decided to get a divorce. I just can't stand it any more!
I first fell in love with Fibre Channel in the late 90s: How could I resist the prospect of leaving behind multi-initiator SCSI with all its deep, deep electrical issues? Fibre Channel let me hook up multiple hosts to lots of drives, via a switch, and it let me dynamically attach and detach devices from multiple clustered nodes without reboots. Or so I thought. The reality of Fibre Channel is that it was indeed a revelation in its day, but some of that promise never really materialised until recently. And now it's too late.
I have a number of problems with Fibre Channel as it stands today, and I'm not even going to mention the fact that it is falling behind in bandwidth. Whoops, I just did - try to pretend you didn't just read that. The real problems are:
- It is complex
- It is expensive
- It is unreliable
- It is slow
Complexity. Complexity, complexity, complexity. I hate complexity. Complexity is the IT equivalent of communist bureaucracy - it isn't remotely interesting, it wastes colossal amounts of time, and it ultimately causes the system to go down. Don't confuse complexity with challenge - Challenge is having to solve new and interesting problems, Complexity is having to fix the same old problems time and time again and having to do it standing on one leg. So why do I think Fibre Channel is complex? For these reasons:
- The stack
If you have ever tried to manage the dependencies associated with maintaining a fully supported Fibre Channel infrastructure then you can probably already feel a knot in your stomach. For everyone else, let me explain.
Every component in a Fibre Channel stack needs to be certified to work with the other components: Operating System version, multipath I/O (MPIO) drivers, HBA device drivers, HBA firmware, switch type, switch firmware and storage array firmware. So what happens when you want to, for example, upgrade your MPIO drivers? It is pretty standard for the following process to occur:
- I want to upgrade to MPIO v2
- MPIO v2 requires array firmware v42
- Array firmware v42 requires HBA device driver v3.45
- HBA device driver v3.45 requires the next release of the Operating System
- The next release of the Operating System is not yet supported by the Array firmware
- etc, etc
I think you get the point. But also remember that this wonderful array is shared across 38 different systems, all with different operating systems and HBAs, so the above process has to be followed for every single one, once you have a target release of array firmware that might work across all the platforms. If you are really really lucky, you might get a combination within those factorial possibilities that is actually certified by the array vendor.
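The lockstep nature of that certification problem can be sketched in a few lines of code. This is purely illustrative: the component names, versions and the idea of a vendor "certification matrix" as a simple lookup set are my own invention, not any real vendor's support process.

```python
# Hypothetical sketch of a vendor certification matrix. Each certified
# entry fixes EVERY layer of the stack at once; component names and
# version numbers are invented for illustration.
certified = {
    ("OS v5", "MPIO v1", "HBA drv v3.40", "HBA fw v7", "Array fw v41"),
    ("OS v6", "MPIO v2", "HBA drv v3.45", "HBA fw v8", "Array fw v42"),
}

def is_certified(stack):
    """A stack is supported only if the exact combination was tested."""
    return tuple(stack) in certified

current  = ["OS v5", "MPIO v1", "HBA drv v3.40", "HBA fw v7", "Array fw v41"]
# Upgrading just the MPIO driver, leaving everything else alone:
proposed = ["OS v5", "MPIO v2", "HBA drv v3.40", "HBA fw v7", "Array fw v41"]

print(is_certified(current))   # True
print(is_certified(proposed))  # False - every other layer must move too
```

The point the toy model makes is that certification is per-combination, not per-component: a single-component upgrade drops you out of the matrix, and the only way back in is to move several layers at once, on every one of those 38 attached systems.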
Complex enough? Now add YANT...
Yet Another Networking Technology. I'm all in favour of having different types of networking technology, but not when the advantage is minuscule. All that training, proprietary hardware, cost, and so on: To justify that, the advantage had better be substantial. But it isn't. Compare Fibre Channel to 10Gbps Ethernet, which is a universal networking standard, and it just doesn't justify its own existence. To be fair to Fibre Channel, it was the original version of what we are now calling Converged Networking - it has always supported TCP/IP and SCSI protocols, and used to be way faster than Ethernet, but it just never got the traction it needed in that space.
Expense. It's tough to argue against this one: Fibre Channel is expensive. 10Gbps Ethernet is also expensive, but the prices will be driven down by volume and ubiquity. In addition, Ethernet switches and so forth can be shared (if you must, that is: I'm still a fan of dedicated storage networks for reasons of reliability), whereas Fibre Channel must be dedicated. Infiniband is expensive too, and will probably stay that way, but it is providing a much higher performance solution than Fibre Channel.
Unreliability. Yes, it's true. It's not an inherent problem with the technology itself; Fibre Channel is actually incredibly robust, and I can't fault it on that count. However, the promise of real-life reliability is shattered by:
- Large Fabrics
What is the point of large fabrics? I can see the point of wanting to stretch I/O capability over a wide area, such as remote replication and so forth, but that does not imply that the whole storage universe of the enterprise should be constructed as a giant fabric, does it? Networks should be composed of relatively small, interconnected, failure domains, so that traffic can flow, but the impact of a failure is limited in scope. Building a large fabric is going against that, and I've lost count of the number of catastrophic failures I've seen as a result of building The Dream Fabric.
Complexity; we're back there again. Reliability is inversely proportional to complexity: High complexity = Low reliability, and vice versa. This is particularly true while we still entrust humans to administer these networks.
Speed. This is the final nail in the coffin. Times have changed, and Fibre Channel has no space in the new world. The way I see it, there are now just two preferred ways to attach storage to a server:
- Ethernet-based NFS for general use
- Infiniband-based for very low latency, high bandwidth use
The former approach is a 'high enough' performance solution for most current requirements, with ease of use and well understood protocols and technology. I'm not saying it's quicker than Fibre Channel (though it certainly can be), just that it is fast enough for most things and is easy to put together and manage. The latter method, Infiniband (or similar), is a step up from both Ethernet and Fibre Channel, offering higher bandwidth and lower latency, especially when used with RDMA. Infiniband has been a technology searching for a commercial purpose for some time now, and I believe that time has now come, via the route of semiconductor-based storage devices. Consider the following numbers:
- Fibre Channel Latency: 10-20us (est)
- Infiniband/RDMA Latency: 1us (est)
Now let's see how these latencies compare to those of a physical disk read, and a read from a DRAM-based storage device (taking the lower Fibre Channel estimate of 10us):
- Disk Read: 8,000 us (ie 8ms)
- DRAM-based Storage read: 15us (source: TMS Ramsan 440 specification)
- Ratio of FC latency to Disk Latency: 1:800 (0.125%)
- Ratio of FC latency to DRAM Latency: 1:1.5 (66.7%)
- Ratio of IB latency to Disk Latency: 1:8000 (0.0125%)
- Ratio of IB latency to DRAM latency: 1:15 (6.67%)
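The ratios above are simple arithmetic on the quoted figures, and can be reproduced directly (using the lower 10us Fibre Channel estimate):

```python
# Link latency as a percentage of the media access time, using the
# figures quoted above (all times in microseconds; the FC and IB
# numbers are the estimates from the text).
fc_us, ib_us = 10.0, 1.0       # Fibre Channel, Infiniband/RDMA
disk_us, dram_us = 8000.0, 15.0  # physical disk read, DRAM-based storage read

def overhead(link_us, media_us):
    """Interconnect latency as a fraction of the storage access time."""
    return link_us / media_us * 100.0

print(f"FC vs disk:  {overhead(fc_us, disk_us):.3f}%")   # 0.125%
print(f"FC vs DRAM:  {overhead(fc_us, dram_us):.1f}%")   # 66.7%
print(f"IB vs disk:  {overhead(ib_us, disk_us):.4f}%")   # 0.0125%
print(f"IB vs DRAM:  {overhead(ib_us, dram_us):.2f}%")   # 6.67%
```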
When comparing to disk reads, the Fibre Channel latency does not add much to the total I/O time. However, when accessing DRAM-based storage, it becomes a hugely dominant factor in the I/O time, whereas Infiniband is still single-digit percentage points. This is why I suggest that Fibre Channel has no role in the forthcoming high-performance storage systems. Fibre Channel is neither simple enough for simple systems, nor fast enough for high-performance systems.