What is NVMe?
The storage world has gained a number of new terms in the last few years. Let’s start with NVMe. The abbreviation stands for Non-Volatile Memory Express, which isn’t very self-explanatory. It all began a few years back when NAND flash started to make major inroads into the storage industry, and the new storage medium had to be accessed through existing interfaces like SATA and SAS (Serial Attached SCSI).
Back at that time, FusionIO created a NAND flash-based SSD that was directly plugged into the PCIe slot of a server. This eliminated the bottleneck of the ATA or SCSI command sets and the interfaces coming from a time of rotating storage media.
The FusionIO products shipped with proprietary drivers, and the industry set out to create an open standard suited to the performance of NAND flash. One of the organizations where the players of the industry meet, align and create storage standards is SNIA (the Storage Networking Industry Association); the NVMe specification itself is developed by the NVM Express consortium.
The first NVMe standard was published in 2011, and it describes a PCIe-based interface and command set to access fast storage. It can be thought of as a cleaned-up version of the ATA or SCSI command sets plus a PCIe interface.
What is NVMe-oF and NVMe/TCP?
Similar to what iSCSI is to SCSI, NVMe-oF (NVMe over Fabrics) is a standard that describes how to send NVMe commands over a network. The original NVMe-oF transports require an RDMA-capable network (like InfiniBand or RoCE), while NVMe/TCP, a later transport binding, works on every network that can carry IP traffic.
There are two terms to be aware of. The initiator is the host where the applications that want to access the data run. Linux comes with a built-in initiator, and other operating systems either already have initiators or will gain them soon.
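To make the initiator side concrete, the following sketch connects a Linux host to an NVMe/TCP target using the nvme-cli tool. The IP address, port and NQN (NVMe Qualified Name) are placeholder assumptions; the commands must be run as root on a host whose kernel has the nvme-tcp module.

```shell
# load the NVMe/TCP initiator transport
modprobe nvme-tcp

# ask the target (address and port are placeholders) which subsystems it offers
nvme discover -t tcp -a 192.0.2.10 -s 4420

# connect to one subsystem by its NQN; its namespaces then show up
# as regular block devices (/dev/nvme0n1 or similar)
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
     -n nqn.2019-06.org.example:target1

# list the NVMe devices now visible on this host
nvme list
```

From this point on, the remote namespace behaves like any local block device: it can carry a file system, an LVM physical volume, or a DRBD backing device.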
The target is the side where the data is stored. Linux comes with a software target built into the kernel. It might not be obvious that any Linux block device can be exported as an NVMe-oF target using this software; it is not limited to NVMe devices.
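As a sketch of how little is needed on the target side: the kernel's nvmet target is configured entirely through configfs. The backing device (/dev/sdb), the NQN and the listen address below are placeholder assumptions; run as root.

```shell
# load the target core and its TCP transport
modprobe nvmet
modprobe nvmet-tcp

# create a subsystem; the NQN is an example
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2019-06.org.example:target1
mkdir "$SUBSYS"
echo 1 > "$SUBSYS/attr_allow_any_host"   # skip host-based access control

# namespace 1, backed by an arbitrary block device -- not necessarily NVMe
mkdir "$SUBSYS/namespaces/1"
echo -n /dev/sdb > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"

# a TCP listener on the conventional NVMe/TCP port 4420
PORT=/sys/kernel/config/nvmet/ports/1
mkdir "$PORT"
echo tcp        > "$PORT/addr_trtype"
echo ipv4       > "$PORT/addr_adrfam"
echo 192.0.2.10 > "$PORT/addr_traddr"
echo 4420       > "$PORT/addr_trsvcid"

# publish the subsystem on that port
ln -s "$SUBSYS" "$PORT/subsystems/"
```

Tools like nvmetcli wrap these steps, but the configfs layout above is all there is to the kernel interface.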
What does this have to do with Swordfish?
While the iSCSI or NVMe-oF standards describe how READ, WRITE and other operations on block data are shipped from the initiator to the target, they do not describe how a target (volume) gets created or configured. For too many years, this was the realm of vendor-specific APIs and GUIs.
SNIA’s Swordfish standard describes how to manage storage targets and make them accessible as NVMe-oF targets. It is a REST API carrying JSON data, and as such it is easy to understand and embrace.
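To illustrate the flavor of the API, here are two hedged example requests with curl. The host name, credentials and storage-service ID ("1") are assumptions about a particular deployment; Name and CapacityBytes are standard properties of the Swordfish Volume schema.

```shell
# list the volumes of a storage service (service ID "1" is hypothetical)
curl -k -u admin:secret \
     https://sf-target/redfish/v1/StorageServices/1/Volumes

# create a 10 GiB volume; the service answers with the new resource's URI
curl -k -u admin:secret -X POST \
     -H 'Content-Type: application/json' \
     -d '{"Name": "vol1", "CapacityBytes": 10737418240}' \
     https://sf-target/redfish/v1/StorageServices/1/Volumes
```

Because it is plain HTTP plus JSON, any language with an HTTP client can drive a Swordfish target; no vendor SDK is required.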
The major drawback of Swordfish is probably that it is defined as an extension of Redfish. Redfish is a standard to manage servers over the network. It can be thought of as a modernized IPMI. As such, Redfish will usually be implemented on a BMC (Baseboard Management Controller). While Redfish has its advantages over IPMI, it does not provide something completely new.
On the other hand, Swordfish is something that was not there before; but as it is an extension to Redfish, implementing it usually means that the machine needs a Redfish-enabled BMC. That may hinder or slow down the adoption of Swordfish.
Since version 0.7, LINSTOR is capable of working with storage provided by Swordfish-compliant storage targets as well as their initiator counterparts. Besides working with DRBD and directly attached storage on Linux servers, LINSTOR has thus gained the capability of managing storage on Swordfish/NVMe-oF targets.