
Let's talk about NVMe, let's talk to Dell EMC: Let's talk about $1bn baby DSSD

Ecosystem's maturation will provide D5 springboard next year

Interview The most high-profile NVMe-using array is Dell EMC's all-flash, 10-million-IOPS D5, the much-anticipated product of its billion-dollar acquisition of DSSD.

We asked Mike Shapiro, VP Software for DSSD, questions about how DSSD views NVMe as part of our NVMe interview series. His answers are below.

During the process, news hit that the President of the Dell EMC Division responsible for DSSD, C J Desai, had quit, and then that Bill Moore, President of DSSD, had also quit. Is there an issue over DSSD's future?

That prompted another question-and-answer session with EMC, which clarified EMC's commitment to DSSD:

  1. On Bill Moore: "Bill joined EMC's Office of the CTO more than a year ago from DSSD. He wrapped up his employment at Dell EMC last month."
  2. On the amount of EMC investment in DSSD: "Sorry ... we can't verify any of these numbers."
  3. Does Dell EMC view DSSD as a large-scale growth opportunity or is it more of a niche market product? "DSSD is a rack-scale flash array designed for use cases that require the highest levels of performance. It will continue to play a strategic role in Dell EMC's all-flash portfolio and all-flash strategy going forward."

So DSSD is strategic to EMC. With that settled, here are its software VP's views on NVMe.

El Reg: Will simply moving from SAS/SATA SSDs to NVMe drives bottleneck existing array controllers? Must we wait for next-generation controllers with much faster processing?

Mike Shapiro: As a general statement, NVMe drives offer higher IOPS and bandwidth capability, but they also consume fewer CPU cycles per IOP on a host or storage controller (by running a far simpler software stack than does SCSI/SAS/SATA). So depending on how such drives are used in each product line, and in what quantity, and other system design properties, they may or may not require new controllers.

For example, the DSSD system, which is designed to deliver the absolute highest performance in the industry from NVMe drives, required a completely new controller and I/O fabric design to fully realise these new levels of performance. Other types of systems that use a small amount of flash storage for caching, such as Dell EMC’s mid-range solutions, might instead benefit from NVMe drives as an enhancement to their current production architecture.

Secondly, systems that use a modular architecture of controllers and drives, such as VMAX, can incorporate NVMe without needing new controllers. VMAX already uses NVMe within its controllers, and thanks to that modular architecture it will be able to take advantage of NVMe drives without waiting for next-generation controllers.
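
To put rough numbers on Shapiro's cycles-per-IOP point, here's a back-of-the-envelope sketch in C. The clock speed and per-I/O cycle counts are our own assumptions chosen purely for the arithmetic, not Dell EMC or NVMe spec figures:

    /* Back-of-the-envelope: how many controller cores a given IOPS
     * target needs, as a function of software-stack cost per I/O.
     * All constants here are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        const double cpu_hz      = 2.5e9;  /* assumed 2.5GHz controller core */
        const double target_iops = 10e6;   /* D5-class 10 million IOPS */
        const double cycles_scsi = 30000;  /* assumed SCSI/SAS stack cost per I/O */
        const double cycles_nvme = 10000;  /* assumed NVMe stack cost per I/O */

        printf("cores needed, SCSI-era stack: %.0f\n",
               target_iops * cycles_scsi / cpu_hz);   /* prints 120 */
        printf("cores needed, NVMe stack:     %.0f\n",
               target_iops * cycles_nvme / cpu_hz);   /* prints 40 */
        return 0;
    }

The absolute figures don't matter; the point is that any multiple shaved off per-I/O software cost translates directly into how many IOPS a given controller complement can serve.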

El Reg: Will we need affordable dual-port NVMe drives so array controllers can provide HA? What does affordable mean?

Mike Shapiro: For dual-controller HA systems, of which we have many in the Dell EMC portfolio, we certainly will provide dual-port NVMe drives at affordable cost. The hardware cost is essentially no different for such drives: the flash media is the same as for any SSD, and PCIe NVMe controller ASICs all provide sufficient PCIe lanes and endpoints for dual-porting.

So as the NVMe ecosystem of servers and drive enclosures rolls out, we expect dual-port NVMe drives to be available everywhere we see dual-port SAS drives today and at essentially the same cost structure as SAS dual-port SSDs. We believe that in the 2017-18 timeframe dual-port NVMe drives will be comparable in price to their SAS counterparts.
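
How would software tell a dual-port drive from a single-port one? On Linux, the NVMe Identify Controller data carries a CMIC byte (offset 76) whose bits flag multi-port and multi-controller subsystems. A minimal sketch, assuming a /dev/nvme0 device node and root privileges:

    /* Minimal sketch: read the Identify Controller page from a Linux
     * NVMe device and report the CMIC (multi-path capability) byte.
     * The device path is an example; needs root. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/nvme_ioctl.h>

    int main(void) {
        unsigned char id[4096];         /* Identify Controller data structure */
        struct nvme_admin_cmd cmd;

        int fd = open("/dev/nvme0", O_RDONLY);
        if (fd < 0) { perror("open /dev/nvme0"); return 1; }

        memset(&cmd, 0, sizeof(cmd));
        cmd.opcode   = 0x06;            /* Identify */
        cmd.addr     = (uintptr_t)id;
        cmd.data_len = sizeof(id);
        cmd.cdw10    = 1;               /* CNS=1: Identify Controller */

        if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
            perror("NVME_IOCTL_ADMIN_CMD");
            return 1;
        }

        /* CMIC bit 0: subsystem has more than one port (dual-port capable);
           bit 1: subsystem may contain two or more controllers. */
        printf("CMIC = 0x%02x -> %s\n", id[76],
               (id[76] & 0x01) ? "multi-port capable" : "single-port");
        close(fd);
        return 0;
    }

An HA array design would check exactly this kind of capability before wiring one drive to two controllers.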


El Reg: Are customers ready to fit NVMeF array-accessing servers with new HBAs and, for RoCE, DCB switches, and to deal with end-to-end congestion management? Do they need routability with RoCE?

Mike Shapiro: There are multiple pieces to NVMeF readiness. One is the switch ecosystem, where we already see widespread deployment of DCB-capable switches. Two is client-side RDMA NICs, where we are seeing a set of new chips available in 2016-2017 that will provide low-cost RDMA NICs for Ethernet (including both RoCE and iWARP options), Omni-Path, and InfiniBand.

For Ethernet in particular, new NICs that provide RDMA at the same time as customers move from 1GbE to 10, 25, or 40GbE will accelerate NVMeF readiness.

Three is host software being available in all operating systems: since the NVMeF spec was only recently finished, this is something we expect to mature rapidly over the first half of 2017. So all the necessary pieces for readiness are happening with significant industry momentum behind NVMeF.

We do expect for Ethernet that most customers will use RoCEv2, which is routable, although not all solutions require routability. As an example, many high-performance storage clusters might consist of only dozens of servers and shared storage in a handful of racks, and therefore not require routing. Single-rack solutions can be built today from our DSSD product line using NVMe over shared PCIe as a fabric, which is the fastest possible approach for a single rack and similarly does not require external switches or routers.

Solutions that require wide-area routing will push the industry to continue to work on end-to-end congestion management for RDMA, and Dell EMC is participating in multiple hardware and software efforts related to this area. Dell EMC is in a great position to enable industry adoption through its end-to-end offerings across servers, networking, storage and management software.
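
For a feel of what "host software being available" means in practice, here's how a Linux initiator with fabrics support wires up: the kernel exposes /dev/nvme-fabrics, and writing a comma-separated option string to it (which is what nvme-cli's "nvme connect" does under the hood) creates the remote controller. A sketch with placeholder address and NQN, not a real deployment:

    /* Sketch of a Linux host initiating an NVMe-oF connection by
     * writing connect options to /dev/nvme-fabrics. The target
     * address and subsystem NQN below are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *opts =
            "transport=rdma,"               /* RoCEv2, iWARP or InfiniBand */
            "traddr=192.168.1.10,"          /* placeholder target address */
            "trsvcid=4420,"                 /* IANA-assigned NVMe-oF port */
            "nqn=nqn.2016-06.io.example:subsys1"; /* placeholder NQN */

        int fd = open("/dev/nvme-fabrics", O_RDWR);
        if (fd < 0) { perror("open /dev/nvme-fabrics"); return 1; }

        if (write(fd, opts, strlen(opts)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }
        /* On success the kernel creates a new /dev/nvmeN controller;
           reading the fd back returns its instance number. */
        close(fd);
        return 0;
    }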

El Reg: Could we cache inside the existing array controllers to augment existing RAM buffers and so drive up array performance? With flash DIMMs say? Or XPoint DIMMs in the future?

Mike Shapiro: First, essentially all enterprise storage controllers provide caching in some form, whether for metadata or data. Workloads with higher locality benefit greatly from this cache, improving application response time. The new NVDIMM technology makes it possible to increase the amount of cache by supplying higher-density memory at lower cost.

So in places where DRAM caching is used today and it would pay to significantly expand the cache, these technologies may find a home in future products. We are always looking at new ways to expand caches with these types of DIMM alternatives where doing so improves the system's resulting price/performance.

Second, 3D XPoint technology will also offer opportunities for performance improvements as a high-speed tier for user data. VMAX is uniquely positioned to add next-generation memory tiering due to the built-in performance-based tiering feature of the VMAX architecture. And performance-focused products such as DSSD will incorporate next-generation memory in the form of new storage modules.
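
To make the NVDIMM-as-bigger-cache idea concrete: once an NVDIMM namespace is configured, Linux exposes it as a pmem block device that software can map and use byte-addressably. A minimal sketch, assuming a /dev/pmem0 device with at least a 1GiB region:

    /* Minimal sketch: map a persistent-memory block device and treat
     * it as a byte-addressable cache region. Device path and size
     * are assumptions about the host configuration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t cache_bytes = 1UL << 30;    /* assumed 1GiB region */

        int fd = open("/dev/pmem0", O_RDWR);
        if (fd < 0) { perror("open /dev/pmem0"); return 1; }

        /* MAP_SHARED so stores reach the media rather than a private copy. */
        void *cache = mmap(NULL, cache_bytes, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (cache == MAP_FAILED) { perror("mmap"); return 1; }

        /* Treat it like RAM: e.g. stage hot metadata into the region. */
        memcpy(cache, "hot-metadata", 13);
        msync(cache, cache_bytes, MS_SYNC);      /* force write-back */

        munmap(cache, cache_bytes);
        close(fd);
        return 0;
    }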


El Reg: Does having an NVMe over fabrics connection to an array which is not using NVMe drives make sense?

Mike Shapiro: Yes, one can imagine scenarios where that would be of benefit, just as in previous product generations it has been useful to speak FC to an array that no longer contains FC-connected drives. Fundamentally, once customers choose an overall server/storage deployment model, it will be convenient for other types of products that augment that deployment to plug into the environment using the same protocol.

For customers who move to NVMeF in the future to gain the advantages of high-speed networks with RDMA and converged storage and network traffic, it might well make sense to provide NVMeF connectivity not just to high-speed data on NVMe drives, but to other kinds of services like a data lake, backed by non-NVMe drives or non-flash media, or by a hybrid pool.

El Reg: When will NVMeF arrays filled with NVMe drives and offering enterprise data services be ready? What is necessary for them to be ready?

Mike Shapiro: At the start of 2016, Dell EMC launched the industry's first NVMe enterprise shared storage system, the DSSD D5, filled with the industry's densest NVMe drives and supporting enterprise applications like Oracle. Dell is also already shipping the industry's leading portfolio of servers supporting NVMe 2.5-inch SSDs.

NVMe drives and NVMeF protocols will continue to be added to other products in the overall Dell and Dell EMC portfolio as we enhance more of our overall software and hardware platforms to support this new technology. These new storage offerings will include the full complement of enterprise data services that customers expect and rely on when they purchase a Dell EMC storage system.

Comment

No question: Dell EMC sees NVMe drive and fabric adoption as necessary in the shared storage array market. How about in the hyper-converged area?

An emerging possibility is the notion of having hyper-converged nodes' storage connected by an NVMe-style fabric, one using RDMA, to speed inter-node linking and virtual SAN operations. An Excelero NASA Ames case study illustrates the idea.

NVMe's array access latency-killing future is shining bright and strong, and DSSD seems to be charging ahead to take advantage of that. ®
