Understanding SSD application classes simplifies selecting the right storage solution

By Scott Phillips

VP Marketing

Virtium

October 22, 2014

Selecting the right embedded storage has gotten a lot more complicated. Gone are the days of simply using the dollar-per-gigabyte metric when evaluating storage solutions. To find the most efficient storage, developers have begun to realize they must evaluate their application’s data-type usage model. Adding to the complexity, SSD manufacturers continue to pursue ways to competitively position their products through technology differentiation and application-specific branding. This has led to the creation of several application classes known as client, enterprise, data center, and embedded-industrial “infrastructure” SSDs.

SSDs are essentially built from the same components: an ASIC or FPGA controller, NAND flash, and possibly DRAM. Manufacturers either integrate these components into a solder-down multi-chip package or combine them with passive components on a printed circuit board. If SSDs are basically made up of the same components, what differentiates ones built for a particular application class?

The most frequent answer from SSD companies is an explanation of how the product is built rather than what it does. The explanation typically involves MLC vs. SLC vs. TLC (and now vs. 3D), write amplification optimization, read disturb mitigation, voltage threshold shifting, and myriad other proprietary technologies. Frankly, these parameters are rarely important to embedded systems developers. What they really need is the storage solution that meets their application’s usage model objectives, including the system’s specifications and budget. A better use of a designer’s valuable time, therefore, is to understand the key external metrics of client, enterprise, data center, and embedded-industrial SSDs instead of being overly concerned with the underlying technology and how those metrics are accomplished.

SSD application classes

An overview of the different SSD application classes is helpful to fully understand why each was developed.

Client

There are many well-known use cases and metrics associated with client desktop, notebook/ultrabook, tablet, and smartphone applications. SSDs are used for storing operating systems and user data, and performance is subjective, based on individual needs. The most in-demand features are instant-on and fast application response, so client SSDs are optimized for read speed. Client applications also have substantial down time, enough for the SSD to take care of background flash management tasks that help it achieve higher performance, greater reliability, and longer endurance.

Enterprise

Enterprise-class SSDs were originally developed to replace racks of short-stroked enterprise-class hard drives. SAS had become the preferred interface for storing mission-critical enterprise data, which dictated that enterprise-class SSDs use the same interface, as SAS offers higher reliability than SATA. The performance capabilities of SSDs, however, exposed the bottlenecks of traditional hard drive interfaces and drove demand for still higher enterprise performance; PCIe answered this need. For reference, enterprise-class SSDs typically fall into the following three categories: SAS, PCIe, and flash storage arrays.

Data Center

Data center SSDs are the main storage building block for application-specific servers and appliances that typically support Internet search and social media sites. SSDs for the data center are generally 6 Gbps SATA SSDs in capacities of 120 GB and higher. SATA is typically chosen because it is well known, highly compatible, and cost efficient compared to SAS and PCIe. Data center SSDs are positioned for a lower cost per gigabyte while maintaining adequate IOPS and low latencies, and generally feature read/write speeds around 500 MBps and IOPS in the 60K+ range.
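To put those two figures in perspective, here is a quick back-of-the-envelope sketch in Python; the 4 KiB random transfer size is an assumption for illustration, not a number from any datasheet. It shows why a drive can post 60K+ IOPS yet move only a fraction of its ~500 MBps sequential rating under small random I/O.

    # Back-of-the-envelope only; the 4 KiB random transfer size is an
    # assumed value for illustration, not from the article or a datasheet.
    BLOCK_SIZE_BYTES = 4 * 1024   # assumed 4 KiB random I/O
    IOPS = 60_000                 # the 60K+ IOPS figure cited above

    throughput_mb_s = IOPS * BLOCK_SIZE_BYTES / 1_000_000
    print(f"{IOPS:,} IOPS at 4 KiB ~= {throughput_mb_s:.0f} MB/s")
    # ~246 MB/s of small random I/O; the ~500 MBps figure applies to
    # large sequential transfers, a different workload entirely.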

Infrastructure

SSDs for embedded-industrial systems are mainly deployed in equipment that supports the infrastructure. Infrastructure applications include networking/communication routers, switches, and base stations; enterprise network security and monitoring devices; medical and gaming equipment; factory automation systems; and digital signage.

Unlike the well-known usage models for client and enterprise SSDs, infrastructure SSD applications are highly fragmented, making it tough to segment them into a particular application class. That’s because infrastructure SSDs need to support a wide range of mixed function workloads. Two opposite examples: casino gaming SSDs might be written to once and then write protected, but are read multiple times as games are played, whereas base station SSDs are continuously written with cell phone traffic log information. Infrastructure data patterns range from 99 percent read/1 percent write to just the opposite and can cover every scenario in between (Figure 1).
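For illustration only, the hypothetical Python sketch below parameterizes such a read/write mix; the two profiles at the bottom correspond to the casino gaming and base station extremes described above. The file name, operation count, and block size are arbitrary assumptions, and os.pread/os.pwrite require a POSIX system.

    import os
    import random

    def run_mix(path: str, read_fraction: float, ops: int = 1000,
                block: int = 4096) -> None:
        """Issue a random mix of reads and writes against a scratch file."""
        size = 1024 * block  # 4 MiB scratch area, an arbitrary choice
        with open(path, "w+b") as f:
            f.truncate(size)
            fd = f.fileno()
            for _ in range(ops):
                offset = random.randrange(0, size, block)
                if random.random() < read_fraction:
                    os.pread(fd, block, offset)               # read path
                else:
                    os.pwrite(fd, os.urandom(block), offset)  # write path

    # Casino gaming profile: written once, then almost all reads.
    run_mix("scratch.bin", read_fraction=0.99)
    # Base station logging profile: almost all writes.
    run_mix("scratch.bin", read_fraction=0.01)
    os.remove("scratch.bin")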

 

Figure 1: It is important to look for the longest product lifecycles to help minimize costly and disruptive requalifications. For example, Virtium’s SLC-based StorFly PE class products are guaranteed not to cause a requal for at least four years.

Infrastructure applications are often mission-critical and designed for 24/7 operation, frequently in harsh, extended-temperature environments ranging from -40 °C to 85 °C or higher. Infrastructure SSDs are characterized by smaller, lower-power, lower-capacity form factors such as Slim SATA, mSATA, CompactFlash, or 10-pin eUSB, and they support applications that require capacities under 100 GB; Linux- and RTOS-based systems, for example, often require less than 4 GB.

A common viewpoint is that infrastructure SSDs need to be based on SLC NAND, which makes them considerably more costly than client SSDs. This is not necessarily true. While SLC is more expensive on a dollar-per-gigabyte basis, there are applications where the lowest-cost 120 GB client SSD is still more expensive than an optimized 8 GB SLC infrastructure SSD on a dollar-per-unit basis. And many mission-critical systems absolutely require SLC-based SSDs, making the expense necessary for improved endurance, reliability, and a longer product lifecycle.
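The dollar-per-unit point is easy to verify with simple arithmetic. The prices in the sketch below are assumptions chosen purely for illustration, not quotes from Virtium or anyone else:

    # Illustrative arithmetic only -- both prices are assumptions.
    client_mlc = {"capacity_gb": 120, "dollars_per_gb": 0.60}    # assumed
    industrial_slc = {"capacity_gb": 8, "dollars_per_gb": 5.00}  # assumed

    for name, d in [("120 GB client MLC", client_mlc),
                    ("8 GB industrial SLC", industrial_slc)]:
        unit_cost = d["capacity_gb"] * d["dollars_per_gb"]
        print(f"{name}: ${d['dollars_per_gb']:.2f}/GB -> ${unit_cost:.2f}/unit")

    # 120 GB x $0.60/GB = $72/unit vs. 8 GB x $5.00/GB = $40/unit:
    # the "more expensive" SLC drive is the cheaper line item.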

Requalification cost is another concern: MLC NAND can go through as many as three process iterations for every one SLC iteration, and every iteration necessitates a requal. SLC is probably too costly for high-capacity applications, but at lower capacities it is very compelling from a total cost of ownership (TCO) and performance perspective.

No set rules for infrastructure SSDs

Taking this diverse set of applications into consideration, it becomes obvious that SSD application classes need to be defined by usage model and associated workload requirements rather than by technology. Not all SSD suppliers follow such guidelines, however, and there is no mandate to do so: the JEDEC JC-64.8 SSD committee currently defines application classes only for client and enterprise SSDs in document JESD218, and the workloads associated with these application classes are explained in JESD219.

A given SSD specification isn’t useful or meaningful if it isn’t based on a set of common rules. Therefore, it is the OEM’s responsibility to carefully review datasheets to evaluate how a given SSD has been developed (Figure 2).

 

Figure 2: Virtium’s full line of StorFly SATA, TuffDrive USB, and CompactFlash storage solutions are designed to match the workload needs of a multitude of industrial embedded systems. StorFly SATA SSDs are optimized for higher reliability and increased performance and are available with an industrial operating temperature range of -40 °C to +85 °C.

Validating SSD endurance for an infrastructure application is an excellent exercise in which designers examine requirements including active use (power-on) time and temperature, retention use (power-off) time and temperature, functional failure, and uncorrectable bit error rate (UBER). The difficulty is that the metrics below are all interrelated when it comes to endurance, and a change in the assumptions for one parameter often leads to changes in another.

  • Workload – the types of data, file sizes, whether access is sequential or random, and the read and write requirements of the application.
  • Active use – the assumed temperature, generally measured on the SSD case inside the host system, at which the SSD is written and read; it also defines how often the SSD is used.
  • Retention use – the storage temperature and the length of time the SSD can be powered off while still keeping data intact after the SSD has reached its endurance specification.
  • Data retention time – an important metric for industrial SSDs; an SSD that has scarcely been written retains data substantially longer than one that has been written heavily over a long period.
  • Functional failure requirement – the number of “acceptable” failures for a given sample size subject to specifically defined conditions.
  • UBER – the number of sectors that return an uncorrectable bit error relative to the number of bits that have been read.
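To make the interplay concrete, here is a back-of-the-envelope endurance sketch in Python. Every figure in it (capacity, P/E rating, write amplification, workload) is an assumption chosen for illustration, not a value from the article or any datasheet:

    # All inputs below are assumptions for illustration only.
    capacity_gb = 8               # assumed SLC infrastructure drive
    pe_cycles = 60_000            # assumed SLC program/erase rating
    waf = 2.0                     # assumed write amplification for the workload
    host_writes_gb_per_day = 20   # assumed logging-style write load

    # Terabytes written (TBW): NAND endurance divided by write amplification.
    tbw = capacity_gb * pe_cycles / waf / 1000
    lifetime_years = tbw * 1000 / host_writes_gb_per_day / 365

    # UBER: uncorrectable errors per bit read. JESD218 sets the requirement
    # at 1e-15 for client SSDs and 1e-16 for enterprise SSDs.
    uber = 1 / 1e16               # one uncorrectable error in 1e16 bits read

    print(f"~{tbw:.0f} TBW -> ~{lifetime_years:.0f} years at "
          f"{host_writes_gb_per_day} GB/day of host writes")
    print(f"UBER = {uber:.0e}")

Change any one assumption (a hotter case temperature, a heavier write mix, a higher write amplification factor) and the others shift with it, which is exactly the interrelation noted above.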

This endurance exercise shows why it is critical to understand the applicability and effectiveness of the use case for which an SSD is specified. If SSD specifications do not provide use-case data, they offer limited design applicability and should be questioned.

SSDs in tune with embedded infrastructure

The varied and fragmented nature of storage requirements for embedded-industrial infrastructure applications causes OEMs to evaluate multiple options to match their individual system needs. For an SSD to be in tune with embedded infrastructure application developers’ unique requirements, it should provide a broad range of integrated value-added capabilities. To adequately support infrastructure equipment, optimized SSDs must provide power-down protection, 24/7 availability, reliable operation over a wide temperature range, low-power/low-heat, high endurance, long product lifecycles, and more.

Reduced total cost of ownership and enhanced storage efficiency are achievable when embedded systems OEMs fully understand SSD application classes. Selecting the optimal SSD to meet budget and application specifications for a particular design is possible with the help of storage suppliers that have the experience and product portfolio in place to serve the needs of the diverse embedded infrastructure market.

Scott Phillips is VP of Marketing at Virtium.

Scott Phillips has more than 20 years of experience in marketing, product/brand management, and business development in the high-tech industry. He has worked for large multinational companies as well as small companies, with responsibility for domestic and overseas accounts, primarily in the OEM space.
