Those who have been following the SSD industry for a couple of years are likely aware that Micron does not sell retail drives under its own brand (unlike, for example, Samsung and Intel). Instead, Micron has two subsidiaries, Crucial and Lexar, whose sole purpose is to handle retail sales. The Crucial side handles RAM and SSD sales, whereas Lexar focuses on memory cards and USB flash drives. The Micron crew is left with business-to-business sales, which consist of OEM sales as well as direct sales to some large corporations.

In the past the difference between Micron and Crucial branded SSDs has merely been the label and packaging. The M500 and M550 were the same for both brands, and the Micron version of the Crucial m4 was simply called the C400. However, that strategy came to an end with the MX100, which was a retail (i.e. Crucial) only product. Micron decided to separate the product planning of Micron and Crucial drives, and the branding was also made clearer: the M lineup is now strictly Micron, while Crucial drives will use the MX branding that was introduced with the MX100.

The main reason for the separation was the distinct needs of the two markets. Products aimed at OEMs tend to require longer validation cycles because of stricter quality requirements (a bad drive in a laptop hurts the laptop's brand, not just the drive's brand), and PC OEMs generally do their own validation as well, which further increases the overall validation time. The retail market, on the other hand, is more focused on being first and providing the best value, which frankly does not play very well with long OEM validation cycles.

Since the Crucial MX100 was strictly for retail, the Micron M600 will in turn be OEM-only. The two have a lot in common as both are based on Micron's latest 128Gbit 16nm NAND, but unlike previous releases the two are not identical. The M600 comes in a variety of form factors and also includes a 1TB model, whereas the MX100 is 2.5" only and tops out at 512GB. It makes sense to offer more form factors for the OEM market because the PC laptop industry is moving more and more towards M.2 for space savings, whereas the retail SSD market is still mostly 2.5" because people are upgrading older laptops and desktops that do not have mSATA or M.2 slots.

Micron M600 Specifications
Capacities: 128GB | 256GB | 512GB | 1TB
Controller: Marvell 88SS9189 (all capacities)
NAND: Micron 128Gbit 16nm MLC (all capacities)
Form Factors: 2.5" 7mm, mSATA & M.2 2260/2280 (128GB-512GB); 2.5" 7mm only (1TB)
Sequential Read: 560MB/s | 560MB/s | 560MB/s | 560MB/s
Sequential Write: 400MB/s | 510MB/s | 510MB/s | 510MB/s
4KB Random Read: 90K IOPS | 100K IOPS | 100K IOPS | 100K IOPS
4KB Random Write: 88K IOPS | 88K IOPS | 88K IOPS | 88K IOPS
Idle Power (DevSleep / Slumber): 2mW / 95mW | 2mW / 100mW | 2mW / 100mW | 3mW / 100mW
Max Power: 3.6W | 4.4W | 4.7W | 5.2W
Encryption: TCG Opal 2.0 & eDrive (all capacities)
Endurance: 100TB | 200TB | 300TB | 400TB
Warranty: Three years (all capacities)

Aside from the form factors and capacities, the M600 also brings something new and concrete. As I mentioned in the launch article, the M600 introduces pseudo-SLC caching to Micron's client SSDs, which Micron calls Dynamic Write Acceleration. I will talk a bit more about it in just a second, but as an overview, Dynamic Write Acceleration increases write performance at smaller capacities and also allows for higher endurance. As a result, even the 128GB model is capable of 400MB/s sequential writes and 88K IOPS random writes, and its endurance rating sees a boost from 72TB to 100TB despite the use of smaller lithography and higher capacity 128Gbit 16nm MLC. The endurance also scales with capacity, and although the scaling is not linear, the 1TB model is rated at 400TB of host writes under typical client workloads.
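
To put the endurance ratings in perspective, here is a quick back-of-the-envelope calculation of how many gigabytes the host could write per day, on average, over the three-year warranty without exceeding the rated figures. This is rough arithmetic from the spec table only, not a Micron-provided number:

```python
# Rough arithmetic only: the rated endurance (TB) spread evenly over the
# three-year warranty period. Real-world endurance depends on the workload.
ENDURANCE_TB = {"128GB": 100, "256GB": 200, "512GB": 300, "1TB": 400}
WARRANTY_DAYS = 3 * 365

for capacity, tb in ENDURANCE_TB.items():
    gb_per_day = tb * 1000 / WARRANTY_DAYS
    print(f"{capacity}: ~{gb_per_day:.0f} GB of host writes per day")
```

Even the 128GB model works out to roughly 91GB of host writes per day, which is far beyond typical client usage.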

Hardware encryption in the form of TCG Opal 2.0 and eDrive support has been a standard feature in Micron/Crucial client SSDs for a while now and the M600 continues that trend. Currently the M600 has not been certified by any third party software vendors (e.g. Wave and WinMagic), but Micron told us that the M600 is in the validation process as we speak and an official certification will come soon. The M600 lineup also has SKUs without any encryption support for regions where encryption is controlled by the government.

Dynamic Write Acceleration

During the past year there has been an industry-wide trend to include a pseudo-SLC cache in client-grade SSDs. The driving force behind the trend has been the transition to TLC NAND as an SLC cache is used to compensate for the slower performance and lower endurance of TLC NAND. Despite sticking with MLC NAND, the M600 has also added a pseudo-SLC feature.

Micron's implementation differs from the others in that the size of the SLC cache is dynamic. While Samsung's and SanDisk's SLC caches have a fixed capacity, the M600's SLC cache size is determined by how full the drive is. With an empty drive, almost all blocks will be run in SLC mode, and as the drive is filled the cache size decreases. Micron claims that even when the drive is 90% full, Dynamic Write Acceleration offers a larger cache capacity than competing technologies, although unfortunately Micron did not share any actual numbers with us.
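
Micron has not published the sizing policy, but the general idea can be sketched with a toy model in which some share of the free blocks is run in SLC mode. The parameters below (the 90% share of free blocks and the halving of block capacity in SLC mode) are assumptions for illustration, not Micron's actual numbers:

```python
# Toy model of a dynamic SLC cache: a share of the *free* capacity runs in
# SLC mode, and a block in SLC mode holds half the data it would as MLC.
# Both parameters are assumptions, not Micron's actual policy.
def estimated_slc_cache_gb(user_capacity_gb, fill_fraction, slc_share_of_free=0.9):
    free_gb = user_capacity_gb * (1.0 - fill_fraction)
    return free_gb * slc_share_of_free * 0.5  # SLC mode halves block capacity

for fill in (0.0, 0.5, 0.9):
    cache = estimated_slc_cache_gb(256, fill)
    print(f"256GB drive, {fill:.0%} full -> ~{cache:.0f} GB of SLC cache")
```

The key takeaway of such a scheme is that the cache shrinks gracefully as the drive fills instead of being capped at a small fixed size from the start.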

At the wafer level, the 128Gbit 16nm die is the same as in the MX100, but some proprietary processes applied to the die before packaging enable the use of Dynamic Write Acceleration. The SLC cache is not fixed to any specific location as the firmware and special NAND allow any block to be run in either SLC or MLC mode, which is different from SanDisk's nCache 2.0, where every NAND die has a portion of blocks set in SLC mode. As a result, Dynamic Write Acceleration will move data from die to die when transferring data from SLC to MLC, so there is a bit more controller and NAND interface overhead compared to SanDisk's implementation. There is no predefined threshold as to when data starts to be moved from SLC to MLC as that depends on a variety of factors, but Micron did say that idle time will trigger the migration (Micron's definition of idle time is anything over 50ms).
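
As a rough illustration of the idle-triggered migration, the sketch below folds SLC-cached blocks into MLC once the host has been idle for longer than the threshold. The 50ms figure comes from Micron; everything else, including the helper names and the loop structure, is hypothetical:

```python
import time

IDLE_THRESHOLD_S = 0.050  # Micron's stated definition of idle: anything over 50ms

def background_fold(last_host_io_time, pending_slc_blocks, fold_block_to_mlc):
    """Hypothetical firmware-style loop: once the host has been idle for longer
    than the threshold, start folding SLC-cached blocks into MLC, and stop as
    soon as new host I/O arrives (i.e. the idle condition no longer holds)."""
    while pending_slc_blocks and time.monotonic() - last_host_io_time() > IDLE_THRESHOLD_S:
        fold_block_to_mlc(pending_slc_blocks.pop())
```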


I ran HD Tach to see how performance is affected when writing sequentially across all LBAs. You can clearly see that there are actually three stages. First, writes go to SLC and performance is a steady 400MB/s; it seems that an empty drive runs about 45% of its NAND in SLC mode. The first drop indicates that the drive is now switching to MLC NAND, hence the big difference between the 128GB and 256GB models. The second drop, on the other hand, means that the drive is now moving data from SLC to MLC at the same time as it is taking in host writes, so there is significant overhead from the internal IOs, resulting in <100MB/s host write speeds.
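
The three stages can be summarized numerically. In the stylized model below, the 45% SLC breakpoint and the 400MB/s and <100MB/s figures come from the behavior described above; the second breakpoint and the direct-MLC speed are assumptions for illustration only:

```python
# Stylized model of the HD Tach pass on the 128GB M600. The 45% breakpoint and
# the 400MB/s / <100MB/s figures are from the text; the 80% breakpoint and the
# 150MB/s direct-MLC figure are assumptions for illustration.
def modeled_write_speed_mbps(fraction_of_lbas_written):
    if fraction_of_lbas_written < 0.45:
        return 400   # stage 1: writes land in the SLC cache
    elif fraction_of_lbas_written < 0.80:
        return 150   # stage 2: direct MLC writes (assumed figure)
    else:
        return 90    # stage 3: SLC-to-MLC migration competes with host writes

for f in (0.25, 0.60, 0.90):
    print(f"{f:.0%} of LBAs written -> ~{modeled_write_speed_mbps(f)} MB/s")
```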

Notice that Dynamic Write Acceleration is only enabled at the 128GB and 256GB capacities (except for the mSATA/M.2 models, which also have it enabled at 512GB), so with the 512GB and 1TB models write performance will be the same across all LBAs. The larger capacities have enough NAND for high parallelism, so DWA would not really bring any performance improvement since SATA 6Gbps becomes the bottleneck.

Obviously, in real world client scenarios it is unlikely that one would constantly write to the drive like HD Tach does. Typical write bursts are a few hundred megabytes at most, so all writes should hit the SLC portion at full speed. We will see how Dynamic Write Acceleration does in more realistic scenarios in a moment.

The Truth About Micron's Power-Loss Protection

I want to begin by saying that I do not like calling out companies' marketing. I believe marketing should always be taken with a grain of salt, and it is the glorified marketing that creates a niche for sites like us. I mean, if companies were truly honest and thorough in their marketing materials, you would not really need us because you could compare products by looking at the results the manufacturers publish. Hence I do not usually spend much time on pointing out that this or that feature is just a buzzword, because I think that is fine as long as there is nothing clearly misleading.

But there is a limit. If a company manages to "fool" me with their marketing, then I think there is something seriously wrong. The case in point is Micron's/Crucial's client SSDs and their power-loss protection. Back when the M500 was launched a bit over a year ago, Micron introduced an array of capacitors to its client SSDs and included power-loss protection as a feature in the marketing material.

I obviously assumed that the power-loss protection would provide the same level of protection as in the enterprise SSDs (i.e. all data that has entered the drive would be protected in case of a sudden power-loss) as Micron did not make any distinction between power-loss protection in client drives and power-loss protection in enterprise drives.

M500DC on the left, MX100 in the middle & M600 on the right

I got the first hint when I reviewed the M500DC. It had much larger and more expensive tantalum capacitors as the photo above shows, whereas the client drives had tiny ceramic capacitors. I figured that there must be some difference, but Micron did not really go into the details when I asked during the M500DC briefing, so I continued to believe that the client drives have power-loss protection as well, just perhaps not as robust as in the enterprise drives.

In the MX100 review, I was still under the impression that there was full power-loss protection in the drive, but my impression was wrong. The client-level implementation only guarantees that data-at-rest is protected, meaning that any in-flight data will be lost, including the user data in the DRAM buffer. In other words the M500, M550 and MX100 do not have power-loss protection -- what they have is circuitry that protects against corruption of existing data in the case of a power-loss.

So, what are the capacitors there for then? To understand the technical details, I need to introduce some new terminology. Lower and upper pages stand for the two bits that are programmed to each cell in MLC NAND. Basically, lower page is the first bit and upper page is the second -- and with TLC there would be a middle page as well. Do not confuse lower and upper page terms with the typical 'page' term, though, which is used to refer to the smallest amount of data that must be written at once (usually 16KB with the currently available NAND).

MLC NAND programming is done in two steps by programming the lower and upper pages separately, where lower page programming is essentially like SLC NAND programming. If the bit of the lower page is '1', then the cell will remain empty (i.e. no charge stored). But if it is '0', then the threshold voltage is increased until it reaches the '0' state.

Once the lower page has been programmed, the upper page, i.e. the second bit, can be programmed to the cell. This is done by fine tuning the cell voltage to the required state because the bit of the lower page already limits the possible bit outputs to two. In other words, if the lower bit is '0', then the possible bit outputs can be either '01' or '00'. Given that the cell was already charged to the '0' state during lower page programming, only a minor change is needed to tune the cell voltage to either the '01' or '00' state.

The bit of the lower page being '1' does not really change anything either. If the upper page is '1' too, then the cell will remain in the erased state and the bit output will be '11'. If the upper page is '0', then the cell will be charged to the '10' state. Both the '1' and '0' states are more like intermediate states because the upper page will define the final cell voltage, and I suspect that pseudo-SLC works by only programming the lower pages.
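
The two-step scheme can be summarized in a short sketch. The bit notation follows the text above (lower page bit first, upper page bit second); the state names and the exact ordering of threshold voltages in real 16nm NAND are not public, so treat this as illustrative only:

```python
# Conceptual sketch of two-step MLC programming as described in the text.
# Bit output notation: lower page bit first, then upper page bit.

def program_lower(lower_bit):
    """Lower-page program behaves like SLC: '1' leaves the cell erased,
    '0' charges it to an intermediate state."""
    return "erased" if lower_bit == "1" else "intermediate '0'"

def program_upper(lower_bit, upper_bit):
    """Upper-page program fine-tunes the cell to its final state, constrained
    by the bit already stored in the lower page."""
    final_states = {
        ("1", "1"): "'11' (remains erased)",
        ("1", "0"): "'10'",
        ("0", "1"): "'01'",
        ("0", "0"): "'00'",
    }
    return final_states[(lower_bit, upper_bit)]

for lower in ("1", "0"):
    print(f"lower={lower}: after step 1 -> {program_lower(lower)}")
    for upper in ("1", "0"):
        print(f"  upper={upper}: final state -> {program_upper(lower, upper)}")
```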

The reason why MLC programming is done in two steps is to reduce floating gate coupling, i.e. cell-to-cell interference. I covered cell-to-cell interference in more detail in our Samsung SSD 850 Pro review, but in a nutshell neighboring cells introduce capacitive coupling to each other, and the strength of this coupling depends on the charge of the cell. Going straight from empty to the '00' state would present such a large change in coupling that the state of the neighboring cells might change, which in turn would corrupt the bit value.

The graph above should give you a general idea of the programming algorithm. Simply put, the lower pages of neighboring cells are programmed first because lower pages have larger voltage distributions as shown above, which means that when the upper page of a neighboring cell is programmed, the change in coupling is not significant enough to alter the state of the lower page.

It is time to connect this with Micron's power-loss protection. I hope everyone is still following me as that was a long and technical explanation, but I wanted to be thorough to ensure that what I say next makes sense. If power is lost during an upper page program, the bit associated with the lower page of that cell may be lost. Because lower and upper page programs are not necessarily sequential, the data in the lower page might have been written earlier and would be considered data-at-rest. Thus a sudden power-loss might corrupt old data, and the function of the capacitors is to ensure that the lower page data is safe. Any ongoing upper page program will not be completed and obviously all data in the DRAM buffer will be lost, but old data will be safe.
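
A toy model of the failure mode, purely for illustration (real NAND behavior is analog and considerably more complex, and the class and parameter names below are made up):

```python
# Toy model: an interrupted upper-page program can leave the cell's threshold
# voltage in an indeterminate region, so the *previously written* lower page
# bit may no longer read back correctly. The capacitors buy enough time to
# keep the lower page intact; the in-flight upper page and the DRAM buffer
# contents are still lost either way.
class MlcCell:
    def __init__(self, lower_bit):
        self.lower = lower_bit   # data-at-rest, written some time before the power cut
        self.upper = None        # not yet programmed

    def program_upper(self, upper_bit, power_lost, lower_page_protected):
        if not power_lost:
            self.upper = upper_bit            # normal completion
        elif not lower_page_protected:
            self.lower = "indeterminate"      # old lower page data corrupted
        # With lower-page protection the program is simply abandoned: the new
        # upper-page data is lost, but the old lower page data survives.

cell = MlcCell("0")
cell.program_upper("1", power_lost=True, lower_page_protected=False)
print("lower page now reads:", cell.lower)   # -> indeterminate
```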

I want to apologize for spreading misinformation. I know some readers have opted for the MX100 and other Micron/Crucial drives because we said the drives feature full power-loss protection, but I hope this is not a deal-breaker for anyone. It was not Micron's or our intention to "fool" anyone into believing that the client drives have full power-loss protection, and after talking to Micron we reached an agreement that the marketing material needs to be revised to be clearer on the fact that only data-at-rest protection is guaranteed.

On the positive side, what Micron/Crucial is doing is still something that the others are not. I have not seen any other client SSDs that have capacitors to protect against lower page corruption, although there may be alternative methods to work around the issue (e.g. ECC). Anyway, I did not want this to come out as too negative because the capacitors still provide vital protection against data corruption -- there was just a gap between our and Micron's understanding that led to some misunderstandings, but that gap no longer exists.

Test Systems

For the AnandTech Storage Benches, performance consistency, random and sequential performance, performance vs. transfer size and load power consumption tests, we use the following system:

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled)
Motherboard: ASRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card: Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers: NVIDIA GeForce 332.21 WHQL
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

For slumber power testing we used a different system:

CPU: Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled)
Motherboard: ASUS Z87 Deluxe (BIOS 1707)
Chipset: Intel Z87
Chipset Drivers: Intel 9.4.0.1026 + Intel RST 12.9
Memory: Corsair Vengeance DDR3-1866 2 x 8GB (9-10-9-27 2T)
Graphics: Intel HD Graphics 4600
Graphics Drivers: 15.33.8.64.3345
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64
Comments

  • milli - Monday, September 29, 2014 - link

    The MX100 already had terrible service time. The M600 is even worse.
    I mean if it's even worse than this showing the MX100 delivered (http://techreport.com/r.x/adata-sp610/db2-100-writ... then forget about it.
  • milli - Monday, September 29, 2014 - link

    http://techreport.com/r.x/adata-sp610/db2-100-writ...
    Link got messed.
  • BedfordTim - Monday, September 29, 2014 - link

    If service times are such an issue why did Tech Report give the MX100 an Editor's Choice award?
  • milli - Monday, September 29, 2014 - link

    Because everybody is a sucker for low prices.
  • menting - Monday, September 29, 2014 - link

    i guess you go out and buy the fastest, regardless of price then?
  • milli - Monday, September 29, 2014 - link

    Obviously not. I'm just giving one of the main reasons why the MX100 wins so many awards.
  • Samus - Monday, September 29, 2014 - link

    It's still a better drive than competing products in its price segment. The only other drive that comes close is the 840 Evo (which apparently has some huge performance bugs on static data - and support is terrible...the bug has existed for over a year.)

    You could consider spending more money on an Intel drive or something from Sandisk, but most consumers need something "reliable-enough" and price is always the driving factor in consumer purchases. If that weren't true, you wouldn't see so many Chevy Cobalt's and Acer PC's.

    The irony is, for price and reliability, the best route is a used Intel SSD320 (or even an X25-M) off eBay for $60 bucks. They never fail and have a 15 year lifespan under typical consumer workloads. They're still SATA 3Gbps, but many people won't notice the difference if coming from a hard disk. Considering the write performance of many cheap SSD's anyway (such as the M500) the performance of a 4 year old Intel SSD might even still be superior.
  • Cellar Door - Monday, September 29, 2014 - link

    My X-25M failed after 2 years of use. So please don't use the word 'never' - Intel sent me a 320 as a replacement, due to 3 year warranty. Performance wise, it's ancient but still an ssd.
  • Samus - Monday, September 29, 2014 - link

    Like many SSD's, they are prone to failure from overfapping.
  • Lerianis - Friday, October 3, 2014 - link

    Eh? Overwriting, I think you mean. That said, all of these drives should be able to handle 20GB's write per day at least for years without issues.
