Other World Computing announced today the Mercury Viper Solid State Drive, the next generation of high performance OWC Mercury brand SSDs. Designed for professional users and gamers who require uncompromised workflow performance, Mercury Viper is the industry’s fastest 3.5 inch 6G Solid State Drive, with speeds that nearly saturate the fastest internal data interface — SATA 3.0 6Gb/s — offered in today’s Mac and PC computers. OWC will highlight the Mercury Viper along with its broad line of award-winning SSDs at booth #5812 in the LVCC North Hall during the Consumer Electronics Show (CES), which begins Tuesday in Las Vegas, Nevada.
Capacities for Any Need
In addition to offering data rate performance up to 600MB/s, the Mercury Viper is available in capacities from 240GB to a massive 2TB (1920GB) to handle any user’s data storage needs. Pricing and shipping dates for the Mercury Viper SSD models are still to be announced. Visit the Viper page to sign up for notification when pricing & ship dates are announced.
“The introduction of the Mercury Viper spotlights OWC’s continued leadership role in the evolution of Solid State Drives,” said Larry O’Connor, Founder and CEO, Other World Computing. “Offering up to 600MB/s speed and capacity up to 2.0TB, Viper truly is a performance beast that will more than satisfy the most demanding Mac and PC user who is seeking nothing but the best.”
Any updates on pricing or availability?
For notification when pricing & availability are announced, visit the Viper page to sign up for email updates.
Good show. 2 X 2TB Mercury Viper in RAID 0 would seem an interesting solution — though I wouldn’t be surprised to see 1920 GB Mercury Accelsior options around the same time.
I’d think a RAID 0 array (I believe this is possible, isn’t it?) of two Mercury Accelsior cards will still likely perform better. Regardless, I believe OWC is right in thinking there are those of us who will gladly jump on this product, whatever it’s priced at.
Personally, I’m getting desperate for a 960GB Mercury EXTREME Pro 6G, or something of that sort, to become available. We have an application in that 2.5″ form factor that calls for that performance and capacity, and — perhaps foolishly — have been holding off on some other associated orders we also need to make. I wish we could just go with the 3G unit now. Oh, the idiosyncrasies of dealing with requisitions…
How about durability under repeated writes? For example, can we put a cache on it, which is a no-no with most other SSDs?
Kara, I worried about this, too. However, I don’t believe it’s generally much of an issue, given that if you boot your Mac off an SSD, you’re already using the SSD for virtual memory, which is essentially another form of disk cache.
Macs like the original 2008 MacBook Air 1,1, with only 2GB RAM and a 64GB SSD, often make heavy use of virtual memory on the SSD, and this doesn’t seem to have emerged as a problem. And even in a system with lots of RAM to spare, if you’re booted off an SSD, you’re still hitting the SSD for virtual memory.
Yes, writing to flash memory reduces its write life — any and all writes do — but apparently not significantly compared to 1.) current SSD flash write tolerances (which are much better than for the flash used in generic USB flash drives), 2.) how writes and errors on SSDs are currently and intelligently managed, and 3.) expected drive service life. An SSD drive will likely outlast the useful life of the system — and probably any lower-performing, platter-based hard drive you might use instead. Write wear to an SSD is inherently more predictable than read/write wear failure for any particular, individual hard drive mechanism.
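To put some rough numbers on it, here’s a quick back-of-envelope sketch in Python. The P/E cycle count, daily write volume, and write amplification figures are purely illustrative assumptions on my part (not OWC specs), but even fairly pessimistic guesses land well beyond a machine’s useful life:

```python
# Rough, back-of-envelope endurance estimate. All numbers below are illustrative
# assumptions, not manufacturer specs: adjust them for your own drive and workload.

def years_to_wear_out(capacity_gb, pe_cycles, daily_writes_gb, write_amplification):
    """Estimate years until the rated program/erase cycles are exhausted,
    assuming wear leveling spreads writes evenly across the flash."""
    total_write_capacity_gb = capacity_gb * pe_cycles          # lifetime writes the flash can absorb
    effective_daily_writes_gb = daily_writes_gb * write_amplification
    return total_write_capacity_gb / effective_daily_writes_gb / 365

# Hypothetical example: 240GB drive, 3,000 P/E cycles (typical-ish for MLC of this era),
# 20GB of host writes per day (heavy desktop use, swap included), write amplification of 1.5.
print(years_to_wear_out(240, 3000, 20, 1.5))   # ~65.75 years: far beyond the machine's useful life
```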
Anyway, that’s my understanding. However, comment from OWC on this would certainly be welcome here.
Thanks, I have heard about SSD lives being drastically reduced when they were used as temporary areas.
As to virtual memory, I believe you can turn it off or move it to another drive. So yeah, having OWC weigh in would help quite a bit!
I am planning on using a Mac Pro, assuming it gets an update this year. Then I can swap out/upgrade parts as needed. Based on my experience with other Macs, I would be surprised if it is not going strong a decade+ from now. Of course, SSD technology will have grown over that time, but I do not want to burn out a Viper in a year or two either. Which means coming up with a plan of what makes for good candidates on the SSD and what should go on a HDD.
Sounds good, Kara. I’m in the same boat. So, fwiw…
> I have heard about SSD lives being drastically reduced when they were used as temporary areas.
This has certainly been possible with some SSDs, although those drives usually exhibit substantially decreased performance long, long before they fail, i.e. potentially within hours of continuous hard use.
It really depends on the quality and size of the drive — specifically, the quality of the flash used, and the sophistication of the flash storage management strategies (or lack thereof) employed.
“Our [OWC] Mercury Extreme Pro SSD series utilizes the SandForce 1200 Series Processor with exclusive DuraClass Technology. DuraClass combines DuraWrite (which extends the life of your SSD by utilizing intelligent block management and wear leveling), RAISE (which gives you RAID-like data protection and redundancy, but without the overhead) and over-provisioning (placing flash memory in reserve for replacement of retired or worn out blocks) of either 7% (for Pro models) or 28% (for the Pro RE). This gives you one of the highest-performing, longest-lasting SSDs on the market today.”
— from “Not All SSDs Are Created Equal (Buyer Beware)”
Thursday, February 17th, 2011 | Author: OWC Ron
http://blog.macsales.com/8924-not-all-ssds-are-created-equal-buyer-beware
Independent testing reports on the web seem to support and confirm OWC’s claims in this regard. AnandTech offers some solid, introductory technical articles on this topic, as well as on Apple’s new SSD/HDD Fusion Drive system. Technical, performance-oriented Mac users tend to recommend OWC’s SSDs, even over Apple’s.
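For anyone curious what that kind of “intelligent block management and wear leveling” means in principle, here’s a toy sketch. It’s purely conceptual and bears no relation to SandForce’s actual, proprietary implementation: the controller simply steers each incoming write to the least-worn available block, so no single block gets hammered:

```python
# Toy illustration of wear leveling: direct each write to the least-worn block.
# Purely conceptual; real controllers (SandForce included) are far more sophisticated.

class ToyFlash:
    def __init__(self, num_blocks, pe_limit=3000):
        self.erase_counts = [0] * num_blocks   # how many times each block has been erased
        self.pe_limit = pe_limit               # rated program/erase cycles per block

    def write(self, data):
        # Wear leveling: always pick the block with the fewest erases so far.
        block = min(range(len(self.erase_counts)), key=lambda b: self.erase_counts[b])
        if self.erase_counts[block] >= self.pe_limit:
            raise IOError("all blocks worn out")
        self.erase_counts[block] += 1          # erase-before-write consumes one P/E cycle
        return block                           # pretend `data` now lives in this block

flash = ToyFlash(num_blocks=8)
for i in range(20):
    flash.write(f"page {i}")
print(flash.erase_counts)   # [3, 3, 3, 3, 2, 2, 2, 2]: wear spread evenly across blocks
```

Real controllers also remap logical addresses, coalesce writes, and hold back over-provisioned spare blocks in reserve, which is where the reserve percentages discussed below come in.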
> As to virtual memory, I believe you can turn it off
Yes, this is currently still possible at the command line. It is generally not a good idea. YMMV. Note that Apple, and software developers, generally assume that virtual memory (VM) is always on. Depending on what you’re doing and how carefully you monitor and manage your current RAM usage (which can become time consuming and tedious), significant performance hits — and even hard freezes — can result. Some who have tried it report, “disabling Virtual Memory in Mountain Lion (10.8.x) comes with *huge* performance penalties,” and don’t recommend it any more under any circumstances. Whether this will still be true with whatever system runs the next Mac Pros remains to be seen, of course.
> or move it to another drive.
Sure, though note that this defeats one of the significant performance advantages of using an SSD or an Apple Fusion Drive setup as a boot drive — and it is also not practical with the Mac notebooks currently shipping with SSDs as the only available drive option.
If you’re a very technical power user, moving VM and caches to a secondary, smaller (and hence less costly) SSD or SSD RAID might be an interesting possibility to reduce long-term wear on your main SSD system(s), but probably not worth the trouble.
Note well: in general, the more space you keep free on an SSD, the longer its flash will last with respect to write wear. Also note that this is true with respect to the *total* free space on the *entire* drive, *regardless* of how the drive space is logically partitioned. This is directly analogous to how SSD over-provisioning works.
If I were caching really, really heavily to SSD, I’d probably want to calculate the maximum written size of the swap file in question, and set aside at least double that space for it on the SSD system. The “free” space actually gets used for automatic wear-leveling and to help reduce write amplification. But, for most production purposes, OWC SSDs are already designed to take care of that for you, without your having to think about it.
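A trivial sketch of that rule of thumb, with made-up numbers (the 24GB peak and the 240GB drive size are just placeholders for whatever you actually measure on your own system):

```python
# Sketch of the "at least double the worst-case swap/cache size" rule of thumb above.
# Both figures are made-up examples; measure your own system's worst case.

peak_swap_gb = 24                        # largest swap/cache footprint ever observed (assumption)
drive_gb = 240                           # usable capacity of the SSD in question (assumption)
reserve_gb = 2 * peak_swap_gb            # leave at least this much unallocated on the SSD
print(f"Reserve {reserve_gb}GB (~{100 * reserve_gb / drive_gb:.0f}% of the drive) "
      f"for wear leveling and write-amplification headroom")
```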
> I do not want to burn out a Viper in a year or two either.
OK, unless the drive is defective, that is certainly not going to happen. And even if you were to manage it somehow, OWC’s SSDs already come with three or five year warranties, depending on the model. If you’re only worried about a few years out, you’ve really nothing to worry about — unless you’re performing continuous, high performance scientific computation with large data sets and huge amounts of cache writing. But even in that case, you’re still going to want to use something like the Viper and/or Accelsiors anyway, because you need the performance regardless.
> coming up with a plan of what makes for good candidates on the SSD and what should go on a HDD.
Apple says to put the system, VM, caches, and all, on their SSDs. OWC’s SSD systems are at least as good as Apple’s, if not better. In a system with one or more SSDs — including an Apple Fusion Drive system — HDDs and HDD RAIDs are used for infrequently accessed files, massive cheap storage, and redundant backups.
Hope this helps. If this doesn’t answer all your questions, I suggest calling OWC, and/or Googling for more info.
Thanks, yeah, I forgot about the warranty. I am used to thinking iMac lately, where if one component breaks and the machine is not under AppleCare, you need to replace the whole thing … but a dynamic where individual components can be replaced/upgraded is a MUCH nicer world! Assuming a Mac Pro this year, it would be much easier swapping an SSD out of there if it did go bad.
I am already planning on an SSD at least twice the size of what I will need it for, so caches, logs, and VM can take full advantage of wear leveling. Then put things like music, movies, pictures, documents, and any other data like that on the HDD.
Ah, yes, iMac thinking. Quite understand. Currently working in front of my gf’s 27″ mid-2010. It made a visit to the Clarendon Apple Store a couple months back for the Seagate drive recall, thank goodness. For months before the recall announcement, the hard drive would just wake up and start enthusiastically burbling away on its own at all odd hours, for absolutely no apparent reason. It was rather disturbing. We were about ready to deosil and salt the poor thing.
iMac 1TB Seagate Hard Drive Replacement Program
http://www.apple.com/support/imac-harddrive/
iMac in its latest form will make a lot more sense when they go 100% SSD, and/or Thunderbolt costs come down. But, they’re actually still somewhat upgradable, and repairable, AppleCare or no. OWC already has kits and videos up showing how, as well as a ship-it-to-them upgrade program. And Apple will often fix things that aren’t under AppleCare, to a point. Sometimes it’s even worth the cost. Sometimes, as with a recall or an acknowledged issue, it’s even gratis.
As far as over-provisioning goes, as mentioned above, OWC sets aside 7%, which is a reasonably good trade-off — or 28% for their pricey Pro RE (Enterprise Pro) RAID/enterprise models, which are presumably very well-optimized designs, and come with a 7 year warranty.
In fact, for most drives the sweet spot for maintaining SSD *performance* does seem to be right around 25%.
Unless you’re really worried about burning out the NAND flash with enterprise-class transactional workloads in a datacenter environment, like the very expensive Micron RealSSD P300 is designed for. It’s a good example of a small, dedicated-caching SSD such as I referred to earlier, and though it does use SLC NAND, its factory over-provision is also only 27%. There are some extreme scientific computing applications where I could see using the 200GB model in a RAID and/or increasing the over-provision to 50%, but that would probably be overkill for most projects, where the budget for computational resources would be more efficiently allocated elsewhere. If you need it, you’d have no doubt of it, and the cost-no-object funds to swing it as well, but price/performance-wise, for most purposes, OWC beats it easily.
Micron RealSSD P300 Review (100GB)
http://www.storagereview.com/micron_realssd_p300_review_100gb
More than 28% or so seems to be the point of rapidly diminishing returns. Anand recently published some very interesting findings on this, and his interactive graphs make it very clear that there’s generally very little point in setting aside as much as 50% for most applications:
Exploring the Relationship Between Spare Area and Performance Consistency in Modern SSDs
by Anand Lal Shimpi — 12/4/2012
http://www.anandtech.com/show/6489/playing-with-op
The upshot is this:
“If you want to replicate this [setting aside of spare area on an SSD] on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn’t absolutely necessary in every case but it’s an easy way to make sure you never exceed your allocated spare area. It’s a good idea to do this from the start (e.g. secure erase, partition, then install [the operating system]), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives I’ve tested here but not all controllers may behave the same way.”
“… Whatever drive you end up buying, plan on using only about 75% of its capacity if you want a good balance between performance consistency and capacity.”
For example, the “240GB” OWC Mercury EXTREME Pro SSD is actually a 256GB drive with 7% of the drive’s capacity already allocated for over-provisioning. If you’re heavily caching to the drive or otherwise performing a lot of writes, it’s possible to effectively increase the drive’s dedicated “over-provisioning” yourself, to guarantee that spare area remains available.
Remember, 7% is already set aside. For a total of 28% over-provisioning —
100% – 28% = 72%, or .72 of the drive left available for data storage.
256GB * .72 = ~ 184GB
So, in Disk Utility, create a partition, or partitions, adding up to a total of no more than 184GB, leaving the rest of the space *unallocated* — and you ought to be golden. I’ll be doing this for a client with an OWC drive next week. It may not be the equal of an OWC Enterprise Pro SSD, but it ought to help maintain maximal performance as well as the life of the NAND. (But, we’d spring for the Enterprise Pro SSD before even considering more than 28% over-provisioning, which would otherwise simply be a waste of resources.)
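If you want to sanity-check that arithmetic for a different drive or a different over-provisioning target, a tiny helper does it. The raw-capacity figures below follow the same marketed-vs-raw pattern as above; the 2048GB figure for the 2TB model is my assumption:

```python
# Helper for the partition-sizing arithmetic above: given the drive's *raw* flash
# capacity and a target *total* over-provisioning fraction (factory reserve included),
# return the largest data partition worth creating in Disk Utility.

def max_data_partition_gb(raw_capacity_gb: float, total_op: float) -> float:
    return raw_capacity_gb * (1.0 - total_op)

# "240GB" Mercury EXTREME Pro 6G: 256GB of raw flash, targeting 28% total OP
print(max_data_partition_gb(256, 0.28))    # 184.32 -> keep partitions under ~184GB

# "1920GB" (2TB) model, assuming 2048GB of raw flash by the same pattern
print(max_data_partition_gb(2048, 0.28))   # 1474.56 -> roughly 1475GB of usable space
```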
This is why we really need a 2TB SSD like the Viper for high transaction systems with a lot of data — do this, and you’ll be left with only about 1475GB per unit for effective data storage.
But, for most folks, OWC’s stock configuration of 1920GB as it ships ought to be more than fine.
Whoah, Clarendon? Arlington VA? I am in DC and my macbook is from there! Back to SSDs though…
Part of the problem with computer repairs is that I do not drive. While that does not make it impossible, it makes the logistics a bigger pain than if it were just a matter of hauling the Mac down to a car. Then there are the iMac designs they have been doing lately that glue things together… yikes. Software coding I can handle. Upgrading a tower with detailed instructions I can handle. Working on what is basically a giant laptop is where I call it quits.
Actually, I had hoped the 2012 iMac would be as upgradeable as the ones before it. Before the release, and before seeing how little wiggle room in upgradeability we had, I was planning on having an iMac shipped from Apple to OWC, since there was a turnkey program, and getting it maxed out.
Basically, my idea of doubling what I need is typical of how I approach hardware, and expanding the over-provisioning is a bonus. Even though I may not need it now, I am pretty much guaranteed to need it in the future. I like the idea of making a smaller partition than the free space to expand the over-provisioning, though. Thanks, that’s better than my idea of just not using all of the space.