Overclock.net - An Overclocking Community
Thread: [O3D] Samsung reveals its 980 Pro PCIe Gen4 SSD with 6,500MB/s read speeds - CES 2020


  Topic Review (Newest First)
01-15-2020 05:38 PM
Quote: Originally Posted by Zam15

"Samsung reveals its 980 Pro PCIe Gen4 SSD with 6,500MB/s read speeds - CES 2020
Samsung has officially announced its first consumer-grade PCIe Gen4 SSD, an M.2 NVMe drive that promises to destroy its Phison-based competition and leave them in the dust.

You think 5GB/s sequential reads are fast? Samsung's new 980 PRO promises sequential read speeds of 6.5GB/s and sequential write speeds of 5GB/s. These speeds will require a PCIe 4.0 compatible system, making the 980 PRO's performance only available on AMD Ryzen 3rd Generation systems at launch.
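For context, the claimed 6,500 MB/s sits under the theoretical ceiling of a PCIe 4.0 x4 link. A rough back-of-envelope sketch (assuming 16 GT/s per lane and 128b/130b line encoding; NVMe protocol overhead is ignored):

```python
# Theoretical bandwidth of a PCIe 4.0 x4 link vs. the 980 PRO's claimed
# 6,500 MB/s sequential read. 16 GT/s per lane, 128b/130b line encoding.
GT_PER_SEC = 16e9        # transfers per second, per lane (PCIe 4.0)
ENCODING = 128 / 130     # 128b/130b encoding efficiency
LANES = 4                # M.2 NVMe drives use an x4 link

ceiling_mb_s = GT_PER_SEC * ENCODING * LANES / 8 / 1e6
print(f"PCIe 4.0 x4 ceiling: {ceiling_mb_s:.0f} MB/s")         # ~7877 MB/s
print(f"Headroom over 6,500: {ceiling_mb_s - 6500:.0f} MB/s")  # ~1377 MB/s
```

So even at the claimed speeds, a Gen4 x4 drive still has roughly 1.4 GB/s of raw link headroom left.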

To achieve these speeds, Samsung created a new PCIe 4.0 compatible SSD controller, updated its 3D V-NAND to deliver improved power efficiency and access speeds and increased its V-NAND stack size to 128-layers.

PC Watch has reported that Samsung plans to release its PCIe Gen4 compliant 980 PRO SSD in the US and Korea on January 21st, with a release in February for the rest of the world. Full specifications for the drive are unknown, but it will deliver sequential read/write speeds of 6,500MB/s and 5,000MB/s respectively (at least for the 1TB model) and release with capacities of 250GB, 500GB and 1TB. "

At this time the price tags of Samsung's 980 Pro series SSDs are unknown. We hope that our local Samsung representative will report back to us with more information soon.

I am already getting those speeds on my Sabrent 2TB PCIe 4.0 NVMe M.2 drive in my system. I bought it paired with a solid copper heatsink with 3 heat pipes. Not a stutter since installing it 3 weeks ago. It is considerably less expensive than the Samsung 980 Pro. I got it on sale on Amazon with the heatsink for $379.
01-14-2020 08:18 PM
TK421 Can't wait for the 21st.
01-14-2020 07:18 PM
MRFS HighPoint RocketRAID 2720A RR2720A PCI-Express 2.0 x8 Low Profile SATA / SAS


NOTE WELL: the description at Newegg also adds the following text IN ERROR:

RocketRAID 3720A 8-Channel 12Gb/s PCIe 3.0 x8 SAS / SATA RAID Controller

I highlighted that TYPO in 2 comments recently:

That appears to be a typo. Click on "Specifications" and see: 8 x 6 Gb/s SAS / SATA Ports; 2 x SFF-8087 Mini-SAS Ports
The prior 2720SGL shared the same specifications. The 3720A is a PCIe 3.0 card with different SFF-8643 connectors.
See also 8 x 12Gb/s SAS/6Gb/s SATA Channels (RR3720A) at Highpoint's website. We have years of experience
with the 2720SGL. Please confirm for yourself at Highpoint's official Internet website.


p.s. I just downloaded the driver and bios software files, and
I don't find any documentation describing what is new with the model RocketRAID 2720A.
The following is all I can find at Highpoint's Internet website:
"RocketRAID 2720A - 2 x SFF-8087 (Mini-SAS) - (replacement for RocketRAID 2720SGL) "
"RocketRAID 2720A, 2720SGL (EOL) / 2722 - 8 x 6Gb/s SAS/SATA Channels"
"EOL" usually means "End Of Life".
01-14-2020 07:04 PM
MRFS > how many drives are in the array for that controller?

one 2720SGL supports 8 drives total: 2 x SFF-8087 connectors @ 4 x drives per connector

the SFF-8087 cable "fans out" to 4 x standard SATA/SAS data cables/connectors

we do prefer the Icy Dock 5.25" enclosures which support 4x, 6x and 8x 2.5" SSDs @ 7mm thickness.
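The drive-count arithmetic above can be sketched as (counts are the ones stated for the 2720SGL):

```python
# One 2720SGL: two SFF-8087 connectors, each fanning out to four
# standard SATA/SAS data connectors, so eight drives per card.
connectors = 2
drives_per_connector = 4
print(connectors * drives_per_connector)  # 8
```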

Also, I believe the Highpoint RocketRAID 2720A has superseded the model 2720SGL:
that's what I'm seeing at Newegg.com now.

Confirm this at Highpoint's Internet website.

If you are downloading software from that website,
we recommend that you create the following sub-folders:


(Don't bother with the non-RAID software
because you won't need it to enable RAID arrays.)

We also archive the contents of the CD-ROM
that is shipped with the retail version,
in this Windows folder name:

E:\Highpoint.RocketRAID.2720A.cd-rom
If your optical drive letter is "O:",
then in Windows Command Prompt:

xcopy O:\ E:\Highpoint.RocketRAID.2720A.cd-rom /s/e/v/d/i
01-14-2020 06:44 PM
MRFS I'll be blunt here: Highpoint's hardware is excellent:
not one hardware failure among ~10 Highpoint controllers we have running.

However, their documentation leaves a LOT to be desired.

Here's what we learned from Hard Knocks University:

(a) the factory default of INT13 ENABLED conflicts with
chipset RAID controllers; in fairness to Highpoint,
there is a warning about this in a readme.txt file,
but one must download the latest driver files from their website
to obtain that readme.txt file;

(b) to guarantee that 6G SSDs are detected properly,
the latest bios must be flashed; if not flashed with the
latest card bios, some SSDs will only run at 3G speed;

(c) there has been, in the past, a Windows program
that supports bios flashing: this step is necessary
to DISABLE INT13; there is also a batch program
for bios flashing that runs from a bootable floppy
(NOT inside Windows Command Prompt, however);

The sequence that we recommend, if one is
migrating a Windows OS to a RAID-0 array
controlled by a Highpoint 2720SGL, is as follows:

(1) switch motherboard BIOS to boot from a JBOD drive,
and be sure to disable "raid" mode on all integrated SATA ports;

(2) install 2720SGL in a compatible x16 or x8 PCIe slot,
but do NOT connect any drives yet;

(3) when prompted, install the latest driver after
downloading same from Highpoint's website;
an installed driver is necessary for the flashing
sequence to work correctly;

(4) shutdown, connect drives to 2720SGL, and
invoke 2720SGL Option ROM with Ctrl-H ;
we buy the SFF-8087 cable from StarTech:
works every time;

(5) initialize all member drives inside Option ROM;
this step is necessary before the 2720SGL
will configure any RAID array(s);

(6) assign member drives to RAID array,
and select desired RAID mode;

(7) finish startup and format RAID array,
after reaching Windows Desktop,
using Disk Management;

(8) do some preliminary testing by running a few
simple programs e.g. ATTO;

(9) erase all partitions formatted at (8),
and leave entire RAID array as "unallocated";

(10) download, install and run Partition Wizard freeware
from your C: partition on your JBOD drive;

(11) run the "Migrate OS" feature in Partition Wizard
to migrate the entire Windows C: partition, and any
"System Reserved" partition, to the RAID array;

(12) after that "Migrate OS" step has finished successfully,
re-boot into the motherboard BIOS and change the boot drive
to the RAID array where a cloned copy of the OS now resides.

As long as INT13 remains ENABLED, that controller should show up
in the motherboard BIOS list of bootable devices.

And, if you have configured multiple RAID arrays,
the card's Option ROM will need to be launched again
with Ctrl-H, to select which array to boot from:
it defaults to booting from the array at the top of that list.

We always limit C: to 64GB-to-100GB, and format the remainder
as E: dedicated to data.

This overall approach has continued to work beautifully for us
for many years now, across multiple Windows PCs.

Hope this helps.

p.s. We have also figured out how to install 2 x Highpoint 2720SGL controllers
in the same motherboard. If you need help doing that, let me know here
and I'll run thru that unique sequence. HINT: one ENABLES INT13 and
one DISABLES INT13, but the sequence for achieving that goal is not obvious,
nor is it documented anywhere that I am aware of.

/s/ Paul
01-14-2020 05:21 PM
skupples how many drives are in the array for that controller?

I've tried to find flash-specific RAID solutions in the past, but I always end up using the board & Windows. People highlight your concern as the reason I should stick to local ports. Seems weird, really.
01-14-2020 05:03 PM
MRFS I might be wrong about this, but we have tried to purchase SSDs that are described as being capable of some internal error correction.

And, I believe the Samsung 840, 850 and 860 SSDs that we have assembled in RAID-0 arrays also do their own internal garbage collection.

As such, our setups do run the risk that the performance of these C: partitions on RAID-0 arrays of SSDs will slowly deteriorate, without doing periodic secure erasures and TRIMs.

Our Highpoint 2720SGL controllers do NOT support TRIM on RAID-0 arrays.

But, we haven't noticed any serious performance deteriorations with these RAID-0 arrays.
01-14-2020 04:30 PM
skupples I've had 0 SSDs die. Even my first OCZ 128 was still alive when I tossed it.

I've had zero SSD failures while running Windows stripe/mobo RAID, and I've been using it consistently at home for at least 5 years now. Now at 2 x 2TB 660p and 4 x 1TB Samsung 8x0 SATA SSDs; the NVMe stripe holds the system and installed programs, the SATA stripe holds everything else. I don't do anything valuable enough to warrant proper redundancy. Keeping installers, docs, etc. on the separate SATA stripe at least provides some risk mitigation. I rarely have to re-install Windows, which I believe is partly due to my using LTSC Enterprise. I'll eventually grab one of those high-speed Wi-Fi NAS units for proper backups now that I no longer have my X79 build.

The inherent risk is still there either way... that was my original point.
01-14-2020 04:18 PM
MRFS And, with 20/20 hindsight, I now recall that we did a secure-erase of our older Intel SSDs
because we were trying to isolate a very serious, intermittent problem that was corrupting
our drive image files being written by an older, working version of Symantec GHOST.

I remember now the sequence: one experiment involved flashing the latest motherboard BIOS.
When we re-booted with the latest motherboard BIOS, a new default enabled the long DRAM test,
and during POST the counter STOPPED before reaching the DRAM total installed in that PC.


This proved that we had a failed DRAM stick, and the vendor mailed us a check for the
original retail amount (because that DDR2 was no longer being manufactured by that vendor).

We installed some new G.SKILL DDR2, and our problem was solved!

So, in the end, our older Intel SSDs really did NOT need to be secure-erased, after all.

I was frankly to blame for not considering a failed DRAM stick much earlier during trouble-shooting.
01-14-2020 04:08 PM
"4x4" AICs offer lots of potential with PCIe 4.0

With the arrival of PCIe 4.0 chipsets, builders now have an opportunity to design very high-performance storage
using multiple "4x4" AICs, provided that the chipsets support bifurcation. Also, some of these "4x4" AICs
are described to work in multiple x16 Gen4 slots. The latter configuration should allow a RAID-0 array
to span multiple AICs, e.g. the ASRock Ultra Quad M.2 AIC. As such, picture two "4x4" AICs with a
total of 8 x Gen4 NVMe M.2 SSDs. The MiDrive (see above) should be objectively reviewed with
such an 8-drive config. p.s. Contrary to lots of sincere warnings we received before doing this,
we have standardized all of our workstations on a RAID-0 array for the Windows C: system partition:
64-to-100GB are formatted for C: and the remainder formatted as a dedicated data partition.
None of the "dooms" predicted by those warnings have ever occurred, over a period spanning
several years now. Looking back, I believe ONLY ONCE did we ever have to secure-erase some
older Intel 6G SSDs: even after secure-erasing all 4, the measured performance stayed the same!
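As a rough sanity check on that 8-drive picture, here is a back-of-envelope comparison. This is a sketch only: it assumes 16 GT/s per lane with 128b/130b encoding and the 980 PRO's claimed 6,500 MB/s per drive, and it ignores NVMe and RAID overhead:

```python
# Aggregate ceiling for 8 x Gen4 NVMe SSDs in RAID-0 across two
# bifurcated x16 slots, vs. the sum of the drives' claimed read speeds.
lane_mb_s = 16e9 * (128 / 130) / 8 / 1e6    # ~1969 MB/s per Gen4 lane
slot_ceiling = 2 * 16 * lane_mb_s           # two x16 slots
drive_ceiling = 8 * 6500                    # eight drives, claimed reads
print(f"Slots: {slot_ceiling:.0f} MB/s, drives: {drive_ceiling} MB/s")
print("Bound by the", "drives" if drive_ceiling < slot_ceiling else "slots")
```

On these numbers the two x16 slots carry more raw bandwidth than the drives can supply, so such an array would be bound by the drives themselves rather than the slots.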
