
Registered · 157 Posts
I haven't tried the card myself. So the card isn't controllable at the moment, or in the last couple of BIOS releases?
The card is not controllable in the BIOS using the Zenith II Extreme Alpha. It has not been controllable since at least the 1101 BIOS.
 
Reactions: The Stilt

Premium Member · 2,749 Posts · Discussion Starter · #583
The card is not controllable in the BIOS using the Zenith II Extreme Alpha. It has not been controllable since at least the 1101 BIOS.
Yup, replicated this on Z2E as well.

It's pretty weird, since once the controls / monitoring are enabled in the BIOS setup GUI, everything appears to be working perfectly fine...
I asked if there is some actual reason for these being disabled, or if the bug in this case is the fact that the controls / monitoring are hidden.
 

X5O!P%@AP[4\PZX54(P^)7CC) · 771 Posts
Is the Asus driver page broken / missing for anyone else?
 

Registered · 157 Posts
Yup, replicated this on Z2E as well.

It's pretty weird, since once the controls / monitoring are enabled in the BIOS setup GUI, everything appears to be working perfectly fine...
I asked if there is some actual reason for these being disabled, or if the bug in this case is the fact that the controls / monitoring are hidden.
Thank you very much.
 

Registered · 2 Posts
FYI, 1402 is the first release where I've been able to get my 256GB, 3600CL18 Trident Z kit actually stable at 3600. On every other release I've had to downclock the memory a bit to maintain stability.
 

Registered · 3 Posts
Hello guys. I built my system a few days ago (see: https://pcpartpicker.com/b/dT7TwP ) but I am facing the following issue:
I am using Samsung 980 Pros in RAID 0, and initially I managed to get sequential read speeds of up to 12GB/s and sequential writes of up to 10GB/s.
Now, with the disk 70% full, those figures are down to 4GB/s and 2GB/s.
I just got another two Samsung 980s, which I will install this time round in the DIMM.2 card (adjacent to the memory slots).
The question is what to do with them. Shall I put them in RAID 0 and keep only my SQL Graph servers and VMs on them, making sure they never go above 30% of their capacity?
Or move my OS image to the new RAID and leave the existing one for the data?

Are there any performance improvements with the DIMM.2 card for NVMe that would make moving the system to it worthwhile?

Please also note that I do have 64GB of RAM, so there is plenty of reason to justify not moving the system to the new RAID.

Thanks for the advice.
 

X5O!P%@AP[4\PZX54(P^)7CC) · 771 Posts
Hello guys. I built my system a few days ago (see: https://pcpartpicker.com/b/dT7TwP ) but I am facing the following issue:
I am using Samsung 980 Pros in RAID 0, and initially I managed to get sequential read speeds of up to 12GB/s and sequential writes of up to 10GB/s.
Now, with the disk 70% full, those figures are down to 4GB/s and 2GB/s.
I just got another two Samsung 980s, which I will install this time round in the DIMM.2 card (adjacent to the memory slots).
The question is what to do with them. Shall I put them in RAID 0 and keep only my SQL Graph servers and VMs on them, making sure they never go above 30% of their capacity?
Or move my OS image to the new RAID and leave the existing one for the data?

Are there any performance improvements with the DIMM.2 card for NVMe that would make moving the system to it worthwhile?

Please also note that I do have 64GB of RAM, so there is plenty of reason to justify not moving the system to the new RAID.

Thanks for the advice.
So the problem with the DIMM.2 slots is that they go through the chipset for their PCIe lanes, i.e. they and everything else hanging off the chipset are sharing lanes.

I have 4x Sabrent NVMe PCIe 4.0 drives. I tested using the native AMD RAID in RAID 0 and had a similar experience. What I found out is that the RAID implementation AMD is using does not allocate and deallocate blocks in a way that lets the NVMe drives run their TRIM and wear-levelling algorithms against free space the way they want to.

From the drives' perspective the array is 100% full and existing data is only ever being modified, with no new data being written. This makes the drives terribly slow, and anything but a freshly created array will perform far below what the drives are capable of.
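(Side note, not specific to the AMD driver: if you want to sanity-check whether TRIM is enabled, and force a manual retrim on a volume Windows can actually see, the built-in tooling is enough. The drive letter D: below is just an example.)

# 'DisableDeleteNotify = 0' means TRIM notifications are enabled
fsutil behavior query DisableDeleteNotify

# Push a manual full retrim to the volume (the Storage Optimizer also does this on a schedule)
Optimize-Volume -DriveLetter D -ReTrim -Verbose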

What I ended up doing was using a single NVMe drive with Windows installed on it, presenting all the other drives to Windows as individual disks, and configuring a Windows Storage Space to "raid" them together with no parity. I also did a lot of performance testing to figure out which interleave (stripe) size was best.


These are the commands and the settings I settled on:

# Grab every disk that is eligible to be pooled
$PhysicalDisks = (Get-PhysicalDisk -CanPool $True)

# Create the pool: fixed provisioning, Simple resiliency (striping, i.e. RAID 0 style, no parity)
New-StoragePool -FriendlyName NVMEPool -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks -LogicalSectorSizeDefault 4096 -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Simple -MediaTypeDefault SSD

# Carve the whole pool into one striped virtual disk:
# 4 columns (one per drive) with a 128 KiB interleave (stripe size)
New-VirtualDisk -StoragePoolFriendlyName NVMEPool -FriendlyName NVMEDisk -ResiliencySettingName Simple -UseMaximumSize -ProvisioningType Fixed -NumberOfColumns 4 -Interleave 131072 -MediaType SSD
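One caveat: these commands only create the virtual disk; it still has to be initialized and formatted before Windows will mount it. Roughly like this (the drive letter, label, and 64K allocation unit are just examples, adjust to taste):

# Initialize the new virtual disk, create one big partition, and format it
Get-VirtualDisk -FriendlyName NVMEDisk | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter D -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel NVMEStripe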
 
Reactions: alaskajoel

Registered · 3 Posts
Thank you so much for the great insight!

What kind of performance improvement are we looking at? With Microsoft's "RAID 0", do you get the 150% improvement across the three disks (over a single disk's performance)?
 

X5O!P%@AP[4\PZX54(P^)7CC) · 771 Posts
This is AMD RAID 0 as a brand-new, freshly created array:
[benchmark screenshot]

This is AMD RAID 0 after a few months of use:
[benchmark screenshot]

(Notice that the last test is not the same between the two screenshots above.)

This is Microsoft's Storage Spaces implementation of RAID 0, using the commands I mentioned:
[benchmark screenshot]
[benchmark screenshot]

This is Microsoft Storage Spaces after many months of use:
[benchmark screenshot]
[benchmark screenshot]

As you can see, after many months of use the Storage Spaces array has not slowed down by much, whereas the AMD RAID 0 implementation has slowed down significantly, especially in the queue depth 1 tests.
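(If anyone wants a rough version of this comparison without installing a benchmark, winsat ships with Windows and can give quick sequential numbers; the drive letter is an example. It is far less thorough than a dedicated benchmark like the screenshots above, so treat it as a sanity check only.)

winsat disk -seq -read -drive d
winsat disk -seq -write -drive d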


EDIT: All testing was done on an Asus Hyper M.2 X16 PCIe 4.0 X4 expansion card with 4x Sabrent 2TB Rocket NVMe 4.0 Gen4 PCIe M.2 SSDs.
 
Reactions: dv357


Registered · 3 Posts
Thank you so much @rush2049 for the lovely post. As soon as I get the time I will convert to Microsoft Storage Spaces.

It is a bloody disgrace for AMD to have a hardware RAID outperformed by software RAID. I just wonder why Microsoft does not offer to license their code to them, as it is bound to help the ecosystem.

Best regards!
 

Registered · 157 Posts
Yup, replicated this on Z2E as well.

It's pretty weird, since once the controls / monitoring are enabled in the BIOS setup GUI, everything appears to be working perfectly fine...
I asked if there is some actual reason for these being disabled, or if the bug in this case is the fact that the controls / monitoring are hidden.
Did you ever find out whether there was an actual reason for the Fan Extension Card II being disabled in the BIOS, or if it was a bug?
 