I start with TestMem5 using the 1usmus config. If you get errors, you can check the error list and fix them. Once the 1usmus config runs fine for 6 hours, I run TestMem5 again with the PCB Destroyer config for 6 hours, and after that VST+VT3 in Y-cruncher for 8 hours or the OCCT memory test for 1 hour.
I really highly recommend the 1usmus config when you overclock RAM. When there's an error, you can easily see what's wrong and which value you have to change.
First check whether the RAM is really stable. And were you trying to raise the LLC? I think 8 was the best value for Asus boards.
Would you happen to know of some other utility that can test RAM stability as well as TestMem5 does?
TestMem5 has several versions: the "old" versions up to 12 by one author, and then 13.1 from a new author, but (as discussed in the recent pages of the thread about the utility) there are some concerns about its safety. It's quite possible that it's perfectly fine and that many of the calls that make it look suspiciously like malware come from the low-level access needed to test memory, but unfortunately there's very little documentation, and even less in English. Uploads of the newest version to VirusTotal and Hybrid Analysis/Falcon Sandbox raise suspicion as well.

The latest 13.1 release on GitHub claims to offer "source code" zips, but they are dated significantly before the release of the compiled executable (the program itself is from July, the "source" from May). For the heck of it I went and downloaded the source zip, only to find no source at all, or anything about the program, inside! There's just a .md file with the GitHub readme text and a .png of the screenshot shown on GitHub; no source or documentation about the program whatsoever. This may be a language-barrier issue and the program itself may not contain malware, but especially after the incidents of malware being inserted into forks of projects on GitHub/Codeberg and other issues, I'd be much more comfortable using an alternative until there's more information available.
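For what it's worth, the core idea behind these testers is simple, even if the real tools add low-level access, cache-defeating access patterns, and huge coverage. A toy sketch of the write/verify approach (purely illustrative, not a substitute for TestMem5 or any real tester):

```python
import array

# Toy illustration of the write/verify loop memory testers are built on:
# fill a buffer with a known pattern, read it back, and record mismatches.
# This touches far too little memory (and sits behind the OS and CPU
# caches) to find real instability -- it only shows the principle.

PATTERNS = [0x00000000, 0xFFFFFFFF, 0xAAAAAAAA, 0x55555555]

def check_buffer(words=1_000_000):
    errors = []
    buf = array.array("I", bytes(4 * words))   # 32-bit words, zero-filled
    for pattern in PATTERNS:
        for i in range(len(buf)):
            buf[i] = pattern                   # write pass
        for i, value in enumerate(buf):
            if value != pattern:               # a flipped bit would land here
                errors.append((i, pattern, value))
    return errors
```

On a healthy machine `check_buffer()` returns an empty list; real testers differ mainly in how aggressively they defeat caching and stress the memory controller.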
Since I was already running VST+VT3 alongside all the other Y-cruncher tests to completion, I decided to move right on to the OCCT memory test and see what happens. I used the default options, except that I changed the percentage of RAM being tested from 80% to 90%; the rest I left on Auto (I could also specify SSE, AVX2, etc.). If it makes it through the default hour of OCCT Memory without errors, should I consider the current memory settings stable? (EXPO1 in BIOS, speed set to 6000 MT/s, major timings 30-36-36-96; basically the advertised EXPO profile for the kit.)
Also, unrelated to the above, but for the heck of it I ran the AIDA64 SHA3 test all-core to see where it begins to error. It seems that -5 is fine, but at -6 it reboots! So there's some core somewhere that won't tolerate it. I guess once memory stability is assured I can look into trying ECLK? Otherwise I'm not quite sure what to do. I've found that CoreCycler with Y-cruncher Kagari (all tests to completion per core, 2 threads per core) can run in excess of 6 hours with several complete loops, but occasionally, after some really long period (8-12 hours or so), it throws an error on a given core; I then back that core off another 2-3 points and test again. It's getting harder and harder to detect any error there, which should be a good thing, but not if it doesn't pass SHA3, sadly.
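The per-core methodology CoreCycler uses can be sketched in a few lines: pin a stress workload to one logical CPU at a time, so a single weak core fails in isolation instead of hiding behind the others. A minimal Linux-only sketch (using `os.sched_setaffinity`; the busy loop is a stand-in for the real Y-cruncher/Prime95 workload, not an actual stability test):

```python
import os
import time

def busy_work(seconds=0.1):
    # Trivial FPU busy loop standing in for a real stress workload.
    deadline = time.monotonic() + seconds
    x = 1.0
    while time.monotonic() < deadline:
        x = (x * 1.0000001) % 1e6
    return x

def cycle_cores(seconds_per_core=0.1):
    # Pin this process to each allowed logical CPU in turn, run the
    # workload there, then restore the original affinity mask.
    original = os.sched_getaffinity(0)
    tested = []
    try:
        for cpu in sorted(original):
            os.sched_setaffinity(0, {cpu})   # run on exactly one logical CPU
            busy_work(seconds_per_core)      # real tools verify results here
            tested.append(cpu)
    finally:
        os.sched_setaffinity(0, original)    # always restore the mask
    return tested
```

CoreCycler layers error detection, per-core logging, and long runtimes on top of this loop; the pin-then-stress structure is the part that isolates a marginal Curve Optimizer offset to one core.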
Edit:
Well, I'm not sure what's going on with the OCCT memory test, but something is wrong somewhere. The first time I ran it, it ran for an hour without issue. Then I wondered if I should run it again and force AVX2, just so it was doing something heavier than SSE; this time it started erroring almost immediately! I thought this might be helped by going back to the BIOS and changing from EXPO1 to EXPO2 (which seemed to specify certain sub-timings instead of leaving them on Auto). Came back and ran it again: instant error on AVX2, and it even BSOD'd when I ran SSE! I figured this was due to the EXPO2 profile stipulating tighter timings than EXPO1, which as I said left most if not all sub-timings on Auto.

So I changed it back to EXPO1, booted back in, and started the OCCT memory test again on Auto; this time it began to error out too! I tried SSE, AVX2, and Auto, and they all start erroring almost immediately as soon as all the cores are loaded up and working (I can see the workload in Ryzen Master, and the test is using 32 threads). I also tried AIDA64's stability test with just Memory selected, and even that errors almost immediately! I'm going to load up CoreCycler with Kagari again to see whether it starts throwing errors when it reaches the more memory-heavy tests regardless of the core, or whether it behaves as it did in the past and runs for hours without an error of any sort.
So clearly there's some sort of memory instability, but the question is what is causing it, and was it always there, or did something recently make it worse? If so, why? In any case, the idea that it's not stable at EXPO1 is really frustrating, if that's truly the case. It could have been undermining my entire negative Curve Optimizer (CO) tuning process! What should I do to correct the issue, or is it possible this whole kit was basically a dud?
Edit 2: I just realized that the particular G.Skill kit I have (64GB, 6000 MT/s, 30-36-36-96, 1.4V) is not on the motherboard's (Asus Crosshair X670E Extreme) QVL, but the older 6000 MT/s CL30-40-40-96 1.4V kit is. I wonder if this could be a factor? In addition, I'm wondering whether using the integrated GPU has any impact. I'm doing so currently, and I know it uses some RAM, so I'm trying to think of anything that could be contributing to the instability, particularly of the RAM at the moment.
That is not how you fix the problem.
What you described is how you use an external tool to manually do for each game the same thing that a correct installation of the AMD drivers would do automatically for all games. You are just hiding the problem and adding extra steps.
I agree, manual core pinning with Process Lasso and similar tools should not be required to get things working properly. Despite hearing that, in general, users find 7950X3D performance in Windows working more or less as it should after all this time, thanks to driver and Game Bar optimizations (which is undoubtedly a good thing versus the early days post-launch), it has always rubbed me the wrong way that AMD did not have a more comprehensive plan from the start. The 7950X3D is a top-of-the-line halo CPU that offered both the benefit of extra cache and high frequencies, as well as the most cores on the platform at its launch; one would think AMD would want all of this performance to "just work". It really boggled my mind that they relied so heavily on third-party tools like Windows' Xbox Game Bar in such a simplistic "if game, then cache-core preference" way!
Knowing that AMD was not going to add an on-die governor of any sort, the way Intel did, to balance workloads as it saw fit, I hoped they would draw on their past success with free/libre and open source software (F/LOSS): after doing what they could on the AGESA and board-firmware side, they could create some sort of FOSS driver package plus a "profiler" with monitoring and heuristics, alongside both default and user-added rules. Not unlike some of the SLI/CrossFireX profiles of the past, these rules could be specific to a certain game or other software, or more general in nature; a wider array of features like Process Lasso's could be included as well. Instead of relying on Game Bar to make the decision (a very Windows-centric solution), manual core pinning, or a substitute third-party utility like Process Lasso, it could have been a platform-independent open source utility plus driver packages for supported OSes.
For any others here who use Linux, how does the 7950X3D fare these days? What drivers/software are necessary or optimal? I've heard conflicting opinions: some say the Linux scheduler is a lot smarter about these workloads from the start, while others claim the opposite and have to do a lot of manual core pinning for certain applications. Some say there are utilities/drivers that help; others say few are needed beyond manual core-assignment commands. What has been the experience of the Linux users here?
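For context on what "manual core pinning" looks like on Linux: the usual route is `taskset -c 0-7 ./game`, and the same thing can be done programmatically. A minimal sketch, assuming cores 0-7 are the V-Cache CCD on a 7950X3D (an assumption worth verifying per machine, e.g. by comparing L3 sizes in `lscpu` or `/sys/devices/system/cpu/cpu*/cache`):

```python
import os
import subprocess

# Assumed V-Cache CCD layout: cores 0-7. On some systems/BIOSes the
# CCD order can differ, so verify which cores carry the large L3
# before hard-coding this set.
VCACHE_CPUS = set(range(8))

def run_on_vcache_ccd(cmd):
    # Intersect with this machine's allowed CPUs so the sketch still
    # runs on smaller parts; fall back to the full mask if empty.
    usable = VCACHE_CPUS & os.sched_getaffinity(0) or os.sched_getaffinity(0)
    # preexec_fn runs in the child before exec, POSIX only -- the
    # Python equivalent of launching under `taskset`.
    return subprocess.run(
        cmd,
        preexec_fn=lambda: os.sched_setaffinity(0, usable),
    )
```

Usage would be something like `run_on_vcache_ccd(["./game"])`; whether the kernel's scheduler makes this unnecessary in practice is exactly the question above.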