okay. What I'm trying with this mod client is to have MW run on the 3 TVs and Rosetta on the CPU. The Ros CPU tasks do not seem to interfere with the MW download stream (as MW CPU does), so I'm seeing if I can run both projects at the same time. Left to its own, MW can only run 2 tasks with Ros running at the same time... let's see if this will work (and if the PSU can keep up!)

That's it. :thumb:
You should be able to run on all threads. Check Options->Computing Preferences->Computing and make sure that it's not set to 50%. I'm currently running 12 tasks on my 1700 with SMT on, and 23 tasks on my 2700WX with SMT off. I use an app_config to adjust the number of tasks to run at one time, so this should work on your machine, unless there are other things changed behind the scenes on that modified client.
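If you want to see the shape of the file, here's a minimal app_config.xml sketch — the 12-task cap and the folder path are just examples, set them to whatever your machine handles:

```xml
<!-- app_config.xml - goes in the project's folder under the BOINC data dir,
     e.g. .../projects/boinc.bakerlab.org_rosetta/ for Rosetta (path varies by install).
     Then do Options -> Read config files in the Manager to pick it up. -->
<app_config>
    <!-- cap on how many of this project's tasks run at once; 12 is just an example -->
    <project_max_concurrent>12</project_max_concurrent>
</app_config>
```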
Demonstrate my ignorance I guess. I thought Tdie was the actual temperature and Tctl had an additional value (18) added to it? So a Tctl of 68C is really 50C Tdie and your temps are fine. It was not until some recent kernel that Tctl started showing up in psensor; I have been managing my processors using Tdie.
Or maybe I am just cooking my processor...
Why does the 2700x default to 8 threads instead of 16? I have 2 1700x and they default to 16. If what TicToc was saying about PPD being similar with SMT disabled holds, I'm not sure I would expect a big difference in temps, but I have not done any testing.
Thanks Guys. I have not used an app_config on that x470 rig for rosetta yet, but I will now. Rosetta config on the web is set to 90% of CPUs - and that is not the mod client on that machine, so I have no idea why it chose 8 threads. I'll bump it to 12 and see what happens to the temps. I've moved off NF and am still dialing in the rosetta 7980XE threads on the MW machine (without cutting into MW hopefully, which uses 11 threads for 21 MW tasks) - want to run MW and Rosetta simultaneously. The 10980XE is running Rosetta on 32 threads - gotta watch the VRM temps on that as it's pulling over 300W just on the CPU 12V lines. :worriedsm:

Tdie is the actual temp on Ryzen CPUs. The offset with Zen and Zen+ CPUs varies per CPU model, so that consistent fan profiles can be set. I think Tctl is +10C for the 2700x, and I know it is +27C for my 2700WX.
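If those offsets are right (and that's an assumption - the figures vary by report), turning a Tctl reading back into Tdie is just subtraction:

```shell
#!/bin/sh
# Sketch: recover Tdie from a Tctl reading using a fixed per-model offset.
# The +10C offset here is the 2700X figure from above - treat it as an assumption.
tctl=68       # example Tctl reading in degrees C
offset=10     # model-specific: +10C quoted for the 2700X, +27C for the 2700WX
tdie=$((tctl - offset))
echo "Tdie: ${tdie}C"
```

So a 68C Tctl would be 58C at the die with a +10C offset, or 50C with the +18C figure mentioned earlier.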
Hey tictoc, what data is the link in the post above pointing to? I'm burning down the house and not getting anywhere. I think I signed up correctly :worriedsm:

Link to individual OCN team member stats:
I had to use an app_config.xml file in the folder in the pic below for rosetta; you'd put it in the similar Universe folder.

What did you do to run more than 8? Running Universe@Home on my 1700X and she won't go over 12 threads no matter what CPU% I change it to. My other rigs I see the extra cores being used but the 1700X ain't happy for some reason. No other projects being run on that machine at the moment.
Ah, okay. I'll click on over to Free-DC to check if this gear is actually doing anything beyond heating my office. Luckily, it's been unseasonably cool here in PA the past few days.

You are there, and cranking out a bunch of Rosetta work. That GDoc (when it updates) is only tracking stats for Rosetta and NumberFields. Unfortunately Universe has the whole GDPR user consent thing, which makes it impossible for me to grab stats in any kind of automated and sane way. Google has also made things difficult. I have to manually edit the source sheet to get it to update, since it is doing some sort of caching behind the scenes, rather than updating the sheet with the new source data every hour.
Here's a link to the source file for those stats. https://drive.google.com/open?id=1MGXXscjuzo4tPXN5VIUMWLXy0hwmKt4s It is a csv file that updates automatically every hour.
yeah, "only" 32GB of RAM on that rig. Page file is getting hammered (Intel 900P).

Nothing you did wrong. I'm guessing you have (4) 8GB sticks in that rig? The memory usage is system memory (8GB) rather than VRAM. It is a somewhat common practice in HPC, for tasks that need to do some calculations on the CPU, to load everything into system memory to feed the GPU as fast as possible.
I'm not sure if they have really optimized Amicable to that extent, but it seems possible. The required system memory usage makes it a bear to run on most "regular" multi-GPU systems.