Yep Windows 7 is way better than 10 in that regard, but it doesn't matter since sooner or later everyone has to make the switch to 10. And on 10 you won't ever get a result as good as this :(

That is IF these scores on Win 10 are legitimately reported at all...
 
LatencyMon shows a perfect 62-108us with all my daily software running in the background, plus music playing too. I don't bother with the readings, as I'm sure MouseTester can't cope with HPET, since none of that madness is actually happening on my screen. Otherwise I'd have a choppy experience with all these dips to below 500Hz polling.
What madness? What is this test even supposed to test, I don't see a title. How can you have a frequency of 10,000? If it's supposed to test polling rate stability, shouldn't it read 1000 instead of 10,000 :D ? And then the second test, I think it was interval vs. time, is supposed to test how fast the interrupts from polling are handled by the system.

If your motherboard supports it, you can set USB to MSI-X.
Use the top USB 2.0 ports; USB 3+ is worse for a mouse.

Otherwise use PS/2 if you can. It has lower input lag, and probably lower DPC latency as well, though I don't know for sure. Even an expert on Linus Tech Tips recommended it. https://linustechtips.com/main/topic/1125935-help-picking-mobo-for-low-dpc-latency-and-low-input-lag/
You can find plenty about it on the internet, that it's better than USB for mouse and keyboard, and mechanical keyboards even support full n-key rollover over PS/2. I'm not sure how much better it actually is, and the difference supposedly narrows at higher polling rates, but if you use 500Hz polling it could be better. I haven't looked into it for a long time, and I can't use PS/2 anyway, unfortunately.
 
I use the USB 2.0 port, 800dpi + 1000Hz polling rate, which is correctly read with HPET off. I bet my ass this software goes monkey **** when HPET is on. I see no reason to tamper with anything as these scores have literally no reflection on reality.
 
HPET off in the BIOS, and in Windows via bcdedit useplatformclock?

Please specify.
 
No HPET switch in my BIOS (most likely ON by default), so only after deleting useplatformclock does the software work properly (reporting 1000Hz +/-50Hz). Now the wild part - after deleting useplatformclock through cmd, NieR: Automata started utilizing my GPU at a constant 99% instead of jumping between 65-90%, which keeps a nice 57-80 FPS instead of dipping below 40 lol. No change in Deus Ex: Mankind Divided, and I can't say if there are other changes as I didn't test more titles. So probably just an engine-specific issue.
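For anyone who wants to try the same thing, these are the bcdedit lines I'm talking about, as far as I remember the syntax (elevated command prompt, reboot afterwards, and check the current state first):

bcdedit /enum {current}                 - shows whether useplatformclock is set on the active boot entry
bcdedit /deletevalue useplatformclock   - removes the forced HPET platform clock (what I did)
bcdedit /set useplatformclock true      - the opposite, forces HPET as the platform clock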
 
The end accuracy of mine is perfect and I see no reason to dip into polling voodoo further. I also meant that the software reads properly, not that the score itself is proper.
 
Is it a PCIe card? A dedicated onboard NIC is always better than any PCIe card, because PCIe cards generate a high amount of DPC latency. Unless there is an exception and your card is godlike, but I haven't yet seen a PCIe card that was better than a dedicated one.



The test setting - I think it was called interval vs. time - shows the time in which your polls are being handled by the system. This one looks like it just shows your polling rate stability.
The other problem is how you test this objectively: the load on your system is never the same, even at idle there can be small variance, and who knows what can affect it. I get around 88us, but sometimes I get a spike to 200us as the highest execution time in LatencyMon. Still, you'd be able to tell if there were a huge difference.

Btw, I'm currently wondering if it would be possible to stream audio to a second PC and render it there, so you avoid the DPC latency from the sound card. I was looking at a couple of programs, but I'm not sure they do what I want. The sound card generates a lot of DPC latency.
Maybe you don't have the best drivers, or Windows was set up wrong.
I manage not to get spikes when I don't want them. My LatencyMon is attached.
 

With the affinity program you need to deselect Core 0-1 and leave the others; this way you offload all the processing to the other cores. Every single bit of crap and every driver runs on Core 0-1, so keeping those cores free of GPU and USB processing makes things better.
Yeah, if you have like 16 or 32 cores, that's theoretically a good thing to do. The problem is the LatencyMon expert said interrupt affinity can be ignored at the driver or hardware level, so it doesn't always work. Also, most drivers should use MSI or MSI-X, so they should use all or multiple cores and spread evenly... Still, if you have like 16 or 32 cores and could reserve some of them for interrupts, it would be good. I can't otherwise see how many cores each driver is using; LatencyMon only shows the one driver with the maximum latency per core. Maybe it's possible somehow, but I don't know about such a feature in LatencyMon.

My highest is about 700 on nvlddmkm.sys (or whatever it's called) - the Nvidia driver - and for USB I usually get 80 when maxing out the polling rate; one time I ran it and got a spike to 200 for some reason. In both MouseTester and LatencyMon I usually sit at 80 when moving the mouse a lot. Overall latency is in the green and quite low, slightly increasing when I switch into a game. The only thing giving me real DPC latency is USB, because it won't work in MSI-X mode on my motherboard. Still, I have everything tweaked and the DPC latency stays very low.
I got the ASRock Phantom Gaming ITX/ax, which is supposed to have very low DPC latency, even though it's hard to test because it differs between hardware and software configurations. Unfortunately I got a bad socket, so I decided to wait for Intel 10th-gen CPUs. I would buy AMD, but I need single-core performance, and it will still max out my GPU, so I don't care.

Yeah, platformclock should only be used for debugging and you shouldn't force HPET on, but let Windows decide - based on a mail conversation with an HPET expert that can be found on tweakhound.com, search for platformclock.

What I did not know about is platformtick (same syntax), which disables the synthetic timers and uses HPET as the tick. Platformclock is the counter that drivers and applications use to optimize performance and time out events, "says Microsoft". I switched platformtick on and the mouse feels more accurate, you can try it. To remove it I think it's: deletevalue useplatformtick - again, it's on tweakhound.
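To spell the commands out (same elevated cmd, reboot needed; I'm going from memory, so double-check against the tweakhound article):

bcdedit /set useplatformtick yes       - forces the platform timer tick (the thing that supposedly disables the synthetic timers)
bcdedit /deletevalue useplatformtick   - puts it back to default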

Why are you measuring polling stability - are you afraid your mouse's polling isn't stable? Every good mouse today should hold a stable 1k unless it's faulty or there's a software bug like in Win 8.1. Rather measure interval vs. time, which tests when your polls are handled by the OS. Make sure you max out the polling rate, i.e. move the mouse quickly in circles, then zoom in and check the biggest curve's maximum time.

Btw this site contains interesting info about latency etc.: https://www.resplendence.com/latencymon The main thing is to disable all throttling features in the BIOS and keep the CPU in the C0 state, aka disable the idle saver - google it if you don't know what that is. Process Lasso can turn it on per app, but then it can't restore balanced mode, which is stupid.
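If you don't want Process Lasso doing it, I believe the same thing exists as a hidden power setting ("Processor idle disable") that powercfg can flip - the GUID below is from memory, so verify it on your machine before relying on it:

powercfg -setacvalueindex scheme_current sub_processor 5d76a2ca-e8c0-402f-a133-2158492d58ad 1
powercfg -setactive scheme_current

Set the 1 back to 0 (and re-apply the scheme) to allow idle states again.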

Btw manufacturers are experimenting with 480Hz monitors, and DP already has the bandwidth for 1000Hz gaming, pog, though you barely get 60 fps in some games with a 2080 :D. Still, in games like CS:GO it would be epic if the devs upgraded the engine. Unfortunately some of the most competitive games, like SC2, still run at only 60 fps.
 
I literally just gave you the program from Microsoft to do the affinity change specifically for the drivers, and you should do it even with 4 cores because Core 0-1 has too much on it.

And you see the different load in LatencyMon on the CPU tab: clearly, Core 0 at 20,000 while the others are at 5,000, instead of Core 0 at 100,000 and the other cores at 20,000, is a big improvement.
 
That's interesting, but it doesn't mean the system will always abide by these settings, as the LatencyMon expert said they can be ignored at the driver/hardware level. I know what you mean. I already tried that in the past, but I didn't see a change - I tried putting the GPU on all cores. Still, it may be a good tweak if it works in some cases. People were saying certain things don't work while they seemed to work, like putting the GPU into MSI-X: an Nvidia dev said it shouldn't have any effect, but I can clearly tell the difference in input lag. Yeah, I too have 90% of interrupts on core 0 from a short test, which is strange given I have everything in MSI-X except USB, so the drivers should utilize multiple cores. I will test it once again and see if it helps.

The problem is I can't see how many DPC calls come from a specific driver on each core. I can only see the overall count of DPC calls on each core and which driver had the highest execution time on a specific core. So the only way to see if it helped, I guess, is to look at the overall highest execution time and whether it drops. How do you measure if it helps? After I set the GPU in regedit to IrqPolicyAllProcessorsInMachine (value 3 in hexadecimal), I didn't see many more interrupts on the other cores.

I have such contradictory results. Overall latency is great - only 400 max after 20 minutes of testing for nvlddmkm.sys, which is usually at least 700. Again, these results can vary and it's hard to test. But the other cores are still barely touched: core 0 - 353k DPC calls, core 1 - 23k, the others 700-5000, barely touched. And I set it in Enum under the Affinity Policy key for my GPU's hardware ID, so I don't think I set it wrong. I also checked it in Performance Monitor and got the same results, everything on core 0... So I don't think the results are wrong.

I would like to have interrupts spread evenly across all cores. Despite having everything in MSI/MSI-X mode, I still have 90% of interrupts on core 0, and setting core affinity in the registry didn't help much, if at all - in the LatencyMon results the DPC count on the other cores didn't change!

I don't think it's even possible to change; interrupt handling is done by the APIC in the CPU. It depends on the hardware and drivers, and these registry settings don't do anything: I tried both that utility and a manual entry like DevicePolicy = 3 under the device ID. It also depends on how the drivers are written. It's strange though, because the hardware definitely should have the capability, same as the drivers - I have up-to-date drivers and everything in MSI-X, so it doesn't make sense...

What are your loads on each core?

EDIT: okay, now interrupt affinity works, after I switched the GPU to MSI mode. I hadn't done that because an Nvidia dev said MSI support is specified in the driver's code, and the setting which the Nvidia drivers put into the registry is only for old devices which wouldn't handle more. But after I switched the GPU to MSI mode in the registry I get less input lag, so it's strange...
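For reference, these are the registry locations I mean - paths written from memory, the device instance part obviously differs per system, and back the keys up before editing:

HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your GPU hardware ID>\Device Parameters\Interrupt Management\Affinity Policy
DevicePolicy (DWORD) = 3 for IrqPolicyAllProcessorsInMachine (4 would be IrqPolicySpecifiedProcessors, which also needs AssignmentSetOverride as the core mask)

HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your GPU hardware ID>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
MSISupported (DWORD) = 1 is the "MSI mode" switch

Reboot after changing them, and as the LatencyMon guy says, the driver or hardware can still ignore the policy, so you might see no difference at all.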
 
I've found out what was causing my mouse and keyboard input lag: it was the MOBO fan controller. And it's not a case specific to my motherboard - it happened on my old motherboard and to many other people on all kinds of motherboards. You need to set the fan controller to a specific fan mode, DC or PWM depending on your fan; do not leave it on Auto.

I am still investigating, but I think I am finally done. I can say I fixed it, I just need to find out what exactly did it on the technical level.
 
You need to be able to reproduce the issue by switching back and forth between the settings, and the issue must be measurable by something.
 
Every Maxwell and higher + AMD Vega and higher generation GPU uses tiled rendering, which seems to render with higher latency in non-hardware accelerated desktop mode (Windows 7), which is one of the many reasons I'm a 5%'r that refuses to budge off Windows 8.1. Probably doesn't affect full-screen mode to clarify, but I don't like having a crappy desktop mode either. If you wanted to stay on Windows 7 and don't want desktop cursor movement to be crappy, the highest GPU you can use is an AMD 580.

The only problem there is - from the old days of Bitcoin mining - I've had an entire room full of GPUs at one time and noticed cursor movement was always worse on non-reference R9 290's, and it's probably difficult to find any reference AMDs from that generation nowadays. I'm not sure how they do it, but 3rd party vendors seem to screw up their BIOS implementations somehow. It could be numerous things from random UEFI crap, to timing issues like in Bitcoin mining on AMD cards how you had to set both the core and memory clocks to an exact number to get proper performance and adding say +200 mhz might give you worse performance than the lower clocked card.

I don't really know or care what the cause of the issue is, all I know is that many non-reference cards have issues. The problem also seems to be worse for AMD 3rd party vendors - might be due to architectural differences requiring some sort of specific timing as I was talking about. I would also prefer a reference Nvidia over a 3rd party, but reference Nvidia vs 3rd party hasn't been as big of an issue for me where you notice some 3rd party AMD having extreme floaty cursor feel compared to reference. Maybe some type of issue inherent to GCN timings, maybe not.
 
Holy balls, it's really him! I thought he was just a myth around here! I thought he'd abandoned these forums.
 
So if I am going to buy a GPU - an AMD one, for example the 5700 XT or the new RX with ray tracing support in 2020, or an Nvidia one - which should I buy? And what test could we do for a review so that they might fix this crap on every released card, like frametime was a long-standing issue until it was fixed?
 
There is no fix. Windows XP was a hardware accelerated desktop, then Win 7 wasn't, then Win 8.1 was again. This is why desktop cursor movement vs exclusive full-screen 3d mode feels the same in Windows 8.1 but not Windows 7. What's bizarre is Nvidia doesn't support Windows 8.1 on 2060-2080 series but does on 1600 series. I refuse to use Windows 10 New World Order edition myself. It's Microsoft attempting to transition to a fully locked down, Apple-style OS, and I would use it solely as a game box and nothing else due to that, but cursor movement is worse than Win 8.1, so I have no use for it at all.
 
Need HDR though... I wouldn't mind Windows 10 that much, since it has all the support and drivers needed and has the new features. Too bad technology always goes backwards... all the time, when it's money related, while mind and spirit should be all about the final product we get.
 