Yeah, that's what I said: if you ask most pros, the most they do is set their power plan to High Performance and change their Nvidia Control Panel settings, and that's about it. There are very few CS:GO / Quake pros that don't at least do that, and quite a few are into getting the most out of their system. The majority of Fortnite pros and streamers, however, don't even set up their own streams.
I wanna ask, which settings did you use for your OP?

Because I just tried everything on right now: HPET on in the BIOS and everything else set to on. It seems to be pretty good, ngl.

So far the best I've tried is useplatformclock deleted, with everything else set to Yes, including HPET on in the BIOS. So I would have useplatformtick Yes and disabledynamictick Yes.

Useplatformtick Yes seems to only be good if at least disabledynamictick yes is also enabled.
 


I really love people like x7007 and Offler, man. They truly put in the work to find all this stuff out. This is what you call a true enthusiast, living up to the site's motto: The pursuit of performance.

Now all that's left is: which is the best combination?

We have:
HPET BIOS On and Off

bcdedit /deletevalue useplatformclock
bcdedit /set useplatformclock true
bcdedit /set useplatformclock false

bcdedit /set tscsyncpolicy Enhanced
bcdedit /deletevalue tscsyncpolicy

bcdedit /set disabledynamictick yes
bcdedit /set disabledynamictick no
bcdedit /deletevalue disabledynamictick
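For whoever wants to check what they actually ended up with after a reboot: a quick way is to read the QPC frequency from a tiny program. The interpretation below is only a rule of thumb I've pieced together from this thread and the linked articles (around 10 MHz usually means the invariant TSC on recent Windows 10 builds, around 14.3 MHz usually means HPET forced via useplatformclock, and roughly 3.3 MHz points at the ACPI PM timer), so treat it as a sanity check, not gospel.

// qpc_source_check.cpp - rough check of which clock is backing QueryPerformanceCounter.
// Build with MSVC: cl /O2 qpc_source_check.cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);            // ticks per second of the QPC source
    printf("QueryPerformanceFrequency: %.3f MHz\n", freq.QuadPart / 1e6);
    // Rough interpretation (varies by board/Windows build, so only a hint):
    //   ~10.000 MHz -> invariant TSC (Windows 10 1809+ normalizes it to 10 MHz)
    //   ~14.318 MHz -> HPET forced as the platform clock (useplatformclock yes)
    //   ~3.3 MHz    -> ACPI PM timer / other legacy platform source
    return 0;
}

Run it after each bcdedit change plus reboot and note what you get next to your latency measurements.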

There's just so many different answers.
https://www.reddit.com/r/Amd/comments/5zx0by/ryzen_memory_timer_and_ccx_windows_10_possible/

https://www.pathofexile.com/forum/view-thread/2602685

These timer tweaks are so extremely complicated; it really hurts my brain to wrap my head around them, and testing in general is so bothersome. This really is the final frontier, my final boss for input lag. 99% of my remaining input lag problems have been from these timer issues. It's time to find the final tweak once and for all: which timer setting is the best?

Also, disabling it before you install seems to give reduced DPC latency.
Disabledynamictick:
When Enabled, this setting causes the OS to execute tasks associated with the system core periodically. It's best to combine it with disabling core parking or other features which allow CPU cores to go into a low-power state.
Effects:
- The number of DPC calls per minute will increase.
- All actions will be performed without any delay.

If Disabled, those tasks will run only when necessary.
- The number of DPC calls per minute will decrease.
- Activity of the OS core will be postponed until "there are enough things to do".
- This feature (the dynamic tick) is designed to lower power utilization.

In general:
If you are on a desktop PC - Disabledynamictick = Enabled/True.
If you are on a notebook and you have to rely on the battery, you may consider keeping it as it is.

For HPET Enabled/Useplatformclock:
Either:
a) Useplatformclock = Enabled/True while "High Precision Event Timer" in Device Manager is enabled.
b) Useplatformclock = Disabled/False while "High Precision Event Timer" in Device Manager is disabled.

You have to check which one works for your application the best.
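If you want numbers instead of feel when comparing option a) and b), one crude probe is to measure how long Sleep(1) actually takes, since that reflects the tick and timer-resolution behaviour of the current configuration. This is just a sketch, not a proper benchmark; results depend heavily on the requested timer resolution and whatever is running in the background, so only compare runs taken under the same conditions.

// sleep_jitter.cpp - crude probe: how long does Sleep(1) really take under the current timer setup?
// Build with MSVC: cl /O2 sleep_jitter.cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    const int N = 500;
    double total = 0.0, worst = 0.0;
    for (int i = 0; i < N; ++i) {
        QueryPerformanceCounter(&t0);
        Sleep(1);                                            // ask for roughly 1 ms
        QueryPerformanceCounter(&t1);
        double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
        total += ms;
        if (ms > worst) worst = ms;
    }
    printf("Sleep(1): average %.3f ms, worst %.3f ms over %d calls\n", total / N, worst, N);
    return 0;
}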
 
Discussion starter · #83 ·
I wanna ask, which settings did you use for your OP?

Because I just tried everything on right now: HPET on in the BIOS and everything else set to on. It seems to be pretty good, ngl.

So far the best I've tried is useplatformclock deleted, with everything else set to Yes, including HPET on in the BIOS. Useplatformtick Yes seems to only be good if at least disabledynamictick yes is also enabled.
HPET on in the BIOS (the new Z390 has no option to disable it anyway), bcdedit /deletevalue useplatformclock (was default), bcdedit /set disabledynamictick yes, bcdedit /deletevalue useplatformtick (was default).
 
Let's add some info from Microsoft:
https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/bcdedit--set

tscsyncpolicy [ Default | Legacy | Enhanced ]
Controls the time stamp counter synchronization policy. This option should only be used for debugging.

useplatformclock [ yes | no ]
Forces the use of the platform clock as the system's performance counter.

useplatformtick [ yes | no ]
Forces the clock to be backed by a platform source, no synthetic timers are allowed. The option is available starting in Windows 8 and Windows Server 2012.

disabledynamictick [ yes | no ]
Enables and disables dynamic timer tick feature.

Some other resources:
https://www.tweakhound.com/2014/01/30/timer-tweaks-benchmarked/


Updated notes:

Useplatformclock:
a) HPET is a backup clock which, since Windows 7 / Server 2008 R2, should prevent skewing of timers on multiprocessor systems.
However, this policy changed a bit in Windows 8 and 8.1, where clock skew became a source of cheating in competitive overclocking.
b) Windows will not use HPET unless specifically told to (useplatformclock True).
c) Apps may or may not use HPET if it's present in Device Manager.

Disabledynamictick:
a) The dynamic tick is a power-saving feature, and you may disable it on desktops (disabledynamictick true).

Useplatformtick:
a) More restrictive than useplatformclock. If enabled, ONLY platform timers will be used.
b) If enabled, it may resolve issues with stability.

Tscsyncpolicy
a) An experimental feature which allows the system to tweak its usage of the clocks embedded in CPUs.
(With my current system it seems to have beneficial effects on how the system uses multithreading.)

Why and how these settings matter:
1. The TSC clocks embedded in CPUs are fast and reliable; however, most modern systems have multiple of them, one in each CPU core.
2. Because of this, different game threads may run out of sync, not to mention situations where overclocking is involved.
3. TSC clocks are in the CPU, while HPET sits on the southbridge. This adds a delay and extra communication outside the CPU. Therefore using HPET will always be slower, but the system will be more stable (see the sketch below the list).
4. You will never know HOW a game is coded. The developer might have optimized with or without HPET in mind.
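Point 3 can be made visible with a rough micro-benchmark of how expensive a single QueryPerformanceCounter call is under the current configuration: with the TSC backing it, the call typically stays in the tens of nanoseconds, while a forced HPET source has to go off-chip and is reported to be far slower (this is essentially what the synthetic part of TimerBench measures). Absolute numbers vary a lot between platforms, so again, just a sketch for before/after comparisons.

// qpc_cost.cpp - rough cost of one QueryPerformanceCounter call with the current timer settings.
// Build with MSVC: cl /O2 qpc_cost.cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, end, scratch;
    QueryPerformanceFrequency(&freq);
    const int N = 1000000;
    QueryPerformanceCounter(&start);
    for (int i = 0; i < N; ++i)
        QueryPerformanceCounter(&scratch);                   // the call being timed
    QueryPerformanceCounter(&end);
    double nsPerCall = (end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart / N;
    printf("QueryPerformanceCounter: ~%.0f ns per call over %d calls\n", nsPerCall, N);
    return 0;
}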
 
Have any of you tried bcdedit /set useplatformtick yes and used Intelligent Standby List Cleaner or TimerResolution and set it to 1?
It should be exactly 1 and not 0.9998 or 1.00098.
Many people did, obviously, including those in this thread. But why would you care about the timer resolution at all?
 
Many people did, obviously, including those in this thread. But why would you care about the timer resolution at all?
It helped with more precise twitch aiming in fast FPS games. I did it for Modern Warfare in 6v6 or 2v2 matches where twitch shooting was required (due to skill-based matchmaking). Personally, I did notice an improvement on that extreme side of things. When I turned it off I could tell twitch headshots were harder. But for anything else, not worth it. Placebo? Perhaps, I won't rule that out. But when I turn it off and play I do notice an ever so slight difference in aiming on the extreme end of "twitch shooting". Not in regular aiming. It's not major by any stretch of the imagination.

When you do use it, MSI Afterburner's hardware monitor for framerates goes "nuts" with huge spikes. What was determined, at a guess, is that those frametimes might be spiking due to something else spiking in Win10 while gaming. What that is... don't know.

But I am talking about using bcdedit /set useplatformtick yes, rebooting, then setting the timer resolution to .500 and then starting Modern Warfare.
 
It helped with more precise twitch aiming in fast FPS games. I did it for Modern Warfare in 6v6 or 2v2 matches where twitch shooting was required (due to skill-based matchmaking). Personally, I did notice an improvement on that extreme side of things. When I turned it off I could tell twitch headshots were harder. But for anything else, not worth it. Placebo? Perhaps, I won't rule that out. But when I turn it off and play I do notice an ever so slight difference in aiming on the extreme end of "twitch shooting". Not in regular aiming. It's not major by any stretch of the imagination.

When you do use it, MSI Afterburner's hardware monitor for framerates goes "nuts" with huge spikes. What was determined, at a guess, is that those frametimes might be spiking due to something else spiking in Win10 while gaming. What that is... don't know.

But I am talking about using bcdedit /set useplatformtick yes, rebooting, then setting the timer resolution to .500 and then starting Modern Warfare.
that's a placebo you have there
 
Discussion starter · #90 ·
It helped with more precise twitch aiming in fast FPS games. I did it for Modern Warfare in 6v6 or 2v2 matches where twitch shooting was required (due to skill-based matchmaking). Personally, I did notice an improvement on that extreme side of things. When I turned it off I could tell twitch headshots were harder. But for anything else, not worth it. Placebo? Perhaps, I won't rule that out. But when I turn it off and play I do notice an ever so slight difference in aiming on the extreme end of "twitch shooting". Not in regular aiming. It's not major by any stretch of the imagination.

When you do use it, MSI Afterburner's hardware monitor for framerates goes "nuts" with huge spikes. What was determined, at a guess, is that those frametimes might be spiking due to something else spiking in Win10 while gaming. What that is... don't know.

But I am talking about using bcdedit /set useplatformtick yes, rebooting, then setting the timer resolution to .500 and then starting Modern Warfare.
Like I mentioned in earlier posts, if you use useplatformtick and force the timer to 0.5 then the drawbacks aren't as bad, but what is the point when you can force your timer to the lowest without useplatformtick and have everything be way more stable? useplatformtick is supposed to be used for debugging only, and throughout the thread a lot of people have posted negative effects from using it.

Who even came up with the idea that an even 0.5 instead of 0.496 looks better and more even, hence it must be better? As for "it does not look even so it must go out of sync": testing with the CPU-Z tool showed it stays in sync across all timers even with 0.496.

A lot of hardware monitors that check and update in real time cause spikes and microstutters every time they update themselves. At least for me, I could see an increase in stutter and microlag (which you can feel and see in your cursor movement, or in FPS games when rotating smoothly) when having things like CoreTemp open, which also showed up as an increase in DPC latency spikes and a decrease in polling precision.

And of course that causes your frametimes to be more inconsistent. In addition, if your system has any issues, then forcing the timer to 0.5 essentially doubles those issues/spikes. On my older AMD system, on one install the network driver was too old and caused some DPC spikes when downloading anything. Forcing my timer to 0.5 made it go mental even without the network being utilized, spiking up to 5000 µs.

Modern Warfare is not really good for testing things in relation to mouse movement anyway, as it doesn't run in exclusive fullscreen. You can test that by pressing the function key on your keyboard and changing the volume: the volume bar will show up in the top left over the game = not exclusive fullscreen. If I am not mistaken it can't be changed, since MW runs in DX12 only and DX12 on Windows 10 cannot run in exclusive fullscreen, for whatever botched reasons. So you might want to test it in games like Counter-Strike, Quake and so on.
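If anyone wants to see what the resolution actually in effect is, instead of arguing 0.5 vs 0.496, you can read it straight from the kernel. NtQueryTimerResolution is an undocumented ntdll export (it's what tools like TimerResolution and ISLC use under the hood, values are in 100 ns units), so treat this as an unofficial sketch rather than a supported API:

// timer_res_check.cpp - read the timer resolution currently in effect (the real 1.0 / 0.5 / 0.496 ms value).
// Build with MSVC: cl /O2 timer_res_check.cpp
#include <windows.h>
#include <cstdio>

// Undocumented ntdll export; all three values are in units of 100 ns.
typedef LONG (NTAPI *NtQueryTimerResolution_t)(PULONG CoarsestRes, PULONG FinestRes, PULONG CurrentRes);

int main() {
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    auto NtQueryTimerResolution =
        (NtQueryTimerResolution_t)GetProcAddress(ntdll, "NtQueryTimerResolution");
    if (!NtQueryTimerResolution) { printf("NtQueryTimerResolution not found\n"); return 1; }

    ULONG coarsest = 0, finest = 0, current = 0;
    NtQueryTimerResolution(&coarsest, &finest, &current);
    printf("coarsest: %.4f ms  finest: %.4f ms  current: %.4f ms\n",
           coarsest / 10000.0, finest / 10000.0, current / 10000.0);
    return 0;
}

Run it in a loop and you can also watch the value jump around as browsers and games grab and release their requests.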
 
Like I mentioned in earlier posts, if you use useplatformtick and force the timer to 0.5 then the drawbacks aren't as bad, but what is the point when you can force your timer to the lowest without useplatformtick and have everything be way more stable? useplatformtick is supposed to be used for debugging only, and throughout the thread a lot of people have posted negative effects from using it.

Who even came up with the idea that an even 0.5 instead of 0.496 looks better and more even, hence it must be better? As for "it does not look even so it must go out of sync": testing with the CPU-Z tool showed it stays in sync across all timers even with 0.496.
FWIW, for me the default behavior for W7 SP1 x64 is to use a platform timer, while W10 1903 x64 uses the APIC. As for the different clock resolutions, I would think it's more a case of hinting at what timer is in play rather than having to be a specific number for performance reasons. W7 seems to run at up to ~1.0 ms resolution while 10 is happy to go to ~0.5 ms when it feels like it.
 
FWIW, for me the default behavior for W7 SP1 x64 is to use a platform timer, while W10 1903 x64 uses the APIC. As for the different clock resolutions, I would think it's more a case of hinting at what timer is in play rather than having to be a specific number for performance reasons. W7 seems to run at up to ~1.0 ms resolution while 10 is happy to go to ~0.5 ms when it feels like it.
Applications request the timer resolution that was configured during development, e.g. Chrome requests 1.0 ms, and most games do the same.
There is no added value for players in forcing a different resolution, unless the specific game has specific issues (which I've never seen). None of the major competitive titles have any issues with timer resolution.
You can check those requests per application if you run the Powercfg.exe /energy command.

It will look like this:

Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 8028
Requesting Process Path \Device\HarddiskVolume6\Users\xxx\Cent\CentBrowser\Application\chrome.exe

Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
Requested Period 10000
Requesting Process ID 5196
Requesting Process Path \Device\HarddiskVolume6\Windows\System32\audiodg.exe
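For completeness, this is roughly all those programs are doing to land in that powercfg /energy list: a single winmm call. The sketch below requests 1 ms the same way Chrome or a game would (powercfg reports the period in 100 ns units, which is why a 1.0 ms request shows up as "Requested Period 10000"); the file name and the 60-second hold are just for the demo.

// request_1ms.cpp - make this process show up as an "Outstanding Timer Request" in powercfg /energy.
// Build with MSVC: cl /O2 request_1ms.cpp winmm.lib
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

int main() {
    // Request a 1 ms global timer resolution, the same way Chrome and most games do.
    if (timeBeginPeriod(1) != TIMERR_NOERROR) {
        printf("timeBeginPeriod(1) failed\n");
        return 1;
    }
    printf("Holding a 1 ms timer resolution request for 60 seconds...\n");
    Sleep(60000);          // run "powercfg /energy" from an elevated prompt while this is running
    timeEndPeriod(1);      // always pair the request with a release
    return 0;
}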
 
You can check those requests per application if you run the Powercfg.exe /energy command.
Firstly, I think you missed the point: W10 1903 itself was setting resolutions on the fly, sometimes constantly changing. Secondly, Powercfg.exe only takes a single reading when it is first launched and is therefore unsuitable for monitoring timer resolution over a period of time.
 
Firstly, I think you missed the point: W10 1903 itself was setting resolutions on the fly, sometimes constantly changing. Secondly, Powercfg.exe only takes a single reading when it is first launched and is therefore unsuitable for monitoring timer resolution over a period of time.
Why would you be monitoring the resolution at all? Software requests changes to the timer and Windows follows. While you have Chrome (or other software) running, Windows will have 1.0 ms (or whatever is requested); it will not fluctuate for no reason.
And it is also not "constantly changing" in 1903 in general.
 
Are you using the TimerResolution tool or something else to manage the resolution, maybe Standby List Cleaner, which also has a config/service to change it?

I've heard of someone else having a similar experience, but it doesn't make any sense; applications request the resolution. It can easily change just by minimizing a browser that is playing a YouTube video, such as in Edge: 1 ms to 15 ms or whatever other value was requested that was lower.
 
that's a placebo you have there
If I had a means to measure it in game I would know for certain. However, as it stands now it's inconclusive.



Like I mentioned in earlier posts, if you use useplatformtick and force the timer to 0.5 then the drawbacks aren't as bad, but what is the point when you can force your timer to the lowest without useplatformtick and have everything be way more stable? useplatformtick is supposed to be used for debugging only, and throughout the thread a lot of people have posted negative effects from using it.
It was my understanding that testing the synthetic blending of the timer resolution (TR) vs. just 1 would show some distinct results. I wanted to see what the outcome would be. It was alleged that it might help with mouse movement/improved latency in FPS games (regardless of the API used). So I put that to the test to see for myself. However, there is a caveat to this: the diversity of PC hardware between AMD and Intel makes it cost-prohibitive for the average user to know, other than based on the hardware they have.




Who even came up with the idea that an even 0.5 instead of 0.496 looks better and more even, hence it must be better? As for "it does not look even so it must go out of sync": testing with the CPU-Z tool showed it stays in sync across all timers even with 0.496.
I'm not familiar with the "even numbers in timer resolutions make Win10 better" argument. However, it was alleged that MS did something to the timer resolution to prevent, close off, or make it more difficult for Win10 to be susceptible to attack.
For example:
To exploit Meltdown or Spectre, an attacker needs to measure how long it takes to read a certain value from memory. For this, a reliable and accurate timer is needed.

One API the web platform offers is performance.now() which is accurate to 5 microseconds. As a mitigation, all major browsers have decreased the resolution of performance.now() to make it harder to mount the attacks.
https://developers.google.com/web/updates/2018/02/meltdown-spectre


But later, it was determined:
The new paper from Google researchers comes to the conclusion that there exists a universal read gadget on most of today's CPUs. What is more, the common fix of reducing the accuracy of the timer available to a language doesn't help as there are "amplification" procedures which can increase the time differences used in the attack.
https://www.i-programmer.info/news/...9-security/12556-google-says-spectre-and-meltdown-are-too-difficult-to-fix.html

Now, we know Windows has been using this combo of synthetic timers for some time now, so it's not new. However, I did read/hear that Win10 tweaked QueryPerformanceFrequency from 3.xx to 10 MHz starting with 1809, which was odd to me. As it stands now we have no way of changing that with 1809 and higher. Now, was it done for "mitigation purposes", was it a bug, or a by-product of something else? I've not found any documentation regarding the change. If you have any links I would appreciate that.





A lot of hardware monitors that check and update in real time cause spikes and microstutters every time they update themselves. At least for me, I could see an increase in stutter and microlag (which you can feel and see in your cursor movement, or in FPS games when rotating smoothly) when having things like CoreTemp open, which also showed up as an increase in DPC latency spikes and a decrease in polling precision. And of course that causes your frametimes to be more inconsistent.
Agreed, they can cause problems. But sometimes a person doesn't know they have them on. For example, GPU drivers may have them on even though you don't have the OSD enabled. Others say to disable Logitech software. Yet others say don't use iCUE from Corsair. It's not just limited to CPU/GPU monitoring software as we know it.




In addition, if your system has any issues, then forcing the timer to 0.5 essentially doubles those issues/spikes. On my older AMD system, on one install the network driver was too old and caused some DPC spikes when downloading anything. Forcing my timer to 0.5 made it go mental even without the network being utilized, spiking up to 5000 µs.
It's true that it's very PC-dependent, based on make/model of motherboard, CPU/GPU, etc. There is no black-and-white use case.




And of course that causes your frametimes to be more inconsistent. Modern Warfare is not really good for testing things in relation to mouse movement anyway, as it doesn't run in exclusive fullscreen. You can test that by pressing the function key on your keyboard and changing the volume: the volume bar will show up in the top left over the game = not exclusive fullscreen. If I am not mistaken it can't be changed, since MW runs in DX12 only and DX12 on Windows 10 cannot run in exclusive fullscreen, for whatever botched reasons. So you might want to test it in games like Counter-Strike, Quake and so on.
I disagree here. Moving forward we have to examine DX12 and Vulkan titles to see where we stand. Going back to DX9-DX11 titles paints a false picture of what may or may not be happening in DX12/Vulkan. Now if you play CS:GO, by all means use that method. Not trying to knock that.


Here is an article from 2018 about it.

HPET is very important, especially when it comes to determining if 'one second' of PC time is the equivalent to 'one second' of real-world time - the way that Windows 8 and Windows 10 implements their timing strategy, compared to Windows 7, means that in rare circumstances the system time can be liable to clock shift over time. This is often highly dependent on how the motherboard manufacturer implements certain settings. HPET is a motherboard-level timer that, as the name implies, offers a very high level of timer precision beyond what other PC timers can provide, and can mitigate this issue. This timer has been shipping in PCs for over a decade, and under normal circumstances it should not be anything but a boon to Windows.

However, it sadly appears that reality diverges from theory – sometimes extensively so – and that our CPU benchmarks for the Ryzen 2000-series review were caught in the middle. Instead of being a benefit to testing, what our investigation found is that when HPET is forced as the sole system timer, it can sometimes be a hindrance to system performance, particularly gaming performance. Worse, because HPET is implemented differently on different platforms, the actual impact of enabling it isn't even consistent across vendors. Meaning that the effects of using HPET can vary from system to system, as well as the implementation.
Why A PC Has Multiple Timers

Aside from the RTC, a modern system makes use of many timers. All modern x86 processors have a Time Stamp Counter (TSC) for example, that counts the number of cycles from a given core, which was seen back in the day as a high-resolution, low-overhead way to get CPU timing information. There is also a Query Performance Counter (QPC), a Windows implementation that relies on the processor performance metrics to get a better resolution version of the TSC, which was developed in the advent of multi-core systems where the TSC was not applicable. There is also a timer function provided by the Advanced Configuration and Power Interface (ACPI), which is typically used for power management (which means turbo related functionality). Legacy timing methodologies, such as the Programmable Interval Timer (PIT), are also in use on modern systems. Along with the High Performance Event Timer, depending on the system in play, these timers will run at different frequencies.
...

The Effect of a High Performance Timer

With a high performance timer, the system is able to accurately determine clock speeds for monitoring software, or video streaming processing to ensure everything hits in the right order for audio and video. It can also come into play when gaming, especially when overclocking, ensuring data and frames are delivered in an orderly fashion, and has been shown to reduce stutter on overclocked systems. And perhaps most importantly, it avoids any timing issues caused by clock drift.
Now here is the meat of why I was experimenting. I wanted to see if there was any clock drift with it on vs. off, among any side effects it might cause in DX12 titles where you can no longer disable fullscreen optimizations, etc. (a crude way to probe this is sketched at the end of this post).



However, there are issues fundamental to the HPET design which means that it is not always the best timer to use. HPET is a continually upward counting timer, which relies on register recall or comparison metrics rather than a ‘set at x and count-down’ type of timer. The speed of the timer can, at times, cause a comparison to fail, depending on the time to write the compared value to the register and that time already passing. Using HPET for very granular timing requires a lot of register reads/writes, adding to the system load and power draw, and in a workload that requires explicit linearity, can actually introduce additional latency. Usually one of the biggest benefits to disabling HPET on some systems is the reduction in DPC Latency, for example.
https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results
(it's several pages...I didn't want to link individual pages)

IMHO this is what I assume is the reason why MSI Afterburner's hardware monitor frametime graph goes "nuts" with the large spikes. But I'm not 100% sure on that.

I do want to link this page as it talks about how the timer resolutions can differ between Intel and AMD: https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/3

I hope that answers your questions.
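On the clock-drift part of that experiment: without outside equipment, a crude check is to let two of Windows' own clocks run side by side and see whether they diverge over a long run. The sketch below compares the tick-based GetTickCount64 against QPC; keep in mind GetTickCount64 only has roughly 15.6 ms granularity, so differences in that range are normal, and a proper drift test would really compare against an external reference such as an NTP server. This is just an idea of how one could start measuring it rather than going by feel.

// drift_probe.cpp - crude check for divergence between the tick-based clock and QPC.
// Build with MSVC: cl /O2 drift_probe.cpp
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, q0, q1;
    QueryPerformanceFrequency(&freq);
    ULONGLONG t0 = GetTickCount64();                 // interrupt-time based clock, milliseconds
    QueryPerformanceCounter(&q0);
    Sleep(10 * 60 * 1000);                           // let both clocks run for 10 minutes
    ULONGLONG t1 = GetTickCount64();
    QueryPerformanceCounter(&q1);
    double tickMs = (double)(t1 - t0);
    double qpcMs  = (q1.QuadPart - q0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("GetTickCount64: %.0f ms   QPC: %.3f ms   difference: %.3f ms\n",
           tickMs, qpcMs, qpcMs - tickMs);
    return 0;
}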
 
https://github.com/CHEF-KOCH/GamingTweaks/blob/master/Myths/Known Myths.md

Using an older Windows 10 version helps to improve OS performance because of the 3.5 MHz vs 10 MHz QPC timer?
The official documentation (outdated) can be found over here.

What to know:

- 3.5 MHz and 10 MHz are the SAME timer; Microsoft has enforced 10 MHz since Windows 10 RS5+.
- This means a developer can more "easily" see if it's TSC or not; previously a developer had to analyze the whole boot frequency. This is now resolved (TSC = 10 MHz, HPET > 10 MHz).
- You will NOT notice any difference in-game.
- Microsoft DOES NOT need to patch this, nor is this a "bug". It was introduced to help developers find TSC/QPC modes, nothing more and nothing less.
- The so-called "bug" was analyzed by a professional overclocker, with facts and background info; the result was that there is no noticeable difference whether it shows 3.5 MHz, 6 MHz or 9 MHz. It should also be noted that the read-out method is not accurate and there are margins of error (which is normal); the delta is so low that no one can actually notice it.
- The argument that you should use an older Windows 10 version because of this so-called "bug" makes no sense, given that there is no difference and that the read-out method is not 100% accurate.
 
https://github.com/CHEF-KOCH/GamingTweaks/blob/master/Myths/Known Myths.md

Using an older Windows 10 version helps to improve OS performance because of the 3.5 MHz vs 10 MHz QPC timer?
The official documentation (outdated) can be found over here.

What to know:

- 3.5 MHz and 10 MHz are the SAME timer; Microsoft has enforced 10 MHz since Windows 10 RS5+.
- This means a developer can more "easily" see if it's TSC or not; previously a developer had to analyze the whole boot frequency. This is now resolved (TSC = 10 MHz, HPET > 10 MHz).
- You will NOT notice any difference in-game.
- Microsoft DOES NOT need to patch this, nor is this a "bug". It was introduced to help developers find TSC/QPC modes, nothing more and nothing less.
- The so-called "bug" was analyzed by a professional overclocker, with facts and background info; the result was that there is no noticeable difference whether it shows 3.5 MHz, 6 MHz or 9 MHz. It should also be noted that the read-out method is not accurate and there are margins of error (which is normal); the delta is so low that no one can actually notice it.
- The argument that you should use an older Windows 10 version because of this so-called "bug" makes no sense, given that there is no difference and that the read-out method is not 100% accurate.

This is insightful information, thanks for sharing. However, I do have a question: what method is used to actually show the "read-out" if the current methodology isn't accurate?
What are others supposed to go on to make this determination for themselves? TimerBench 1.5?

-------------------------------
Side note:
I just found out some info about HPET: its slow calls are not a consequence of the Meltdown mitigations. I knew I read that before, but it took me a while to find it:
Additionally there is word out there that the slow HPET calls are a consequence of the Meltdown and Spectre bugfixes. This is NOT the case. We found problems with HPET latencies back in July 2017, where these security flaws were far away from being on Intel's radar. Even though the Smeltdown fixes did not cause the HPET to be slow, it introduced additional strain on the CPU that adds on top of an already existing CPU bottleneck.

In summary the problem is a very slow timer implementation of the High Precision Event Timer on modern platforms, that is used without care by the developers. Badly affected are Skylake X and Kaby Lake X. Impacts can also be shown on Threadripper, Coffee Lake and in some degree on Ryzen as well. It could be discussed if a slow functionality is a bug, but honestly let's just call it the "HPET bug".

While the reduced theoretical numbers of HPET timer calls are quite self-explanatory, the impact of the slow HPET can not be directly applied on game performance. It heavily depends on the usage of timer functions in the game/engine and the combination of resolution, details and graphics card in place. So to trigger the bug you normally run your games on something like FullHD, maybe an older, less GPU heavy game as well, and power it with an oversized graphics card. In effect the HPET bug will show on screen with a decreased average framerate and an additional stuttering every now and then.
https://www.overclockers.at/articles/the-hpet-bug-what-it-is-and-what-it-isnt



----------------------------------------
Because the HPET bug can be difficult to spot, we have implemented a timer benchmark for windows that sheds some light on your timer configuration and its performance. It's called TimerBench and mainly focuses on QPC because it's the defacto standard in Windows. There is a synthetic test to show the maximum number of possible timer calls and a game test to analyze the impact of your configured timer in 3D applications. It uses Unreal Engine 4 and DirectX 11, a famous combination for games.
https://www.overclockers.at/articles/the-hpet-bug-what-it-is-and-what-it-isnt
 
I've not found any documentation regarding the change. If you have any links I would appreciate that.
Yes, this link has previously been posted: https://techcommunity.microsoft.com...chments/gxcuf89792/NetworkingBlog/172/2/Evolution of Timekeeping in Windows.pdf (Page 8)

Unfortunately what happens is someone will assume a change is made for a certain reason, and because they don't find any official documentation, they then draw their own conclusions on it and make a video/post/tweet/etc.. on it like it's fact, and then misinformation spreads faster than, well, you know which virus...

My system works best with the default bcdedit settings. If I start messing with them, performance will at best remain the same, or it will degrade, but not improve. In other words, 0.496 ms works better than 0.5 ms. In theory there shouldn't be any noticeable difference, but there's obviously more going on under the hood than we're aware of when you enable useplatformtick.
 
Yes, this link has previously been posted: https://techcommunity.microsoft.com...chments/gxcuf89792/NetworkingBlog/172/2/Evolution of Timekeeping in Windows.pdf (Page 8)

Unfortunately what happens is someone will assume a change is made for a certain reason, and because they don't find any official documentation, they then draw their own conclusions on it and make a video/post/tweet/etc.. on it like it's fact, and then misinformation spreads faster than, well, you know which virus...

My system works best with the default bcdedit settings. If I start messing with them, performance will at best remain the same, or it will degrade, but not improve. In other words, 0.496 ms works better than 0.5 ms. In theory there shouldn't be any noticeable difference, but there's obviously more going on under the hood than we're aware of when you enable useplatformtick.
Thanks for that link, it's appreciated.
 