I still have a question on this topic, related but more on the technical/practical side of configuring your sensitivity in-game:
Does a sensitivity value other than 1 mean you're potentially discarding information from the mouse input through in-engine rounding?
So, is there a difference in rounding when your sensitivity value has a decimal part?
0.022 * 1.0 = 0.022 (nothing to round here)
0.029 * 1.7125 = 0.049|6625 (does the part after the | get clipped?)
Asking this since I was playing on a "weird multiplier" that produced more decimals than expected in these operations:
0.022 * 1.55 = 0.034|1
And it felt very slightly inconsistent, especially on very fast swipes (up to 3 m/s), since that value then gets multiplied by the counts the mouse sends at its CPI, so any rounding could compound into a large error.
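One thing worth checking before worrying about decimal digits: if the engine stores these constants as binary floating point (Quake-derived engines use C floats), then *none* of these decimal values are exactly representable to begin with, so some rounding happens before any multiplication. A quick Python sketch to see the exact stored values (Python uses doubles, so the effect is even larger with C floats; this is an illustration of the principle, not the engine's actual code path):

```python
from decimal import Decimal

# None of these decimal constants are exactly representable in binary
# floating point, so rounding happens at the moment they are stored.
for x in (0.022, 1.55, 0.0341):
    print(x, "->", Decimal(x))  # Decimal(float) shows the exact binary value

# The product 0.022 * 1.55 therefore carries binary rounding error
# regardless of how many decimal digits it "looks" like it has on paper.
product = 0.022 * 1.55
print(product, "->", Decimal(product))
```

So "0.022 * 1.0 = 0.022, nothing to round" is only true in decimal; in the binary representation the value was already rounded once when stored.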
Another question: in Quake/Source engines, should values below 1 be treated differently (as "discarding counts"), or is the same rounding behavior present, only more prominent since there will potentially be more decimals?
0.022 * 0.35 = 0.007|7
360 / 0.0077 ≈ 46753 "possible aiming points"
Or, if the rounding is done by truncation (discarding the extra digits):
360 / 0.007 ≈ 51428 "possible aiming points"
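The two scenarios above can be reproduced mechanically. Note this just replays the arithmetic under the (unverified) supposition that the engine might truncate the degrees-per-count value to three decimals; it is not a claim about what any engine actually does:

```python
import math

M_YAW = 0.022
SENS = 0.35
deg_per_count = M_YAW * SENS   # 0.0077 degrees of yaw per mouse count

# "Possible aiming points" = how many counts a full 360-degree turn takes.
points_full      = 360 / deg_per_count  # full precision kept
points_truncated = 360 / 0.007          # hypothetical truncation to 3 decimals

print(math.floor(points_full))       # ~46753
print(math.floor(points_truncated))  # ~51428
```

More "aiming points" here means a finer angular step per count, but each count also moves the crosshair less than intended if truncation really happened.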
So if this were true (granting all the suppositions about the rounding), lower sensitivity would be better in every case under this interpretation of the engine, so it would make sense to set the mouse to the highest CPI possible and the in-game sensitivity as low as possible.
However, we don't know what the rounding scheme is, so chances are it would not be as precise, because of rounding and discarded counts.
I mean, ultimately the game engine takes the counts the mouse has sent and transforms them into something like:
Y_axis_CPI_input * m_pitch * sensitivity + (enable_accel * (accel parameters)) = Y axis position increment
X_axis_CPI_input * m_yaw * sensitivity + (enable_accel * (accel parameters)) = X axis position increment
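The two formulas above can be sketched as a function (acceleration omitted for clarity; the names are illustrative, not the engine's actual variables, and real Quake-style code also applies a sign convention, e.g. positive mouse X *decreasing* yaw — only magnitudes matter here):

```python
# Hedged sketch of the per-frame mouse transform described above.
def view_delta(counts_x, counts_y, m_yaw=0.022, m_pitch=0.022, sensitivity=1.0):
    """Convert raw mouse counts into view-angle increments (degrees)."""
    yaw_delta   = counts_x * m_yaw   * sensitivity  # X axis position increment
    pitch_delta = counts_y * m_pitch * sensitivity  # Y axis position increment
    return yaw_delta, pitch_delta

# One inch of motion on a 400 CPI mouse at sensitivity 0.35 (example A below):
print(view_delta(400, 0, sensitivity=0.35))
```

Every factor in that product is a float, so whichever knob carries the fractional part (sensitivity or m_yaw), the multiplication itself is done in floating point either way.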
A - Let's take the X axis for example, and a default 400 CPI mouse delivering all 400 counts (one inch of motion):
400 * 0.022 * 0.35 = 3.08 "counts" in X plane.
(Here we depend on how that's translated on-screen; it could be degrees, or some other way of "counting" movement.)
B - Same but with a 1600CPI mouse :
1600 * 0.022 * 0.35 = 12.32 "counts" in X plane.
C - Let's use an integer sensitivity value, still at 1600 CPI:
1600 * 0.022 * 1 = 35.2 "counts" in X plane.
D - Integer sensitivity again, but with a different m_yaw, still at 1600 CPI:
1600 * 0.01 * 1 = 16 "counts" in X plane.
E - Let's match the number of "counts" we had in B, but with no chance of rounding, using only m_yaw:
1600 * 0.0075 * 1 = 12 "counts" in X plane.
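For reference, the five cases A–E can be tabulated in one place, flagging which results land on an integer and so could not be hurt by the hypothetical truncation of the fractional part (the "integer/fractional" distinction is the post's supposition, not verified engine behavior):

```python
def counts(cpi, m_yaw, sensitivity):
    # "Counts" in the text's sense: output per inch of mouse motion.
    return cpi * m_yaw * sensitivity

cases = {
    "A": counts(400,  0.022,  0.35),  # 3.08
    "B": counts(1600, 0.022,  0.35),  # 12.32
    "C": counts(1600, 0.022,  1),     # 35.2
    "D": counts(1600, 0.01,   1),     # 16
    "E": counts(1600, 0.0075, 1),     # 12
}
for name, value in cases.items():
    kind = "integer" if abs(value - round(value)) < 1e-9 else "fractional"
    print(name, value, kind)
```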
So, from the point of view of taking the most advantage of your mouse's CPI, would it make more sense to set sensitivity to 1 and then modify only m_pitch/m_yaw in Q3/Source-based games, so counts don't get either discarded or rounded in the final calculation?
In conclusion, and as the "mother" question:
Starting at sensitivity 1 and lower, should we pick our m_pitch/m_yaw very carefully so the result comes out as an integer once multiplied by the mouse's CPI?
The chance of getting integers is greater the higher the CPI, so raising the CPI might make sense then?
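The "pick m_yaw so that CPI * m_yaw lands on an integer" idea can be checked mechanically. A sketch (the candidate list is mine for illustration, and the tolerance just absorbs binary float noise; whether integer results actually matter depends on the unverified truncation supposition above):

```python
def integer_friendly(cpi, m_yaw, tol=1e-9):
    """True if cpi * m_yaw lands (within float noise) on an integer."""
    x = cpi * m_yaw
    return abs(x - round(x)) < tol

# Hypothetical m_yaw candidates to screen against common CPI steps.
candidates = [0.022, 0.02, 0.0125, 0.01, 0.0075, 0.005]
for cpi in (400, 800, 1600, 3200):
    ok = [m for m in candidates if integer_friendly(cpi, m)]
    print(cpi, ok)
```

Note the default 0.022 never hits an integer at these power-of-two-style CPI steps (it needs a CPI that is a multiple of 500), which is consistent with the intuition that "round" m_yaw values like 0.01 or 0.0075 are the integer-friendly ones.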
PS: I hope I made sense here ;x
All the mistakes and misconceptions in this post are mine and entirely mine; if you copy my mistakes I'll send the FBI to shut you down immediately.
EDIT: Yes, I accept "it could be a placebo effect" as well, but I'd rather go by the numbers.