It's actually a balance.
Let me explain, but keep in mind this is only my understanding of how it works, from my experience studying CS:
The sensor takes a number of "pictures" of your mousepad. For simplicity, let's say they're taken at a 9x9 pixel resolution (a small, low-quality square), at 125 fps, and the mouse is connected to your PC at a USB rate of 125Hz (a 1:1 match, so there would be no need for an MCU/MPU). Let's consider the mousing surface an ideal one that the sensor can differentiate with ease, let's say each "picture" the mouse takes of the surface covers a perfect 1x1 mm square, and let's also consider the mouse optics and the sensor flawless.
Then we would have a mouse that, each frame, could tell you if you moved one position to any side, so the additions or subtractions on the cartesian plane would work a bit like this matrix:
(-1,1) (0,1) (1,1)
(-1,0) (0,0) (1,0)
(-1,-1) (0,-1) (1,-1)
So on both the x and y axes, the differences relative to your current position would be translated into mouse pointer movement: raw, no interpolation, no processing, 125 times per second, fully synchronized.
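To make the comparison step concrete, here's a toy sketch of how a sensor could turn two consecutive 9x9 frames into per-axis counts: try every candidate shift and keep the one where the overlapping pixels agree best. This is a naive exhaustive search for illustration, not how real sensor silicon does it (and sign conventions relative to physical motion are glossed over):

```python
import random

def estimate_motion(prev, curr):
    """Return the (dx, dy) in {-1, 0, 1}^2 that best aligns the
    previous 9x9 frame with the current one (most matching pixels
    in the overlapping region)."""
    best, best_score = (0, 0), -1
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            score = 0
            for y in range(9):
                for x in range(9):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < 9 and 0 <= sx < 9 and prev[y][x] == curr[sy][sx]:
                        score += 1
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best

random.seed(0)
frame_a = [[random.randint(0, 255) for _ in range(9)] for _ in range(9)]
# frame_b is frame_a shifted one pixel to the right (fresh texture enters on the left)
frame_b = [[frame_a[y][x - 1] if x > 0 else random.randint(0, 255)
            for x in range(9)] for y in range(9)]
print(estimate_motion(frame_a, frame_b))  # (1, 0): one count on the x axis
```

Each frame, the winning shift is what gets reported as the counts for that frame.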
What you would have here, though, is that past a certain speed on a given axis you would hit negative acceleration: the sensor starts losing counts once you move faster than a constant 125mm/s, i.e. 0.125m/s, which is laughable.
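That limit is easy to sanity-check: the perfect tracking speed is just the distance the sensor can measure in one frame, times the frame rate. A quick sketch (the function name is my own):

```python
def perfect_tracking_speed_mm_s(fps, max_counts_per_frame, mm_per_count):
    """Max speed before counts get lost (i.e., negative acceleration)."""
    return fps * max_counts_per_frame * mm_per_count

# The 9x9 sensor above: 1 count per frame, 1 mm per count, 125 fps
print(perfect_tracking_speed_mm_s(125, 1, 1.0))  # 125.0 mm/s = 0.125 m/s
```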
Let's put more fps on that sensor, letting it take 1000 "pictures" per second, and let's overclock the USB port up to 1000Hz.
Now negative acceleration would be hit at 1000mm/s, or 1m/s, which is much, much better than the example above.
We can, from here, fork into three ways of improving the mouse's perfect tracking speed:

a) We make the sensor take higher-resolution pictures.
So instead of a 9x9 pixel matrix in which we only add or subtract 1 unit on each axis, we can have a 17x17* matrix of pixels to compare with the previous input, hence the maximum change would be ±8 per axis**, which is not a standard bus width and would comfortably fit in one byte per axis. (This would translate to having higher DPI on the mouse.)
Here we have to make a compromise :
1- If we keep a 1 mm² area for snapshots, it means having much better scanning technology and being much more expensive, but it would have much higher DPI, probably a very low LOD (borderline unusable), and a perfect tracking speed of up to 16000mm/s, or a whopping 16m/s, in the most ideal conditions. This is the approach taken when the first laser mice came out, and it assumes perfect surface scanning. But that's not realistic, so let's look at the next paragraph:
2- If we think our mouse should not be too expensive, and we strive for a cheaper way to achieve good results, we make a compromise and have the mouse scan a wider area at the same 17x17* resolution defined above in a). Let's say we take a 10x10mm (one square centimeter) area for our snapshots. That would boost our DPI the same as in 1-, we would have a higher LOD (depending on the lens, from 1-2mm up to 1cm), and it would be a "cheaper" way to boost performance, but the perfect tracking speed would sit at 1600mm/s, or 1.6m/s.
Of course, if we forced the sensor to take the same pictures over 5x5mm (a quarter of a square centimeter), we would get 3200mm/s, or a more desirable 3.2m/s. But that would be far more expensive again, as the tech would need to be much sharper (and higher res).
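For what it's worth, the three figures in a) follow a single pattern: the quoted perfect tracking speed scales inversely with the snapshot's side length. A sketch that just reproduces the numbers above (the 16 m/s baseline is taken from case 1-, not derived independently):

```python
BASELINE_SPEED_MM_S = 16000  # case 1-: 17x17 pixels over a 1x1 mm snapshot

def tracking_speed_mm_s(area_side_mm):
    # The pattern in the figures above: speed falls as the snapshot widens.
    return BASELINE_SPEED_MM_S / area_side_mm

print(tracking_speed_mm_s(1))   # 16000.0 mm/s (case 1-)
print(tracking_speed_mm_s(10))  # 1600.0 mm/s  (case 2-)
print(tracking_speed_mm_s(5))   # 3200.0 mm/s  (the 5x5 mm variant)
```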
* An odd number just so there is a center row/column acting as the "previous x" and "previous y" reference; otherwise it would jitter A LOT if no software or MCU controlled what the default was, and one side (and either up or down) would be clipped by one count.
** The range would be -8 to 8, i.e. 17 possible values per axis, which needs 5 bits per axis, or 10 bits for both axes (rounded up to 2 bytes in practice).

b) We make the sensor take more pictures per second.
We still have our "ownage" 9x9 pixel matrix to compare over our last capture, but now we upgrade it accordingly :
- The sensor now captures at 5000fps
- We introduce an MCU that takes care of the excess input: we now have 5 scans for each output to the USB, so the MCU simply adds up the counts of the last 5 scans (and since our sensor is "perfect", we don't need to think about bugged or invalid scans). That makes the maximum per axis ±5, i.e. a range of -5..5 (11 possible values), which needs only 4 bits per axis.
So now our improved, high-fps sensor, with a perfect implementation of the MCU, would be tracking at up to 5000mm/s, or 5m/s.
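The MCU's job in this setup is basically an accumulator between USB polls. A minimal sketch, assuming perfect scans as the text does (the function name is mine):

```python
def mcu_report(scans):
    """Collapse one USB interval's worth of per-frame (dx, dy) counts
    into a single report. With 5000 fps and 1000 Hz USB there are 5
    scans per report, so each axis stays within -5..5."""
    dx = sum(s[0] for s in scans)
    dy = sum(s[1] for s in scans)
    return (dx, dy)

# Five frames, each moving one count right and one count up:
print(mcu_report([(1, 1)] * 5))  # (5, 5)
```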
But the more FPS, the less reliable the sensor gets :
- The scanning area and the resolution would stay the same, so taking 5x the number of snapshots would be much more expensive to implement without faulty scans (hence we would need the MCU to discriminate between good and bad scans, or a newer technology [such as laser was when it was introduced]).
- To keep the sensor from getting too expensive, we could reduce its resolution to save some space and silicon, optimize the results, and work with matrices of 4x4 or 5x5 pixels instead of 9x9 (with the MCU calculating the optimized movement).
- The MCU would cause a (minimal) delay and could also produce some "anomalies" depending on the implementation (jittering, input lag, angle snapping, etc), so it also has some letdowns.
Also, such a sensor would be more expensive than our "beloved" 1000fps one.

c) We make the sensor take pictures of a wider area.
So with the same resolution, we would have a wider area for the snapshots, so the chances of taking _good_ snapshots are greater, but that would increase the LOD dramatically.
Since we don't change the resolution or the fps, the maximum perfect tracking speed stays at 1m/s, but since we have a bigger area, we can try to modify other aspects of the mouse to gain performance:
1- We up the resolution of the sensor, so with a wider area we get more DPI, taking the improvement of a) on board. Since the area is greater, the LOD would be higher (you can't scan past a certain angle; you're restricted by the lens), but we would have a similar pixel density to what we started with, just at a much higher resolution. So with the same 17x17 resolution sensor as in a)2-, we would already have 1.6m/s of perfect tracking out of the box. And the wider the area, the more resolution we can fit at the same density, so we could crank the DPI way up: if it took 33x33 snapshots, we would have 3.2m/s of perfect tracking. Of course, a sensor taking 33x33 snapshots at 1000fps is far more expensive than one taking 17x17, but you need to compromise somewhere.
2- We up the fps of the sensor (and add an MCU). Since we've got a wider area and the same 9x9 resolution we started with, the number of good snapshots should be way higher, so anomalies would go down and be filtered by the MCU whenever they happen. We can achieve a huge fps by widening the area, but it's a compromise against resolution, so we would end up with a perfect tracking speed, in mm/s, equal to however many fps the sensor runs at (in this example).
3- We balance both and manage to improve resolution AND fps (this seems to be happening currently with the best optical mice out there), so we get a compromise between high DPI/CPI/PPI and lots of fps. We get the benefits of higher perfect tracking speed and more resolution without making the sensor much more expensive (thus staying viable for scale market economies), so we're talking around 2-5m/s of perfect tracking speed and CPI as high as around 4000, coupled together. If we add an MCU here, we can get different DPI steps by modifying both the fps and the resolution at which the sensor works, so each step will have a different max tracking speed for a given implementation of the sensor/lens/MCU.
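That last point, DPI steps changing the max tracking speed, can be sketched numerically. All figures below are hypothetical illustrations (no real mouse implied), using the same "distance per frame times frames per second" relation as the rest of the text:

```python
MM_PER_INCH = 25.4

def dpi_step_speed_mm_s(cpi, max_counts_per_frame, fps):
    """Perfect tracking speed for one DPI step: the distance measurable
    in a single frame, times frames per second."""
    mm_per_count = MM_PER_INCH / cpi
    return fps * max_counts_per_frame * mm_per_count

# Hypothetical steps: higher CPI here comes with a smaller per-frame
# tracking range, so the max speed drops as CPI rises.
for cpi, max_counts in [(400, 80), (800, 120), (1600, 160)]:
    print(cpi, "CPI ->", round(dpi_step_speed_mm_s(cpi, max_counts, 1000)), "mm/s")
# 400 CPI -> 5080 mm/s, 800 CPI -> 3810 mm/s, 1600 CPI -> 2540 mm/s
```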
Sorry for the wall of text, and I take responsibility for any inaccuracies you might find, of which I guess there are many.