microe's mousetester program is undoubtedly among the most useful tools for studying the tracking behavior of mice. most commonly it is used for reasonably simple tasks such as estimating malfunction speeds and identifying tracking or surface compatibility issues. however, i believe mousetester's plots have a lot more to offer; indeed, in many of my posts i've used mousetester for more involved measurements and investigations, such as measuring sensor framerate, studying mcu processing, and so on.
this post is intended to share this knowledge so that we can better analyze, review, and evaluate mice, especially the technical details of their tracking behavior. one can argue that ultimately what matters is how well you perform with your mouse and how much confidence you have in its tracking. but we are all human and subject to biases: if people tell you that a particular mouse has a certain amount of smoothing, or that another mouse feels "raw" and responsive, it is very difficult to ignore this information when conducting your own evaluations. a similar case can be made for first impressions of a mouse. anyway, hopefully these technical details are interesting in themselves, even though some of them may not necessarily affect our perception of tracking. the goal here is not simply to answer questions like "does this mouse have smoothing?", but to explain where the data in mousetester plots come from and why they can provide clues to the operation of a mouse's firmware and sensor.
almost all of this pertains to mice with image correlation sensors, i.e. all the mouse sensors made by agilent/avago/pixart and stmicroelectronics. as above8 has recently brought to our attention, twin-eye sensor based mice and ball mice are alternatives to image correlation sensor mice and fundamentally operate differently. things like measuring cpi can obviously be applied to those mice, but others, such as framerate, are properties specific to image correlation sensors. in what follows, "sensor" will always refer to an image correlation sensor unless stated otherwise.

theory of the data pipeline:
before diving into all the things that can be learned from the plots, it is necessary to understand where exactly the numbers come from and how exactly the data are processed as they travel from the sensor to your computer.
this diagram summarizes the primary components relevant to tracking and their communication of motion data.
in certain cases, some of the components are integrated into a single package, such as the sensor and illumination source in the avago 9500/9800 and pixart pmw3366, or the sensor and mcu (probably not a full mcu, just something that can communicate over usb) in soc packages including the stm mlt04 and others.
starting from the bottom (physical motion) up, let's consider each stage in the pipeline.

from the real world to the sensor:
warning: some of the details are from my own inaccurate understanding of how sensors work
image correlation sensors operate by capturing images of the mousepad surface and determining motion by finding the overlapping regions between consecutive images. typically the physical size of the imaged area is around 1mm x 1mm, and the sensor captures and processes several thousand frames per second. in current sensors (afaik), the resolution of each frame is between 19 and 36 pixels on each side.
some details of the correlation process are described by this patent
and you can find similar patents via a search like this
. while these details may be quite involved, the key point to keep in mind is that at the lowest level, the motion data are updated at most once per frame.
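to make this concrete, here's a toy brute-force version of the correlation step in python. this is my own illustrative sketch, not the actual hardware algorithm; real sensors implement this in dedicated silicon with the refinements described in the patents:

```python
def best_shift(frame_a, frame_b, max_shift=2):
    # brute-force search for the (dx, dy) shift that best aligns
    # frame_b with frame_a, by minimizing the mean squared difference
    # over the overlapping region
    h, w = len(frame_a), len(frame_a[0])
    best, best_err = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    ya, xa = y + dy, x + dx
                    if 0 <= ya < h and 0 <= xa < w:
                        err += (frame_a[ya][xa] - frame_b[y][x]) ** 2
                        n += 1
            if n and (best_err is None or err / n < best_err):
                best, best_err = (dx, dy), err / n
    return best

# frame_b is frame_a shifted left by one pixel (new surface detail
# enters on the right), so the estimated motion is one pixel in x
frame_a = [[3, 1, 4, 1],
           [5, 9, 2, 6],
           [5, 3, 5, 8],
           [9, 7, 9, 3]]
frame_b = [[1, 4, 1, 2],
           [9, 2, 6, 7],
           [3, 5, 8, 1],
           [7, 9, 3, 6]]
print(best_shift(frame_a, frame_b))  # (1, 0)
```

note that this estimates motion to whole-pixel precision only; real sensors interpolate to get sub-pixel motion, which is then scaled to counts at the configured cpi.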
for one axis, this is how data in a simple image correlation sensor would relate to the actual motion.
in practice, frame timings are not perfectly stable; however this image still applies.
why frame timings vary:
a) the oscillators that define the sensor's clock rate (which controls the framerate) are not perfectly stable. http://www.maximintegrated.com/en/app-notes/index.mvp/id/2154
>add data/picture/video showing clock drift
b) when there is not enough illumination on the surface, the sensor may automatically increase the exposure time to let in more light. if this exposure time exceeds the nominal frame period, the actual frame period must be stretched out to accommodate it.
c) sensors have rest modes where after a period of inactivity, the framerate drops in order to save power and/or reduce the wear on the components. for some sensors, it is possible to disable these modes.
d) some sensors (9800, 3988, 3310, 3366) have dynamic framerate scaling, where the framerate automatically steps up once the motion exceeds certain velocities. i'm not sure about the exact purpose, but the lower framerate modes probably allow for more accurate tracking, as there is more time available for processing, and any systematic errors from the correlation process accumulate less frequently. on the other hand, high framerates are necessary for high max tracking speeds, because at low framerates with fast motion there is less overlap between consecutive frames.
finally, (not sure about this) the processing latency may not necessarily be constant even if the frames have perfectly steady timing
(this is 80% speculation)
in this picture i depicted how a sensor with infinite precision would respond. probably, some very high precision data is available somewhere in the sensor's internal pipeline, and the counts are derived by downscaling this data. since counts must be represented by integers, the sensor must keep track of the quantization error during the downscaling. for instance, if the sensor is moving at a constant 1 inch/second, and the sensor is set to 400cpi with a framerate of 1000fps, between each frame is 1/1000 inches of motion, which would ideally correspond to 0.4 counts. suppose the sensor has enough precision to exactly recognize this amount. in storing the data, it cannot store 0.4; it can only store integers. and it cannot simply always round 0.4 down to 0, because that would mean no motion... and a sensor that doesn't even track 1 inch/second is a really, really bad sensor.
what it actually does is keep track of the error like this:
1. begin with 0 error. the first 0.4 rounds down to 0. store the difference, 0.4, as the error and add it onto the next count
2. the 2nd count is also 0.4, but we have 0.4 of error from before. adding them gives 0.8, which rounds up to 1. the error is now -0.2
3. the 3rd count is 0.4. adding the error gives a total of 0.2, which rounds down to 0, so the error is now 0.2
4. the 4th count is 0.4. adding the error gives a total of 0.6, which rounds up to 1, so the error is now -0.4
5. the 5th count is 0.4. adding the error gives a total of exactly 0
with this procedure the error from rounding does not accumulate and can always be kept between -0.5 and 0.5 counts.
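this bookkeeping is essentially error-diffusion quantization. a minimal python sketch of the scheme as described above (my own illustration, not the actual sensor implementation):

```python
def quantize(ideal_counts_per_frame):
    # carry the rounding error forward so it never exceeds +-0.5 counts
    err = 0.0
    out = []
    for ideal in ideal_counts_per_frame:
        total = ideal + err
        count = round(total)   # store the nearest integer
        err = total - count    # leftover goes onto the next frame
        out.append(count)
    return out

# 400cpi at 1000fps while moving at 1 inch/second -> 0.4 counts/frame
print(quantize([0.4] * 5))  # [0, 1, 0, 1, 0]
```

over any long stretch, the integer counts sum to within half a count of the ideal motion, which is exactly the "no accumulating error" property described above.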
here and above i've assumed that it is the two most recent frames being compared to calculate the motion. in the patents there is mention of recycling the reference frame (the frame against which the most recent frame is compared), so my assumption is not true in general. still, the error propagation scheme i described should exist at some level; otherwise there would be +-0.5 counts of accumulating error whenever the reference frame is updated. this would mean that lower sensor cpi levels would have significant tracking variance (changes in resolution depending on speed) issues, but they don't.
finally there is the possibility of sensor smoothing, which will be discussed in its own section.

from the sensor to the mcu:
i have no idea how soc sensors work, but the same concepts apply
the mcu serves as the bridge between the computer and the sensor. as the computer does not have direct access to the sensor's registers or memory, the mcu must periodically read the motion count data from the sensor's registers. if you want to know the specific details of this (e.g. for writing mouse firmware), the adns9800 datasheet
explains everything. other avago/pixart sensors work similarly. but the tl;dr of the procedure is that
-the mcu and sensor communicate via SPI
-the mcu is responsible for loading the SROM (specific to avago/pixart) and configuring the sensor
-once the sensor is set up and running, it accumulates motion data into its motion delta registers
-after the SROM is loaded and the sensor is configured, the mcu simply reads the motion delta registers periodically. the registers are reset to zero upon each read.
if we extend the previous figure, which related the sensor's data to real world motion, to relate the mcu's data to the sensor's data, we'd have something which looks like this:
it is clear that the data in the mcu is simply the sum of the sensor's motion data since the previous read.
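as a toy model (my own sketch, not real firmware), the sensor side behaves like an accumulator that the mcu reads and clears:

```python
class MotionRegisters:
    # toy model of the sensor's motion delta registers: counts
    # accumulate between reads, and each read returns the sum
    # and resets the registers to zero
    def __init__(self):
        self.dx = self.dy = 0

    def accumulate(self, dx, dy):
        # called once per sensor frame with that frame's counts
        self.dx += dx
        self.dy += dy

    def read(self):
        # called by the mcu once per poll
        out = (self.dx, self.dy)
        self.dx = self.dy = 0
        return out

regs = MotionRegisters()
for frame_dx in (1, 0, 2):   # three sensor frames between two mcu reads
    regs.accumulate(frame_dx, 0)
print(regs.read())  # (3, 0)
print(regs.read())  # (0, 0) -- no new motion since the last read
```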
almost always, the frequency of the register reads is lower than the sensor's framerate; in other words, the value from one read corresponds to data from multiple frames. if the framerate is 2000hz (frame period = 0.5ms), and the mcu reads every 1ms, then it's straightforward: each read gives the sum of data from two frames, since the ratio between the periods is exactly 2. but what if the ratio isn't an exact integer? in the drawing above, the period of register reads is approximately 2.5 times the frame period. correspondingly, we see in the drawing that the first read corresponds to the sum of 3 frames' data, the second to the sum of 2, the third to the sum of 3, i.e. a 3,2,3,2,3,2,... pattern. if the ratio is 2.8, the corresponding pattern would be 3,3,3,3,2,3,3,3,3,2... .
essentially, whenever the ratio between the register read rate and the sensor framerate isn't an integer, periodic patterns unrelated to the raw data from the sensor appear in the mcu's data. i'm not sure what to call this... maybe moire? the same phenomenon was described here
for the mismatch between usb report rate and display refresh rate and was referred to as microstutters, but i think moire is a more precise term given its periodic nature.
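assuming, for illustration, that the reads and frames start out aligned (the real phase offset is arbitrary), the frames-per-read pattern for any ratio can be computed directly:

```python
import math

def frames_per_read(ratio, n_reads):
    # number of whole sensor frames completed between consecutive
    # register reads, with reads at t = k * ratio (in units of the
    # frame period) and a frame completing at every integer time
    return [math.ceil(k * ratio) - math.ceil((k - 1) * ratio)
            for k in range(1, n_reads + 1)]

print(frames_per_read(2.5, 6))   # [3, 2, 3, 2, 3, 2]
print(frames_per_read(2.8, 10))  # [3, 3, 3, 3, 2, 3, 3, 3, 3, 2]
```

whenever the ratio is irrational (or the phase drifts), the pattern never repeats exactly, but it still looks locally periodic in the data.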
the stability of the register reads depends on the firmware. for properly designed firmware with the register reads synchronized to the usb polls, jitter in the reads can be less than 1us, i.e. essentially 0. for weaker/slower microcontrollers this may be more difficult to achieve; however, i believe any modern gaming mouse's mcu should be completely capable of this. measuring this jitter directly requires probing the SPI bus with a logic analyzer or oscilloscope, and it is difficult to discern in mousetester graphs: it would only show up as changes such as (32323232323 -> 323233223232) in the data, and it is difficult to tell whether such a change is due to sensor framerate fluctuations or register read timing jitter.

from the mcu to your computer:
the mcu and computer communicate via usb or ps/2. no one uses ps/2 for mice nowadays so let's ignore that for now. unfortunately usb is quite a complicated protocol and i haven't studied it in enough depth to fully understand the details relevant to this. but i'll share what i do know and hopefully also clear up some misconceptions and inappropriate nomenclature.
>tbc

the big picture:
the above primarily discussed timing issues as the data are transmitted from the sensor to the computer.

basic measurements and analysis

resolution:
i'm not sure if there are universally agreed-upon definitions of cpi/dpi, given how many stages there are in the data pipeline, including those in the computer itself that i omitted from the previous diagram. here i'll take cpi to mean the counts per inch of motion received by mousetester or any other program using raw input. due to potential processing and scaling by the mcu and/or the driver, this may or may not be the same as the counts the sensor sends to the mcu.
the best way i know to measure cpi is to
1. place a sheet of paper of known size (or a ruler) onto the mousepad
2. align the part of the cable closest to the mouse with the edge of the paper
3. align the mouse so that it is perpendicular to the paper
4. start collecting data in mousetester
5. move the mouse, with as little rotation as possible and at a reasonable speed, until the cable reaches the other edge of the paper. the most important thing is to make sure your mouse is in the same orientation when you finish as when you started
6. stop collecting and plot the data in mousetester with Plot Type: X vs Y. zoom in (scroll up) on the edge corresponding to where the motion stopped, until you can read off the xCounts value for the last dot
7. this is the total counts. divide by the actual motion in inches to get the cpi
8. as always, repeat the measurement a few times to get a rough idea of its precision. you should be able to get ~1% consistency
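the final division is trivial, but as a concrete example (the xCounts value here is hypothetical):

```python
def measured_cpi(total_counts, distance_mm):
    # counts per inch = total counts / distance travelled in inches
    return total_counts / (distance_mm / 25.4)

# e.g. moving along the 210mm short edge of an a4 sheet and reading
# a hypothetical 3320 xCounts from the last dot of the X vs Y plot:
print(round(measured_cpi(3320, 210), 1))  # 401.6
```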
keep in mind:
0. ultimately cpi can depend on many things, including the surface you're on, the height of the mousefeet, etc... additionally, some people have reported that for some mice (the 1st finalmouse and some others i can't remember) lifting the mouse can trigger a recalibration process that somehow affects the cpi significantly.
1. if you don't hold the mouse perfectly perpendicular to the paper/ruler, the xCounts value will underestimate the actual motion by a factor of cos(angle the mouse was off by). but due to how the cosine function works, it's quite forgiving. for instance if you're off by 10 degrees throughout the whole motion, which would be easily noticeable, that's only a ~1.5% decrease in the total counts.
2. if the angle of the mouse you start with isn't the same angle you end with, the actual distance the sensor travels can be a bit more or less than what you wanted. if the end of the cable is 5cm from the sensor and you're off by 10 degrees at the end (again, really noticeable, and you should do much better), the sensor has moved 5cm * sin(10deg) = 8.7mm more or less than you intended. for ~30cm of total motion, that's a ~3% error
3. variance. the cpi changes slightly with the actual speed. unfortunately there isn't much public data on this... usually people throw around numbers in the percent range though. this probably won't affect the consistency of your measurements if you use a reasonable speed like 10cm/s.

max speed:
occasionally we make the distinction between "perfect control speed", the maximum speed at which the mouse tracks with a reasonably accurate cpi, and "malfunction speed", the maximum speed the mouse can move without outputting garbage. the meaning of these is more thoroughly described in this famous article from esr:
actually, without some way to measure the actual physical speed, it is impossible to know either of these speeds in mousetester or any other software-only method, as the resolution of the mouse (by the definition of perfect control) changes in the region between the perfect control speed and the malfunction speed. rather, what we usually get from mousetester is the "max reported speed": the highest counts per second divided by the assumed resolution.
for instance, in that plot of the wmo's speed response at 1000hz polling, deviations from the perfectly straight lines can be seen immediately above 1.5m/s. the maximum count response from the mouse is achieved at a speed of 1.65m/s; however, the resolution has dropped off slightly at this speed, and the reported counts correspond to a speed of ~1.57m/s at the low-speed cpi (~401 in their case). this is the speed we'd see in a mousetester xvelocity plot.
generally it is assumed that this maximum reported speed is close to the perfect control speed. as seen in the example of the wmo, this is a decent approximation, so we tend not to nitpick this detail. furthermore, in modern sensors, afaik the change in tracking beyond the perfect control point is quite obvious.
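in numbers, the wmo example above works out like this (the counts/s value here is back-calculated for illustration):

```python
def reported_speed_mps(counts_per_second, assumed_cpi):
    # the speed a mousetester velocity plot shows: counts/s divided
    # by the cpi it assumes, converted from inches/s to m/s
    return counts_per_second / assumed_cpi * 0.0254

# the wmo's peak count output (~24790 counts/s, back-calculated),
# read off against its low-speed cpi of ~401, shows as ~1.57 m/s
# even though the true speed at that point is ~1.65 m/s
print(round(reported_speed_mps(24790, 401), 2))  # 1.57
```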
the measurement of max reported speed is straightforward: measure (or assume) the cpi, collect data in mousetester while swiping the mouse hard (making sure to keep the mouse on the mousepad), and plot xvelocity. the speed you're looking for is the highest speed achieved before the curve suddenly drops.
here are example plots for a g100s:
a swipe without malfunctions ([email protected], black supermat)
a malfunctioning swipe exceeding the max speed
the max speed here is around 2.6m/s.
note that implementations of modern sensors (e.g. 3310, 3988, 3366) without 8-bit data limitations can track fine even above 5 meters/second. it may simply not be possible to accelerate the mouse above the perfect control speed in the space you have.
i was able to reach 8m/s with a g303 using the full ~50cm diagonal of my mousepad by standing up and swiping the mouse off the edge of the pad. to distinguish between a cutoff of tracking due to malfunction and one due to simply going over the edge of the pad, an X vs Y plot (as in the cpi measurement) can show how much motion was actually registered. if it is noticeably less than the actual distance for which the mouse was on the pad, the cutoff is most likely a malfunction.

integer size limitations:
counts are represented by two's complement integers. wikipedia has all the details, but the only important things we need to know are that 8bit integers are limited in range to -128 to 127, and 16bit integers to -32768 to 32767.
at every stage of the data pipeline, from the sensor to the computer, there is the possibility of the counts exceeding the corresponding range, i.e. integer overflow, especially (and almost exclusively) with 8bit data. in this case the data are either clipped to the edge of the range or "wrapped around" it. the latter case is (afaik) rare.
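both behaviors are easy to model in a few lines (illustrative sketch):

```python
def wrap_int8(x):
    # two's complement wrap-around of an out-of-range count
    return ((x + 128) % 256) - 128

def clip_int8(x):
    # clipping (saturating) the count to the 8bit range
    return max(-128, min(127, x))

print(wrap_int8(130), clip_int8(130))    # -126 127
print(wrap_int8(-130), clip_int8(-130))  # 126 -128
```

wrap-around is far more destructive in practice: a fast swipe in one direction suddenly produces large counts in the opposite direction, while clipping merely caps the reported speed.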
here is one example of the latter case: the azio exo1
where the counts should exceed 127, they simply wrap around to the negative end of the range. as the sensor (pmw3320) actually outputs 12bit integers (range -2048 to 2047), this behavior comes from the firmware: either the mcu isn't storing or reading the upper bits from the sensor, and/or its usb motion data is 8bit. see how there are points in between the "wrap"? that's a sign of mcu smoothing, which will be explained later.
clipping is what happens more commonly; it can be seen in the esreality mousescore article for the [email protected]. it also occurs in (all?) 3090 mice for fast motion at high cpi settings. take a look at the plots for 3090 mice in this link
for the 3090 and many other sensors, the 8bit data, along with the issues described above, begin with the sensor itself having only a single 8bit register per axis for motion counts. the avago adns9500 was (i'm pretty sure) the first avago sensor to support 16bit motion data, using 2 registers per axis to store motion counts. the 9800, 3988, 3310, 3366, and presumably future gaming sensors from pixart all do the same. with firmware that is aware of the 16bit data, mice with these sensors allow ridiculously high cpi settings without the perfect control speed being limited by the aforementioned issue; at least, not until mice need to track at hundreds of meters per second or at 19385762cpi.
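the headroom 16bit data gives can be estimated with some quick arithmetic (assuming, for simplicity, one signed integer per axis per usb report; real report formats vary):

```python
def max_reportable_speed_mps(int_bits, poll_hz, cpi):
    # highest speed expressible before the per-report count saturates,
    # assuming one signed int_bits-wide integer per axis per report
    max_counts = 2 ** (int_bits - 1) - 1
    return max_counts * poll_hz / cpi * 0.0254

print(round(max_reportable_speed_mps(8, 1000, 400), 2))     # 8.06
print(round(max_reportable_speed_mps(8, 1000, 12000), 2))   # 0.27
print(round(max_reportable_speed_mps(16, 1000, 12000), 2))  # 69.36
```

this shows why 8bit data becomes crippling at high cpi: at 12000cpi and 1000hz, the counts clip at a mere ~0.27m/s, while 16bit data pushes the limit far beyond any humanly achievable swipe.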
>to be added

not so basic things
>to be added
>to be added
>to be added

more technical things (which are not always important)
sensor framerate and framerate shifts:
>to be added
>to be added
>to be added

other things where mousetester is not the best tool
>to be added
>to be added