Originally Posted by nleksan
I can say with complete confidence that anyone claiming they can hear a difference in quality over a top of the line sound card compared to an equivalent priced, or even twice as expensive DAC, is hearing things that aren't physically there
There are three camps here. Conventional wisdom states that for a system to be bit perfect it must act as a pass-through device, not altering the digital data in any fashion through matrixing, DSP, or other processing. The idea is that the output is exactly the same as what was put in. This camp's reasoning is that bits are just bits and digital is just ones and zeros, so if a one stays a one and a zero stays a zero, the data has passed through untouched and is therefore bit perfect. By that logic, all bit perfect signals should be created equal.
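As a rough illustration of what this first camp means by bit perfect, here is a minimal sketch in Python that does nothing more than compare two PCM streams byte for byte via a hash. The file names are hypothetical; the point is that under this view, a matching hash is the whole story.

[CODE]
import hashlib

def pcm_fingerprint(path: str) -> str:
    """Hash the raw PCM bytes of a stream so two captures can be compared bit for bit."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the source material and a loopback capture of the output.
source = pcm_fingerprint("source_track.pcm")
capture = pcm_fingerprint("loopback_capture.pcm")

# Camp one's claim: if the hashes match, the chain is bit perfect and nothing else matters.
print("bit perfect" if source == capture else "data was altered")
[/CODE]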
The second camp states that bit perfect means the bits are exact, but that jitter may still be introduced. For non-real-time work (running an application), bit-perfect transfer is a given because the data are buffered and sent in packets that are simply resent if there are any errors (otherwise applications would crash constantly). Audio, on the other hand, is real time. Bit perfect implies that the data and sample rates match; it does not mean jitter isn't introduced within those same samples.
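A minimal sketch of camp two's point, with made-up numbers: the sample values (the bits) can be identical while the instants they are clocked out drift around the nominal sample period.

[CODE]
import random

SAMPLE_RATE = 44_100                 # nominal CD sample rate
PERIOD_NS = 1e9 / SAMPLE_RATE        # ideal spacing between samples, about 22676 ns
JITTER_RMS_NS = 2.0                  # assumed clock jitter, purely illustrative

samples = [0, 12000, 23000, 30000, 32000]   # made-up 16-bit sample values

for n, value in enumerate(samples):
    ideal_time = n * PERIOD_NS
    actual_time = ideal_time + random.gauss(0.0, JITTER_RMS_NS)
    # The value (the "bits") is untouched; only the moment it is clocked out moves.
    print(f"sample {n}: value={value:6d}  ideal={ideal_time:10.1f} ns  actual={actual_time:10.1f} ns")
[/CODE]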
Finally, the third camp. Let's all start by agreeing that audio is a real-time process. Even if an application loads data into memory for processing, everything before that and the whole operation after it happen in real time. Real-time processes in a computer take the form of a square wave, specifically pulse width modulation. This PWM signal is an analog representation of what we conceptualize as a digital signal, and it is created from voltage supplied by the power supply. It has both amplitude characteristics and timing characteristics; the timing, or duty cycle, along with the amplitude determines the frequency response of that square wave. A computer is made up of billions of transistors, all switching very quickly in response to changes in logic (the mathematical algorithms run by the operating system and software). Based on the input voltages, those logic switches create a new version, a duplicate, of the square wave (theoretically identical, but possibly altered). That new version of the square wave is also created from power in the power supply. Because audio is real time, there is no error correction that can be applied to this square wave; whatever wave form results IS your music.
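To make the timing-plus-amplitude picture concrete, here is a small numpy sketch with invented values. It is a crude model: each time point of a square wave is nudged by jitter and the "high" level rides on a bit of supply noise, so the copy is no longer the same analog waveform even though it still reads as clean ones and zeros.

[CODE]
import numpy as np

rng = np.random.default_rng(0)

FS = 1_000_000        # simulation rate in Hz, illustrative only
F_CLOCK = 10_000      # square wave (clock) frequency in Hz
DUTY = 0.5            # nominal duty cycle
JITTER_S = 200e-9     # RMS timing error in seconds, an assumption
NOISE_V = 0.02        # RMS supply noise on the high level, an assumption

t = np.arange(0, 0.005, 1 / FS)

# Ideal square wave: high whenever the phase falls inside the duty-cycle window.
phase_ideal = (t * F_CLOCK) % 1.0
ideal = (phase_ideal < DUTY).astype(float)

# "Real" square wave: same comparison, but with each point's timing perturbed
# and power-supply noise added to the level.
phase_jittered = ((t + rng.normal(0.0, JITTER_S, t.size)) * F_CLOCK) % 1.0
real = (phase_jittered < DUTY).astype(float) + rng.normal(0.0, NOISE_V, t.size)

# Both still look like ones and zeros, but they are not the same analog waveform.
print("max deviation from ideal:", np.max(np.abs(real - ideal)))
[/CODE]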
Looking at the concept of bit-perfect, it's arguably impossible to have bit perfect playback in a real-time system because, strictly speaking, there are no bits. If the power supply introduces noise or there is jitter on the square wave, the result is a square wave that is not identical to the original. Because the square wave is an analog signal, it is still susceptible to noise and distortion. A square wave, however, reacts a little differently than its sine wave counterpart. Jitter is an alteration of the duty cycle; when that jitter hits the digital interface chips, a DAC for instance, it is seen as an amplitude error and alters the frequency response. Amplitude distortion itself is created by noise voltages that add to or subtract from the amplitude of the square wave. This introduces harmonic content into the square wave that shouldn't exist in the music. The square wave may still resemble a one or a zero, but it contains additional frequency content. So as far as the bits are concerned it's bit perfect, but with additional harmonic content that shouldn't be there.
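For a rough feel for the numbers, here is a back-of-the-envelope sketch using the standard slew-rate argument: a timing error dt on a signal changing at rate dV/dt shows up as an amplitude error of roughly dt * dV/dt, and the classic jitter-limited SNR for a sine is -20*log10(2*pi*f*tj). All the figures are assumptions picked for illustration, not measurements of any particular card or DAC.

[CODE]
import math

# Illustrative assumptions.
F_SIGNAL = 10_000      # audio tone frequency in Hz
JITTER_S = 1e-9        # 1 ns RMS jitter at the converter clock
BITS = 16              # word length of the playback chain
AMPLITUDE = 1.0        # full-scale amplitude, normalised

# Worst-case slew rate of a full-scale sine: dV/dt(max) = 2*pi*f*A.
max_slew = 2 * math.pi * F_SIGNAL * AMPLITUDE

# A timing error dt on a signal slewing at dV/dt appears as an amplitude error of about dt * dV/dt.
amp_error = JITTER_S * max_slew

# Size of one LSB for the given word length (peak-to-peak range is 2*A).
lsb = 2 * AMPLITUDE / (2 ** BITS)

# Jitter-limited SNR for a full-scale sine.
snr_db = -20 * math.log10(2 * math.pi * F_SIGNAL * JITTER_S)

print(f"worst-case amplitude error: {amp_error:.2e} ({amp_error / lsb:.1f} LSB)")
print(f"jitter-limited SNR at {F_SIGNAL} Hz: {snr_db:.1f} dB")
[/CODE]

With these invented numbers the error lands in the region of a couple of LSBs on a 16-bit scale, which is the kind of figure people argue about when they argue about whether jitter is audible.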
Sorry for the long post, but it was an interesting article