Originally Posted by friend'scatdied

Do you suppose that difference (related to implementation and circuit externalities) actually translates to an audible difference?
Suppose you matched the output levels of an onboard solution and a discrete card (within 0.1dB) and performed a DBT -- would you reliably be able to tell the two apart?
What does it take to make that difference confirmable with confidence in testing (e.g. quality of circuit, layout, EMI, other hardware)?
Just a couple of things to consider.

Hm, fair enough. Good questions. Too bad I don't really have answers or good data to point to.

If anything, differences should be easier to hear between lower-end equipment than between mid- and high-fidelity gear.

At the very least, at the levels we're talking about, you should be able to plug in some sensitive IEMs (some models are definitely above 130 dB SPL / 1 Vrms input; the full-size ATH-AD700 are more like 115 dB SPL / 1 Vrms input) and definitely hear the noise floor. If the noise is significantly higher because of the implementation, that should be audibly different, and verifiable with the right DBT.
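To put rough numbers on that: if a source's noise sits its full dynamic range below a 1 Vrms full-scale output, the acoustic noise floor is just sensitivity minus dynamic range. A quick sketch (the 108 dB figure and sensitivities are the ones mentioned in this thread, not measurements of any specific onboard codec):

```python
import math

def noise_floor_spl(sensitivity_db_spl_per_vrms, dynamic_range_db, full_scale_vrms=1.0):
    """Acoustic noise floor (dB SPL) a headphone reproduces from the source's
    self-noise, assuming the noise sits dynamic_range_db below full scale."""
    noise_vrms = full_scale_vrms * 10 ** (-dynamic_range_db / 20)
    return sensitivity_db_spl_per_vrms + 20 * math.log10(noise_vrms / 1.0)

# Sensitive IEM (130 dB SPL / 1 Vrms) vs. ATH-AD700 (~115 dB SPL / 1 Vrms),
# both on a source with 108 dB dynamic range at 1 Vrms full scale:
print(noise_floor_spl(130, 108))  # 22.0 dB SPL -- audible hiss in a quiet room
print(noise_floor_spl(115, 108))  # 7.0 dB SPL -- buried under most rooms' ambience
```

That 15 dB gap between the two headphones is exactly why sensitive IEMs are the torture test for a source's noise floor.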

Some dedicated sound cards have much higher output impedance than the 1-2 ohms quoted there (and is there a series output resistor or DC-blocking cap after that stage?). A sizable difference in output impedance alone would definitely be audible with many headphones, because it forms a voltage divider with the headphone's impedance, which swings with frequency.
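The voltage-divider effect is easy to quantify. A sketch with made-up but plausible numbers (the 8-32 ohm swing is a hypothetical multi-driver IEM, not a specific model):

```python
import math

def level_db(z_load, z_out):
    """Relative level (dB) at the headphone, from the voltage divider formed
    by the source's output impedance and the headphone's load impedance."""
    return 20 * math.log10(z_load / (z_load + z_out))

def response_swing_db(z_min, z_max, z_out):
    """Frequency-response variation caused purely by output impedance, for a
    headphone whose impedance swings between z_min and z_max ohms."""
    return level_db(z_max, z_out) - level_db(z_min, z_out)

# Hypothetical IEM swinging 8-32 ohms across the audio band:
print(round(response_swing_db(8, 32, 1), 2))   # ~0.76 dB from a 1-ohm source
print(round(response_swing_db(8, 32, 10), 2))  # ~4.68 dB from a 10-ohm source
```

A fraction of a dB is borderline; several dB of frequency-response tilt is the kind of difference a DBT catches easily.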

As an example, the ~$75 Focusrite VRM Box uses a DAC chip with a 120 dB dynamic range spec, top-of-the-line stuff. They claim the finished device's measured dynamic range is only 108 dB (still excellent), so 12 dB went bye-bye to the realities of implementation, even in a dedicated audio device designed by a pro audio company, with presumably no notable EMI or other noisy electronics on the board to muck up the results. And the higher up the performance ladder you go, the harder it gets to wring out those last few dB.
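For a sense of what losing 12 dB actually means: dB differences in dynamic range translate to voltage ratios at the noise floor, so the finished device's noise voltage is roughly 4x the chip's own. A one-liner, using the figures above:

```python
def db_loss_to_noise_ratio(loss_db):
    """How many times the noise voltage grew, given dB of dynamic range lost
    between the DAC chip's spec and the finished device."""
    return 10 ** (loss_db / 20)

# 120 dB chip spec vs. 108 dB measured on the VRM Box -> 12 dB lost,
# i.e. the implementation's noise floor is ~4x the chip's noise voltage.
print(round(db_loss_to_noise_ratio(120 - 108), 2))  # 3.98
```

So "only" 12 dB on paper is actually a 4x increase in noise voltage, which is why squeezing the last bits of a chip's spec out of a real board is so hard.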