“Audiophile” Hi-Fi Journalist Defends Expensive Cables, Admits He Believes in Magic

On the What Hi-Fi? site – a well-known UK audio equipment review site – journalist Andy Madden today published an interesting defense of expensive audio cables. In it, he essentially states that he believes in magic, and that he doesn’t care about any kind of realistic analysis of the issue:

You can put whatever research you want in front of me, all the measurements in the world aren’t going to stop me from having the opinion that all digital cables do not sound the same. There, I said it.

This is a serious problem in audiophile journalism. People become so convinced that their beliefs are true that they refuse to accept any possibility that they are wrong. Frankly, it is irresponsible for a journalist to approach anything they review with this sort of preconception.

This journalist believes in magic. Note that he expressly talks about digital cables. While there is a possibility that there can be tiny differences in analog cables, this is simply not possible with digital cables, whether they are USB, HDMI or Ethernet.
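
The point is easy to verify. Copy the same audio file over any two digital cables, then compare checksums: if the digests match, the copies are bit-for-bit identical, and the cable contributed nothing to the sound. Here is a minimal Python sketch of that test; the file names are just placeholders for your own copies:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder names: the same track copied over two different cables.
print(sha256_of("track_cheap_cable.flac") == sha256_of("track_expensive_cable.flac"))
```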

What Hi-Fi? has lost all credibility. That said, at least they actually published this article; many other sites and magazines have journalists whose attitudes are similar, but who are ashamed to admit it.

Also, read Do Cables Make a Difference to Audio Playback? where the editor of What Hi-Fi? responds to my comments, and I show that even the top recording engineers don’t use fancy cables. And Music, not Sound: Why High-Resolution Music is a Marketing Ploy. And read about how What Hi-Fi? reviews cables; see how, in one case, they just posted the same review text for two different cables.

28 thoughts on ““Audiophile” Hi-Fi Journalist Defends Expensive Cables, Admits He Believes in Magic”

  1. Two retorts, possibly rendered unreadable due to lack of paragraphs:

    “While there is a possibility that there can be tiny differences in analog cables”

    In my experience, there can be pretty large differences in analog cables, especially between very low-end cables and middle-priced cables. That doesn’t mean you need to go out and buy platinum-encrusted cables marketed to idiots, but there really can be noticeable differences in analog cables.

    “This journalist believes in magic.”

    While it is certainly possible that the guy is just an idiot spouting derp, never underestimate the potential for simple corruption. Makers of high-priced cables tend to be advertisers in such publications and websites, so having the journalistic side promote the supposed need for “hi-fi” digital cables could have its benefits…

    • I don’t want to suggest corruption. While that may be the case, I am more doubtful that it is so (unless it is passive corruption). In more than a dozen years writing for Macworld – and for other publications – no one has ever suggested that I write something specific because of advertisers.

  2. I begin to have trouble with the use of the word “journalist” at this point. I am one, and even though you have to give wide berth to different kinds of journalism, all of it requires a basic willingness to deal with the facts as they are, and not as we wish they would be. In theory, what What Hi-Fi? does is a kind of journalism, somewhere between consumer guidance and trade coverage. In practice, this suggests a failure to launch.

  3. Ok. I’m no audiophile. My speakers were £20 from Oxfam. The cables I’m using are terrible, and 30 years old. So it will suit me fine if, when I get round to upgrading to digital, maybe in 2034, the cables make no difference. I can completely believe that the idea that they do is just a marketing ploy. I’ll bear it in mind.

    BUT I’d like to read something from someone who actually knows the answer to this. I agree that the What Hi-Fi? article comes across as zealous, and the author does himself no favours by seeming to dismiss evidence, but at the same time I see a lot of speculation and some fairly broad assumptions from people on the other side of the argument.

    If we were only talking about audio data being transferred over an IP-based or similar network, I’d have no questions. It has to be error free, and the network tells you if it isn’t. I guess many digital audio setups will work like this, but do they all? I’m not sure, but that’s an assumption that seems to be made.

    Isn’t there another, more basic, way to use ‘digital’ cables, one that doesn’t necessarily include error correction? I’ve read that ‘digital’ cables necessarily entail faultless transmission, but I don’t buy it. Digital cables are analog cables made to carry digital information. How you work with what comes out the other end depends on the circumstances. The OSI model doesn’t require reliable transmission on the physical layer (for obvious reasons), and some audio-over-Ethernet protocols use this layer. I learned this on Wikipedia and don’t fully know what it means, but I bet most of the people commenting on this don’t know either. At least it suggests there is a point to be answered.

    One assumption that seems to come up a lot is that digital information is intrinsically ‘all or nothing’. That’s not right. Corruption happens, without necessarily entailing total loss. You should see my TV signal when it rains. What you do with degraded digital information depends on what you’re using it for, and the standards imposed. If you’re copying a digital audio file over a network, you need 100% error-free duplication: wrong bits either need to be fixed, or the entire transmission has to fail. We can do that because of (things like) the Ethernet standard, which I believe is extremely clever, and quite possibly beyond the scope of my mind.

    But for realtime audio, things could be different. In certain environments, including perhaps consumer audio and also live performance, what we need from the digital information isn’t an archive copy, but a usable signal. We can happily tolerate quite a lot of signal degradation before we notice, and a lot more before we care. As long as the digital-to-analog converter at the far end of our cables is clever, perhaps we can get usable audio even if some bits have gone astray. I would imagine this to be especially true for a digital audio signal, because it’s not just any old series of bits. Each sample is likely to be within a range set by the previous and next samples. Blips in the signal can be smoothed out by the DAC without necessarily reverting to full digital error correction.

    On the other hand, we can’t tolerate a signal dropping or glitching just because a few packets were a bit late, or were corrupted by some kind of electromagnetic field, the like of which you might well find in a live performance setting. If we insist on zero errors, we introduce the risk of signal dropouts.

    Is any of this relevant? I don’t know. For all I know, audio data transmitted in a digital context is always error-corrected and whatever else, but I don’t think it’s anything like as simple as just drawing the analogy to copying a file over a network.

    Even if there is a point here, maybe it wouldn’t make the kind of difference that the audio buffs claim to hear. I can believe that. But I haven’t seen this stated definitively either way, by someone who seems to have any sort of knowledge beyond what they read on the ‘what’s the difference between digital and analogue TV’ leaflet. I’d be interested if anyone could point me to something that explains it properly, scientifically, without bias…

    • A valid point. First, the article I link to discusses Ethernet cables. The Ethernet protocol is designed to ensure that all data sent gets received. If there is packet loss, packets are re-sent. In general, over short distances (there are limits to the length of Ethernet cables), packet loss is probably rare. But an audio player can handle that, just as devices connected by HDMI or USB cables can. These protocols were designed, initially, to transmit data, and if data in a file is lost, that file can be corrupted.

      CD players also use error correction; have you ever noticed it? Unless you have a badly damaged CD, probably not.

      As for the digital TV signal, that’s a good analogy: that signal is broadcast blindly, with no acknowledgment from your TV, so there’s no way corrupted packets can be resent.

      • Yes, in the case of transmission using packets and the Ethernet protocol, there’s no way the ‘quality’ could be affected by the cable. I don’t think anyone can sensibly argue with that.

        But do you think there are audio uses of Cat 5 or 6 cables that people are calling Ethernet because that’s what the cables tend to be called, but which don’t actually use the Ethernet protocol? If so, could a difference be heard under some circumstances — given that we agree that digital transmission doesn’t necessarily equal faultless transmission? Could this just be a conflict of terminology?

        • I doubt it. Any device using networking via a cable – or even wireless – isn’t inventing a protocol. I’ve never heard of non-Ethernet networking that uses Ethernet cables.

          • Fair enough. Neither have I.

            Still, I’m not really talking about networking — but I don’t know enough about this to say much else sensible. I’m still not convinced that consumer audio equipment couldn’t just use Ethernet connectors to look snazzy but in reality just be sending standard digital audio down what is effectively a lovely high-quality wire, without the benefits of the Ethernet protocol. With only parity checking, would cable quality matter? If so, would it matter enough to be audible? Probably not.

            I suppose I just wouldn’t be as categorical about it either way without an understanding of cable engineering that I don’t think most audiophiles or anti-audiophiles could honestly say that they have.

      • Minor correction: Ethernet is not designed to do what you say. Ethernet has no delivery guarantee. Each frame does contain a basic checksum that allows detection of corrupt frames, but re-sending has to be handled by a higher-level protocol – in most modern networks, that means TCP. Not even the bare Internet Protocol (IP) retransmits.

        I’d also like to say

        • You’re right; it’s TCP, running over IP on the Ethernet link, that handles retransmission.
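
          To make that concrete, here is a minimal Python sketch – illustrative only, with a made-up payload – of the point above: a frame checksum like Ethernet’s CRC-32 can detect a flipped bit, but it carries no information about which bit flipped, so a bad frame can only be discarded and re-sent by a higher layer such as TCP.

          ```python
          import zlib

          payload = b"one frame of PCM audio (illustrative payload)"
          fcs = zlib.crc32(payload)  # Ethernet's frame check sequence is a CRC-32

          # Flip a single bit "in transit".
          corrupted = bytearray(payload)
          corrupted[3] ^= 0x01

          # The receiver recomputes the checksum; a mismatch reveals corruption...
          print(zlib.crc32(bytes(corrupted)) == fcs)  # False -> frame is discarded
          # ...but the CRC cannot say which bit flipped, so the frame cannot be
          # repaired here; retransmission belongs to a higher layer (TCP).
          ```

          And on the earlier speculation about a receiver smoothing over bad samples: that technique is real – it is usually called error concealment – and a toy version, assuming bad samples have already been flagged as None by some lower layer, might look like this:

          ```python
          def conceal(samples):
              """Replace isolated bad samples (None) with the average of their neighbours."""
              out = list(samples)
              for i, s in enumerate(out):
                  if s is None:
                      prev = out[i - 1] if i > 0 and out[i - 1] is not None else 0
                      nxt = samples[i + 1] if i + 1 < len(samples) and samples[i + 1] is not None else prev
                      out[i] = (prev + nxt) // 2  # linear interpolation between neighbours
              return out

          print(conceal([100, 120, None, 160, 180]))  # -> [100, 120, 140, 160, 180]
          ```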

  4. I have a $2,000 bet for anyone who would like to match that amount: check out my Ethernet Cable Challenge here:

    I particularly encourage Chord and AudioQuest to pick out their ‘Audiophile Ethernet Cable’. I love AQ and the directional arrows in particular.
