The Trouble With HomePod Reviews – Monday Note

Jean-Louis Gassée, former Apple executive and creator of BeOS, takes issue with how HomePod reviews have been conducted.

With its HomePod speaker, Apple has once again reshuffled existing genres. As an almost singular representative of the new consumer computational audio devices, HomePod’s slippery algorithms defeat quick and easy reviews.

He criticizes most tests as unscientific, and highlights David Pogue’s “blind” test of four speakers with five people.

He discusses the “computational audio” used by the HomePod, and notes:

This is where we find a new type of difficulty when evaluating this new breed of smart speakers, and why we must be kind to the early HomePod reviewers: The technical complexity and environmental subjectivity leads to contradictory statements and inconsistent results.

I think he’s missing the point. When one reviews something subjectively, the goal is to find out how it sounds to each listener. You can double-blind all you want, but that’s not how people perceive music. There is certainly room for measurements (provided they’re done correctly), but the true test of a device like this, especially one whose surroundings change its sound, is to have listeners judge it.

Yes, when four speakers aren’t perfectly volume-matched, that is an issue, but the main takeaways in Pogue’s test were that a) no one liked the Amazon Echo, because it’s a cheap, tinny speaker, and b) the HomePod may not be the best. It is notably bass heavy, which means that some music will sound good on it, and some won’t sound very good at all. Compared to the other speakers – which have a flatter sound signature – the HomePod makes the mistake of imposing a tone on all the music it plays, and of not allowing for individual user adjustments. (I’m not sure whether all the better speakers that David Pogue tested allow for EQ tweaking; the Amazon Echo probably doesn’t, because it’s not much of a speaker; the Sonos One definitely does, via the Sonos app.)

Finally, I find it almost risible to see the graphic that Mr. Gassée has included in his article as proof that the test was rigged. He points out that a louder speaker generally sounds better – which is well known – and concludes that the people who preferred one speaker must simply have been closer to it.


This is a clear example of bias. Persons one and five were certainly closer to the speakers on the ends, but persons two, three, and four were closer to speakers B and C – yet none of them preferred those speakers. Mr. Gassée’s lines are ludicrous; he’s talking about distance while ignoring the fact that, for example, person three is notably farther from speakers A and D, and much closer to speakers B and C.

This is a glaring error in logic, and it’s a shame to see it included in an article that gets so technical about computational audio, electro-acoustic music at IRCAM, and so on.