An elevator pitch is essentially a short speech intended to generate interest in an idea or proposition within a very limited timeframe. How long does it take to go a couple of floors in an elevator? A few seconds, once the doors close? Or if you’re going to the higher floors in modern buildings, it can take as much as 30 seconds. There may be times when you have to pitch your idea – your product, service, or company – in that brief amount of time. And when you want to do this, you can’t improvise; you have to plan ahead so you can explain your idea quickly.
You’ll use your elevator pitch at other times than just in elevators, but the metaphor – that of time ticking away as the floors go by – is useful. You may use it at conferences when you meet new people, and are networking between panels. You can use it when you run into someone in a coffee shop or grocery store. And you may even use it in an elevator, when you’ve met someone who might invest in your company.
When I bought a new MacBook Pro last year, I was catching up to a new interface element that had been around for a while, but that I had never used: the TouchBar. This bar replaces the function keys with a set of dynamic “buttons,” allowing you to control certain things on your Mac. You can adjust volume and brightness, and different apps provide different virtual buttons on the bar.
There’s one app where it’s really useless: Safari. On my MacBook Pro, with Safari frontmost, I have six tabs, but the TouchBar only shows two of them; that’s because the other four are “pinned tabs,” which are minimized at the left of the tab bar. There’s no way to use the TouchBar to access those tabs. (I’ve tried to scale the image so it looks about the same as what I see.)
And even if I did want to use the TouchBar to access different Safari tabs, there’s nothing on the TouchBar that helps me see what the tabs contain. Okay, I can see that the one on the left is Facebook, but what if I have a lot more tabs open?
I would have expected the TouchBar to display favicons, which would at least give a better idea of which sites are open in each tab. It’s clear that an option to do this would make it a lot more usable.
Chris Connaker of Audiophile Style renovated the attic of his house and turned it into the ultimate listening room. He then tuned it using amazing speakers, acoustic treatment, and DSP (digital signal processing). He explains how he went about this, and how the room itself is perhaps the most important element in an audio system.
You know the buttons you push at crosswalks? They give you the illusion of control, the sense that you’re telling the stoplight to change color, when they really don’t. Most stoplights in cities are controlled by a system that keeps them in sync; pressing the button doesn’t change anything.
The same thing happens on Apple Music when I “dislike” a track or album. I would expect that telling Apple Music what I “love” and what I dislike would have some effect on my recommendations. I think that the “love” declaration does help the algorithm, but the “dislike” option does nothing. (It’s worth noting that on iOS, the term “dislike” is not used: the option is Suggest Less Like This.)
Case in point: recommendations in For You this morning:
I listened to a Dick’s Picks album the other day, so Apple Music recommends similar music. But I don’t like Phish. (Don’t @ me.) I had “disliked” this album another time it came up in my recommendations. Yet Apple Music still recommends it, along with another Phish album (which I hadn’t previously disliked, so fair dinkum on that one).
It’s not just in recommendations that “dislikes” are ignored. In New Releases, I get lots of stuff that I don’t like, and if I explicitly dislike an album, I’d expect it to not remain in the list. Maybe it won’t go away immediately, but it should eventually be removed. I’ve got about two dozen new releases on my For You page, and I’ve disliked half of them (and I don’t know why I ever got many of these recommendations anyway), so I’d expect them to be replaced by something else.
We look back at the eventful year 2019 in Apple security. In the news, Apple is switching to randomized serial numbers for its products, Apple sues a company over jailbreaking, Firefox has critical vulnerabilities, and more.
I’ve had a HomePod since it was first released in early 2018. It sounds okay, but there are a number of issues with it. As I said in my review, “sometimes this speaker sounds really great, sometimes it really doesn’t.” And the biggest problem for me was this:
What the HomePod needs, of course, is user access to settings like an equalizer, as you have in iTunes or on an iOS device. Not to the broader DSP algorithm, but to the tone sculpting that makes some music sound too bassy, or, at times, too trebly.
A few months later, I got a second HomePod to combine them into a stereo pair to use in my bedroom. Using two standalone speakers in a stereo pair is practical: you save the space you would need for an amplifier, and you don’t need to run speaker wire to them (you do need to plug both into AC power, of course).
So the next step was to buy a second Sonos One and set it up in a stereo pair. I did so recently, taking advantage of post-Christmas sales, and I purchased the less expensive Sonos One SL, which has no microphone and therefore doesn’t support Alexa or Google Assistant. I don’t use Alexa, nor do I use Siri on my HomePods, and if you have a stereo pair, you don’t need both Sonos Ones to have microphones anyway.
So, it was time to set up the Sonos Ones in a stereo pair in my bedroom and compare them. I placed each one on the same shelf as a HomePod, a few inches away. In the Music app, I set the volume for each pair to approximately what was audibly the same level; the Sonos One is a bit louder, so I lowered its volume until it sounded about the same. (“Bedroom” below is the HomePod stereo pair.)
You can switch from one AirPlay device to another by tapping the AirPlay icon at the bottom of the Music window, and I switched back and forth, starting with my Kirk’s Audio Test Tracks playlist on Apple Music. This is a playlist of music that I am very familiar with, which I use when testing new audio equipment. (I listened to more than just what’s in the playlist, but I started with that.)
It’s great to have location data stored in your photos. This allows you to sort through your photo library and find all your photos from your last vacation, or from favorite sites you like to visit. For some photos, like one of the Eiffel Tower, it’s obvious where you’ve taken them. But you may not want people to be able to figure out where all your photos were taken. For example, you probably don’t want location data in photos you’ve taken in your back yard showing up on social media, allowing people to find exactly where you live.
It’s easy to remove location data when sharing photos from your iPhone, iPad, or Mac. Here’s how.
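For photos you’ve already exported to disk, the same thing can be done programmatically. Here’s a minimal Python sketch (an illustration, not part of the how-to above): it removes a JPEG’s APP1 segments, which hold the EXIF data, including the GPS coordinates. Note that this simplification discards all EXIF metadata, not just the location, and it ignores rarer JPEG marker edge cases.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (EXIF, including GPS) segments removed.

    A simplified sketch: it drops all EXIF metadata, not just location tags,
    and does not handle every marker variant found in the wild.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")  # keep the SOI marker
    i = 2
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: no payload, end of file
            out += b"\xff\xd9"
            break
        if marker == 0xDA:  # SOS: compressed image data follows; copy the remainder
            out += jpeg_bytes[i:]
            break
        # every other segment before SOS carries a 2-byte big-endian length
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != 0xE1:  # keep every segment except APP1 (EXIF/XMP)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Running a photo’s bytes through this before uploading removes the GPS tags along with the rest of the EXIF block; dedicated tools such as exiftool can remove only the location tags if you want to keep everything else.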
In 2013, Sonos scored a coup when Google agreed to design its music service to work easily with Sonos’s home speakers. For the project, Sonos handed over the effective blueprints to its speakers.
It felt like a harmless move, Sonos executives said. Google was an internet company and didn’t make speakers.
The executives now say they were naïve.
On Tuesday, Sonos sued Google in two federal court systems, seeking financial damages and a ban on the sale of Google’s speakers, smartphones and laptops in the United States. Sonos accused Google of infringing on five of its patents, including technology that lets wireless speakers connect and synchronize with one another.
Sonos’s complaints go beyond patents and Google. Its legal action is the culmination of years of growing dependence on both Google and Amazon, which then used their leverage to squeeze the smaller company, Sonos executives said.
Sonos advertises its speakers on Google and sells them on Amazon. It built their music services and talking virtual assistants directly into its products. Sonos workers correspond via Gmail, and run the business off Amazon’s cloud-computing service.
Then Google and Amazon came out with their own speakers, undercutting Sonos’s prices, and according to Sonos executives, stealing its technology. Google and Amazon each now sell as many speakers in a few months as Sonos sells in one year.
Sonos executives said they decided to sue only Google because they couldn’t risk battling two tech giants in court at once. Yet Sonos’s chief executive, Patrick Spence, and congressional staff members have discussed his testifying soon before the House antitrust subcommittee about the company’s issues with the two giants.
Streaming services pay labels and artists according to the number of times people play their tracks. Because of this, a 3-minute ditty gets the same (paltry) amount of money as a 30-minute movement of a Mahler symphony.
But the record labels have figured this out, and are changing the definition of the “track” to adapt to this new market.
Case in point: Brian Eno and Robert Fripp’s album Evening Star. I went to listen to it last night on Apple Music, and the second side of the original album, An Index of Metals, was broken up into six tracks:
Here’s the original track listing from Wikipedia:
This isn’t new; I’ve been seeing it for a few years. Another example is Max Richter’s eight-hour Sleep. I bought this album on the iTunes Store when it was released. It contains 32 tracks. Here are the first two tracks of the original release:
Here are the same tracks on Apple Music:
To be fair, labels and artists have come up with a workaround for an unjust system, but their solution lacks finesse. In the case of An Index of Metals, each “track” runs from two and a half minutes to more than seven; in the case of Sleep, the tracks seem to be as short as possible, with many of them less than two minutes long.
Surely no record label would do that with, say, a Mahler symphony, right? Well, good old Deutsche Grammophon seems to have adopted this model for a lot of their releases. Here’s one example. This release, one of the longer recordings of the work, at one hour and 45 minutes, is divided into 26 tracks. Here’s the first movement:
While this earns the label and artists a bit more of a pittance, it is a real annoyance for listeners trying to find their way through this morass of financially motivated cuts, and for those who add this music to their libraries and want to play it later.
Again, I understand why they are doing this, but these labels – especially the major labels – have the power to bring about change by negotiating with streaming services. It seems to me that there should be different payments per track according to length. For example, tracks under 10 minutes would be paid a base rate, tracks of 10 to 20 minutes twice that, and tracks of 30 minutes or more three times the base rate. Yes, there are tracks as long as an entire CD, so maybe there should be more tiers, but splitting up the music, and confusing users, is not the solution to this problem.
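As a sketch of how such a tiered scheme might work in practice (the tier boundaries follow the example above; the per-play base rate, and the treatment of the 20–30 minute band the example leaves unspecified, are assumptions for illustration):

```python
def payment_multiplier(duration_minutes: float) -> int:
    """Multiplier of the base per-play rate, following the tiers in the text.

    The example leaves the 20-30 minute band unspecified; here it is
    assumed to fall in the 2x tier.
    """
    if duration_minutes >= 30:
        return 3
    if duration_minutes >= 10:
        return 2
    return 1

def track_payment(duration_minutes: float, base_rate: float = 0.004) -> float:
    """Per-play payout; the $0.004 base rate is a made-up figure."""
    return payment_multiplier(duration_minutes) * base_rate
```

Under this sketch, a 3-minute pop song and a 28-minute Mahler movement would pay out at 1x and 2x the base rate respectively, instead of identically.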