As a podcaster, I go to great lengths to get the best possible audio quality for my recordings, and, within the limits of practicality, for my guests. I work in a quiet room with quality hardware, but it’s not a recording studio, so there is always a hint of echo. I can reduce this with an adroit use of compression, but with guests, who may be in noisy rooms with poor microphones, that’s pretty much impossible. There are plug-ins for audio editing tools that can reduce room noise, but they cost more than I’m willing to pay.
Adobe has just released a beta of their Adobe Podcast tool, which includes a voice enhancement feature. You upload an audio file, and Adobe’s AI cleans up the file. And it works.
I recorded a podcast episode this morning. My co-host isn’t in a great room and doesn’t have a great microphone, so he has always sounded, well, like someone on a podcast, not someone in a recording studio. After running the audio through Adobe’s tool, the quality was impressive; it sounds like a studio recording.
Here are two examples. The first was recorded using the internal microphone of my MacBook Air:
Here’s the same file after Adobe did its magic:
And here’s another test, using my normal podcasting setup (a Rode Procaster, going through a Focusrite Vocaster Two, with some effects applied in Audio Hijack during recording). Even here, there’s a difference. At first, the microphone is about six inches from my mouth, to reduce room noise, then it’s about a foot away.
While the files that Adobe produced are a bit too bass-heavy for my taste, it’s easy to apply some EQ when producing podcast episodes. It seems that Adobe wants to reproduce the proximity effect, the boost in low frequencies you get when speaking with your mouth very close to a microphone; many podcasters like this, but I don’t.
This really is a game-changer. It is now simple to clean up any recording I use in a podcast, and to make all my episodes sound much better.