When the tape started rolling in old analog recording studios, there was a feeling that musicians were about to capture a particular moment. On tape, there was no “undo.” They could try again, if they had the time and money, but they couldn’t move backwards. What’s done is done, for better or worse. Digital machines entered the mix in the 1980s, changing the way music was made — machines with a different sense of time. And the digital era has not just altered our tools for working with sound but also our relationship to time itself.
‘Time’ is the first episode of Ways of Hearing. This story looks at the way digital audio — in music recording, and in radio and television broadcast — employs a different sense of time than we use in our offline life, a time that is more regular and yet less communal.
Part of the new Radiotopia Showcase, Ways of Hearing is a six-episode series hosted by musician Damon Krukowski (Galaxie 500, Damon & Naomi), exploring the nature of listening in our digital world. Each episode looks at a different way that the switch from analog to digital audio is influencing our perceptions, changing our ideas of Time, Space, Love, Money, Power and Noise. In the digital age, our voices carry further than they ever did before, but how are they being heard?
Want a great version of a Zeppelin song? Listen to Boom Pam’s cover of Black Dog. Definitely feels like a satisfying experience.
Is there a list of the song samples played in the episode?
Just wondering what the names of all the songs were on this episode??
You can’t just open up the first 30 seconds of cool songs and not post them. I mean really??
Great shout out for Dred Zep. Part of what makes their versions work is that they *do* capture all the small but important features of the performance pieces at the same time that they are putting their own spin on them.
The notion that a composition is just a lead sheet is itself a huge step back in musical culture. There’s a joke that if Brahms had written his Haydn Variations today he would be credited only as the arranger, Haydn as the composer.
It was understood for centuries that a composer’s job is to write all the instrumental parts, but in pop music each instrumentalist writes his own part, and gets no composing credit for it. That there are few good covers of Led Zeppelin says more about the incompetence of pop musicians as pure performers than about LZ as songwriters. It’s important to distinguish songs that are too hard to cover from those that are too easy to cover.
Lithium is a good song, but nowhere near Gershwin’s compositional level. Gershwin had complete mastery of the popular idiom: he used French/German 6ths, 9ths and 11ths, suspensions, and diminished chords, i.e. constant functional use of dissonance. Cobain used almost entirely major/minor triads.
Moreover LZ could be quite inspired melodically and harmonically, e.g. Ten Years Gone, The Rain Song, and of course that 800 lb gorilla Stairway to Heaven. I think everyone would recognize the opening chords of that much more easily than those of Lithium.
But I enjoyed the discussion and many good points were made.
Anyone know the name of the jazzy piano tune at 38:02 that was compared to Lithium?
“Someone to Watch Over Me” by George and Ira Gershwin
Yeeees, thank you, I was looking for that one too!
I loved the episode, but I just have one comment:
(Note that I’m not speaking as an expert, this is just a thought that I wanted to share)
It seems (to me at least) that the episode hints that latency is confined to digital, when in fact latency is present in both analog and digital systems, because nothing travels at infinite speed.
For example, we can experience latency in the analog world in the delay between seeing a lightning flash and hearing its thunder.
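That thunder example even lets you put numbers on analog latency: sound covers roughly 343 meters per second in air, so the seconds you count between flash and rumble tell you how far away the strike was. A quick sketch (the speed of sound is an approximate value and varies with temperature; the 3-second count is just an example):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, dry air at 20 °C

def storm_distance_km(seconds_between_flash_and_thunder: float) -> float:
    """Distance to a lightning strike, treating light as arriving instantly."""
    return seconds_between_flash_and_thunder * SPEED_OF_SOUND_M_PER_S / 1000.0

print(storm_distance_km(3.0))  # a 3-second gap puts the strike about 1 km away
```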
Latency is always there, we just don’t have enough “resolution” in our hearing to be able to notice it.
I think what digital exclusively introduces is *variable* latency: latency that changes without any change in the physical constraints, latency that depends on the load experienced by the processor producing the signal.
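The fixed part of that digital latency is easy to quantify: a processing buffer of N samples at sample rate f holds the signal for N/f seconds before anything can come out. A minimal sketch (the buffer sizes and the 44.1 kHz rate are typical values for illustration, not figures from the episode):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# Buffer sizes a typical DAW might offer, at CD-quality sample rate:
for buf in (64, 256, 1024):
    print(f"{buf:>5} samples @ 44.1 kHz -> {buffer_latency_ms(buf, 44100):.1f} ms")
```

This is why recording software lets you shrink the buffer for tracking (less latency, more risk of dropouts under load) and enlarge it for mixing.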
I have to say that I was bothered by the way digital, as a concept, was discussed here. The piece made it seem as though inherent to digital technologies was an intolerable level of latency. Yet the piece never goes into what causes latency in the digital world.
While some delay is inherent in converting a microphone’s analog signal to digital, the most significant delays come from trade-offs that digital makes possible, not from anything digital requires. For example, those in television choose to trade off a noticeable delay in video for the ability to fit more channels over broadcast, cable, or satellite. Phone companies choose to trade off a noticeable delay for the ability to use a less reliable (i.e., less costly) channel, e.g., packet-based VoIP, or to protect against weak signals, e.g., for mobile phones. You might choose to put up with some latency so you can use your $1,000 laptop to record rather than a rack with $100,000 of equipment. Such trade-offs aren’t the fault of digital, but of the people who made them.
A related problem with the piece is that it didn’t discuss what advantages digital has at all, leaving that a mystery. Most listeners probably know some, but few know all. Most fall into three broad categories: cost, convenience, and capability. Recording on your laptop falls into all three, and it should be emphasized that there were clear advantages to going this route, that it wasn’t just some technical fad.
Also, the piece makes it seem as though audible latency were unique to digital. (Another commenter noted that latency has always existed, but latency undetectable by human ears is generally not a problem.) There is latency inherent in human hearing itself and in the muscle movements that create sound; since intolerable latency means latency we can detect, that kind drops out of the equation. There’s still the inherent delay – sometimes a feature, sometimes not – of audio recording, whether you’re talking about the reaction time of an analog filter or the time corresponding to the distance between the record head and the playback head on a tape machine. Latency isn’t new; what’s new is how we’re able to use (and perhaps abuse) it.
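The tape-machine delay mentioned above is just as computable as a digital buffer: it’s the head gap divided by the tape speed. A sketch (the 2-inch gap and 15 ips speed are illustrative values, not specs from the episode):

```python
def tape_monitor_delay_s(head_gap_inches: float, tape_speed_ips: float) -> float:
    """Delay between the record head and the playback head on analog tape."""
    return head_gap_inches / tape_speed_ips

# Monitoring off the playback head while recording to the record head:
print(round(tape_monitor_delay_s(2.0, 15.0), 3))  # prints 0.133 (seconds)
```

That same record-to-playback delay, fed back into the input, is the basis of classic analog tape echo.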
The piece also leaves out that digital allows more precise technologies for compensating for such latencies. There’s no inherent reason that television signals and radio stations can’t be synchronized, and no inherent reason that two tracks that are out of sync can’t be matched up. Digital should make both easy, though often – as in the case of television – it’s not deemed important enough to actually bother with.
Finally, there’s nothing special about digital when it comes to live recording. If you want to record just as in days of old, set up a few mics and hook them up to your recording equipment. Whether digital or analog, everything should be properly synced up.
Overall, these flaws made the introductory plea that this was not just nostalgia seem a little bit like “the lady doth protest too much, methinks.” While there might be a subtle difference between “digital causes latency, which I hate” and “digital enables tradeoffs that often result in latency, which I hate,” it’s an important distinction to make.
The introductory comments about computers being associated only with boffins and moonshots in the 80s are very odd. The Apple II and Tandy’s TRS-80 both came out in 1977. Home computers were quite common in the early 80s, though they were rudimentary compared to what we have now.
Also, analog recordings are editable too, via punch-ins and the like. It’s laborious, but it’s not impossible. In fact, the idea of recordings being essentially captured live performances disappeared in the 60s. Look at the Beatles’ output as the 60s went on.