
We all may listen to our favorite music as an MP3, but no band we can think of — rock, pop, R&B, hip-hop, or any other style — would sound the same without these seven huge developments in music technology. Artists, producers, audio engineers, and fans all stand on the shoulders of these giants. So, kick back and enjoy a little history!

1932: THE ELECTRIC GUITAR

Rickenbacker Electro A-22 “Frying Pan”

Trying to amplify a guitar electrically goes back to the early 20th century, driven by jazz players needing to be heard above big-band brass. The first commercial electric, released in 1932, was a lap steel with electromagnetic pickups developed by Adolph Rickenbacker: the Electro A-22 “Frying Pan.” When we think “electric guitar,” though, we think of the solid-body design that simply means rock ’n’ roll, and that means Les Paul. Ironically, Gibson didn’t think solid-bodies would catch on, so the guitar bearing the legendary inventor’s name wasn’t marketed to the public until 1952. The Fender Stratocaster was hot on its heels in 1954, and the rivalry rages to this day. Because the solid-body had no soundboard and relied solely on pickups, it took amplification beyond merely projecting louder sound and turned it into a creative sound design tool. Do things to that electrical signal — using tone controls, different kinds of pickups and amps, and effects — and the guitar could sing, cry, scream, or blow you out of your seat. Almost 90 years later, the Les Paul and Strat are still the two designs almost everyone else imitates. That’s what happens when you get so much right the first time.

 

1957: MULTI-TRACK RECORDING

AMPEX 440 (two-track, four-track) and 16-track MM1000

A band playing all at once and recording in stereo (or mono) used to be the only way to capture a song. After the fact, there was no way to edit or tweak each instrument and vocal separately. This resulted in some great recordings because engineers and players alike had to be really on top of their game, but there are countless advantages to putting each musical part on its own track. Who and what was actually first gets murky, but 1957 was the year Ampex sold the first eight-track recorder to Les Paul, who was just as much a pioneer of recording as he was of the guitar. The concept is easy enough to picture: The tape has lanes like a highway and each instrument gets its own lane. Imagine routing each of those lanes through a different volume control, EQ, and effects, and you have modern mixing. Plus, you didn’t need to record everyone at once — you could use some of the lanes, then use more for overdubs later. This advancement in music technology exploded the creative possibilities of the recording studio, and let artists and producers create music that otherwise would have been impossible to arrange. The biggest example in rock? “Sgt. Pepper’s Lonely Hearts Club Band” is always flagged as a turning point in the sound of The Beatles. This had as much to do with EMI Abbey Road having two four-track decks in the late ’60s as it did with the band’s interest in the, cough, psychedelic culture of the time.
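The lane analogy translates directly into how a mixdown works. Here’s a minimal Python sketch (the track names, sample values, and gain settings are invented for illustration): each track gets scaled by its own volume control, then all the lanes are summed into the master.

```python
# A minimal sketch of multi-track mixing: each recorded "lane" keeps its own
# volume (gain), and the lanes are summed sample-by-sample into a master.
# All names and numbers here are illustrative, not from any real session.

def mix(tracks, gains):
    """Sum every track sample-by-sample, each scaled by its own gain."""
    length = max(len(samples) for samples in tracks.values())
    master = [0.0] * length
    for name, samples in tracks.items():
        g = gains.get(name, 1.0)          # unlisted tracks pass at unity gain
        for i, s in enumerate(samples):
            master[i] += g * s
    return master

tracks = {
    "drums":  [0.5, -0.5, 0.5, -0.5],
    "vocals": [0.2,  0.2, 0.2,  0.2],
}
gains = {"drums": 0.8, "vocals": 1.5}     # turn the vocals up, the drums down
print(mix(tracks, gains))
```

Because each lane stays separate until the final sum, changing one gain (or re-recording one lane) never touches the others — exactly the freedom the eight-track gave engineers.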

1964: THE SYNTHESIZER

Modern music would be inconceivable without the synthesizer, or at least it would sound very different. What was the very first synth? For that debate, suit up like a Mad Max villain and bring your baseball bat to Reddit. Instead we zero in on 1964, the year Dr. Robert A. Moog and his colleague Herb Deutsch showed off their first modular synth at the Audio Engineering Society convention, leading to a fat stack of orders and legit launching their business. This was important for two reasons. First, synthesizers had been confined to just a few studios, such as university music departments and the BBC Radiophonic Workshop. Now, mainstream musicians saw, heard, and wanted them. Second, Moog Modular systems were the first to be portable enough to even think about taking to a gig. Artists with the resources, most famously Keith Emerson, did exactly that. Almost overnight, the synthesizer went from an esoteric mashup of gear filling a darkened room to a staple of the rock ’n’ roll stage. Where a piano could play louder or softer and an organ offered some variation in the tone, the synthesizer got down to the building blocks of sound itself: the waveforms, harmonics, and much more. Instead of the instrument making a sound, it was more like the sound was the instrument. The user could emulate well-known instruments or create sounds never heard before. The first keyboard synth that could be carried under an arm arrived in 1970, digital synths and samplers dominated the ’80s and ’90s, and analog synths have made a huge comeback today. Which brings us to…

1983: MIDI

Roland Jupiter-6 analog polyphonic synthesizer from 1983

Before the early ’80s, the idea of hooking up one brand of synthesizer, drum machine, or sequencer to a different brand was unheard of. Some companies had their own standards in place, such as Roland’s Digital Control Bus and Oberheim’s dedicated system, but musicians who wanted to build composition setups pretty much had to stick with one brand or another. Dave Smith, inventor of the legendary Prophet-5 synth, had been working on a universal solution since 1980, and its existence was first announced by Bob Moog at the 1982 Audio Engineering Society conference. But what instrument maker in their right mind would let their gear cooperate with its competition? Roland. Its founder Ikutaro Kakehashi realized that cross-compatibility would encourage customers to buy more of every brand’s gear, including his own, so he put his muscle behind Smith’s dream. At the 1983 National Association of Music Merchants (NAMM) show, Smith and Kakehashi showed two connected synths: a Roland Jupiter-6 and a Prophet-600. When you played a note on one, the other played it as well. It was the buzz of the show, and suddenly no manufacturer in the industry wanted to get left behind. Today, MIDI is still the standard by which all music hardware and software communicates. Notes, control messages, timing info, and more can be shared. Plus, MIDI can coordinate more than music, such as lighting, pyro, and stage props at big concerts, and even aerial drone shows.
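What travels over that cable is remarkably simple. The message that made the Jupiter-6/Prophet-600 demo work — Note On — is just three bytes: a status byte carrying the message type and channel, a note number, and a velocity (how hard the key was struck). A small Python sketch:

```python
# Building a standard 3-byte MIDI Note On message:
#   byte 1: status  = 0x9n (Note On on channel n, where n is 0-15)
#   byte 2: note    = 0-127 (middle C is 60)
#   byte 3: velocity = 0-127 (how hard the key was struck)

def note_on(channel, note, velocity):
    """Build a MIDI Note On message as raw bytes."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel
    return bytes([status, note, velocity])

# Middle C on channel 1 (index 0), struck at medium-hard velocity:
msg = note_on(0, 60, 100)
print(msg.hex())  # "903c64"
```

Send those three bytes down the wire and any MIDI instrument, from a 1983 Jupiter-6 to a modern DAW, plays the note — which is exactly why the standard has survived four decades.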

1989: PRO TOOLS

The technical term here is a mouthful: hard disk-based non-linear recording. And Pro Tools wasn’t first. Some versions of the Synclavier, the Rolls-Royce of ’80s digital synth workstations, could do it. But even early Pro Tools put the writing on the wall: The personal computer was the future of recording. Pro Tools’ direct ancestor is Sound Tools, a Mac system consisting of the Sound Accelerator processor card and Sound Designer stereo editing software. Digidesign (now Avid) founders Evan Brooks and Peter Gotcher then joined forces with the makers of an indie four-track program called Deck, which ran on the card, and the DAW as we know it was born. The first version to be called Pro Tools melded an updated Deck with Digidesign’s AudioMedia card for Macs. As computer and DSP hardware got better, so did Pro Tools, with more tracks, higher resolution, plug-ins, and more. An important milestone came in 1999 with Pro Tools LE, which could record up to 24 tracks on an ordinary computer with no special cards installed, bringing studio power to home users. That Pro Tools was non-linear meant you could edit graphically, anywhere you wanted. That it was non-destructive meant you could make these edits with the ability to revert to a previous state if you screwed up — try that with tape and a razor blade. Pro Tools remains the industry standard for music production and film/TV audio to this day. That’s partly because it got a head start making audio the centerpiece whereas other very good DAWs such as Cubase and Logic began life as MIDI sequencers and added audio recording later. Versions exist to fit all workstyles and budgets, from the free Pro Tools First to six-figure HDX systems with lots of DSP cards, large console-style controllers, and Dolby Atmos surround mixing.
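Non-destructive editing is easier to grasp with a toy model. The recorded audio on disk is never modified; instead, an edit list describes which regions of the source play back, in what order. Here’s a minimal Python sketch of that idea (a simplified stand-in, not Pro Tools’ actual file format):

```python
# A minimal sketch of non-destructive, non-linear editing: the source audio
# stays untouched on disk, and an edit list of (start, end) regions describes
# the playback order. Undoing an edit just means restoring an earlier list.

source = list(range(10))  # stand-in for ten recorded samples, 0..9

def render(edits):
    """Play the listed regions of the source back-to-back."""
    out = []
    for start, end in edits:
        out.extend(source[start:end])
    return out

take1 = [(0, 10)]          # the untouched recording
take2 = [(0, 3), (7, 10)]  # "cut out" the middle, no razor blade required
print(render(take2))       # [0, 1, 2, 7, 8, 9]
print(render(take1))       # the original recording is still fully intact
```

Because every edit is just a new list, reverting to any previous state is trivial — the "try that with tape and a razor blade" advantage in a nutshell.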

1992: THE ADAT

ADAT XT 8-channel digital audio recorder

The Alesis Digital Audio Tape Recorder brought multi-track recording to the masses. While many home studio dwellers had four-track cassette decks and a lucky few ballers had eight-track reel-to-reel, multi-track recording beyond this was the domain of commercial studios and wealthy rock stars. Developed by Marcus Ryle, who came up as an engineer for synth pioneer Tom Oberheim, the first ADAT used digital conversion to record eight tracks of CD-quality (16-bit, 44.1kHz) audio on a video cassette, specifically the S-VHS format you could find in any Radio Shack. Even better, you could sync multiple ADATs together with sample-accurate timing. Though not exactly an impulse buy at $3,995 a pop, even several ADATs cost a tiny fraction of the price of a studio analog tape machine, and users regularly set up 24-track and larger systems. Add the BRC (Big Remote Control), and you could run all units from one panel. Suddenly, the big-studio experience could fit in a spare bedroom. This kicked off the “project studio revolution” of the 1990s, fostering a cottage industry of supporting gear: compact 8-bus mixers, affordable tube mic preamps to warm up the digital sound of the era, a boom in rackmount effects, and more. DAWs eventually unseated the ADAT, and today any laptop (or iPad) can out-record a rack of ADATs. However, a central part of the tech is still with us. ADATs used an 8-channel optical connection called Lightpipe. It’s still the standard for multi-channel digital audio, and found on the back panel of nearly every brand of audio interface as well as gear like 8-channel mic preamps and digital mixers. That kind of makes ADAT the MIDI of digital audio.

1997: AUTO-TUNE

What does gassing up your car have to do with the most signature sound in pop music? Andy Hildebrand, a Ph.D. scientist who honed his chops developing predictive models to help oil companies find new deposits. But the music industry was where he’d personally hit paydirt. After a friend’s wife joked that he should make her a way to sing in tune, he started thinking seriously about the whole idea of pitch processing. Some pitch-correction technologies already existed, but they either sounded unrealistic or required too much computing power to be practical. Thanks to what he called a “mathematical trick” that reduced the number-crunching needs, Hildebrand developed a better-sounding method (first on a Macintosh), and when he demonstrated it at the next NAMM show, forklift crews kept busy lifting everyone’s jaws off the floor.
Hildebrand intended Auto-Tune to be invisible when used, rescuing otherwise great musical performances from occasional intonation problems. It did exactly that, and superbly, but some producers and artists discovered that at extreme retune settings, it imparted a robotic pitch-jumping to vocals — and they started doing this intentionally. Cher’s 1998 hit “Believe” cemented the “Auto-Tune Effect” into pop culture, and it has been sought out by artists from Daft Punk to T-Pain to Lil Nas X ever since.
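The core idea behind both uses is the same: snap a detected pitch toward the nearest note of the equal-tempered scale. Here’s a minimal Python sketch (a simplified illustration, not Antares’ actual algorithm): a retune speed of 1.0 jumps instantly for the hard-snap robotic effect, while smaller values glide gently for the transparent correction Hildebrand intended.

```python
import math

A4 = 440.0  # reference pitch in Hz

def nearest_semitone(freq):
    """Return the equal-tempered pitch closest to freq."""
    semitones = round(12 * math.log2(freq / A4))  # distance from A4 in semitones
    return A4 * 2 ** (semitones / 12)

def retune(freq, speed):
    """Move freq toward the nearest semitone; speed=1.0 snaps instantly."""
    target = nearest_semitone(freq)
    return freq + speed * (target - freq)

flat_a = 430.0                  # a slightly flat A4
print(retune(flat_a, 1.0))      # hard snap straight to 440.0: the robotic effect
print(retune(flat_a, 0.2))      # gentle nudge toward 440: invisible correction
```

Run repeatedly on a sliding analysis window, the fast setting quantizes every note transition into the stair-step jumps heard on “Believe,” while the slow setting just trims intonation without anyone noticing.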

Watch episode 2 of the hit Netflix show This Is Pop, which covers the history of Auto-Tune.

The Auto-Tune plug-in family now leads the world in vocal and pitch processing and includes solutions to achieve any musical goal: vocoder, talkbox, and other creative effects; turning a single voice into an entire choir; key and scale detection; live performance; and yes — being the silent session-saver Dr. Hildebrand originally envisioned.