The host, David Pierce, has decided to relearn the guitar, which he played as a kid but quit at age 12 (9s).
He has a guitar that sits behind him during meetings and podcast recordings, often prompting people to ask if he plays guitar (15s).
David Pierce's guitar teacher had told him to either try harder and care more about playing or to quit, so he quit (31s).
He has now downloaded Yousician, an app people had recommended, to help him learn how to play the guitar again (42s).
The Yousician app listens through the microphone to give dynamic feedback on his playing and tells him what to play, though it is not as good as having a human teacher (52s).
So far, David Pierce has learned how to play a C chord and considers it progress (1m6s).
Relearning the guitar is not the main subject of the discussion, though; it's an introduction to the episode's real topic (1m15s).
The episode is part of a miniseries about the future of music, and this week's topic is Auto-Tune, a technology that has significantly impacted the music industry over the last two decades (1m41s).
Charlie Harding, a music journalist, professor of music, and co-host of the podcast Switched on Pop, is the guest for this episode, and he will discuss the history of Auto-Tune and its effects on music (1m44s).
The conversation will also explore what can be learned from Auto-Tune's evolution over the last two decades and how it might provide insight into the future of the music industry, particularly with the rise of AI and TikTok (2m6s).
The episode will feature a selection of songs that showcase the impact of Auto-Tune on music, which may get stuck in listeners' heads for months (2m22s).
The discussion starts with the introduction of Charlie Harding, who previously appeared in an episode about making a song with AI; he's here to talk about Auto-Tune's impact on music and the world, and to draw possible parallels with the story of AI (3m45s).
Auto-Tune has two definitions: the formal definition refers to the audio software tool created by Antares that helps adjust the pitch of vocals, and the second definition is any kind of pitch correction, making Auto-Tune synonymous with pitch manipulation and correction (5m10s).
Auto-Tune is like the Kleenex of the space, being both a specific product and a term used to refer to everything similar (5m51s).
The story of Auto-Tune begins with its inventor, Andy Hildebrand, a geologist who worked in the oil and gas industry and used techniques like Fourier analysis to find oil deposits (6m15s).
Hildebrand launched a software company in the 90s and built Auto-Tune as a pitch correction tool using the same math and science he had applied in oil and gas (6m58s).
The wave analysis used to identify oil fields turned out to apply to tuning vocals as well, making the oil and gas industry an unexpected contributor to pitch correction (7m8s).
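The episode doesn't go into the math, but the shared idea is estimating the dominant periodicity in a waveform, whether it carries seismic echoes or a sung note. A minimal, illustrative sketch of autocorrelation-based pitch detection (one classic approach, not Antares's actual algorithm):

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a short mono frame
    via autocorrelation, the same family of periodicity analysis
    used on seismic reflection data."""
    frame = frame - np.mean(frame)
    # Autocorrelation: how similar the signal is to a delayed copy
    # of itself. A strong peak at lag L means a period of L samples.
    corr = np.correlate(frame, frame, mode="full")
    corr = corr[len(corr) // 2:]              # keep non-negative lags
    # Only search lags corresponding to plausible vocal pitches.
    lag_min, lag_max = int(sample_rate / fmax), int(sample_rate / fmin)
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag             # frequency in Hz

# A 220 Hz sine should be detected as roughly 220 Hz.
sr = 44100
t = np.arange(2048) / sr
print(estimate_pitch(np.sin(2 * np.pi * 220 * t), sr))   # ~220.5
```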
The Auto-Tune effect was initially intended to be a subtle tool to help enhance slightly out-of-tune vocals, not completely out-of-tune ones, when it launched in 1997 (7m53s).
Cher's song "Believe" in 1998 is often credited with popularizing the Auto-Tune effect, but it was actually used in a more extreme way by Kid Rock in his song "Only God Knows Why," released a few months before "Believe" (9m0s).
Kid Rock's song was initially released as an album cut, not a single, but was later released as a single after the success of "Believe" (9m28s).
The "Cher effect" was a one-off novelty at first, but people started to mimic it, and Auto-Tune was used in various ways, including as a gentle pitch correction tool (10m41s).
As Auto-Tune use became more common it was usually applied subtly, but it was sometimes overdone, producing a slightly inhuman sound, as on Maroon 5's "She Will Be Loved" from 2002 (10m55s).
The reaction to Cher's use of Auto-Tune was not entirely positive, with some people complaining that it was a sign of the end of music, as it allowed artists to gloss over their imperfections (11m32s).
Cher was not trying to hide the use of Auto-Tune in her song, which was part of the reason for the reaction (11m41s).
Cher's song "Believe" uses a mix of tuning and no tuning, with Auto-Tune used in the verse to create a robotic effect, matching the song's lyrics about feeling soulless after a broken heart (11m48s).
As the song transitions to the chorus, Cher drops the Auto-Tune, creating a creative effect where she breaks through her robotic tone, showcasing her vocal abilities (12m21s).
When "Believe" was released, people were surprised by the unusual effect, and Cher and her producers kept the technique a secret, wanting to maintain a proprietary sound (12m50s).
Auto-Tune was relatively new at the time, and its use in "Believe" was novel and unexplored, with people assuming it was a different vocal processing technique, such as a vocoder (13m10s).
The vocoder is a tool that dates back to World War II, originally used for encoding and decoding messages, and was later used in music, with producers misleadingly citing it as the technique used in "Believe" (13m26s).
Auto-Tune works by identifying the pitch a singer is attempting to sing and then performing pitch quantization, pushing the sung pitch to the correct pitch at an adjustable retune speed (14m8s).
The characteristic "Auto-Tune effect" is achieved by setting the retune speed to zero, so the voice jumps instantly from one pitch to another, creating the digital artifacting sound that is now desirable in music (14m54s).
In theory, Auto-Tune is a simple tool that corrects pitch, with adjustable parameters, allowing singers to fine-tune the effect and intentionally incorporate it into their singing (15m29s).
In practice this means setting a scale and letting the software pull the singer's pitch to match it, which is how it corrects pitch problems in recordings (15m43s).
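As a rough illustration of those two moves, snapping a detected pitch to the nearest note of a chosen scale and then easing toward it at an adjustable retune speed, here is a minimal sketch; the one-pole smoothing and parameter names are assumptions for illustration, not Antares's implementation:

```python
import math

A4 = 440.0  # reference tuning

def nearest_scale_pitch(freq_hz, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Snap a frequency to the nearest note of a scale
    (default: C major), returning the target frequency in Hz."""
    midi = 69 + 12 * math.log2(freq_hz / A4)   # continuous note number
    octave = midi // 12
    candidates = [12 * (octave + o) + degree
                  for o in (-1, 0, 1) for degree in scale]
    nearest = min(candidates, key=lambda n: abs(n - midi))
    return A4 * 2 ** ((nearest - 69) / 12)

def retune_coeff(retune_ms, frame_ms=1.0):
    """Per-frame smoothing amount. 0 ms jumps straight to the
    target (the hard 'Cher effect'); larger values glide there
    gradually and transparently."""
    return 1.0 if retune_ms <= 0 else 1.0 - math.exp(-frame_ms / retune_ms)

# A slightly flat note (256 Hz, just under C4) corrected frame by frame:
target = nearest_scale_pitch(256.0)            # C4, about 261.63 Hz
for retune_ms in (0, 20):
    coeff, pitch, path = retune_coeff(retune_ms), 256.0, []
    for _ in range(4):
        pitch += coeff * (target - pitch)
        path.append(round(pitch, 2))
    print(f"{retune_ms} ms retune: {path}")
```

With the retune time at zero, the corrected pitch lands on the target in a single frame, which is the audible stair-step jump described above; at 20 ms it drifts there gently enough to pass as natural singing.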
The tool became an immediate hit when it was first introduced, with many A-list artists such as Madonna and Maroon 5 using it in their recordings (16m5s).
Today, Auto-Tune is used in over 90% of music recordings; the exceptions are naturalistic rock and indie songs and rappers who do not sing (16m21s).
Before Auto-Tune, producers used other tools to fix pitch problems, but these methods were slow and laborious, involving manual adjustments to individual notes (17m4s).
Auto-Tune became an instrument in its own right, particularly in the world of R&B, after being popularized by artists such as T-Pain and Kanye West(18m45s).
T-Pain's song "I'm Sprung" in 2005 was one of the first times the Auto-Tune effect was widely heard, and it quickly became known as the "T-Pain effect" (18m33s).
Kanye West's song "Heartless" in 2008 was another key moment in the popularization of Auto-Tune, as it showcased the tool's ability to enable rappers to sing (19m1s).
Auto-Tune has given rappers the capacity to sing, and has become a characteristic feature of many R&B and hip-hop recordings (18m53s).
The Auto-Tune effect became ubiquitous in the late 2000s, over a decade after its initial release, with artists like Cher, T-Pain, Kanye West, and others using it in their music (19m39s).
Auto-Tune was initially met with a cultural backlash, with some in the music industry calling for its removal, claiming it was ruining music and destroying the art of singing (20m31s).
One argument against Auto-Tune is that it homogenizes the voice, removing unique vocal imperfections that make a singer sound human (20m55s).
Another criticism is that Auto-Tune allows people who don't sing well to become stars, which is seen as a negative development in the music industry (22m1s).
However, the counterargument is that Auto-Tune does not homogenize voices, and that artists can still be identified by their unique sound, even when using Auto-Tune (22m44s).
The popularity of Auto-Tune is not driven by a cabal of music executives, but rather by listener demand, with many fans wanting more music that features the effect (22m25s).
The use of Auto-Tune is compared to other audio effects, such as reverb, which have not been subject to the same level of criticism (20m14s).
The history of rock and roll is full of examples of untrained singers who have become stars, and Auto-Tune is seen as a continuation of this tradition (21m40s).
The processing of the human voice through technology, such as Auto-Tune or old-style phones, can be seen as a way to create a unique sound, rather than a homogenizing force (23m28s).
Some artists put Auto-Tune on their vocals from the start and intentionally chase weird artifacting sounds, often using the Classic mode to get a sound reminiscent of 1998 rather than a pristine correction that simply makes them sing better (29m47s).
Auto-Tune has shifted from being a post-processing tool to a live tool used at the start of the recording process, allowing artists to hear their voices through Auto-Tune in real time (30m17s).
Many singers have never heard their takes without Auto-Tune, and this has changed the way they make music, with some artists intentionally singing slightly out of tune to get the desired effect (30m40s).
To achieve the desired sound, artists need to be talented at using Auto-Tune and practice with it, as singing well can actually make Auto-Tune less effective (31m37s).
Some artists use portable recording interfaces, such as the Universal Audio Apollo, to run Auto-Tune live without latency, allowing them to hear their voices through Auto-Tune in real-time (31m14s).
Using Auto-Tune in this way requires a different approach to singing and recording, similar to the difference between playing electric guitar with distortion and playing classical guitar (32m15s).
Many artists now "bake" Auto-Tune into their sound before it even goes into their software, and some don't want to hear what their voice sounds like without it on (32m1s).
The use of Auto-Tune has become so prevalent that some artists travel to different locations, such as Hawaii, to record in home bedrooms or Airbnbs, bringing their own microphones and portable recording interfaces (31m5s).
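The "without latency" point is really about keeping round-trip monitoring delay below what a singer notices, often quoted as roughly 10 ms, though that threshold is a rule of thumb rather than a figure from the episode. Back-of-the-envelope arithmetic:

```python
# Round-trip monitoring latency is roughly (input buffer + output
# buffer) / sample rate, plus converter overhead, which is ignored here.
SAMPLE_RATE = 48_000  # Hz

for buffer_size in (32, 64, 128, 256, 1024):
    round_trip_ms = 2 * buffer_size / SAMPLE_RATE * 1000
    verdict = "fine for live tracking" if round_trip_ms < 10 else "audible lag"
    print(f"{buffer_size:>5}-sample buffer: {round_trip_ms:5.1f} ms ({verdict})")
```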
Singing into Auto-Tune and singing without it are two different skills that require practice and development as a musician, and even good vocalists need to learn how to use Auto-Tune effectively to sound good with it (32m34s).
Some people use Auto-Tune to practice melodies or to compensate for weaker vocals, but sounding like a great vocalist such as T-Pain requires practicing singing the way he does (33m11s).
Great vocalists like T-Pain and Cher are skilled at using Auto-Tune, but it's a different way of singing than what Frank Sinatra did decades ago (33m27s).
Auto-Tune is also used in live music in different ways, for example via Auto-Tune Live, which can be run on a microphone feed on stage (34m18s).
Auto-Tune live can be used for subtle vocal tuning to help performers sound more in tune when they're running around on stage and might be out of breath (34m32s).
Other ways to enhance vocals to sound more like the original recording include playing backing tracks of perfectly in-tune vocals, which can add depth and width to the sound (34m50s).
Using backing tracks and subtle pitch correction can be more practical and cost-effective than bringing extra backup singers on tour (35m26s).
The simplest and most mainstream version of Auto-Tune live is likely the subtle pitch correction version, which is used to enhance vocals without being too noticeable (35m38s).
Capabilities that once required high-end video editing software have become accessible to the general public through apps like TikTok, letting people edit videos successfully without professional tools (35m46s).
Similarly, Auto-Tune is now available to regular people through various iPhone apps, including those developed by the Gregory Brothers, and is also included in audio recording software like Apple's Logic (36m8s).
Many software developers have created their own Auto-Tune effects, making it more accessible and affordable for people to use (36m33s).
The increased accessibility of Auto-Tune is seen as a positive development, as it provides more creative tools for people to use (36m58s).
However, there is a backlash against the use of Auto-Tune, particularly on social media platforms like TikTok and Instagram, where users are skeptical of the authenticity of videos featuring singers using the technology (37m10s).
Some users are calling out singers who use Auto-Tune in their videos, asking to hear what they sound like without the technology, and questioning the authenticity of their performances (38m13s).
The use of Auto-Tune is seen as a disconnect from the idea of authenticity, particularly when singers are presenting themselves as performing naturally in their kitchens or other informal settings (38m31s).
Auto-Tune has become a catch-all term for any sound that is post-processed to make it sound more perfect, and is often used to describe any use of technology to enhance a singer's voice (38m54s).
Other post-processing tools, such as reverb removal, EQ, and compression, can also be used to enhance a singer's voice, and are not necessarily seen as inauthentic (39m5s).
Auto-Tune can make recordings sound professional, even if they were recorded with a device like an iPhone, by utilizing good post-processing tools (39m15s).
Auto-Tune is not inherently bad, and people's dislike for it may stem from personal taste, as different aesthetics exist in music, and Auto-Tune is currently a popular one (39m49s).
Some people feel that Auto-Tune sounds unnatural, but it is just another way of singing, and criticism of it may be overblown (40m0s).
There is a concern that Auto-Tune can be used to hide imperfections in a singer's voice, making it seem dishonest, similar to using a face filter on a picture (40m54s).
The use of Auto-Tune raises questions about authenticity in recorded music, particularly in pop music, where there is an expectation that the artist is being genuine (41m39s).
Even artists like Taylor Swift, who are not typically associated with Auto-Tune, use various vocal processing tools in their music, such as vocoders and pitch shifting (41m55s).
The key to using pitch tuning tools effectively is to make them sound natural, which requires skill and can be a crucial part of the music production process (42m25s).
When used poorly, pitch correction can result in unpleasant artifacts, but when done correctly, it can enhance a beautiful, emotional performance (42m42s).
The use of Auto-Tune is often viewed negatively when it's noticeable, as it's seen as an attempt to manipulate a naturalistic performance, and the tools used to clean up the sound need to be hidden in order for it to sound good (42m54s).
A poorly done, noticeable use of Auto-Tune can be compared to a poorly photoshopped photo, where the manipulation is evident and the result doesn't look natural (43m5s).
Although people generally accept that magazine photos are often photoshopped, they tend to dislike it when the manipulation is obvious, and the same principle applies to the use of Auto-Tune in music (43m15s).
There's a tendency to accept manipulated music or photos as long as the manipulation is not obvious, and people can ignore the fact that it's been altered (43m22s).
The concept of cognitive dissonance is applied to the use of Auto-Tune and CGI in media, where the brain can internalize the fact that something is processed, but it feels bad when it's obvious and presented as authentic (43m28s).
The acceptance of processed media depends on the context and presentation, with things that are supposed to be authentic being perceived as wrong when processed, while things presented as composed and enhanced are expected to be processed (44m9s).
It's fair to be mad at Auto-Tune when it's used poorly, but when it's used intentionally, it's best to let go of the criticism and accept it as a tool of the music industry (44m26s).
The discussion shifts to AI videos on YouTube and the possible future of AI in the music industry, with the potential for AI to become a dominant aesthetic, similar to Auto-Tune (45m34s).
AI is being used in various ways in music, including writing lyrics, separating stems from recordings, and generating whole songs from prompts, with some uses more invisible than others (46m41s).
The comparison is made between the arc of Auto-Tune and the potential future of AI in the music industry, with the possibility of AI becoming a tool in the toolkit, allowing new people to do new things, and eventually becoming a dominant aesthetic (45m52s).
AI tools, including those used in music production, often lack a distinct sonic fingerprint, unlike Auto-Tune, which has become a predominant sound in popular music (47m29s).
AI songs and poorly written songs tend to use too many perfect rhymes, which can make them sound overly sweet and unoriginal (47m17s).
ChatGPT, an AI tool, sometimes produces bad rhymes, but it is improving over time, and it often relies on simple rhymes like "do" and "to" (47m7s).
AI tools are designed to mimic other sounds rather than having their own unique sound, which can make them less interesting (48m31s).
Some AI tools, like those used for image generation, have a default style that can be noticeable, and it's possible that future AI music tools could develop their own distinct styles (48m52s).
Current AI music tools, such as Amper or AIVA, have "tells" that can be identified, such as a grainy, hissy sound due to being trained on low-quality audio files (49m31s).
AI-generated music can lack the imperfections that make human-generated music sound more exciting, such as the subtle differences in pitch and timing between instruments in a horn section (50m13s).
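To picture what's missing: machine-perfect parts land on identical pitches at identical grid times, while human players scatter slightly around both. A purely illustrative toy sketch of that horn-section scatter:

```python
import random

def humanize(notes, pitch_cents=8.0, timing_ms=15.0):
    """Scatter each note slightly in pitch and time, the way
    individual players in a horn section naturally drift."""
    return [
        {
            # 100 cents = one semitone, so these are tiny detunings.
            "freq_hz": f * 2 ** (random.uniform(-pitch_cents, pitch_cents) / 1200),
            "onset_ms": t + random.uniform(-timing_ms, timing_ms),
        }
        for f, t in notes
    ]

# Three players all "playing" A4 on the downbeat:
section = [(440.0, 0.0), (440.0, 0.0), (440.0, 0.0)]
for player in humanize(section):
    print(f"{player['freq_hz']:.2f} Hz at {player['onset_ms']:+.1f} ms")
```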
The use of AI tools in music production is becoming more common, but it's unclear whether a specific AI tool will become a dominant sound in popular music in the future (47m33s).
The concept of authenticity in music is being reevaluated, with some people suggesting that the next phase of culture will be a return to more "authentic" and "honest" sounds, such as punk rock, where vocals may sound worse but are more genuine (51m12s).
This idea is based on the notion that the overuse of technology and the internet has made people less authentic, and that a return to more traditional methods of music creation, such as recording live instruments in a room, may be the next step (52m1s).
The question of how far back one needs to go to find something "authentic" is raised, with some people suggesting that a return to traditional instruments and recording methods may be necessary (52m21s).
Historically, there have been people who have advocated for a return to more traditional and authentic music creation methods, but this has never been a widespread trend (52m40s).
However, with the rise of AI in music creation, there may be a pushback against the use of technology and a desire for more human and authentic sounds (52m51s).
This trend has been seen before in the history of popular music, with the rise of electronic recording and the electric guitar leading to a folk resurgence, and the highly produced era of the 1980s leading to the emergence of grunge (53m0s).
Despite this, it is possible that the concept of authenticity has already been co-opted by advertisers, who are using the same techniques to create a sense of authenticity in their ads, and that the next trend may be something entirely new and unexpected (54m2s).
The current generation of music still heavily uses Auto-Tune, and it's not going anywhere, with many artists, including Noah Kahan, incorporating it into their music (54m26s).
There's a resurgence of Lumineers-style music and a possible return to 1920s-30s jazz or classical pop, with artists like Laufey making music with a false nostalgia for a hundred-year-old sound (56m26s).
The idea of the "voice memo demo" becoming an actual genre of music is proposed, where the unpolished, raw recordings of artists, like Charlie Puth's demos on TikTok, live alongside the polished versions (55m14s).
This concept is already seen in some albums, like the new Hy record, which includes tracks that are essentially voice memos and demos (56m9s).
The trend of incorporating interlude tracks, like phone calls, into hip-hop albums also has a long history (56m17s).
The possibility of looking back at the current era as the "Auto-Tune era" in the future, similar to how we distinguish between different eras in music, is discussed (57m0s).
The idea that Auto-Tune will eventually be replaced by something else, but for now, it's still a dominant sound in music, is also considered (57m17s).
Listeners, regardless of their musical training, can often identify the era of a song based on specific production techniques, such as the way snare drums were produced in the 1980s (57m26s).
Sonic artifacts of an era, such as Auto-Tune, can place a song in time and evoke a specific aesthetic, similar to how a point-and-shoot Kodak photograph from the early 2000s screams early 2000s (57m41s).
People are embracing nostalgic aesthetics, such as buying old cameras to achieve a retro look, and similarly, Auto-Tune may experience a resurgence in the future as an act of creative nostalgia (58m1s).
Although the popularity of Auto-Tune may wax and wane, it will likely remain a ubiquitous tool in music production, similar to the electric guitar, and may continue to evolve and fade into the background before popping forward again (58m33s).
Charli XCX's record "Brat" is cited as an example of great music that heavily features Auto-Tune, and the artist admits she can no longer sing without it, as it has become an integral part of her sound (58m45s).
For those looking to experience Auto-Tune, a recommended playlist might include songs that feature other vocal processing techniques, such as Kraftwerk's "Trans-Europe Express," Daft Punk's "Harder, Better, Faster, Stronger," and Peter Frampton's "Show Me the Way" (1h0m27s).
The playlist should also include songs that heavily feature Auto-Tune, such as Lil Wayne's "Lollipop," Travis Scott's "Highest in the Room," and tracks from Charli XCX's "Brat" (1h0m50s).