What Affects Audio Quality The Most?

People have been trying to get the most out of their audio since we started recording sound into grooves in wax in the late 1800s. Today, we have sophisticated audio processing, equipment, and formats for every need. So, without requiring an undergraduate degree as a prerequisite, what actually affects audio quality the most?

Sound quality can be affected by anything along the audio processing chain from the recording room to your ears. Recording, digital encoding, playback equipment, and environment all play a significant role. However, typical weak links involve the audio format and the speakers used for playback.

While the recording process is outside of the typical listener’s purview, understanding how to get the most out of your CDs, files, and streaming services is invaluable.

What Affects Audio Quality The Most?

Audio quality is most affected by the weakest link in the chain. That could be anything from a poor-quality MP3 file to an acoustically detrimental room. Ultimately, a bad recording is a bad recording, and no matter how good your speakers are, your ears will be displeased.

Thankfully, most of your favorite tracks and podcasts are produced by people who know what they’re doing. Thus, the issues are typically found on the user’s end. But there are ways to fix this.

Audio media comes in various forms: CD, DVD, vinyl, audio files, and streaming services like Spotify or Apple Music. Our focus will be on digital systems, but we’ll also see how vinyl compares in terms of audio quality.

Digital formats can be converted and re-encoded to compress audio for streaming or storage. Of course, how this is done may affect the audio quality, depending on the types of formats involved and encoding used, such as bitrate, sample rates and frequencies, and more.

Finally, this data must be converted back into an analog signal and then into sound. This process requires a digital/analog converter, an amplifier, and speakers. The configuration and quality of the hardware will make a significant difference on top of everything else.

Thankfully, improving your listening experience isn’t too hard, nor does it require intricate setups or deep pockets, so let’s get started.

How Is Audio Quality Affected, And Why?

We can evidently tell better audio quality from worse, but what does "quality" actually mean? There are a few different ways to understand how audio is affected:

  • Fidelity: how closely does the reproduced sound match the original?
  • Intelligibility: how well can essential information, like speech, be understood?
  • Perceptual: how much do listeners like the sound they hear?

We will primarily use fidelity to compare different approaches, as it’s the least subjective measure and doesn’t require a lot of testing and listener feedback to judge. Of course, most people would love to hear their favorite bands as if they were in the studio, which fidelity reflects. As soon as we distort the sound and lose quality, fidelity is lowered, so it’s a decent proxy for how we feel about sound quality.

Note that retaining fidelity tends to help intelligibility. Still, sometimes this isn’t true, with noise reduction techniques decreasing the accuracy of the reproduction but helping the listener hear speech nonetheless. Similarly, adjusting the equalization (such as upping the bass) of a track may make music more enjoyable to some but not others – i.e., changing the perceptual quality while reducing fidelity, for better or worse.

You can apply these forms of post-processing to your music, such as in Audacity. Although for simplicity’s sake, we’ll assume that accurate reproduction of the original recording is the goal unless otherwise clarified. From there, you can modify the results as you see fit with online audio editing tools.

How Digital Audio Affects Sound Quality

How audio is represented digitally can have a massive impact on sound quality. This isn’t always the case: there are straightforward ways of retaining the original sound of a recording in an audio file. However, there’s always a tradeoff between file size and quality.

The classic examples of how digital audio can affect sound quality are lossy formats such as MP3. However, even "lossless" formats make tradeoffs of their own.

Lossy Formats’ Impact On Sound Quality

As their name suggests, lossy formats reduce sound quality. MP3 is a great example, released in the early 1990s, when storage was at a premium and dial-up still had us grabbing a cup of coffee before a news article loaded. Worse still, uncompressed audio takes up far more space than the text and HTML of the early web. MP3 set out to reduce the file size of audio tracks dramatically.

Achieving 80-95% reductions in file size is no small feat: ZIP compression can barely manage 5-15% with most music. MP3 achieves it nonetheless by throwing out information that’s hard to hear and compress. It uses a psycho-acoustic model to predict which frequency components are most important, discarding the least among them until the target bitrate is reached.

This is by no means a fully reversible process. Why? Lossy formats, including AAC, Opus, and MP3, all target certain "bitrates" when encoding the audio stream. This bitrate limits the amount of data an audio codec (audio encoder/decoder software) may use per second, which is usually too little for a perfect reproduction.

Unsurprisingly, at low bit rates such as 64 kbps (versus the 1411 kbps used by CD-quality audio), the codec needs to throw out an enormous amount of information, and lo and behold, it sounds terrible. Distortions occur, compression artifacts can be heard, and the audio eventually starts to sound mushy, dull, or crunchy. Go low enough, and it begins to sound like it's being played through a mediocre landline.

Don’t trash all of your MP3s just yet. Most lossy formats support bit rates of 256 to 320 kbps, and some as high as 500 kbps. You’re still saving a lot of space at these high bitrates over the uncompressed audio on CDs. Yet, the difference is often impossible to discern, even during attentive listening with decent audio equipment. These days, distributing audio as MP3 with a high bit rate is extremely common as it is well-supported and sounds excellent.
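To make these numbers concrete, here's a small sketch (the labels and bitrates are just illustrative examples) of how a target bitrate translates into file size per minute of audio, and how much each rate saves relative to CD-quality PCM:

```python
# Rough file-size estimate for one minute of audio at a given bitrate.
# size = bitrate (bits/s) * 60 s / 8 bits-per-byte

def size_per_minute_mb(bitrate_kbps: float) -> float:
    """Approximate size in megabytes (decimal MB) of 60 seconds of audio."""
    bits = bitrate_kbps * 1000 * 60   # total bits in one minute
    return bits / 8 / 1_000_000       # bits -> bytes -> megabytes

CD_KBPS = 1411.2  # uncompressed CD-quality PCM

for label, kbps in [("CD (uncompressed PCM)", CD_KBPS),
                    ("MP3 @ 320 kbps", 320),
                    ("MP3 @ 128 kbps", 128),
                    ("MP3 @ 64 kbps", 64)]:
    saving = 100 * (1 - kbps / CD_KBPS)
    print(f"{label:22s} {size_per_minute_mb(kbps):6.2f} MB/min "
          f"({saving:4.1f}% smaller than CD)")
```

Even the high-quality 320 kbps setting is roughly 77% smaller than the CD original, while 64-128 kbps lands in the 90-95% range mentioned above.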

Lossless Formats’ Subtle Limits On What’s Possible

While lossless formats are true to their name for many practical purposes, they aren’t perfect. Fundamentally, all digital audio formats are decoded to PCM (pulse code modulation) before they are passed to a typical DAC (digital-to-analog converter), which is a necessary component in audio playback that we’ll get back to.

PCM takes the form of a long list of samples. Each sample is a number that corresponds to the sound’s amplitude. When converting a continuous analog signal to a list of numbers with finite precision, questions that come to mind may include the following:

  • How many samples do we need?
  • How precise are the samples?

Let’s put on our audio engineering caps.

How many numbers do we need to use? We want to cover the frequency range of human hearing: about 20Hz to 20 kHz. We also don’t want to have a higher sampling frequency than necessary: that’ll waste a lot of space.

In comes the Nyquist-Shannon sampling theorem, which tells us we must sample at more than double the highest frequency we want to capture (20 kHz) in order to reconstruct the wave from discrete samples without causing distortions. Accordingly, when saving audio recordings, we filter out all the sounds above 20 kHz and sample at over 40 kHz (plus a bit of wiggle room).
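A quick sketch of why the filtering step matters: a tone above half the sample rate doesn't just disappear — it "folds" down and masquerades as a lower frequency (aliasing). The helper below is an illustrative bit of arithmetic, not a real DSP library call:

```python
# A sampled pure tone above fs/2 (the Nyquist limit) is indistinguishable
# from a lower "alias" frequency. That's why everything above fs/2 must be
# filtered out *before* sampling.

def alias_frequency(f_hz: float, fs_hz: float) -> float:
    """Frequency that a pure tone at f_hz appears as after sampling at fs_hz."""
    folded = f_hz % fs_hz               # fold into one sampling period
    return min(folded, fs_hz - folded)  # reflect into the range [0, fs/2]

fs = 44_100  # CD sample rate
print(alias_frequency(20_000, fs))  # below fs/2: captured faithfully -> 20000.0
print(alias_frequency(25_000, fs))  # above fs/2: aliases down to -> 19100.0
```

A 25 kHz ultrasonic tone, inaudible on its own, would come back as a very audible 19.1 kHz distortion if it weren't filtered out first.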

How precise do the samples need to be? For the kind of PCM we usually care about, linear PCM, this question boils down to how many bits we use per sample. While using a multiple of eight isn't strictly necessary, it's convenient as computers work with bytes, which are eight bits each. Each bit doubles the number of representable values and adds about 6 dB (decibels) of dynamic range.

Eight bits alone is rather stingy, yielding only 256 possible values (2^8) and allowing only roughly 48 dB of dynamic range. Sixteen bits is obviously double the data size, but with 2^16, or 65,536, possible values and about 96 dB of dynamic range, we're in business. While some audiophiles prefer 24 bits, it'll be hard to discern from 16 bits. After all, the extra precision only allows you to be at most 1/65536 closer to the "true sound" as a ratio of the full-scale spectrum.

Why Compact Disc Digital Audio Is Still Great

Compact Disc Digital Audio, or CD-DA, is the standard format for audio CDs. Aside from the various nitty-gritty details and specifications around metadata, it's just PCM. And what do you know: it uses 16-bit samples at 44.1 kHz, comfortably over the required double of 20 kHz.
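The 1411 kbps figure quoted earlier for CD-quality audio falls straight out of these parameters:

```python
# Where CD audio's 1411 kbps bitrate comes from: two channels of
# 16-bit samples taken 44,100 times per second.

sample_rate = 44_100   # samples per second, per channel
bit_depth = 16         # bits per sample
channels = 2           # stereo

bitrate_bps = sample_rate * bit_depth * channels
print(bitrate_bps)          # 1411200 bits per second
print(bitrate_bps / 1000)   # 1411.2 kbps
```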

So even though the industry has come a long way since the debut of CDs, they remain great at what they do. That said, you might consider upgrading to DVD, with its 48 kHz sample rate and far more space, or simply using a lossless format such as WAV, FLAC, or WavPack on your computer. These support a range of PCM types, and a converter will usually pick sensible defaults for you.

WAV stores PCM directly, while FLAC and WavPack compress it by about 50% without compromising on quality. While that ratio is modest compared to lossy formats, audio quality is unaffected, which makes them ideal for production, archiving, and the music libraries of audiophiles and engineers.

Is Vinyl Better Than Digital Audio?

Whether vinyl records are genuinely better than digital audio is subjective. However, digital encoding is unparalleled in terms of audio fidelity. So while both can sound great, what you put onto a CD is pretty much what you get out. Vinyl, on the other hand, imparts a sound of its own, along with minor artifacts such as small pops, clicks, or the sound of the needle scraping through a moment of silence.

Many have grown up with the sound of records and have come to prefer them, and that's not to be ignored; others may find similar enjoyment in them too. Perceptual audio quality is what matters in the end, and listeners sometimes like a bit of EQ here and some added ambiance there.

On the other hand, many audiophiles and engineers want to reproduce the recording as accurately as possible, which has its own appeal (provided the original recording was enjoyable to listen to, of course). So if you don't share the nostalgia for records and are primarily interested in quality, you can safely ignore the hype around vinyl.

The Effect Of Good Audio Equipment

With inadequate audio playback equipment, you're going to get mediocre results. For example, you can have pristine 24-bit/96 kHz lossless audio straight from the mastering room, but a pair of cheap earbuds is going to nullify your efforts. Most people can improve their playback devices without needing to sell off a kidney. However, we'll cover Hi-Fi systems for illustrative purposes first.

How To Get From Data To Sound

Getting from data to sound requires a chain of components, each with crucial roles to play. These may be integrated into each other in various ways. Sometimes they are ordered differently or take different forms, but the general idea is the same.

Here's how this happens in a typical home theater setup:

  1. Source: decode a disk, audio file, or stream to an uncompressed, linear PCM bitstream.
  2. DAC: the bitstream is fed into the DAC, which converts PCM to line-level analog output.
  3. Amp: the line-level signal is amplified to the much higher speaker-level power.
  4. Cable: the speaker-level output of the amp is fed into a speaker.
  5. Crossover: the speaker-level signal is split into frequency ranges, such as low, mid, and high.
  6. Driver: the driver oscillates according to the input signal, producing sound.
  7. Room: the sound propagates around the room and is detected by the listener’s ears.
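The middle of that chain can be sketched as a pipeline of stages, each handing its output to the next. This is a purely illustrative toy model (the function names, voltages, and gain are hypothetical, not a real audio API), but it shows why distortion at any one stage is carried through every stage after it:

```python
# Toy model of DAC -> amp -> driver. All numbers are illustrative.

def dac(pcm_sample: int) -> float:
    """16-bit PCM sample -> line-level voltage (full scale ~= 1 V here)."""
    return pcm_sample / 32768

def amplifier(line_volts: float, gain: float = 20.0) -> float:
    """Line-level voltage -> speaker-level voltage (gain is illustrative)."""
    return line_volts * gain

def driver(speaker_volts: float) -> float:
    """Speaker voltage -> normalized cone excursion (a stand-in for sound).
    Clips outside [-1, 1] if overdriven -- distortion that no later stage
    (there is none) could ever undo."""
    return max(-1.0, min(1.0, speaker_volts / 20.0))

sample = 16_384                      # one decoded PCM sample, half of full scale
sound = driver(amplifier(dac(sample)))
print(sound)                         # 0.5 -- the chain preserved the signal
```

Feeding in an out-of-range value instead would clip at the driver, which is the "garbage in, garbage out" idea in miniature: once a stage distorts the signal, everything downstream faithfully reproduces the distortion.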

These steps vary greatly but generally still follow a similar logic. For example, when you plug headphones into a computer or phone:

  1. Decoding happens as usual, and the data is fed into an on-chip DAC.
  2. The DAC outputs at headphone-level impedance (distinct from line level) to the jack.
  3. The jack connects to the headphones. Amplification doesn’t occur, as the level is correct.
  4. Headphones usually use one driver per ear, so a crossover circuit isn’t needed.
  5. Typically, the signal is passed to the driver, which electromagnetically vibrates a diaphragm.
  6. The sound that is produced is heard by the listener.

How To Improve Your Audio Playback

Improving audio playback from a given source is a matter of getting the equipment to distort the signal as little as possible at each step of the way. A principle commonly applied among audiophiles and audio engineers is "garbage in, garbage out": no component can correct the mistakes of the one before it, so the most valuable upgrade is to the weakest link.

In order to make these upgrades and exercise fine control over the system, separate components are used for each step (independent source, DAC, pre-amp, amp, and speakers, for example). And these components can then be tested and replaced incrementally or as needed. Of course, the listening room matters as well, as the sound is affected by the interference and reflection that occurs as it propagates.

However, maintaining a complex rig of cables and expensive boxes is understandably niche. Maybe you just want to hear some good music without all the fuss. So what’s most important? The speakers/headphones. For most consumers, powered speakers (these contain an amp and can connect directly to your device) or headphones will be the most convenient. Investing specifically in this will usually yield the most improvement.

The DAC on your phone works just fine for most purposes. The built-in amp in a powered speaker is generally adequate. Ultimately, the distortion these devices cause will pale in comparison to the effect of mediocre speakers, provided you have a good-quality source – nothing can fix a 32 kbps MP3 file or a recording made on a decade-old Android microphone.

Conclusion

The quality of sound is the net result of everything that comes before. So while trying to improve fidelity is a tangible goal, there’s still enough wiggle room in the pursuit to keep audiophiles arguing over the details until the end of time. Otherwise, if you’re not too fussed about perfection, stick to lossless or high-bitrate lossy files, invest in good headphones or powered speakers, and enjoy.

