
Introduction to encoding

Introduction

There is a lot of confusion surrounding the terms audio compression [1], audio encoding, and audio decoding. This section gives you an overview of what audio coding (yet another of these terms...) is all about.

The purpose of audio compression

Up to the advent of audio compression, high-quality digital audio data took a lot of hard disk space to store. Let us go through a short example.

You want to sample one minute of your favourite song and store it on your hard disk. Because you want CD quality, you sample at 44.1 kHz, stereo, with 16 bits per sample.

44100 Hz means that you get 44100 values per second coming in from your sound card (or input file). Multiply that by two because you have two channels. Multiply by another factor of two because you have two bytes per value (that is what 16 bits means). The song will take up 44100 samples/s · 2 channels · 2 bytes/sample · 60 s/min ~ 10 MBytes of storage space on your hard disk.
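If you prefer to see that arithmetic spelled out, the small C program below prints the result. It is purely illustrative; the figures are exactly the ones used in the text above.

    /* Storage needed for one minute of uncompressed CD-quality audio. */
    #include <stdio.h>

    int main(void)
    {
        const long samples_per_second = 44100; /* per channel */
        const long channels           = 2;     /* stereo */
        const long bytes_per_sample   = 2;     /* 16 bits per sample */
        const long seconds            = 60;    /* one minute */

        long bytes = samples_per_second * channels * bytes_per_sample * seconds;
        printf("%ld bytes, about %.1f MB\n", bytes, bytes / 1e6); /* 10584000 bytes, ~10.6 MB */
        return 0;
    }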

In order to stream this over the Internet, a connection of at least 1.41 Mbit/s is needed, which was not a common speed at all at the time MP3 was invented. If you wanted to download it instead, over an average 56k modem that actually connects at 44 kbit/s, it would take 1.41 Mbit/s · 1000 kbit/Mbit / 44 kbit/s ~ 32 times as long as the music itself.
This means 32 minutes just to download one minute of music!
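The same back-of-the-envelope calculation, this time for the streaming bit rate and the modem download time (again just restating the numbers used above; the 44 kbit/s modem throughput is the figure assumed in the text):

    /* Bit rate of uncompressed CD audio and how much longer than real time
     * a download over a 44 kbit/s modem connection would take. */
    #include <stdio.h>

    int main(void)
    {
        const double bits_per_second = 44100.0 * 2 * 16;  /* 1411200 bit/s ~ 1.41 Mbit/s */
        const double modem_kbit_s    = 44.0;              /* effective 56k modem speed */

        double ratio = (bits_per_second / 1000.0) / modem_kbit_s;
        printf("stream rate: %.2f Mbit/s\n", bits_per_second / 1e6);
        printf("download takes about %.0f times as long as the music lasts\n", ratio);
        return 0;
    }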

Digital audio coding, which in this context is also called digital audio compression, is the art of minimizing the storage space (or channel bandwidth) required for audio data. Modern perceptual audio coding techniques (like MPEG Layer III) exploit the properties of the human ear (the perception of sound) to achieve a size reduction by a factor of 11 with little or no perceptible loss of quality.

Therefore, such schemes are the key technology for high-quality, low-bitrate applications such as soundtracks for CD-ROM games, solid-state sound memories, Internet audio, digital audio broadcasting systems, and the like.

The two parts of audio compression

Audio compression really consists of two parts. The first part, called encoding, transforms the digital audio data that resides, say, in a WAVE file into a highly compressed form called a bitstream. To play the bitstream on your sound card, you need the second part, called decoding. Decoding takes the bitstream and re-expands it to a WAVE file.

The program that performs the first part is called an audio encoder. LAME is such an encoder. The program that does the second part is called an audio decoder. Nowadays there are lots of players that decode MP3.
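To make the encoder half of this concrete, here is a minimal sketch of how a program might drive the LAME library itself through the lame.h API. It assumes the 16-bit interleaved stereo PCM samples have already been read from the WAVE file and that the caller writes the returned MP3 bytes to disk; treat it as an illustration rather than a complete program, and check the lame.h shipped with your LAME version for the exact signatures and return codes.

    /* Sketch: encode a block of interleaved 16-bit stereo PCM into MP3 frames
     * using the LAME library. Error handling and file I/O are left out. */
    #include <lame/lame.h>   /* may simply be "lame.h", depending on the installation */

    int encode_block(short *pcm, int samples_per_channel,
                     unsigned char *mp3buf, int mp3buf_size)
    {
        lame_global_flags *gf = lame_init();   /* encoder handle with default settings */
        lame_set_in_samplerate(gf, 44100);     /* CD sample rate */
        lame_set_num_channels(gf, 2);          /* stereo input */
        lame_set_brate(gf, 128);               /* target bitrate: 128 kbps */
        if (lame_init_params(gf) < 0)
            return -1;

        /* Returns the number of MP3 bytes written into mp3buf. */
        int n = lame_encode_buffer_interleaved(gf, pcm, samples_per_channel,
                                               mp3buf, mp3buf_size);
        if (n >= 0)
            n += lame_encode_flush(gf, mp3buf + n, mp3buf_size - n); /* final frames */

        lame_close(gf);
        return n;  /* bytes of MP3 bitstream produced, or a negative error code */
    }

Decoding works the other way round: a decoder (the one built into your player, or LAME's own --decode mode) turns the bitstream back into PCM samples.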

Compression ratios, bitrate and quality

One thing has not been explicitly mentioned up to now: what you end up with after encoding and decoding is not the same sound file anymore. All the superfluous information has been squeezed out, so to speak. It is not the same file, but it will sound the same - more or less, depending on how much compression has been performed on it.

Generally speaking, the lower the compression ratio achieved, the better the sound quality will be in the end - and vice versa.
Table 1.1 gives you a rough estimate of the quality you can expect.

Because compression ratio is a somewhat unwieldy measure, experts use the term bitrate when speaking of the strength of the compression. The bitrate denotes the average number of bits that one second of audio data takes up in your compressed bitstream. The unit usually used is kbps, that is kbit/s, where one kbit is 1000 bits (not 1024).
To calculate the number of bytes per second of audio data, simply divide the number of bits per second by eight.
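As a worked example (128 kbps is used here only because it is the common stereo setting from Table 1.1 below):

    /* Converting a bitrate into bytes per second and megabytes per minute. */
    #include <stdio.h>

    int main(void)
    {
        const double kbps = 128.0;                               /* kbit/s; 1 kbit = 1000 bits */
        double bytes_per_second = kbps * 1000.0 / 8.0;           /* 16000 bytes/s */
        double mb_per_minute    = bytes_per_second * 60.0 / 1e6; /* ~0.96 MB per minute */

        printf("%.0f bytes/s, about %.2f MB per minute\n", bytes_per_second, mb_per_minute);
        return 0;
    }

Compare that with the roughly 10 MB per minute of uncompressed audio calculated earlier: this is where the factor-of-11 reduction mentioned above comes from.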

Table 1.1: bitrate versus sound quality

  Bitrate                                         Bandwidth   Quality comparable to
  16 kbps (mono)                                  5.5 kHz     better than shortwave radio / telephone
  32 kbps (mono)                                  8.5 kHz     near AM (medium wave) radio
  64 kbps mono, 128 kbps stereo                   16 kHz      FM radio
  -V 3 to -V 0 (160-200 kbps, variable bitrate)   18-20 kHz   perceptual transparency versus CD [2]
  [1] Audio compression (also called audio coding) means reducing the size (in bytes) that the original source requires to be stored. This is not the same as compressors in DSP (or audio effects); those reduce the dynamic range of the audio so that there is less difference in perceived loudness between its strong and subtle parts.
  [2] Lossy encoding (as opposed to lossless encoding) cannot guarantee transparency all of the time. This is the range generally accepted as the sweet spot.