
Sampling and Data Rates

In contrast to an analogue signal, a digital signal, made up of bits (each either a 0 or a 1), can only represent a finite number of amplitudes at a finite number of moments in time.

As a result, a digital waveform is made up of steps rather than continuous level changes.

Sampling

A signal is sampled at regular intervals at a fixed rate. This is called the sampling rate, also referred to as the sampling frequency, and is expressed in hertz (Hz), i.e. samples per second.
A picture is sampled in several dimensions: in time (frame rate) as well as in space (horizontal and vertical resolution).

Quantisation is the process of selecting the digital amplitude value closest to the original analogue amplitude of each sample. In digital image processing it determines the number of displayable digital amplitude levels for each pixel. Quantisation is usually referred to as "bit depth" or "resolution", although the latter term is used differently in other contexts.

An 8-bit system can have 256 distinct levels (0 – 255)
A 10-bit system can have 1024 distinct levels (0 – 1023)
A 12-bit system can have 4096 distinct levels (0 – 4095)

Audio signals require more quantisation levels (a greater bit depth) than video signals because our sense of hearing is much more acute than our sight.

A 16-bit system can have 65,536 distinct levels (0 – 65,535)
A 24-bit system can have 16,777,216 distinct levels (0 – 16,777,215)
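
The relationship between bit depth and the number of levels, and the act of quantising a sample, can be shown in a few lines of Python. This is only an illustrative sketch: the function names and the normalised 0.0 to 1.0 input range are assumptions, not part of any particular standard.

def levels(bit_depth):
    # Number of distinct digital amplitude levels for a given bit depth
    return 2 ** bit_depth

def quantise(value, bit_depth):
    # Map a normalised analogue value (0.0 to 1.0) to the nearest digital level
    max_level = levels(bit_depth) - 1
    return round(value * max_level)

for bits in (8, 10, 12, 16, 24):
    print(f"{bits}-bit: {levels(bits):,} levels (0 to {levels(bits) - 1:,})")

print(quantise(0.5, 10))  # 512, the 10-bit level closest to half of full scale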

Insufficient quantisation levels in video show up as visible bands where luminance should change smoothly from high to low, e.g. the halo around a light in fog. In audio it is most easily noticed as abrupt volume steps in slowly decaying sounds, e.g. a held piano note.
Multiplying the sampling rate by the bit depth of quantisation gives the raw data rate (per channel).
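
As a simple illustration, the sketch below applies this to a common studio audio format; the 48 kHz, 24-bit, stereo figures are assumptions chosen for the example rather than values taken from this article.

sample_rate_hz = 48_000   # 48 kHz sampling rate
bit_depth = 24            # 24-bit quantisation
channels = 2              # stereo (assumed for the example)

raw_data_rate = sample_rate_hz * bit_depth * channels
print(f"{raw_data_rate:,} bit/s = {raw_data_rate / 1_000_000:.3f} Mbit/s")  # 2,304,000 bit/s = 2.304 Mbit/s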

Data rate is the measure of how much information (data) is required to play back video or audio at a given resolution, codec and frame rate. The data rate of an HD video signal is therefore much greater than that of an SD video signal: SD is 720 pixels by 576 lines at 8-bit resolution, whereas HD is either 1920 pixels by 1080 lines or 1280 pixels by 720 lines, both at 10-bit resolution. Compression is used to reduce data rates; a good example of this is the difference in data rate between an uncompressed WAV file and a highly compressed MP3 stream.
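
To put rough numbers on the WAV versus MP3 comparison, the following sketch assumes CD-quality WAV (44.1 kHz, 16-bit, stereo) and a 128 kbit/s MP3 stream; both figures are illustrative assumptions, not values from this article.

wav_bit_rate = 44_100 * 16 * 2   # uncompressed CD-quality WAV: 1,411,200 bit/s
mp3_bit_rate = 128_000           # a typical MP3 stream bitrate (assumed)

print(f"WAV: {wav_bit_rate / 1_000:.0f} kbit/s")                        # 1411 kbit/s
print(f"MP3: {mp3_bit_rate / 1_000:.0f} kbit/s")                        # 128 kbit/s
print(f"Compression ratio: about {wav_bit_rate / mp3_bit_rate:.0f}:1")  # about 11:1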

A practical example

For a 1280 x 720, 10-bit HD signal with 4:2:2 chroma subsampling (two chroma components, each sampled at half the luminance resolution horizontally):
Data rate = {(quantisation × luminance pixels per line × number of lines containing luminance) + 2 × (quantisation × chroma pixels per line × number of chroma lines)} × frame rate

To display that with actual figures:
data rate = {(10 bits × 1280 × 720) + 2 × (10 bits × 640 × 720)} × 50 frames per second = 921,600,000 bit/s = 0.9216 Gbit/s
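
The same formula can be written as a short Python sketch. The function name is an assumption, and the SD figures at the end assume 4:2:2 sampling at 25 frames per second purely for comparison; only the 720p50 line reproduces the worked example above.

def raw_video_data_rate(bit_depth, luma_pixels_per_line, luma_lines,
                        chroma_pixels_per_line, chroma_lines, frame_rate):
    # One luminance plane plus two chroma planes, as in the formula above
    luma_bits = bit_depth * luma_pixels_per_line * luma_lines
    chroma_bits = 2 * bit_depth * chroma_pixels_per_line * chroma_lines
    return (luma_bits + chroma_bits) * frame_rate  # bit/s

# The worked example: 1280 x 720, 10-bit, 4:2:2, 50 frames per second
print(raw_video_data_rate(10, 1280, 720, 640, 720, 50) / 1e9)   # 0.9216 Gbit/s

# For comparison, SD at 720 x 576, 8-bit, 4:2:2, 25 frames per second (assumed)
print(raw_video_data_rate(8, 720, 576, 360, 576, 25) / 1e6)     # 165.888 Mbit/s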

It is worth noting that the analogue-to-digital conversion (ADC) of the sensor output in modern professional HD cameras is often done with a resolution of more than 10 bits, such as 12 or 14 bits. This added dynamic range allows more freedom in post-production processes such as colour grading while preserving high quality throughout.
