DATA



Computer Performance and Metrics

Whether a particular computing device is fit for a given task in a media production workflow depends on multiple factors.

They boil down to two classes: raw speed and usability. Speed or performance is relatively easy to quantify if you are using the right tools and metrics (which we'll explain below). Usability is trickier since it's a mix of input options (touch, keyboard, mouse, pen tablet) and output options (internal or external displays, resolution, color depth, dynamic range; similar for sound), mobility (weight and battery life), robustness and resilience during operation (handling storage and memory failures, repairability, rugged casing), and connectivity (networking and external storage connections).

Based on raw performance, a smartphone or tablet can already handle all typical office, communication and video review tasks today. For more demanding tasks, mobile devices are limited by constrained performance and connectivity options, their display size, and the touch-only user interface.

Media producers typically operate at the high end of computing performance. Increasing image resolutions and file sizes in 4K, 8K, HDR and VR post-production workflows require a more careful look at processing and I/O capabilities. High-end desktop computers and laptops are already a good fit. For very demanding storage and processing tasks, specialized servers are a better option.

In a particular task a computing system may be CPU bound (limited by the speed of numerical calculations), I/O bound (limited by reading or writing data from/to storage or networks) or memory bound (limited by the amount of active data that must be held in main memory, as in video editing). Performance bottlenecks in media workflows are often due to the limited I/O bandwidth of mass storage systems.
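
A quick way to tell which resource limits a task is to time a purely computational job against a storage-heavy one. The following minimal sketch (file name and sizes are arbitrary, chosen for illustration) contrasts a CPU-bound loop with an I/O-bound file write and read:

    import time

    def cpu_task(n=10_000_000):
        # purely numerical work: limited by the CPU, no I/O involved
        return sum(i * i for i in range(n))

    def io_task(path="sample.bin", size=200 * 1024 * 1024):
        # writes and re-reads 200 MiB: limited by the storage system
        with open(path, "wb") as f:
            f.write(b"\0" * size)
        with open(path, "rb") as f:
            while f.read(8 * 1024 * 1024):
                pass

    for task in (cpu_task, io_task):
        start = time.perf_counter()
        task()
        print(f"{task.__name__}: {time.perf_counter() - start:.2f} s")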

Processing metrics

Processing speed is actually very hard to quantify because many factors influence how fast a computer can execute a given workload. Execution times depend on the quality of the software in use and in many cases even on the input data (think of predictive video compression on high-motion content).

Standardised benchmarks are the only meaningful way to compare different CPU architectures and models. Benchmarks run a pre-defined workload for different software categories and measure the execution time of each task. Keep in mind that the performance a benchmark reports is influenced by multiple factors such as CPU architecture and instruction set, cache sizes, and memory speed.
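
For illustration, here is a toy micro-benchmark in Python. The workload string is a placeholder; real benchmark suites use representative application tasks instead:

    import timeit

    # placeholder workload standing in for a benchmark task
    workload = "sorted(range(100_000), reverse=True)"

    # repeat the measurement and keep the best time, which filters out
    # scheduling noise caused by other processes
    times = timeit.repeat(workload, repeat=5, number=10)
    print(f"best of 5 runs: {min(times):.4f} s for 10 executions")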

Clock speed: a processor's clock frequency, expressed in gigahertz (GHz), determines how fast the processor can execute instructions and load and store data. Most instructions require more than one clock cycle, and modern CPUs execute multiple instructions in parallel or delay instructions while waiting for data. Hence the clock frequency alone is not a good indicator of how fast a processor performs on a given workload.
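
A back-of-the-envelope calculation with hypothetical numbers shows why:

    # Rough instruction throughput of a hypothetical 3.5 GHz core that
    # retires on average 2 instructions per cycle (IPC)
    clock_hz = 3.5e9
    ipc = 2.0                                      # workload-dependent
    print(f"{clock_hz * ipc:.2e} instructions/s")  # 7.00e+09
    # stalled down to IPC 0.5 by cache misses, the same core delivers
    # only a quarter of that -- at the identical clock frequency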

Cores: the number of parallel processing units inside a processor, so-called cores, defines how many tasks can be executed in parallel. Multiple processor cores allow the operating system to run multiple programs, or multiple threads within a single application, truly at the same time. Modern video editing and rendering software can automatically make use of multiple cores, but having more CPU cores than parallel activities provides no benefit.
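
A minimal sketch of the resulting speedup, using Python's standard multiprocessing module and a stand-in per-frame workload:

    import time
    from multiprocessing import Pool

    def render_frame(i):
        # stand-in for per-frame work such as filtering or encoding
        return sum(j * j for j in range(200_000))

    if __name__ == "__main__":
        frames = range(32)

        start = time.perf_counter()
        for frame in frames:            # one core, frame after frame
            render_frame(frame)
        serial = time.perf_counter() - start

        start = time.perf_counter()
        with Pool() as pool:            # all available cores in parallel
            pool.map(render_frame, frames)
        parallel = time.perf_counter() - start

        print(f"speedup: {serial / parallel:.1f}x")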

Cache size: caches help hide the slow access times of main memory and thus improve perceived execution speed. Cache capacity plays an important role when you have a large number of parallel activities (as on web servers) or when you process data repeatedly (as in video compression).
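
The effect of caches can be made visible even from a high-level language. The sketch below performs the same number of memory accesses in sequential and in random order; the random order defeats the caches and typically runs noticeably slower (interpreter overhead mutes, but does not hide, the difference):

    import random
    import time

    data = bytearray(256 * 1024 * 1024)        # far larger than any CPU cache
    indices = list(range(0, len(data), 64))    # one access per 64 B cache line

    shuffled = indices[:]
    random.shuffle(shuffled)                   # same accesses, random order

    for order, name in ((indices, "sequential"), (shuffled, "random")):
        start = time.perf_counter()
        total = 0
        for i in order:
            total += data[i]
        print(f"{name}: {time.perf_counter() - start:.2f} s")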

Energy: the typical energy consumption of CPUs directly translates into utility costs, cooling requirements and battery life on mobile devices. Modern CPUs dynamically adjust their clock speed to decrease consumption during periods of low activity and boost it when top performance is required.

Storage metrics

Capacity: storage capacity is measured in bytes (1 B = 8 bit of information, note the uppercase B). According to the International System of Units (SI) the terms kilobyte (1 kB = 10^3 B), megabyte (1 MB = 10^6 B), gigabyte (1 GB = 10^9 B) and so on denote exponents of base 10. That is, 1 MB = 1 000 000 bytes (= 1000^2 B = 10^6 B). The equivalent in base-2 notation uses the binary prefixes kibibyte (1 KiB = 2^10 B), mebibyte (1 MiB = 2^20 B), gibibyte (1 GiB = 2^30 B) and so on.
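
The distinction matters in practice: a drive advertised in SI units looks smaller when the operating system reports binary units, as this short calculation shows:

    # One "4 TB" drive expressed in SI and in binary units
    capacity_bytes = 4_000_000_000_000

    print(capacity_bytes / 10**12, "TB")   # 4.0 TB (SI, base 10)
    print(capacity_bytes / 2**40, "TiB")   # ~3.64 TiB (binary, base 2)
    # the ~9% gap is why the OS reports less than the box promises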

Latency: is the time interval between issuing a random read request and receiving the first byte of data. Latency is typically measured in milliseconds [ms] or seconds [s]. Latency has a hugely negative performance impact when the storage technology requires a seek operation for random access, as hard disks and tapes do.
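
A back-of-the-envelope calculation with assumed, typical access times shows how latency dominates small random reads:

    # Effective throughput of 4 KiB random reads, assuming ~10 ms average
    # access time for a hard disk and ~0.1 ms for an SSD
    block = 4 * 1024                            # bytes per random read

    for name, access_s in (("hard disk", 10e-3), ("SSD", 0.1e-3)):
        mb_per_s = (1 / access_s) * block / 1e6
        print(f"{name}: {mb_per_s:.1f} MB/s")   # ~0.4 vs ~41 MB/s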

Reliability: the reliability of drives and storage media is expressed through the bit-error-rate (BER) and the expected drive or media lifetime. BER is a unitless ratio that describes how many bits can be read from a storage medium before the first non-recoverable single-bit error occurs. Typical BER values range from 10^-14 (12.5 TB) for consumer hard drives and 10^-15 (125 TB) for enterprise hard drives up to 10^-19 (1.25 EB) for LTO tapes. The numbers in brackets denote the amount of data you can read before a bit error becomes likely. Expected drive lifetime is an empirical value that depends on production quality, usage patterns and wear-out. Hard drive vendors often use a metric called mean-time-between-failures (MTBF), which is too ambiguous and unreliable.
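
The bracketed figures follow directly from the BER; the calculation below reproduces them:

    # Expected amount of data readable before one unrecoverable bit error:
    # mean bytes = 1 / (BER * 8 bits per byte)
    for media, ber in (("consumer HDD", 1e-14),
                       ("enterprise HDD", 1e-15),
                       ("LTO tape", 1e-19)):
        mean_tb = 1 / (ber * 8) / 1e12
        print(f"{media}: {mean_tb:,.1f} TB before a likely bit error")
    # 12.5 TB, 125 TB and 1,250,000 TB (= 1.25 EB) respectively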

Durability: is measured by two metrics, the maximum number of write cycles and the persistence of data on the medium. Persistence determines how long data written to a storage medium remains readable without error after power is cut off from the drive.

Costs: are usually expressed as the ratio of purchase price to capacity (e.g. $/GB). This works for storage like hard drives where drive and medium are combined. For tape or optical disc storage the amortized costs of the drive need to be added to the media costs. What's also often neglected are the operational expenses, which amount to at least ~79 kWh/year of electricity for a 9 W hard drive running around the clock. Pricing for cloud storage works differently. It's more fine-grained since you pay for used gigabytes per day in monthly billing cycles, but it becomes more expensive than owning hard drives at large quantities and over long time-frames.
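
The energy figure above is simple arithmetic; the electricity price in this sketch is an assumption for illustration:

    # Yearly energy use and cost of one always-on 9 W hard drive
    power_w = 9
    kwh_per_year = power_w * 24 * 365 / 1000
    price_per_kwh = 0.30                      # assumed utility rate in $/kWh

    print(f"{kwh_per_year:.0f} kWh/year")     # ~79 kWh/year
    print(f"~${kwh_per_year * price_per_kwh:.2f}/year per drive")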

I/O and networking metrics

Transfer rate: transfer rates of computer networks and other I/O channels, sometimes called bit rates, are measured in multiples of the unit bit per second (note the lowercase b for bit in all units). Like storage units, transfer units follow the SI system, where the terms kilobit/s (1 kbit/s = 1 kb/s = 10^3 b/s), megabit/s (1 Mbit/s = 1 Mb/s = 10^6 b/s) and gigabit/s (1 Gbit/s = 1 Gb/s = 10^9 b/s) refer to base 10. Sometimes (typically in marketing documents for networking gear or video codecs) people neglect the time unit and express transfer rates as Mbit or Gbit. Transfer rates always define the theoretical maximum of a technology.
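
Dividing by 8 converts bit rates into the byte-based units used for storage, which yields a quick lower bound on transfer times:

    # Minimum time to move a 50 GB file over a 1 Gbit/s link
    link_bps = 1e9                  # 1 Gbit/s = 10^9 bit/s (SI)
    file_bytes = 50e9               # 50 GB

    seconds = file_bytes * 8 / link_bps
    print(f"{seconds:.0f} s = {seconds / 60:.1f} minutes at best")  # ~6.7 min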

Throughput: is the actual amount of application data transferred per unit of time. Throughput uses the same units as transfer rate, but it excludes the overheads introduced by protocol headers and extra signalling data. The throughput metric also accounts for error recovery overheads and flow-control related pauses in a traffic stream. Only under ideal conditions does throughput approach the theoretical transfer rate.
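
Fixed per-packet overheads alone cap throughput below the transfer rate, as this best-case estimate for TCP over Gigabit Ethernet illustrates:

    # Best-case TCP throughput on 1 Gbit/s Ethernet with 1500-byte packets
    tcp_payload = 1460        # 1500 bytes minus 20 B IP and 20 B TCP headers
    wire_frame = 1538         # frame + preamble + inter-frame gap on the wire

    efficiency = tcp_payload / wire_frame
    print(f"{efficiency:.1%} of the raw transfer rate")        # ~94.9%
    print(f"{1e9 * efficiency / 8 / 1e6:.0f} MB/s of application data")
    # ~119 MB/s instead of the nominal 125 MB/s -- before any packet loss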

Latency: network latency is the time interval between sending a request and receiving the first byte of data in a reply from a remote system. Network latency is measured in milliseconds [ms]. Unlike storage latency, network latency depends on the physical distance to the destination because signals cannot travel faster than the speed of light. It is further increased by the forwarding delays of all intermediate hops on the way to the destination computer.
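
Because of this physical limit, a lower bound on the round-trip time can be estimated from distance alone (the distance below is an illustrative assumption):

    # Minimum round-trip time dictated by geography, assuming signals move
    # at about 2/3 of light speed in optical fiber (~200,000 km/s)
    distance_km = 6000                  # roughly Europe <-> US east coast
    fiber_km_per_s = 200_000

    rtt_ms = 2 * distance_km / fiber_km_per_s * 1000
    print(f"minimum RTT: {rtt_ms:.0f} ms")  # ~60 ms before forwarding delays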
