references chapter 8 for brevity
jasonjabbour committed Aug 26, 2024
1 parent 72fbcf7 commit 0db7a41
Showing 1 changed file with 3 additions and 13 deletions.
16 changes: 3 additions & 13 deletions contents/optimizations/optimizations.qmd
@@ -378,6 +378,8 @@ Precision, delineating the exactness with which a number is represented, bifurca

**Integer:** Integer representations typically use 8, 4, or 2 bits. They are often used during the inference phase of neural networks, where the weights and activations of the model are quantized to these lower precisions. Integer representations are deterministic and offer significant speed and memory advantages over floating-point representations. For many inference tasks, especially on edge devices, the slight loss in accuracy due to quantization is often acceptable given the efficiency gains. An extreme form of integer numerics appears in binary neural networks (BNNs), where weights and activations are constrained to one of two values: +1 or -1.
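
To make this concrete, the following is a minimal sketch of symmetric per-tensor int8 quantization using NumPy. The helper names (`quantize_int8`, `dequantize`) and the 127-level symmetric range are illustrative choices, not a reference implementation from any particular framework:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, scale))))
```

The rounding step is where the small, usually acceptable, accuracy loss mentioned above comes from: each weight can move by at most half a quantization step.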

You may refer back to @sec-numerical-formats for a table comparing the trade-offs of the different numeric types.

#### Numeric Encoding and Storage

Numeric encoding, the art of transmuting numbers into a computer-amenable format, and their subsequent storage are critical for computational efficiency. For instance, floating-point numbers might be encoded using the IEEE 754 standard, which apportions bits among sign, exponent, and fraction components, thereby enabling the representation of a vast range of values within a single format. A few new IEEE floating-point formats have been defined specifically for AI workloads:
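
Before turning to those AI-specific formats, here is a minimal sketch of the standard binary32 (FP32) layout using only the Python standard library; the helper name `fp32_fields` is ours and purely illustrative:

```python
import struct

def fp32_fields(x):
    """Split a float (stored as IEEE 754 binary32) into sign, exponent, and fraction bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31                                     # 1 sign bit
    exponent = (bits >> 23) & 0xFF                        # 8 exponent bits, stored with a bias of 127
    fraction = bits & 0x7FFFFF                            # 23 fraction (mantissa) bits
    return sign, exponent, fraction

sign, exponent, fraction = fp32_fields(-6.25)
print(sign, exponent - 127, fraction)  # -6.25 = -1.5625 x 2^2, so sign=1 and unbiased exponent=2
```

The AI-oriented formats below keep this same sign/exponent/fraction structure but reapportion the bits to favor range or density as needed.
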
@@ -393,19 +395,7 @@ The key goals of these new formats are to provide lower precision alternatives t

### Efficiency Benefits

Numerical efficiency matters for machine learning workloads for a number of reasons:

**Computational Efficiency:** High-precision computations (like FP32 or FP64) can be slow and resource-intensive. By reducing numeric precision, one can achieve faster computation times, especially on specialized hardware that supports lower precision.

**Memory Efficiency:** Storage requirements decrease with reduced numeric precision. For instance, FP16 requires half the memory of FP32 (see the sketch after this list). This is crucial when deploying models to edge devices with limited memory or when working with very large models.

**Power Efficiency:** Lower precision computations often consume less power, which is especially important for battery-operated devices.

**Noise Introduction:** Interestingly, the noise introduced by using lower precision can sometimes act as a regularizer, helping to prevent overfitting in some models.

**Hardware Acceleration:** Many modern AI accelerators and GPUs are optimized for lower precision operations, leveraging the efficiency benefits of such numerics.
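
As a quick check of the memory point above, here is a minimal NumPy sketch; the array shape is arbitrary and chosen only for illustration:

```python
import numpy as np

w32 = np.ones((1000, 1000), dtype=np.float32)  # a 1M-element tensor in FP32
w16 = w32.astype(np.float16)                   # the same tensor cast to FP16

print(w32.nbytes)  # 4000000 bytes
print(w16.nbytes)  # 2000000 bytes -- half the storage for the same number of values
```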

Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy.
As you learned in @sec-efficiency-benefits, numerical efficiency matters for machine learning workloads for a number of reasons. Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy.

### Numeric Representation Nuances

