The theory of high rate lossy source coding is well developed, both with respect to the practice of quantization and with respect to the fundamental rate distortion limits. But many modern compression systems for natural signals such as audio and images operate at lower rates, which these theories do not cover. Often such systems are based on a signal transform followed by a nonlinear processing step that reduces the number of coefficients that are actually quantized. Both this nonlinearity and the non-Gaussianity of the signals involved rule out using the theory of (high rate) linear transform coding to analyze such compression algorithms. The lossy source coding theorem could be used, but it is not the right tool ``to see what happens''. Another way to gain a better understanding of these systems is to view them as a form of nonlinear approximation of individual signals. While this approach partly avoids the need for statistical signal models, which are hard to come by, it still relies on high rate results for the coefficient quantization.

In this thesis we take a more information theoretic approach, whose main result is a new low-rate upper bound on the distortion rate function. In order to obtain that result, the problem had to be simplified substantially. To begin with, only memoryless (i.i.d.)\ sources with a mean square error distortion measure are considered. Instead of bounding the distortion rate function $D(R)$ directly, we derive upper bounds on the optimum performance of a certain class of quantizers, which by definition also upper bound $D(R)$. Inspired by the observation that many of those compression systems quantize only very \emph{few} coefficients at low rates, we call this the class of \emph{oligo}quantizers. In particular, if the coefficients have a peaked unimodal probability density, a simple magnitude thresholding operation is very effective at deciding which coefficients should be quantized. It turns out that peaked unimodal densities are also the most interesting ones, not only because they appear in many transform coding systems, but also because their $D(R)$ behavior is very different from that of a Gaussian. The emphasis of this work will therefore be on oligoquantization based on scalar thresholding. In fact, we will show that the principle of oligoquantization does not generalize well to higher dimensions.
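The thresholding-based oligoquantization just described can be sketched in a few lines: coefficients below a magnitude threshold are set to zero, and only the surviving few are scalar-quantized. The function name, threshold, and step size below are illustrative choices, not quantities defined in the thesis.

```python
import numpy as np

def oligoquantize(x, threshold, step):
    """Illustrative sketch: zero out coefficients below the magnitude
    threshold, then apply a uniform scalar quantizer to the rest."""
    significant = np.abs(x) > threshold
    y = np.zeros_like(x)
    # uniform mid-tread scalar quantizer on the significant coefficients
    y[significant] = step * np.round(x[significant] / step)
    return y, int(significant.sum())

# example: a peaked (Laplacian) source, where most samples fall
# below the threshold and hence only few coefficients are quantized
rng = np.random.default_rng(0)
x = rng.laplace(scale=1.0, size=1000)
y, n_kept = oligoquantize(x, threshold=2.0, step=0.5)
```

For a unit-scale Laplacian, $P(|X| > 2) = e^{-2} \approx 0.14$, so roughly one coefficient in seven survives the threshold, which is the regime in which oligoquantization is intended to operate.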

On a more applied note, we analyze several models for sparse transform coefficients with the aid of the low rate bound. For the special case of sparse coefficient vectors and a Hamming distortion measure, analytic expressions for $R(D)$ are obtained. A high rate bound that complements its low rate counterpart is also derived. Finally, we show how these bounds can be computed directly from a set of source samples, without first estimating the underlying probability density. These empirical bounds can be a useful complement to the rate distortion function computed with Blahut's algorithm, since the latter is hard to use at very low or very high rates.
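As a point of reference for the comparison with Blahut's algorithm, a minimal Blahut--Arimoto iteration for a discrete source can be sketched as follows. All names and parameters are illustrative; the example uses a binary source with Hamming distortion, for which the result can be checked against the closed form $R(D) = 1 - h(D)$.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Illustrative Blahut-Arimoto sketch: computes one point (R, D)
    on the rate distortion curve of a discrete source p_x with
    distortion matrix d, for a given slope parameter beta."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])  # initial reproduction marginal
    A = np.exp(-beta * d)
    for _ in range(n_iter):
        c = A @ q                    # per-letter normalizers of q(y|x)
        q = q * (A.T @ (p_x / c))    # update reproduction marginal q(y)
    c = A @ q
    q_xy = (A * q[None, :]) / c[:, None]   # conditional q(y|x)
    D = np.sum(p_x[:, None] * q_xy * d)
    R = np.sum(p_x[:, None] * q_xy * np.log2(q_xy / q[None, :] + 1e-300))
    return R, D

# binary symmetric source with Hamming distortion
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
R, D = blahut_arimoto(p_x, d, beta=2.0)
```

Each choice of `beta` yields one $(R, D)$ pair; sweeping it traces out the curve. The difficulty alluded to above is that the extreme low and high rate ends of the curve require very small or very large slope parameters, where the iteration converges poorly.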