In the following, *x*_{i,j} = *x*_{m} stands for the color value of the pixel
currently in position (*i*,*j*) = *m* in the original
image, and *x̃*_{i,j} = *x̃*_{m} is the color value of the ``predicted''
(reconstructed) pixel of the reconstructed image. As seen in [11]
and according to Fig. 2, the configuration of the 2-D
predictor is as follows:

*x̂*_{m} = 0.75 *x̃*_{i−1,j} + 0.75 *x̃*_{i,j−1} − 0.5 *x̃*_{i−1,j−1}    (2)

where *x*_{m} = *x*_{i,j}, and *x̃*_{i−1,j}, *x̃*_{i,j−1}, and
*x̃*_{i−1,j−1} are the reconstructed values of the three causal neighbors of *x*_{m}.

In the above, *e*_{m} stands for the error value between the original pixel and its
prediction:

*e*_{m} = *x*_{m} − *x̂*_{m}    (3)

Finally, *ẽ*_{m} is the quantized value of *e*_{m}, as described in
[11], from which Table 1 is drawn.
Then:

**Algorithm** `PREDICTION`

**Input:** *x*_{i,j} = *x*_{m}, *x̃*_{i−1,j}, *x̃*_{i,j−1}, and *x̃*_{i−1,j−1}
(see Fig. 2).

**Output:** *ẽ*_{m}.

**Method:** We first compute *x̂*_{m}, then *e*_{m} and
*ẽ*_{m}. Eventually, we update the value of *x̃*_{m}.

begin
    Compute *x̂*_{m} according to Eq. 2
    Clip *x̂*_{m} to the range [0,255]
    Compute *e*_{m}
    Compute *ẽ*_{m}
    Update *x̃*_{m} by computing *x̃*_{m} := *x̂*_{m} + *ẽ*_{m}
    Clip *x̃*_{m} to the range [0,255]
end
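The steps of `PREDICTION` can be sketched in Python. The predictor coefficients (0.75, 0.75, −0.5) stand in for Eq. 2, and the quantizer bins follow Table 1 with half-open boundaries — both are assumptions where the extracted text left the details implicit:

```python
def clip(v, lo=0, hi=255):
    """Clip a value to the displayable range [lo, hi]."""
    return max(lo, min(hi, v))

# Quantizer bins assumed from Table 1: (lower bound, upper bound, reconstruction level).
QUANT_LEVELS = [(-255, -16, -20), (-16, -8, -11), (-8, -4, -6), (-4, 0, -2),
                (0, 4, 2), (4, 8, 6), (8, 16, 11), (16, 255, 20)]

def quantize(e):
    """Return the quantized error for e (half-open bins assumed)."""
    for lo, hi, level in QUANT_LEVELS:
        if lo <= e < hi:
            return level
    # out-of-range errors saturate to the extreme levels
    return QUANT_LEVELS[-1][2] if e > 0 else QUANT_LEVELS[0][2]

def prediction(x_m, r_left, r_up, r_up_left):
    """One PREDICTION step; r_* are the reconstructed causal neighbours.
    Returns (quantized error, updated reconstruction)."""
    x_hat = clip(0.75 * r_left + 0.75 * r_up - 0.5 * r_up_left)  # Eq. 2, clipped
    e_m = x_m - x_hat                                            # Eq. 3
    e_q = quantize(e_m)
    x_rec = clip(x_hat + e_q)   # update the reconstruction x-tilde
    return e_q, x_rec
```

Note that even a perfectly predicted pixel picks up a small quantization offset, since no bin of the table reconstructs to exactly zero.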

| *i* | Interval | Reconstruction level | Probability | Huffman code |
|-----|----------|----------------------|-------------|--------------|
| 0 | (−255, −16) | −20 | 0.025 | 111111 |
| 1 | (−16, −8) | −11 | 0.047 | 11110 |
| 2 | (−8, −4) | −6 | 0.145 | 110 |
| 3 | (−4, 0) | −2 | 0.278 | 00 |
| 4 | (0, 4) | 2 | 0.283 | 10 |
| 5 | (4, 8) | 6 | 0.151 | 01 |
| 6 | (8, 16) | 11 | 0.049 | 1110 |
| 7 | (16, 255) | 20 | 0.022 | 111110 |
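Table 1 can be written directly as a lookup pairing each quantizer bin with its reconstruction level and Huffman codeword. Treating the bin boundaries as half-open is an assumption here, since the table does not say to which bin a boundary value such as 0 belongs:

```python
# Rows of Table 1: (interval low, interval high, reconstruction level, Huffman code, probability)
TABLE_1 = [
    (-255, -16, -20, "111111", 0.025),
    (-16,  -8,  -11, "11110",  0.047),
    (-8,   -4,  -6,  "110",    0.145),
    (-4,    0,  -2,  "00",     0.278),
    (0,     4,   2,  "10",     0.283),
    (4,     8,   6,  "01",     0.151),
    (8,    16,  11,  "1110",   0.049),
    (16,  255,  20,  "111110", 0.022),
]

def encode_error(e):
    """Map a prediction error to its (reconstruction level, Huffman codeword)."""
    for lo, hi, level, code, _ in TABLE_1:
        if lo <= e < hi:
            return level, code
    # saturate out-of-range errors to the extreme bins
    row = TABLE_1[-1] if e > 0 else TABLE_1[0]
    return row[2], row[3]

# Expected code length under the table's probabilities:
avg_bits = sum(p * len(code) for _, _, _, code, p in TABLE_1)
print(round(avg_bits, 2))  # about 2.57 bits per pixel instead of 8
```

The code is prefix-free, with the shortest words assigned to the most probable bins around zero, which is what makes the expected length (about 2.57 bits per pixel) so much smaller than the raw 8 bits.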

In fact, we add an offset of 128 to *ẽ*_{m}, making all of the error values positive (for an 8-bit original) so that they can be printed on an output device. The error image for a perfectly reconstructed image is thus a uniform gray field with a code value of 128.
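Assuming an 8-bit output device, this display mapping is a one-liner:

```python
def error_to_display(e_q):
    """Shift a quantized error by +128 and clip, so it renders on an 8-bit device."""
    return max(0, min(255, e_q + 128))

# A zero error renders as mid-gray:
print(error_to_display(0))  # 128
```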

The DPCM algorithm is summarized in Figure 3. We
observe that it applies to every pixel of the image. That leads to the
following algorithm:

**Algorithm** `PREDICTION_LOOP`

**Input:** *x*_{i,j} = *x*_{m}, for all *i* and *j*.

**Output:** *ẽ*_{m}, for all *i* and *j*.

**Method:** We use `PREDICTION` to compute *ẽ*_{m} for each
pixel.

begin
    Initialize *x̃*_{i,j} and *ẽ*_{i,j} for all *i* and *j* = 0
    for j := 1 to N−1 do
        for i := 0 to N−1 do
            Call PREDICTION on *x*_{i,j} = *x*_{m}, *x̃*_{i−1,j}, *x̃*_{i,j−1}, and *x̃*_{i−1,j−1} (see Fig. 2)
        end do
    end do
end
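A runnable sketch of `PREDICTION_LOOP` over an *N*×*N* image follows. The predictor coefficients (0.75, 0.75, −0.5), the half-open quantizer bins, and the boundary convention — the first row kept verbatim, a missing left neighbour replaced by the pixel above — are all assumptions filling gaps in the extracted text:

```python
# Quantizer bins assumed from Table 1: (low, high, reconstruction level).
QUANT_LEVELS = [(-255, -16, -20), (-16, -8, -11), (-8, -4, -6), (-4, 0, -2),
                (0, 4, 2), (4, 8, 6), (8, 16, 11), (16, 255, 20)]

def quantize(e):
    """Quantize a prediction error using the Table 1 bins (half-open assumed)."""
    for lo, hi, level in QUANT_LEVELS:
        if lo <= e < hi:
            return level
    return QUANT_LEVELS[-1][2] if e > 0 else QUANT_LEVELS[0][2]

def dpcm_encode(image):
    """PREDICTION_LOOP: run PREDICTION over every pixel of a square image.
    image is a list of N rows of N values; returns (error image, reconstruction)."""
    N = len(image)
    err = [[0] * N for _ in range(N)]
    rec = [list(image[0])] + [[0] * N for _ in range(N - 1)]  # row j = 0 kept verbatim
    for j in range(1, N):                 # outer loop over rows (j)
        for i in range(N):                # inner loop over columns (i)
            # causal neighbours; a missing left neighbour falls back to the pixel above
            left    = rec[j][i - 1]     if i > 0 else rec[j - 1][i]
            up      = rec[j - 1][i]
            up_left = rec[j - 1][i - 1] if i > 0 else rec[j - 1][i]
            x_hat = max(0, min(255, 0.75 * left + 0.75 * up - 0.5 * up_left))  # Eq. 2, clipped
            err[j][i] = quantize(image[j][i] - x_hat)                          # Eq. 3, quantized
            rec[j][i] = max(0, min(255, x_hat + err[j][i]))                    # update x-tilde
    return err, rec
```

Note that the reconstruction `rec`, not the original `image`, supplies the neighbours: this is what keeps the transmitter and the receiver in lockstep.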

From Figure 3 one concludes that the prediction
(reconstructed pixel) for the transmitter is exactly the same as the
one used by the receiver. They both need, and only need, the quantized
difference *ẽ*_{m}. The transmitter puts this difference in a file, whereas
the receiver reads this difference from the file.
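The receiver's side can be sketched symmetrically: it reads the quantized differences and runs the same predictor, so its reconstruction matches the transmitter's exactly. The coefficients (0.75, 0.75, −0.5), the fallback for a missing left neighbour, and the convention that row 0 carries raw pixels rather than errors are the same assumptions as above:

```python
def dpcm_decode(err):
    """Rebuild the image from the quantized-error field alone.
    err[0] is assumed to carry the first row of raw pixels verbatim."""
    N = len(err)
    rec = [list(err[0])] + [[0] * N for _ in range(N - 1)]  # row 0: raw pixels, not errors
    for j in range(1, N):
        for i in range(N):
            # same causal neighbours and fallback as the transmitter
            left    = rec[j][i - 1]     if i > 0 else rec[j - 1][i]
            up      = rec[j - 1][i]
            up_left = rec[j - 1][i - 1] if i > 0 else rec[j - 1][i]
            x_hat = max(0, min(255, 0.75 * left + 0.75 * up - 0.5 * up_left))
            rec[j][i] = max(0, min(255, x_hat + err[j][i]))  # add the received difference
    return rec
```

Because the decoder repeats the transmitter's computation step for step, no side information beyond the differences (and the agreed-upon first row) ever crosses the channel.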

Finally, *ẽ*_{m} is the quantized difference for each pixel, and the whole picture (formed with these quantized differences) is compressed with a Huffman code.