One of the better explanations I've seen concerning encoding levels ...
Without going into too much detail about the specifics of the codec, a good oversimplification is this: during compression, many different algorithms are tested to try to model the data. The higher the compression level, the longer the encoder searches for a good algorithm--usually resulting in a more efficient model of the data, and hence a smaller file. When the compressed file is written, the chosen algorithm is written into the file as well--to decode, you simply apply that algorithm. All the algorithms, regardless of compression level, are deterministic and relatively cheap to compute; all the work is done upfront during encoding, in the search for that ideal algorithm.
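You can see the same asymmetry with any general-purpose compressor. A quick sketch using Python's stdlib zlib (a different codec than the one discussed here, but the same principle--the level only controls how hard the encoder works, not the decoder):

```python
import zlib

# Highly compressible sample data: repetitive text.
data = b"the quick brown fox jumps over the lazy dog " * 2000

# Higher levels search harder for a good model of the data,
# so encoding is slower but the output is usually smaller.
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

print(len(fast), len(best))  # higher level -> smaller (or equal) output

# Decoding ignores the encoder's effort level: both streams
# decompress with the same call, at comparable cost.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

Encoding at level 9 takes noticeably longer than level 1 on large inputs, while decompression time is essentially the same for both streams.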