
Matlab program for uniform quantization encoding and decoding: Reducing distortion with optimal parameters



An H.264 video encoder carries out prediction, transform and encoding processes (see Figure 1) to produce a compressed H.264 bitstream. An H.264 video decoder carries out the complementary processes of decoding, inverse transform and reconstruction to produce a decoded video sequence.


Because samples are time-dependent, a precise clock is necessary for accurate reproduction. If either the encoding or the decoding clock (CLK) is unstable, the resulting timing defects will directly degrade the quality of the device's output.




Matlab program for uniform quantization encoding and decoding
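
To make the title concrete, here is a minimal MATLAB sketch of uniform scalar quantization with separate encode and decode stages. The parameter names (nbits, xmax) and the mid-rise convention are illustrative assumptions, not taken from a specific reference implementation:

nbits = 3;                            % bits per sample, so 2^nbits levels
xmax  = 1;                            % quantizer overload point
L     = 2^nbits;                      % number of quantization levels
delta = 2*xmax/L;                     % step size

x = sin(2*pi*(0:99)/100);             % example input signal

% Encode: clip to the supported range, then map each sample to an index 0..L-1
xc  = min(max(x, -xmax), xmax - delta/2);
idx = floor((xc + xmax)/delta);

% Decode: reconstruct each sample at the midpoint of its quantization cell
xhat = -xmax + (idx + 0.5)*delta;

mse = mean((x - xhat).^2);            % distortion introduced by quantization
fprintf('delta = %.4f, MSE = %.6f\n', delta, mse);

For an input confined to the quantizer range, the mean squared error of a uniform quantizer is approximately \(\Delta^2/12\), so choosing the step size \(\Delta\) (equivalently, the bit depth) is the main lever for reducing distortion.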




Finally, the encoding using Equations 3 and 4 is optimal if we have a uniform distribution of "0"s and "1"s (i.e. \(p_0=p_1=\frac12\)). Notice that the entropy \(H(x) = -2 \cdot \frac12\log_2(\frac12) = 1\), which results in 1 bit per binary digit, which is exactly what these equations generate (if you exclude the fact that we start at 1).
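
Equations 3 and 4 are not reproduced in this excerpt; assuming they are the standard bit-packing rules \(x' = 2x + b\) for encoding and \(b = x \bmod 2\), \(x \leftarrow \lfloor x/2 \rfloor\) for decoding, a tiny MATLAB illustration looks like this:

bits = [1 0 1 1 0];                   % message bits
x = 1;                                % start at 1, as noted above
for b = bits
    x = 2*x + b;                      % encode: append the bit in the low position
end

decoded = zeros(1, numel(bits));
for k = numel(bits):-1:1
    decoded(k) = mod(x, 2);           % decode: read the low bit back out...
    x = floor(x/2);                   % ...and remove it from the state
end
isequal(decoded, bits)                % true; bits come back last-in, first-out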


The last two points are relevant because they give us a hint as to how we might extend this to non-uniform binary messages. Our encoding is optimal because we were able to spread the evens and odds (over any given range) in proportion to their probability. We'll explore this idea a bit more in the next section.


Let's think about why the naive encoding in the previous section might result in an optimal code for a uniform distribution. For one, it spreads even and odd numbers (binary strings ending in "0"s and "1"s respectively) uniformly across any natural number range. This kind of makes sense since they are uniformly distributed. What's the analogy for a non-uniform distribution?


Another problem is those pesky real numbers. Theoretically, we can have arbitrary real numbers for the probability distribution of our alphabet. We "magically" found a nice formula in Equation 10/11 that encodes/decodes any arbitrary \(p\), but in the case of a larger alphabet, it's a bit tougher. Instead, a restriction that we'll place is that we'll quantize the probability distribution into \(2^n\) chunks. So \(p_s \approx \frac{f_s}{2^n}\), where \(f_s\) is a natural number. This quantization of the probability distribution simplifies things for us by allowing us to have a simpler and more efficient coding/decoding function (although it's not clear to me if it's possible to do it without quantization).
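
A sketch of that quantization step in MATLAB, assuming the simplest possible rounding scheme (round, keep every frequency nonzero, and dump the residual into the largest entry; production coders adjust more carefully):

p = [0.61 0.25 0.14];                 % true symbol probabilities
n = 8;                                % quantize into 2^n = 256 slots
f = round(p * 2^n);                   % integer frequencies f_s
f = max(f, 1);                        % every symbol must keep a nonzero slot
[~, i] = max(f);
f(i) = f(i) + (2^n - sum(f));         % force sum(f) == 2^n via the largest entry
p_hat = f / 2^n;                      % quantized distribution p_s ~ f_s / 2^n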


The idea is that during encoding, once \(x_i\) gets too big, we simply write out the lower \(M\) bits to ensure it stays between \([2^M, 2^{2M} - 1]\) (e.g. \(M=16\) bits). Similarly, during decoding, if \(x_i\) is too small, shift the current number up and read \(M\) bits into the lower bits. As long as you take care to make sure each operation is symmetric, it should allow you to always work with a number that fits within an integer type.
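
A sketch of that renormalization in MATLAB, assuming the state stays small enough to be exact in a double (below \(2^{53}\)) and a plain array standing in for the output bitstream; all names here are illustrative:

M  = 16;                              % renormalization chunk size in bits
lo = 2^M;  hi = 2^(2*M) - 1;          % keep the state in [2^M, 2^(2*M) - 1]
stream = [];                          % emitted M-bit words (a LIFO stack)

% Encoder side: when the state grows past hi, flush its lower M bits
x = 2^33 + 12345;                     % example of a state that got too big
while x > hi
    stream(end+1) = mod(x, lo);       % write out the lower M bits
    x = floor(x / lo);                % shift the state down by M bits
end

% Decoder side: when the state shrinks below lo, pull M bits back in
x = 1234;                             % example of a state that got too small
while x < lo && ~isempty(stream)
    x = x*lo + stream(end);           % shift up and read M bits into the low end
    stream(end) = [];                 % consume them (symmetric with the encoder)
end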


As you can imagine, there are numerous variants of the above algorithms/concepts, especially as it relates to efficient implementations. One of the most practical is one called tANS, or the tabled variant. In this variation, we build a finite state machine (i.e. a table) to pre-compute all the calculations we would have done in rANS. This has a bit more upfront cost but will make the encoding/decoding much faster without the need for multiplications.


SPIHT represents a small "revolution" in image compression because it broke the trend toward ever more complex (in both the theoretical and the computational senses) compression schemes. While researchers had been trying to improve previous schemes for image coding using very sophisticated vector quantization, SPIHT achieved superior results using the simplest method: uniform scalar quantization. Thus, it is much easier to design fast SPIHT codecs.


A straightforward consequence of the compression simplicity is the greater coding/decoding speed. The SPIHT algorithm is nearly symmetric, i.e., the time to encode is nearly equal to the time to decode. (Complex compression algorithms tend to have encoding times much larger than the decoding times.)


The incorporation of the decoder inside the encoder allows quantization of the differences, including nonlinear quantization, in the encoder, as long as an approximate inverse quantizer is used appropriately in the receiver. When the quantizer is uniform, the decoder regenerates the differences implicitly, as in the simple feedback loop that Cutler showed.
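
A minimal MATLAB sketch of that closed-loop (DPCM) structure, assuming a trivial previous-sample predictor and an illustrative step size delta; the key point is that the encoder predicts from reconstructed samples, exactly as the receiver will:

delta = 0.1;                          % uniform quantizer step (illustrative)
x = sin(2*pi*(0:199)/50);             % example input signal

% Encoder, with the decoder embedded in its prediction loop
pred = 0;  idx = zeros(size(x));
for k = 1:numel(x)
    d      = x(k) - pred;             % prediction difference
    idx(k) = round(d/delta);          % uniform quantization of the difference
    pred   = pred + idx(k)*delta;     % local reconstruction = run the decoder
end

% Receiver: regenerates the differences implicitly and accumulates them
pred = 0;  xhat = zeros(size(x));
for k = 1:numel(x)
    pred    = pred + idx(k)*delta;    % inverse quantize and accumulate
    xhat(k) = pred;
end

max(abs(x - xhat))                    % stays within delta/2: no error build-up

Because the encoder and receiver run the same accumulation, the quantization error never compounds from sample to sample; it stays bounded by half a step.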


I have started studying how a codec works, and I found a "summary diagram" for a DCT codec (DCT stands for Discrete Cosine Transform). The diagram shows, of course, two steps: the encoding step and the decoding one.
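
In the spirit of that diagram, here is a rough MATLAB sketch of the two steps, assuming 8x8 block DCTs, a single uniform quantization step q, no entropy-coding stage, and the Image Processing Toolbox for dct2, idct2, and blockproc:

q   = 16;                             % uniform quantization step (illustrative)
img = double(imread('cameraman.tif'));% grayscale test image shipped with the toolbox

% Encoding step: blockwise forward DCT, then uniform quantization
enc = blockproc(img - 128, [8 8], @(b) round(dct2(b.data)/q));

% Decoding step: inverse quantization, then blockwise inverse DCT
dec = blockproc(enc, [8 8], @(b) idct2(b.data*q)) + 128;

psnr_db = 10*log10(255^2 / mean((img(:) - dec(:)).^2));
fprintf('PSNR at q = %d: %.1f dB\n', q, psnr_db);

Raising q coarsens the quantization and lowers the PSNR, which is the basic rate/distortion trade-off of DCT coding.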


At present, the three operations of domain transformation, quantization, and encoding are included in all image coding schemes, but the specific methods adopted for these three operations differ from one coding scheme to another. Figure 5 shows the key processing steps and core content of DCT-based coding.


The description style of the syntax is similar to the C programming language. Syntax elements in the bitstream are represented in bold type. Each syntax element is described by its name (using only lower case letters with underscore characters) and a descriptor for its method of coded representation. The decoding process behaves according to the value of the syntax element and to the values of previously decoded syntax elements. When a value of a syntax element is used in the syntax tables or the text, it appears in regular (i.e. not bold) type. If the value of a syntax element is being computed (e.g. being written with a default value instead of being coded in the bitstream), it also appears in regular type (e.g. tile_size_minus_1).

