Source: https://www.irjet.net/archives/V3/i2/IRJET-V3I2105.pdf

International Research Journal of Engineering and Technology (IRJET)  e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 03 Issue: 02 | Feb-2016 | www.irjet.net
© 2016, IRJET | Impact Factor value: 4.45 | ISO 9001:2008 Certified Journal | Page 613
Lossless Huffman Coding Image Compression Implementation in Spatial Domain by Using Advanced Enhancement Techniques

Ali Tariq Bhatti^1, Dr. Jung H. Kim^2
^1,2 Department of Electrical & Computer Engineering, NC A&T State University, Greensboro, NC, USA
^1 atbhatti@aggies.ncat.edu, alitariq.researcher.engineer@gmail.com, ali_tariq302@hotmail.com
^2 kim@ncat.edu
Abstract: Images are a basic source of information in almost all scenarios, and their quality can degrade both visually and quantitatively. Nowadays, image compression is a demanding and active research area, because high-quality images require larger bandwidth and raw images need larger memory space. In this paper, an image of equal width and height is read in MATLAB. M-dimensional vectors (blocks) are initialized and extracted from that image, and a codebook of size N is initialized and designed for the compression. The image is quantized using the Huffman coding algorithm to design a decoder with a lookup table for reconstructing the compressed image under eight different scenarios. Several enhancement techniques are then applied to the lossless Huffman-coded image in the spatial domain. A Laplacian of Gaussian filter is used to detect the edges of the best-quality compressed image (scenario #8, block size 16 and codebook size 50). The other enhancement techniques, such as pseudo-coloring, bilateral filtering, and watermarking, are implemented on the same best-quality compressed image. The performance metrics (compression ratio, bit rate, PSNR, MSE, and SNR) are evaluated and analyzed for the reconstructed compressed image in the different scenarios, which depend on block and codebook size. Finally, the execution time is checked to see how fast the compressed image is computed in the best scenario. The main aim of lossless Huffman coding using block and codebook size for image compression is to convert the image to a form better suited for human analysis.
Keywords: Huffman coding, bilateral filtering, pseudo-coloring, Laplacian filter, watermarking
1. Image Compression
Image compression plays an important role in memory storage while retaining good quality in the compressed image. There are two types of compression: lossy and lossless. Huffman coding is one of the most efficient lossless compression techniques; it is a process that achieves exact restoration of the original data after decompression, at the cost of a lower compression ratio. In this paper, Huffman coding is used. Lossy compression, by contrast, does not restore the original data exactly after decompression; accuracy of reconstruction is traded for efficiency of compression. It is mainly used for image data compression and decompression and achieves a higher compression ratio. Lossy compression [1][2] can be seen in fast transmission of still images over the internet, where some amount of error is acceptable. Enhancement techniques mainly fall into two broad categories: spatial domain methods and frequency domain methods [9]. Spatial domain techniques are more popular than frequency domain methods because they are based on direct manipulation of the pixels in an image, as in logarithmic transforms, power-law transforms, and histogram equalization. These pixel values are manipulated to achieve the desired enhancement; however, such methods usually enhance the whole image in a uniform manner, which in many cases produces undesirable results [10].
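The lossless property of Huffman coding can be illustrated with a short sketch (in Python rather than the paper's MATLAB, purely for illustration): a prefix-free code is built from symbol frequencies, and decoding the resulting bit string restores the data exactly.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table from symbol frequencies."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a
    # bare symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two lowest probabilities...
        f2, _, right = heapq.heappop(heap)  # ...are merged, as in Step 5
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Toy "image" of pixel intensities (hypothetical values, not from the paper).
pixels = [12, 12, 12, 40, 40, 200, 200, 200, 200, 255]
codes = huffman_codes(pixels)
encoded = "".join(codes[p] for p in pixels)

# Decoding with the inverted table restores the data exactly (lossless).
inverse = {v: k for k, v in codes.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf])
        buf = ""
assert decoded == pixels
```

Here the 10 pixels need 19 bits instead of 80 at 8 bits/pixel, since frequent symbols receive shorter codewords.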
2. Methodology

2.1 Huffman encoding and decoding process based on block size and codebook for image compression
Step 1: Read a 256x256 image in MATLAB.
Step 2: Convert the 256x256 RGB image to a gray-scale image.
Step 3: Call a function that finds the symbols in the image.
Step 4: Call a function that calculates the probability of each symbol in the image.
Step 5: Arrange the probabilities of the symbols in descending order, so that the lowest probabilities are merged. This continues until each merged pair is deleted from the list [3] and replaced with an auxiliary symbol representing the two original symbols.
Step 6: Obtain the code words corresponding to the symbols, which results in compressed data (the compressed image).
Step 7: Concatenate the Huffman code words and the final encoded values (compressed data).
Step 8: Obtain the Huffman code words from the final encoded values. This may require more space than just the frequencies; it is also possible to write the Huffman tree to the output.
Step 9: Reconstruct the original image in the spatial domain; compression and/or decompression is done using Huffman decoding.
Step 10: Apply Huffman coding to the compressed image to get a better-quality image based on block and codebook size.
Step 11: The recovered reconstructed image looks similar to the original image.
Step 12: Implement Laplacian of Gaussian 5x5 filtering on the lossless Huffman-coded compressed image.
Step 13: Implement pseudo-coloring on the lossless Huffman-coded compressed image.
Step 14: Implement bilateral filtering on the lossless Huffman-coded compressed image.
Step 15: Implement watermarking on the lossless Huffman-coded compressed image.
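The extraction of M-dimensional vectors from the image can be sketched as follows. This is a minimal Python illustration, assuming the M-dimensional vectors are flattened non-overlapping sqrt(M) x sqrt(M) patches (e.g. M = 16 gives 4x4 patches); the paper's actual MATLAB partitioning may differ.

```python
import numpy as np

def extract_blocks(img, m):
    """Split a square image into non-overlapping sqrt(m) x sqrt(m)
    patches and flatten each into an m-dimensional vector."""
    side = int(round(np.sqrt(m)))
    h, w = img.shape
    # reshape/swapaxes groups each side x side patch together, then
    # flattens every patch into one row of length m.
    return (img.reshape(h // side, side, w // side, side)
               .swapaxes(1, 2)
               .reshape(-1, m))

# A 256x256 test image, as in Step 1 (synthetic data, not the paper's image).
img = np.arange(256 * 256, dtype=np.uint8).reshape(256, 256)
blocks = extract_blocks(img, 16)
print(blocks.shape)  # (4096, 16): 4096 vectors of dimension M = 16
```

With M = 16 a 256x256 image yields 4096 block vectors, which would then be quantized against the size-N codebook before Huffman coding.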
Figure 1 Block diagram

2.2 Different scenarios
There are 8 different scenarios for image compression using lossless Huffman coding based on block and codebook size.
Figure 2 Original image (RGB to Gray-scale)
Scenario #8: Size of Block M = 16, Size of Codebook N = 50 (16x50)
Figure 3 Reconstructed Image of 16x50
Scenario #7: Size of Block M = 16, Size of Codebook N = 25 (16x25)
Figure 4 Reconstructed Image of 16x25
Scenario #6: Size of Block M = 64, Size of Codebook N = 50 (64x50)
Figure 5 Reconstructed Image of 64x50
Scenario #5: Size of Block M = 64, Size of Codebook N = 25 (64x25)
Figure 6 Reconstructed Image of 64x25
Scenario #4: Size of Block M = 256, Size of Codebook N = 50 (256x50)
Figure 7 Reconstructed Image of 256x50
Scenario #3: Size of Block M = 256, Size of Codebook N = 25 (256x25)
Figure 8 Reconstructed Image of 256x25
Scenario #2: Size of Block M = 1024, Size of Codebook N = 50 (1024x50)
Figure 9 Reconstructed Image of 1024x50
Scenario #1: Size of Block M = 1024, Size of Codebook N = 25 (1024x25)
Figure 10 Reconstructed Image of 1024x25
Scenario #8, with a block size of 16 and a codebook size of 50, gives the best image quality.
2.3 Performance Metrics

The following performance metrics are used to compare the original and reconstructed images:

(a) Bit Rate: Bit rate is defined as

Bit Rate = (size of the compressed image in bits) / (number of pixels in the image)   (1)

which, for an 8-bit original image, can equivalently be written as

Bit Rate = 8 / Compression Ratio   (2)

The unit of bit rate is bits/pixel.
(b) Compression Ratio: Compression ratio is defined as

Compression Ratio = (size of the original image in bits) / (size of the compressed image in bits)   (3)

Compression ratio is unit-less.
(c) SNR: The signal-to-noise ratio (SNR) is defined as

SNR = 10 log10( Σ X_i^2 / Σ (X_i − Y_i)^2 ) dB   (4)

where X_i and Y_i are the original and reconstructed pixel values.
(d) MSE: The mean square error (MSE) is the error metric used to compare image quality. The MSE represents the cumulative squared error between the reconstructed image (Y_i) and the original image (X_i):

MSE = (1/N) Σ_{i=1}^{N} (X_i − Y_i)^2   (5)

where N is the number of pixels.
(e) PSNR: Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. It is computed from the MSE as

PSNR = 10 log10( 255^2 / MSE ) dB   (6)

for 8-bit images.
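The five metrics above can be computed together. The following Python sketch uses the standard definitions for 8-bit grayscale images (MAX = 255 and an 8-bit original, which are assumptions about the paper's setup), applied to a toy image pair rather than the paper's data.

```python
import numpy as np

def metrics(original, reconstructed, compressed_bits):
    """Compute MSE, PSNR, SNR, bit rate, and compression ratio for an
    assumed 8-bit grayscale image pair."""
    x = np.asarray(original, dtype=np.float64)
    y = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((x - y) ** 2)                                  # Eq. (5)
    psnr = 10 * np.log10(255.0 ** 2 / mse)                       # Eq. (6)
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))   # Eq. (4)
    bit_rate = compressed_bits / x.size                          # Eq. (1)
    cr = (x.size * 8) / compressed_bits                          # Eq. (3)
    return mse, psnr, snr, bit_rate, cr

# Toy example: a flat 4x4 image reconstructed with a uniform error of 5,
# "compressed" to 64 bits (hypothetical numbers for illustration only).
orig = np.full((4, 4), 100.0)
recon = orig + 5.0
mse, psnr, snr, bpp, cr = metrics(orig, recon, compressed_bits=64)
print(mse, bpp, cr)  # 25.0 4.0 2.0
```

For this toy pair the MSE is 25, the bit rate is 4 bits/pixel, and the compression ratio is 2, consistent with Eq. (2): 8 / 2 = 4 bits/pixel.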
Table 1 Performance metrics for lossless Huffman coding for the first image

2.4 Probabilities for the best quality compressed image

In this paper, a block size of 16 and a codebook size of 50 give a better-quality image than the other scenarios. The probabilities for codebook sizes of 25 and 50 are as follows.

For codebook size 25 (entropy ent = 4.3917):
prob = 0.0031 0.0062 0.0092 0.0123 0.0154 0.0185 0.0215 0.0246 0.0277 0.0308 0.0338 0.0369 0.0400 0.0431 0.0462 0.0492 0.0523 0.0554 0.0585 0.0615 0.0646 0.0677 0.0708 0.0738 0.0769

For codebook size 50 (entropy ent = 5.3790):
prob = 0.0008 0.0016 0.0024 0.0031 0.0039 0.0047 0.0055 0.0063 0.0071 0.0078 0.0086 0.0094 0.0102 0.0110 0.0118 0.0125 0.0133 0.0141 0.0149 0.0157 0.0165 0.0173 0.0180 0.0188 0.0196 0.0204 0.0212 0.0220 0.0227 0.0235 0.0243 0.0251 0.0259 0.0267 0.0275 0.0282 0.0290 0.0298 0.0306 0.0314 0.0322 0.0329 0.0337 0.0345 0.0353 0.0361 0.0369 0.0376 0.0384 0.0392
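The printed codebook-size-25 probabilities are consistent with p_i = i/325 for i = 1..25 (where 325 = 1 + 2 + ... + 25); this closed form is an inference from the printed values, not something stated in the paper. Under that assumption, the usual first-order entropy formula reproduces the reported ent value to about three decimal places:

```python
import numpy as np

# Inferred closed form for the printed probability vector (codebook size 25):
# p_i = i / 325, so 1/325 ≈ 0.0031, 13/325 = 0.0400, 25/325 ≈ 0.0769.
prob = np.arange(1, 26) / 325.0
# First-order entropy in bits/symbol: H = -sum(p * log2(p)).
ent = float(-np.sum(prob * np.log2(prob)))
print(round(ent, 3))  # 4.392
```

This agrees with the paper's reported ent = 4.3917 to within rounding of the printed probabilities.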
3. Laplacian of Gaussian Filter and Pseudo-coloring

The lossless Huffman coding reconstruction (the best-quality compressed image, 16x50) filtered with a 5x5 Laplacian of Gaussian kernel, applied to Figure 3, can be shown as
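One widely used discrete 5x5 Laplacian of Gaussian approximation (an assumption, since the paper does not give its exact kernel) is zero-sum, so flat regions respond with 0 while edges produce strong responses. A minimal Python sketch:

```python
import numpy as np

# Common 5x5 LoG approximation kernel; its entries sum to zero, so
# constant regions filter to 0 and only intensity changes respond.
LOG_5X5 = np.array([[ 0,  0, -1,  0,  0],
                    [ 0, -1, -2, -1,  0],
                    [-1, -2, 16, -2, -1],
                    [ 0, -1, -2, -1,  0],
                    [ 0,  0, -1,  0,  0]], dtype=np.float64)

def convolve2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (no padding); the kernel is
    symmetric, so convolution and correlation coincide."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A flat image filters to all zeros; a vertical step edge does not.
flat = np.full((9, 9), 50.0)
assert np.all(convolve2d_valid(flat, LOG_5X5) == 0)
edge = np.zeros((9, 9))
edge[:, 5:] = 100.0
response = convolve2d_valid(edge, LOG_5X5)
```

In the paper's pipeline this filter is applied to the reconstructed 16x50 image to locate its edges.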
