Image Processing

INTRODUCTION

1.1 INTRODUCTION

Image Processing is a set of techniques to enhance raw images received from cameras and sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life, for various applications. Various techniques have been developed in Image Processing during the last four to five decades. Most of the methods were designed for enhancing images obtained from unmanned spacecraft, space probes, and military reconnaissance flights. Image Processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, etc.

Image processing is a form of signal processing for which the input is an image, and the output may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques treat the image as a two-dimensional signal. Image processing is computer imaging in which the application involves a human being in the visual loop; in other words, the images are examined and acted upon by people. The essential topics within the field of image processing include image restoration, image enhancement, image compression, etc.

Image processing deals with the manipulation and analysis of images by computer algorithms to improve pictorial information for better understanding and clarity. The area is characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. Image processing involves manipulating images to extract information, to emphasize or de-emphasize certain aspects of the information contained in the image, or to perform image analysis to obtain hidden information.


Another aspect of image processing involves compression and coding of visual information. With the growing demand for various imaging applications, the storage requirements of digital imagery are growing explosively. Compact representation of image data, and their storage and transmission through limited communication bandwidth, is a crucial and active area of development today. Interestingly, image data generally contain a significant amount of redundant information in their canonical representation. Image compression techniques reduce the redundancies in raw image data in order to reduce storage and communication bandwidth requirements.

1.2 HISTORY OF IMAGE PROCESSING

In some sense, “image processing” dates back to the earliest use of graphics by humans. The cost of processing was relatively high; that changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and it is generally used because it is not only the most versatile method but also the cheapest. In the early days, topics like median filtering were exciting new research topics.

1.3 METHODS OF IMAGE PROCESSING

There are two methods available in Image Processing, i.e., analog image processing and digital image processing.

Analog Image Processing

Analog image processing is any image processing task conducted on two-dimensional analog signals by analog means, i.e., the alteration of the image through electrical means. The most common example is the television image. Analog or visual techniques of image processing can be used for hard copies such as printouts and photographs. When creating images using analog photography, the image is burned into a film by a chemical reaction triggered by controlled exposure to light, and it is then processed in a dark room, using select chemicals to create the actual image. This process is decreasing in popularity due to the advent of digital photography, which requires less effort.

Digital Image Processing

Digital image processing is the use of computer algorithms to perform image processing on digital images. There are three significant benefits to digital image processing: consistently high image quality, a low cost of processing, and the ability to manipulate all aspects of the process. In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image. The principal advantages of digital image processing methods are versatility, repeatability, and the preservation of original data precision.

The various Image Processing techniques are:

  • Image representation
  • Image pre-processing
  • Image enhancement
  • Image restoration
  • Image analysis
  • Image reconstruction
  • Image data compression

 

 

Image Representation

 

An image defined in the “real world” is considered to be a function of two real variables, for example, f(x,y), with f as the amplitude (e.g., brightness) of the image at the real coordinate position (x,y). The effect of digitization is shown in Figure 1.1.

Figure 1.1 Effects of Digitization

The 2D continuous image f(x,y) is divided into N rows and M columns. The intersection of a row and a column is called a pixel. The value assigned to the integer coordinates [m,n], with m = 0,1,2,…,M-1 and n = 0,1,2,…,N-1, is f[m,n]. In most cases, f(x,y) can be considered the physical signal that impinges on the face of a sensor. Typically, an image file such as BMP, JPEG, or TIFF has a header and picture information. A header usually includes details like a format identifier (typically the first information), resolution, number of bits/pixel, compression type, etc.

Image Pre-Processing

Pre-processing is used to remove noise and to eliminate irrelevant, visually unnecessary information. Noise is unwanted information that can result from the image acquisition process.

  1. Scaling

Image scaling is the process of resizing an image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness, and sharpness. With bitmap graphics, as the size of an image is reduced or enlarged, the pixels that form the image become increasingly visible, making the image appear “soft” if pixels are averaged, or jagged if they are not. With vector graphics, the trade-off may be in the processing power needed for re-rendering the image, which may be noticeable as slow re-rendering with still graphics, or as a slower frame rate and frame skipping in computer animation.
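To make the averaged-vs-jagged trade-off concrete, here is a minimal pure-Python sketch of 2× downscaling; the 4×4 `image` and both helper names are hypothetical:

```python
def downscale_nearest(image, factor):
    """Keep one pixel per block: fast, but edges turn jagged."""
    return [row[::factor] for row in image[::factor]]

def downscale_average(image, factor):
    """Average each factor x factor block: smoother, but 'softer'."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

image = [[0,   0,   255, 255],
         [0,   255, 255, 0],
         [255, 255, 0,   0],
         [255, 0,   0,   255]]

print(downscale_nearest(image, 2))  # [[0, 255], [255, 0]]
print(downscale_average(image, 2))  # [[63, 191], [191, 63]]
```

Nearest-neighbor keeps the full contrast (jagged edges), while block averaging blends each block's pixels into intermediate values (soft edges).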

  2. Rotation

Rotation is used in image mosaicking, image registration, etc. One rotation technique is 3-pass shear rotation, in which the rotation matrix is decomposed into three separable shear matrices. In 3-pass shear rotation there is no scaling, i.e., no associated resampling degradations, and each shear can be implemented very efficiently.
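The decomposition can be checked numerically. The sketch below (NumPy; the 30° angle is an arbitrary choice) uses the common form of the 3-pass decomposition, two x-shears around one y-shear, with shear factors -tan(θ/2) and sin θ:

```python
import math
import numpy as np

def shear_x(a):
    return np.array([[1.0, a], [0.0, 1.0]])

def shear_y(b):
    return np.array([[1.0, 0.0], [b, 1.0]])

theta = math.radians(30)   # hypothetical rotation angle
a = -math.tan(theta / 2)   # x-shear factor used in passes 1 and 3
b = math.sin(theta)        # y-shear factor used in pass 2

# The product of the three shears reproduces the rotation matrix exactly.
three_pass = shear_x(a) @ shear_y(b) @ shear_x(a)
rotation = np.array([[math.cos(theta), -math.sin(theta)],
                     [math.sin(theta),  math.cos(theta)]])

assert np.allclose(three_pass, rotation)
```

Because each shear moves pixels only along rows (or only along columns), each pass can be implemented as a fast one-dimensional resampling.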

  3. Mosaic

Mosaicking is the process of combining two or more images to form a single large image without radiometric imbalance. If we take pictures of a planar scene, such as a large wall, or of a remote scene (a scene at infinity), or if we shoot pictures with the camera rotating around its center of projection, we can stitch the images together to form a single big picture of the scene. This is called image mosaicking. A mosaic is required to get a synoptic view of the entire area, which would otherwise be captured only as small individual images.

4. Image Enhancement

Image enhancement techniques can be divided into two broad categories:

 

  1. Spatial domain methods, which function directly on pixels
  2. Frequency domain methods, which function on the Fourier transform of an image

Spatial domain methods

The value of a pixel with coordinates (x,y) in the enhanced image is the result of performing some operation on the pixels in the neighborhood of (x,y) in the input image, F. Neighborhoods can be any shape, but usually, they are rectangular.
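A minimal sketch of such a neighborhood operation is a 3×3 mean filter; the tiny image `F` below is a hypothetical example with one noisy pixel:

```python
def mean_filter(image, y, x):
    """Enhanced pixel = mean of the 3x3 rectangular neighborhood of (y, x).
    Border pixels simply average whichever neighbors exist."""
    h, w = len(image), len(image[0])
    neighbors = [image[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w]
    return sum(neighbors) // len(neighbors)

F = [[10, 10, 10],
     [10, 100, 10],
     [10, 10, 10]]

enhanced = [[mean_filter(F, y, x) for x in range(3)] for y in range(3)]
print(enhanced[1][1])  # the noisy spike 100 is smoothed to 20
```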

Frequency domain methods

Image enhancement in the frequency domain is straightforward. We simply compute the Fourier transform of the image to be enhanced, multiply the result by a filter, and take the inverse transform to produce the enhanced image. The idea of blurring an image by reducing its high-frequency components, or sharpening an image by increasing the magnitude of its high-frequency components, is intuitively easy to understand. However, computationally it is often more efficient to implement these operations as convolutions with small spatial filters in the spatial domain. Understanding frequency domain concepts is nevertheless important and leads to enhancement techniques that might not have been thought of by restricting attention to the spatial domain.
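A sketch of the transform-multiply-invert recipe, here as a NumPy low-pass filter that blurs by zeroing high-frequency components (the `keep` fraction and the random test image are assumptions for illustration):

```python
import numpy as np

def lowpass(image, keep):
    """Blur by keeping only the central (low-frequency) part of the
    shifted 2-D spectrum; keep = fraction of the spectrum retained."""
    F = np.fft.fftshift(np.fft.fft2(image))   # transform, center DC
    h, w = image.shape
    mask = np.zeros_like(F)
    ky, kx = int(h * keep / 2), int(w * keep / 2)
    mask[h // 2 - ky:h // 2 + ky, w // 2 - kx:w // 2 + kx] = 1
    # Multiply by the filter, then invert the transform.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

rng = np.random.default_rng(0)
image = rng.random((64, 64))
blurred = lowpass(image, keep=0.25)

# The blurred image varies less between neighboring pixels:
assert np.abs(np.diff(blurred)).mean() < np.abs(np.diff(image)).mean()
```

Sharpening would instead boost the ring of frequencies the mask discards here.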

5. Image Restoration

Image restoration refers to the removal or minimization of degradations in an image. This includes the de-blurring of images degraded by the limitations of a sensor or its environment, noise filtering, and correction of geometric distortion or non-linearity due to sensors. It is the process of taking an image with some known or estimated degradation and restoring it to its original appearance. Image restoration is often used in the fields of photography and publishing, where an image was somehow degraded but needs to be improved before it can be printed.

6. Image Analysis

Image analysis methods extract information from an image by using automatic or semiautomatic techniques, variously termed scene analysis, image description, image understanding, pattern recognition, and computer/machine vision. Image analysis differs from other types of image processing, such as enhancement or restoration, in that the final result of image analysis procedures is a numerical output rather than a picture.

7. Image Reconstruction

Image reconstruction encompasses the entire image formation process and provides a foundation for the subsequent steps of image processing. The goal is to retrieve image information that has been lost in the process of image formation. Therefore, image reconstruction requires a systems approach that takes into account the entire process of image formation, including the propagation of light through inhomogeneous media, the properties of the optical system, and the characteristics of the detector. In contrast to image enhancement, where the appearance of an image is improved to suit some subjective criteria, image reconstruction is an objective approach to recovering a degraded image based on mathematical and statistical models.

8. Image Compression

The objective of image compression is to reduce the size of digital images to save storage space and transmission time. Lossless compression is preferred for artificial images such as technical drawings and icons, and also for high-value content such as medical imagery or image scans made for archival purposes. Lossy methods are especially suitable for natural images such as photographs, in applications where a minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces no noticeable differences may be called visually lossless. Run-length encoding, Huffman encoding, and Lempel-Ziv encoding are methods for lossless image compression. Transform coding, such as the DCT or wavelet transform followed by quantization and symbol coding, can be cited as a method for lossy image compression. A general compression model is shown in figure 1.2.

Figure 1.2 General Image Compression Model

 

Compression is obtained by eliminating one or more of the three fundamental data redundancies below:

Coding redundancy: This is present when code words longer than optimal (that is, not of the smallest possible length) are used.

Inter-pixel redundancy: This results from the correlations between the pixels of an image.

Psycho-visual redundancy: This is due to data that is ignored by the human visual system (that is, visually non-essential information).

Image/data compression removes these redundancies to achieve more effective coding, as shown in figure 1.3.

Figure 1.3 Image Compression Effective Coding

 

1.4 TYPES OF IMAGE COMPRESSION

 

Image compression can be further classified into two separate types: lossy compression and lossless compression. Lossy compression, as its name indicates, results in the loss of some information. In this technique, the compressed image is similar to the actual/original uncompressed image but not identical to it, because within the compression process some information related to the image has been lost. Lossy methods are therefore normally applied to photographs; the most natural example of lossy compression is JPEG. Lossless compression, in contrast, compresses an image by encoding all of its information from the actual file, so if the image is decompressed again, it will be exactly the same as the actual image. Examples of lossless image compression are PNG and GIF (GIF only provides 8-bit images). Which format of image compression to use depends on what is being compressed.

 

1.4.1 LOSSLESS

 

Lossless compression compresses data such that, when decompressed, it is an exact replica of the actual data. This is required when binary data such as documents and executables are compressed: they must be reproduced exactly when decompressed again. On the contrary, images and music need not be reproduced ‘exactly.’ A resemblance of the actual image is sufficient for most purposes, as long as the error between the actual and compressed image is tolerable.

These types of compression are also known as noiseless, as they never add noise to the signal or image. Lossless compression is also termed entropy coding, as it uses statistical/decomposition techniques to reduce redundancy. It is also used for specific applications with rigid requirements, such as medical imaging. Lossless compression is usually a two-step algorithm. The first step transforms the original image into some other format in which the inter-pixel redundancy is reduced. The second step uses an entropy encoder to remove the coding redundancy. Lossless decompression is a perfect inverse process of the lossless compressor, as shown in figure 1.4.

 

 

Original Image → Transform → Entropy Encoding → Channel

Figure 1.4 Lossless Compression Techniques

The following methods are included in lossless compression:

  1. Run-length encoding
  2. Huffman encoding
  3. LZW coding
  4. Area coding

Run-length encoding:

This is a very simple procedure used for sequential data, and it is very useful for redundant data. The method replaces sequences of identical symbols (pixels), called runs, by shorter symbols. The run-length code for a grayscale image is represented by a sequence {V, R}, where V is the intensity of the pixel and R is the number of consecutive pixels with intensity V. An example is shown below in figure 1.5.

Input: 1111002222
Output: {1,4} {0,2} {2,4}

Figure 1.5 Run Length Encoding

The steps of the algorithm for RLE are as follows.

Step 1: Input the string.

Step 2: Take the first symbol and give it a unique value.

Step 3: Read the next symbol; if the previous symbol was the last in the string, then exit.

A: If the next symbol is the same as the previous symbol, give it the same value as before. B: Else, give it a new value that does not match the previous value.

Step 4: Read and count additional symbols.

Step 5: Go to step 3 until a non-matching symbol ends the current run.

Step 6: Display the result: for each run, the count of consecutive occurrences of a symbol together with that symbol.
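The steps above can be sketched in a few lines of Python, producing the {V, R} pairs of figure 1.5 (the function names are illustrative):

```python
def rle_encode(pixels):
    """Encode a sequence as (V, R) pairs: value V repeated R times."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand each (V, R) pair back into R copies of V."""
    return [v for v, r in runs for _ in range(r)]

pixels = [1, 1, 1, 1, 0, 0, 2, 2, 2, 2]
encoded = rle_encode(pixels)
print(encoded)                        # [(1, 4), (0, 2), (2, 4)]
assert rle_decode(encoded) == pixels  # lossless round trip
```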

 

Huffman encoding

This is a common method for coding symbols based on the statistical frequency of their occurrence (probability). The pixels in the image are treated as symbols. Symbols that occur more frequently are assigned a smaller number of bits, while symbols that occur less frequently are assigned a relatively larger number of bits. The Huffman code is a prefix code: the (binary) code of any symbol is not the prefix of the code of any other symbol. Most image coding standards use lossy methods in the earlier stages of compression and use Huffman coding as the last step.

The example of Huffman coding with the algorithm is as follows.

Step 1: Input the symbols with their frequencies: l = 8, k = 3, j = 20, h = 15.

Step 2: Sort the data by frequency: k = 3, l = 8, h = 15, j = 20.

Step 3: Choose the two smallest frequencies (k = 3 and l = 8).

Step 4: Merge them together into a single node with the sum of their frequencies (3 + 8 = 11) and update the data.

Step 5: Repeat steps 2, 3, and 4 until only one node remains.

The final Huffman tree assigns the shortest code-word to the most frequent symbol (j) and the longest code-words to the least frequent symbols (k and l).
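The merging procedure can be sketched with Python's `heapq`, using the example frequencies l = 8, k = 3, j = 20, h = 15 (the function name and code representation are illustrative):

```python
import heapq

def huffman_codes(freqs):
    """Build prefix codes: rarer symbols get longer code-words."""
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least probable nodes...
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))  # ...are merged
        tick += 1
    return heap[0][2]

codes = huffman_codes({"l": 8, "k": 3, "j": 20, "h": 15})
# j (frequency 20) gets a shorter code than k (frequency 3):
assert len(codes["j"]) < len(codes["k"])
# Prefix property: no code is a prefix of another symbol's code.
for a in codes:
    for b in codes:
        assert a == b or not codes[b].startswith(codes[a])
```

With these frequencies, j receives a 1-bit code, h a 2-bit code, and k and l 3-bit codes, matching the tree described above.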

 

 

 

 

 

 

 

 

 

LZW Coding

 

LZW (Lempel-Ziv-Welch) is a fully dictionary-based coding. It is divided into two subcategories: in static LZW, the dictionary is fixed during the encoding and decoding processes; in dynamic LZW, the dictionary is updated as needed [6]. LZW compression replaces strings of characters with single codes. It does not perform any analysis of the incoming text; instead, it adds every new string of characters it encounters to a table of strings. The codes that the LZW algorithm outputs can be of arbitrary length, but each must have more bits in it than a single character. LZW compression works best for files containing lots of repetitive data. LZW compression maintains a dictionary in which all stream entries and their codes are stored, as shown in figure 1.6.

For example, in the stream 783 225 783 9 225 10, the repeated substrings 783 and 225 are assigned the unique codes C1 and C2, so the stream is encoded as C1 C2 C1 9 C2 10.

Figure 1.6 LZW Coding

The steps in the LZW algorithm:

Step 1: Input the stream.

Step 2: Initialize the dictionary to contain an entry for each character of the stream.

Step 3: Read the stream; if the current byte is the end of the stream, then exit.

Step 4: Otherwise, read the next character and extend the current string; frequently occurring groups of characters are thereby given a unique code.

Step 5: Look up the extended string in the dictionary; if there is no such string in the dictionary, then:

A: Add the new string to the dictionary.

B: Write out the code for the newly entered string's matched prefix.

C: Go to step 4.

Step 6: Write out the code for the final encoded string and exit.
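A minimal sketch of the encoder side of these steps, operating on a string (the input "ababab" and the function name are hypothetical):

```python
def lzw_encode(stream):
    """Encode a string; the dictionary starts with every single character."""
    dictionary = {ch: i for i, ch in enumerate(sorted(set(stream)))}
    current, out = "", []
    for ch in stream:
        if current + ch in dictionary:
            current += ch                               # keep extending the match
        else:
            out.append(dictionary[current])             # emit code for the match
            dictionary[current + ch] = len(dictionary)  # add the new string
            current = ch
    out.append(dictionary[current])                     # emit the final match
    return out, dictionary

codes, dictionary = lzw_encode("ababab")
print(codes)  # [0, 1, 2, 2] -- 'ab' got its own code (2) and is reused
```

Note how the repeated substring "ab" is emitted as a single code once it has entered the dictionary, which is why LZW does well on repetitive data.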

Area coding

Area coding is an enhanced version of run-length coding that reflects the two-dimensional character of images. It is a significant advance over the other lossless methods. For coding an image, it does not make much sense to interpret it as a sequential stream, as it is in fact an array of sequences building up a two-dimensional object. Algorithms for area coding try to find rectangular regions with the same characteristics. These regions are coded in a descriptive form as an element with two points and a certain structure. This type of coding can be highly effective, but it has the drawback of being a nonlinear method, which cannot easily be implemented in hardware. Therefore its performance in terms of compression time is not competitive, although its compression ratio is.

 

1.4.2 LOSSY 

 

The technique of lossy compression reduces bits by identifying unnecessary information and eliminating it. Reducing the size of a data file is commonly termed data compression, though its formal name is source coding: coding done at the source of the data before it is stored or sent. In these methods, some loss of information is acceptable, since dropping non-essential detail from the data source can save storage space. Lossy data-compression methods are informed by research on how people perceive the data in question. For example, the human eye is more sensitive to slight variations in luminance than to variations in color. Lossy image compression is used in digital cameras to increase storage capacity with minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 video codec for video compression. In lossy audio compression, psychoacoustic techniques are used to remove non-audible or less audible components of the signal.

 

Lossy compression, as the name implies, leads to the loss of some information. The compressed image is similar to the original uncompressed image, but not identical to it, because in the process of compression some information concerning the image has been lost. Lossy methods are typically suited to photographic images; the most common example is JPEG. An algorithm that does not restore the presentation to be exactly the same as the original image is known as a lossy technique. Reconstruction of the image is an approximation of the original image; hence the need for measuring the quality of the image in lossy compression techniques. Lossy compression provides a higher compression ratio than lossless compression, as shown in figure 1.7.

 

 

Original Image → Transform → Quantization → Entropy Coding → Channel

Figure 1.7 Lossy Image Compression

Types of lossy compression techniques are given below.

  1. Transformation coding.
  2. Vector quantization.
  3. Fractal coding.
  4. Block truncation coding.
  5. Subband coding.
  6. Chroma subsampling.

 

Transformation Coding

The DFT and DCT are transforms used to change the pixels of the original image into frequency-domain coefficients. These coefficients have several properties; one is the compaction property, which is the basis for achieving compression.

Vector Quantization

This is a method that develops a dictionary of fixed-size vectors called code vectors. An image is divided into non-overlapping blocks; for each block, the closest dictionary entry is determined, and its index is used as the encoding of the original input image.

Fractal Coding

Fractal coding introduces the idea of decomposing an image into segments by using standard methods of image processing such as color separation, edge detection, and texture analysis. Each segment is stored in a library of fractals.

Block Truncation Coding

In this method, the image is first divided into blocks of pixels; for each block, a threshold and reconstruction values are found; then a bitmap of the block is derived, in which every pixel whose value is greater than or equal to the threshold is replaced by 1, and every other pixel by 0.
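A simplified sketch of one block (it reconstructs each group with its own group mean, a simplification of the classical mean/standard-deviation reconstruction; the sample block is hypothetical):

```python
def btc_block(block):
    """Simplified block truncation coding: threshold = block mean;
    each bitmap group is reconstructed with its own group mean."""
    n = len(block)
    mean = sum(block) / n
    bitmap = [1 if p >= mean else 0 for p in block]      # >= threshold -> 1
    ones = [p for p, b in zip(block, bitmap) if b] or [mean]
    zeros = [p for p, b in zip(block, bitmap) if not b] or [mean]
    hi, lo = sum(ones) / len(ones), sum(zeros) / len(zeros)
    reconstructed = [hi if b else lo for b in bitmap]    # two levels per block
    return bitmap, reconstructed

block = [10, 12, 200, 210]        # a flat region next to an edge
bitmap, recon = btc_block(block)
print(bitmap)  # [0, 0, 1, 1]
print(recon)   # [11.0, 11.0, 205.0, 205.0]
```

Only the bitmap (1 bit per pixel) and the two reconstruction levels need to be stored per block, which is where the compression comes from.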

Subband Coding

The image is analyzed to produce components containing frequencies in well-defined bands, the sub-bands. Quantization and coding are applied to each of the bands, so that each sub-band can be coded separately.

Chroma subsampling

This method takes advantage of the human visual system's lower acuity for color differences. The technique is used mainly in image and video encoding, for example JPEG encoding. Chroma subsampling is a method that stores color information at a lower resolution than intensity information. The overwhelming majority of graphics programs perform 2×2 chroma subsampling, which breaks the image into 2×2 pixel blocks and stores only the average color information for each 2×2 pixel group, as shown in fig 1.8.

 

Figure 1.8 Block diagram of the 2 × 2-pixel Chroma Subsampling technique (applied to the Cb and Cr planes)
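A minimal sketch of the averaging step on one chroma plane (the sample `cb` plane is hypothetical; the luma plane would be kept at full resolution):

```python
def subsample_2x2(chroma):
    """Average each 2x2 block of a chroma (Cb or Cr) plane."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1] +
              chroma[y + 1][x] + chroma[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

cb = [[100, 104, 20, 24],
      [96, 100, 16, 20],
      [50, 50, 200, 200],
      [50, 50, 200, 200]]

print(subsample_2x2(cb))  # [[100, 20], [50, 200]]
# The chroma plane shrinks by 4x; luma (intensity) is left untouched.
```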

 

1.4.3 IMAGE COMPRESSION FILE FORMATS

 

There are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the internet. These graphic formats are separated into two main families: raster and vector.

 

JPEG/JFIF Format

 

JPEG (Joint Photographic Experts Group) is a compression method; JPEG-compressed images are typically stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is a lossy compression. The JPEG/JFIF filename extension is JPG or JPEG. Every digital camera can save images in the JPEG/JFIF format. The amount of compression can be specified, and the amount of compression affects the visual quality of the result.

 

JPEG 2000 Format

 

JPEG 2000, also from the Joint Photographic Experts Group, is a compression standard enabling both lossless and lossy storage. The compression methods used are different from the ones in standard JFIF/JPEG; they improve quality and compression ratios, but also require more computational power to process.

 

Exif Format

 

The Exif (Exchangeable image file format) format is a file standard similar to the JFIF format with TIFF extensions; it is incorporated in the JPEG-writing software used in most cameras. Its purpose is to record and standardize the exchange of images with image metadata between digital cameras and editing and viewing software. The metadata are recorded for individual images and include such things as camera settings, time and date, shutter speed, and exposure.

 

TIFF Format

 

The TIFF (Tagged Image File Format) format is a flexible format that normally saves 8 bits or 16 bits per color, for 24-bit and 48-bit totals, respectively. TIFFs can be lossy or lossless; some offer relatively good lossless compression for bi-level (black and white) images. The TIFF image format is not widely supported by web browsers.

 

RAW Format

 

RAW refers to the raw image formats that are available on some digital cameras. These formats usually use lossless compression and produce file sizes smaller than the TIFF formats. Most camera manufacturers have their own software for decoding their raw file format.

 

GIF Format

 

GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This makes the GIF format suitable for storing graphics with relatively few colors, such as simple diagrams, shapes, logos, and cartoon-style images. The GIF format supports animation and is still widely used to provide image animation effects. It also uses lossless compression.

 

BMP Format

 

The BMP file format handles graphics files within the Microsoft Windows OS. Typically, BMP files are uncompressed and hence large; their advantages are their simplicity and wide acceptance in Windows programs.

 

PNG Format

 

The PNG (Portable Network Graphics) file format was created as the free, open-source successor to GIF. The PNG file format supports 8-bit palette images with optional transparency for all palette colors, as well as 24-bit and 48-bit true color with or without an alpha channel, while GIF supports only 256 colors and a single transparent color.

 

FEATURES

Format | Features | Disadvantages
TIFF (Tagged Image File Format) (lossy and lossless) | Flexible format; saves 8 or 16 bits per color (RGB), for 24- or 48-bit totals | Not used in web pages because TIFF files are large
GIF (Graphics Interchange Format) | Grayscale and black-and-white images; works with 8 bits per pixel or less, i.e., 256 or fewer colors; suits simple graphics, logos, and cartoon-style images | Does not work well with full color
PNG (Portable Network Graphics) (lossless) | Supports 8-bit palette, 24-bit, and 48-bit color images | —
BMP (Bitmap) (uncompressed) | Graphics file format for Microsoft Windows operating systems; simple; BMP images are binary files | Large in size; does not support true color
JPEG (Joint Photographic Experts Group) (lossy) | Supports 8-bit grayscale and 24-bit color images; compresses real-world subjects, photographs, and video stills well | Poor for black-and-white documents, line art, and animations
RAW (lossless/lossy) | File sizes smaller than the TIFF format; available on digital cameras | Not a standardized image format; differs between manufacturers, so manufacturer software is required to view the images

 

 

 

1.4.4 VARIOUS COMPRESSION ALGORITHMS

 

Data compression is a method that takes input data D and generates data C(D) with a lower number of bits than the input data. The reverse process is called decompression; it takes the compressed data C(D) and reconstructs the data D', as shown in figure 1.9.

 

 

 

 

Figure 1.9 Compression Algorithms

 

JPEG: DCT-Based Image Coding Standard

JPEG provides a compression technique that can compress continuous-tone image data with a pixel depth of 6 to 24 bits with reasonable efficiency and speed.

 

A discrete cosine transform (DCT) expresses a finite series of data points in terms of a sum of cosine functions oscillating at various frequencies. DCTs are vital for various applications in science and engineering, from the lossy compression of audio (for example, MP3) and images (for example, JPEG), in which the small high-frequency components may be discarded, to spectral approaches for the numerical solution of partial differential equations. Using cosine rather than sine functions is critical in these applications: for compression, the cosine functions turn out to be much more effective, as fewer functions are needed to approximate a typical signal, while for differential equations the cosines express a particular choice of boundary conditions.

 

JPEG image compression works in part by rounding off non-essential bits of information; there is a corresponding trade-off between information loss and size reduction. A number of popular compression techniques exploit such perceptual differences, including those used in music files, video, and images. The JPEG lossy encoding therefore tends to be very careful with the gray-scale portion of the image and more liberal with color.

 

The DCT separates the image into parts of different frequencies; the less significant frequencies are discarded through the quantization process, and the more significant frequencies are used to retrieve the image during the decompression process.

 

JPEG Process Steps for color images

 

This section presents the JPEG compression steps:

  • An RGB to YCbCr color space conversion (color specification).
  • The original image is divided into 8 × 8 blocks.
  • The pixel values within each block should range over [-128, 127], but the pixel values of a grayscale image range over [0, 255], so each block is level-shifted from [0, 255] to [-128, 127].
  • The DCT is applied to each block, working from left to right, top to bottom.
  • Each block is compressed through quantization.
  • The quantized matrix is entropy encoded.
  • The compressed image is reconstructed through the reverse process, which uses the inverse discrete cosine transform (IDCT).
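The level-shift, DCT, and quantization steps above can be sketched for a single grayscale 8 × 8 block (entropy coding is omitted, and a flat quantization step of 16 is used in place of the standard JPEG luminance table, purely for illustration):

```python
import math

def dct_1d(x):
    # Orthonormal type-II DCT of one row or column.
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def dct_2d(block):
    # Separable 2-D DCT: 1-D DCT on every row, then on every column.
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# Step 1: level-shift an 8x8 grayscale block from [0, 255] to [-128, 127].
block = [[128] * 8 for _ in range(8)]          # a uniformly gray block
shifted = [[p - 128 for p in row] for row in block]

# Step 2: apply the 2-D DCT to the shifted block.
coeffs = dct_2d(shifted)

# Step 3: quantize (flat step of 16 instead of the JPEG table, for brevity).
quantized = [[round(c / 16) for c in row] for row in coeffs]

# A flat block has no AC energy, so every quantized coefficient is zero --
# exactly the kind of sparse matrix that entropy coding then compresses well.
print(quantized[0][:4])
```

A decoder runs the same pipeline in reverse: dequantize, apply the IDCT, and undo the level shift.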

 

Image Compression by Wavelet Transform

 

For many natural signals, the wavelet transform is a more efficient tool than the Fourier transform. The wavelet transform provides a multi-resolution representation using a set of analyzing functions that are translations and dilations of a particular function, the mother wavelet. The wavelet transform comes in several forms. The critically sampled form provides the most compact representation; however, it also has various limitations.
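As an illustration, one level of the simplest wavelet transform, the Haar transform, splits a signal into low-pass averages and high-pass differences; for smooth signals the detail coefficients come out small, which is the property compression exploits:

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: pairwise averages (low-pass
    approximation) and differences (high-pass detail), scaled to preserve energy."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

# For this piecewise-constant signal, all the energy lands in the
# approximation coefficients and the detail coefficients are exactly zero.
a, d = haar_step([2.0, 2.0, 4.0, 4.0])
print(a, d)
```

A full multi-resolution decomposition recurses on the approximation coefficients, halving the resolution at each level.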

 

 

Huffman Algorithm

 

 

The general idea of the Huffman encoding algorithm is to assign very short codewords to input blocks with high probabilities and long codewords to those with low probabilities.

 

The Huffman code process is based on the two observations below:

  • Symbols that occur more frequently will have shorter codewords than symbols that occur less frequently.
  • The two symbols that occur least frequently will have codewords of equal length.

 

A Huffman code is constructed by merging the two least probable symbols and repeating this process until only one symbol remains. A code tree is thus built, and the Huffman code is obtained by labeling the tree. It is the optimal prefix code generated from a set of probabilities and has been used in many compression applications.
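The merging procedure above can be sketched with Python's heapq module (the symbol probabilities here are made up for illustration):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code: repeatedly merge the two least probable nodes
    until one tree remains, then read the codewords off the merge history."""
    # Heap entries: (weight, tie_breaker, [(symbol, codeword), ...]).
    heap = [(w, i, [(sym, "")]) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # least probable node
        w2, _, right = heapq.heappop(heap)   # second least probable node
        # Prefix '0' onto the left subtree's codewords, '1' onto the right's.
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return dict(heap[0][2])

# Frequent symbols receive shorter codewords than rare ones.
code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
print(code)
```

The resulting code is prefix-free: no codeword is a prefix of another, so an encoded bit stream can be decoded unambiguously without separators.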

 

These generated codes are of varying lengths, using an integral number of bits. This lowers the average code length, so the overall size of the compressed data becomes smaller than that of the original. Huffman’s algorithm was the first to provide a solution to the problem of constructing codes with minimum redundancy.

 

1.5 SUMMARY

Image processing is the study of the representation and manipulation of pictorial information. Digital image processing is performed on digital computers that manipulate images as arrays or matrices of numbers. High computational speed, high video resolution, more efficient computer languages for processing data, and more efficient and reliable computer vision algorithms are some of the factors that have allowed fields such as medical diagnosis, industrial quality control, robotic vision, astronomy, and intelligent vehicle/highway systems to join the long list of applications that use computer vision analysis to achieve their goals. More and more complex techniques have been developed to achieve new goals unthinkable in the past.

Machine vision researchers have started using more efficient and faster mathematical approaches to solve more complicated problems. Convolution methods widely used in computer vision can be sped up by using Fourier transforms and fast Fourier transforms. The idea that an image can be decomposed into a sum of weighted sine and cosine components is very attractive. The functions that produce such decompositions are called basis functions; examples are the Fourier and wavelet transforms. A wavelet, like the Fourier transform, has a frequency associated with it, but in addition a scale factor has to be considered. Only some basis functions produce a decomposition that has real importance in image processing. However, present data compression techniques may still be far from the ultimate limits. Interesting problems such as obtaining accurate models of images, finding optimal representations of such models, and quickly computing those optimal representations are the grand challenges facing the data compression community. The goal of both lossless and lossy compression techniques is to reduce the size of the compressed image, in order to reduce storage requirements and to increase image transmission speed. The size of the compressed image is governed by the compression ratio, with lossless compression methods yielding ratios of 2:1 to 3:1, and lossy or irreversible compression yielding ratios ranging from 10:1 to 50:1 or more. It is well known that as the compression ratio increases, less storage space is required and faster transmission speeds are possible, but at the expense of image quality degradation.

 
