
BIOLOGICALLY INSPIRED FEATURE SELECTION AND TUMOR CLASSIFICATION USING ENSEMBLE CONVOLUTIONAL NEURAL NETWORKS (ECNNs)


 

5.1 OBJECTIVE

            In general, diagnosing a brain tumour begins with magnetic resonance imaging (MRI). Once MRI shows that a tumour is present in the brain, the most common way to determine its type is to examine a tissue sample obtained by biopsy or surgery. This study addresses the segmentation and detection of abnormal and healthy brain tissues. Because MRI images alone cannot reliably identify the tumorous region, an improved semi-supervised clustering algorithm combined with morphology-based segmentation is proposed for identifying tumour regions after image preprocessing. Preprocessing is performed with a median filter and skull masking, followed by image enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE). Tumour segmentation is then performed by the Semi-Supervised Fuzzy C-Means (SSFCM) clustering algorithm. After the tumour area is segmented, feature extraction is carried out using the Gray Level Co-occurrence Matrix (GLCM). To reduce the dimensionality of the feature set, the Artificial Bee Colony (ABC) algorithm is introduced, which increases the detection rate of the classifier. An Ensemble Convolutional Neural Network (ECNN) detection system is used in an unsupervised manner to create and maintain patterns for future use, and the selected features are used to train the ECNN. The simulation results demonstrate significant improvements in quality parameters and accuracy in comparison to state-of-the-art techniques.

 

5.2 INTRODUCTION

A tumour is one of the most common brain diseases in the world. According to surveys of the WHO (World Health Organization), more than 400,000 people suffer from brain tumours each year, so diagnosis and treatment are essential concerns of medical science. Medical imaging techniques, meanwhile, are used in several medical domains, for example computer-aided follow-up of pathologies, pathology identification, operative planning, medical assistance, record keeping, and time-series evaluation. Among these, Magnetic Resonance Imaging (MRI) is an often-used imaging approach in neurosurgery and neuroscience, and the outcomes of segmentation serve as the foundation for such applications. The segmentation procedure varies with the technique and the particular application. Segmenting medical images is a challenging task: the images carry a great deal of information, they often suffer from artefacts due to short acquisition times or patient movement, and soft-tissue borders are not always well defined. A further difficulty is the large variety of tumour types, covering an assortment of sizes and shapes, which may appear at several locations in the brain. Different protocols provide different information, and every image emphasizes a unique area of the tumour. Automated segmentation with prior models or prior knowledge is therefore difficult to implement.


 

The accurate segmentation and classification of the internal structures of the brain play a significant role in the study and treatment of tumours. The purpose of segmentation is to improve the surgical or radio-curative management of tumours and to decrease mortality. Brain oncology relies on a descriptive model of the brain that includes tumour information derived from MRI data, such as the tumour's location, type, and anatomic-functional setting, and even its effect on distinct head structures. Despite several efforts and encouraging results in the medical imaging community, precise classification and segmentation of abnormalities remain a challenging and rough task, and present methods leave essential room for improved, automated, practical utility.

 

Brain tumours affect humans badly because of the abnormal growth of cells within the brain. They can disrupt proper brain function and be life-threatening. The brain, the most complex organ in the human body and part of the central nervous system (CNS), controls all the necessary functions of the body. The skull covers the brain, which is composed of three matters: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). Cerebrospinal fluid is a transparent liquid that encapsulates the brain and the spinal cord and serves many functions for the CNS: it provides a barrier against shocks, it carries glucose, oxygen, and ions, it is distributed throughout the nervous tissue, and it helps remove waste products from nervous tissues. According to the World Health Organization (WHO), there are 120 types of brain tumours. Broadly, brain tumours are categorized into two types: primary brain tumours and secondary brain tumours.

 

Primary brain tumours originate in the brain, whereas secondary brain tumours do not originate in the brain but reach it from other body parts; this process of spreading is known as metastasis. Tumours are further identified as malignant or benign. Malignant tumours are fast-growing and cancerous; benign tumours are slow-growing, non-cancerous, and less harmful than malignant ones. In the medical field, brain tumours are characterized by their GRADE, which is decided according to their behaviour under microscopic observation. GRADE I tumours are known as benign: they look nearly the same as healthy brain cells, and their growth rate is slow compared with the other categories. GRADE I and GRADE II are known as low-grade tumours; GRADE III and GRADE IV are known as high-grade tumours. High-grade tumours behave very differently from healthy cells and require urgent treatment; they show the fastest growth rates and the most abnormal behaviour of all grades. GRADE I and GRADE II tumours are slow-growing, have fewer chances of coming back, do not spread to other parts, and need only surgery, not radiotherapy. GRADE III and GRADE IV tumours are fast-growing, can come back even after surgery, spread to other regions, and need radiotherapy and chemotherapy.

 

It is essential to detect a brain tumour at an early stage in order to reduce the death rate, so most researchers suggest brain imaging techniques that help radiologists and researchers detect problems in the human brain without the need for neurosurgery. A number of methods available in hospitals throughout the world have proved to be safe. (A) CT scan: Computed Tomography (CT) is used to construct brain images through a series of X-ray scans. (B) PET scan: a Positron Emission Tomography (PET) scan is used to get a functional view of the brain. (C) MRI scan: in the last few years, the use of Magnetic Resonance Imaging (MRI) scanners in the medical field has grown enormously, and doctors use MRI scans in diagnosing brain tumours and cancer. An MRI scan is an efficient way to look inside the human body without the need for surgery. In an MRI system, the strength of the magnetic field is measured in tesla and gauss (1 tesla = 10,000 gauss); most medical applications use 0.5 to 2.0 tesla. MRI is widely used because it provides higher-contrast images of the brain and of cancerous tissues than other medical imaging techniques. In MRI modalities, the tissue appearance is affected by the variable behaviour of protons in different tissues: the speed at which mobile hydrogen protons move helps determine the amount of signal produced by a specific tissue. The technique distinguishes differences between soft tissues far better than computed tomography (CT), which makes it especially suitable for brain tumour detection. From the MRI images, information such as the location of the tumour gives radiologists an easy way to diagnose the tumour and to plan the surgical approach for its removal.

 

Previously, various image processing techniques and algorithms have been used to identify brain tumours from MRI images, but it is very important to obtain an accurate result, so this work aims to improve the accuracy of brain tumour identification from MRI images. Brain tumour identification through MRI images and image processing techniques makes it possible to find a tumour in its early stages. This chapter focuses on the identification of brain tumours using image processing techniques. The work consists of several stages. The first stage is preprocessing, which consists of gray-level conversion, noise removal using the median filter, and skull masking, followed by image enhancement. The second stage is brain tumour segmentation by the semi-supervised Fuzzy C-Means clustering algorithm. The third stage is feature extraction using the GLCM method. The fourth stage is dimension reduction using the ABC algorithm, and the fifth stage is the detection of brain tumours using the Ensemble Convolutional Neural Network classifier. These stages are explained in the proposed methodology.

5.2.1 Magnetic resonance imaging (MRI) of brain tumours

Imaging of the brain in patients undergoing treatment for a brain tumour is often indicated at different stages and usually plays an essential role in each of them. The stages of management that may be considered are:

 

  1. Detection or confirmation that a structural abnormality is present.
  2. Location and assessment of the extent of any abnormality.
  3. Characterization of the abnormality and assessment of the nature of the tumour.
  4. Facilitation of additional diagnostic procedures and planning for surgery or another type of therapy.
  5. Intraoperative control of resection progress.
  6. Monitoring of treatment response.

CT is the fastest modality, making it the preferred examination for imaging critically ill or medically unstable patients. Due to their ability to produce information on tissue biology and physiology, SPECT and PET imaging are also widely used.

 

 

5.2.1.1  MRI process:

The patient is placed in a strong magnetic field, which causes the protons in the body to align with the magnetic field in either a parallel or antiparallel orientation. A radiofrequency pulse is then introduced, causing the spinning protons to move out of alignment. When the pulse is stopped, the protons re-align and emit a radiofrequency signal, which is localized by gradient magnetic fields that are rapidly switched on and off. A radio antenna (coil) within the scanner detects the signal and creates the image.

 

 

5.2.2  Pros and cons of MRI:

MRI is the most frequently used neuro-imaging technique for the evaluation and follow-up of patients, for several reasons. Ionizing radiation is used in CT, SPECT, and PET studies but not in MRI. Its contrast resolution is higher than that of the other techniques, so it is preferred for detecting small lesions and lesions that appear isodense on contrast-enhanced CT; it is also more sensitive than CT in detecting lesion enhancement. An MRI device generates images in the sagittal, axial, and coronal planes. This ability provides better localization of a lesion in the 3D space of the brain and allows the structures involved in the tumour to be more clearly delineated. MR images also eliminate the beam-hardening artifacts that the skull base produces on CT, making it better for evaluating lesions in the posterior fossa and in the inferior frontal and temporal lobes. A further advantage is that the development of MR spectroscopy, MR diffusion imaging, and MR perfusion imaging now permits the evaluation of tumour bio-physiology with MR scanners, so MR imaging yields both functional and anatomical details about the tumour during the same scan. There are, however, some limitations to MR imaging. The first is a lack of specificity: many pathologic lesions appear hypointense on T1-weighted (T1w) images and hyperintense on T2-weighted (T2w) images, and the MRI differential diagnosis of an intracranial neoplasm includes infarcts, demyelinating lesions, radiation necrosis, infections, and other inflammatory processes. Moreover, while higher-grade tumours frequently show enhancement on MR imaging, MR imaging cannot reliably distinguish the edge of a tumour or determine the full extent of disease. Sometimes the imaging abnormalities seen are nonspecific; hence MRI alone cannot be used to determine the presence of a tumour.

 

5.3 PROBLEM STATEMENT

            A brain tumour is caused by the formation of abnormal tissues within the human brain, so it is necessary to remove the affected part from the brain safely. Image classification is one of the challenging tasks in today's medical field: a reliable classification of MRI slices can help identify a tumour with its actual size and shape, whereas a wrong classification leads to errors in locating the cancerous area. Hence a method based on the ECNN classifier is presented for the accurate identification and classification of the cancerous part. Research in the medical field has been an emerging area for the past decades, and brain tumours, one of the rapid causes of human death, are part of this field. It is therefore necessary to understand the causes of tumours, and medical experts have carried out extensive research on classifying MRI slices as normal or abnormal; after this grouping, the abnormal slices are involved in the classification process. Diagnosing a tumour at an earlier stage helps doctors in subsequent treatment. A tremendous number of classification techniques have been designed for this process; however, each method has its own merits and demerits.

 

5.4 PROPOSED METHODOLOGY

            In this work, brain tumour detection is done by image processing techniques, with the main aim of detecting brain tumours at an early stage. The input image is first sent to preprocessing, where two processes take place: noise removal using a median filter together with skull masking, followed by image enhancement. To improve performance and reduce complexity, semi-supervised segmentation is then applied. Feature extraction is done with a texture feature extraction method, followed by feature dimension reduction using an enhanced bee colony optimization algorithm to reduce the feature space. Finally, brain tumour detection is done using the Ensemble Convolutional Neural Network classifier in an unsupervised manner. Figure 5.1 gives an overview of the proposed methodology.

 

 

Figure 5.1. Overview of the proposed methodology

 

 

 

 

5.4.1 Pre-processing

 

In the image acquisition process, the images are passed through all the processing steps needed to modify them or to collect the required information from them. The images are collected from MRI scans of the organ, and various file formats with different file extensions are used for storing these digital pictures. The images are then stored as matrices in the MATLAB environment. MRI images may be in RGB format, so gray-level conversion is applied first. The median filter is used to remove the noise from the grayscale MRI image. Then skull masking is applied to the denoised image: the fatty tissues are removed, and other areas such as the skull and hair are removed as well, so that the focus can be placed only on the brain tissues, which clearly reduces the overhead of identifying the tissue that contains a tumour. Various masks are applied to the MRI image in different orientations, namely horizontal, vertical, and diagonal, for the skull masking process. The preprocessing functions are:

 

  1. Noise removal – median filter
  2. Skull masking
  3. Image enhancement – CLAHE

 

5.4.2 Median filter

This is the most common technique used for noise elimination. It is a nonlinear filtering technique used to eliminate salt-and-pepper noise from the grayscale image. The median filter is based on the ranked values of pixels; its advantages are that it efficiently reduces salt-and-pepper noise and speckle noise while preserving edges and boundaries. The median filter is a nonlinear signal processing technique based on order statistics. The noisy value of a digital image or a sequence is replaced by the median value of its neighbourhood (mask): the pixels under the mask are ranked in the order of their gray levels, and the median value of the group replaces the noisy value. The median filtering output is g(x, y) = med{ f(x − i, y − j) : (i, j) ∈ W }, where f(x, y) and g(x, y) are the original and output images respectively, and W is the two-dimensional mask. The mask size is n × n (n is commonly odd), such as 3 × 3 or 5 × 5, and the mask shape may be linear, square, circular, or cross-shaped.
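As a concrete illustration, the following is a minimal Python sketch of this denoising step on a synthetic noisy slice; the pipeline in this work runs in MATLAB, so the use of SciPy's median_filter here is an illustrative assumption, not the original implementation.

```python
# Median filtering of a synthetic grayscale "MRI" slice corrupted by
# salt-and-pepper noise, with a 3 x 3 mask W as in g(x, y) above.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
image = rng.uniform(0.3, 0.7, size=(128, 128))   # clean synthetic slice

noisy = image.copy()
mask = rng.random(image.shape)
noisy[mask < 0.05] = 0.0                         # pepper
noisy[mask > 0.95] = 1.0                         # salt

denoised = median_filter(noisy, size=3)          # 3 x 3 square mask

print("noisy  MSE:", float(np.mean((noisy - image) ** 2)))
print("median MSE:", float(np.mean((denoised - image) ** 2)))
```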

 

The noise-reducing performance of the median filter:

The mathematical analysis is relatively complex for an image with random noise because the median filter is a nonlinear filter. For an image with zero-mean noise under a normal distribution, the noise variance of median filtering is approximately

σ²med ≈ 1 / (4 n f²(n̄)) ≈ (σ²ᵢ / (n + π/2 − 1)) · (π/2)                                        (5.1)

where σ²ᵢ is the input noise power (the variance), n is the size of the median filtering mask, and f(n̄) is the noise density function. The noise variance of average filtering is

σ²avg = σ²ᵢ / n                                                                                (5.2)

 

Comparing (5.1) and (5.2), the noise-reduction effect of median filtering depends on the mask size and on the distribution of the noise. For random noise, median filtering is more effective than average filtering, and for impulse noise, where the narrow pulses stand far apart and have widths of less than n/2, the median filter is even more effective. A brief numerical check of the Gaussian case is sketched below. The filtering stage is then followed by skull masking, which is based on morphological procedures.
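A quick Monte-Carlo check of this comparison for the Gaussian case, assuming NumPy (the constants in (5.1) are asymptotic, so the finite-mask ratio comes out slightly below π/2):

```python
# Compare the output variance of a 9-sample median against a 9-sample
# mean for zero-mean Gaussian noise, mimicking a 3 x 3 mask.
import numpy as np

rng = np.random.default_rng(5)
noise = rng.normal(0.0, 1.0, size=(100_000, 9))   # 100k draws of a 3x3 mask

ratio = np.median(noise, axis=1).var() / noise.mean(axis=1).var()
print("var(median) / var(mean) ~", round(float(ratio), 2))  # ~1.5; pi/2 asymptotically
```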

 

Skull masking

Skull masking means the removal of non-brain tissues such as the skull, scalp, fat, eyes, and neck from the MRI brain image. It helps improve the speed and accuracy of the system in medical applications. Skull stripping is an essential process in biomedical image analysis and is required for the adequate examination of brain tumours from MR images; it removes extra-cerebral tissues such as fat, skin, and skull from the brain images. Skull stripping methods are broadly classified into five categories: mathematical morphology-based methods, intensity-based methods, deformable surface-based methods, atlas-based methods, and hybrid methods. In this work, skull stripping is done with a mathematical morphology-based approach.
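The following Python sketch illustrates one common morphology-based skull-stripping recipe: threshold the slice, erode to break the thin connections to skull and scalp, keep the largest connected component, and dilate back. It is a hedged stand-in for the procedure used in this work, not its exact MATLAB implementation, and it relies on the erosion and dilation operations defined in the next subsection.

```python
# Morphology-based skull stripping on a toy slice: a bright ring plays
# the skull, a dimmer disc plays the brain.
import numpy as np
from scipy import ndimage

def strip_skull(slice2d, thresh=0.2):
    binary = slice2d > thresh
    eroded = ndimage.binary_erosion(binary, iterations=3)   # break thin links
    labels, n = ndimage.label(eroded)
    if n == 0:
        return np.zeros_like(slice2d)
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))
    brain = labels == (int(np.argmax(sizes)) + 1)           # largest component
    brain = ndimage.binary_dilation(brain, iterations=3)    # restore its size
    return slice2d * brain

yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
toy = 0.6 * (r < 40) + 0.9 * ((r > 50) & (r < 54))          # brain + skull ring
print(float(strip_skull(toy).max()))                        # 0.6: ring removed
```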

 

Morphology-Based Methods:

Mathematical morphology is a tool based on set theory for analyzing geometrical structures such as size, shape, and connectivity. It was developed originally for binary images but can also be used for grayscale and colour images. The procedure probes the image at different locations with a pre-defined shape and decides how this shape fits or misses the shapes in the picture. This probe is called the structuring element, and it is itself a binary image. The choice of size and shape of the structuring element is need-based; diamond, square, disc, horizontal line, vertical line, and cross are the most commonly used structuring elements. Erosion and dilation are the two essential operations in mathematical morphology.

 

Erosion and dilation:

Let I be a binary image and S a structuring element. The erosion of I by S, denoted I ⊖ S, is the set of all pixel positions z at which S, placed at z, is contained within I. Erosion shrinks or thins the object and removes all small unwanted objects in the image.

I ⊖ S = { z │ (S)z ⊆ I }                                                         (5.3)

The dilation of I by S, denoted I ⊕ S, is the set of all pixel positions z at which S, placed at z, overlaps I by at least one element. Dilation thickens an object and highlights small objects in the picture.

I ⊕ S = { z │ (S)z ∩ I ≠ ∅ }                                             (5.4)
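A direct numerical illustration of (5.3) and (5.4) on a toy binary image, using SciPy's binary morphology as an assumed, illustrative tool choice:

```python
# Erosion thins a 3 x 3 object down to its single central pixel;
# dilation with the same 3 x 3 structuring element grows it to 5 x 5.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

I = np.zeros((7, 7), dtype=bool)
I[2:5, 2:5] = True                    # a 3 x 3 square object
S = np.ones((3, 3), dtype=bool)       # structuring element

print(int(binary_erosion(I, S).sum()))    # 1  (Eq. 5.3: shrinking)
print(int(binary_dilation(I, S).sum()))   # 25 (Eq. 5.4: thickening)
```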

After the morphological operations, image enhancement takes place using CLAHE in order to improve the subsequent segmentation process.

 

5.4.3 Image enhancement- Contrast Limited Adaptive Histogram Equalization (CLAHE)

Adaptive histogram equalization is an image processing method used to improve the contrast of images. It differs from the conventional histogram equalization method in that the adaptive approach computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image; the conventional technique uses only one histogram for the complete picture.

 

Consequently, adaptive histogram equalization is an image enhancement method that can improve the local contrast of an image and bring out more detail, but it is also prone to amplifying noise considerably. CLAHE was introduced to address this noise-amplification problem. It operates on small regions of the image, called tiles, rather than on the complete picture. The contrast of each tile is enhanced so that the output histogram of the region approximately matches the histogram specified through the distribution parameters. Neighbouring tiles are then compared and combined using bilinear interpolation in order to remove artificially induced boundaries. The contrast in homogeneous areas is limited in order to avoid amplifying any noise that may be present in the image. Once the enhancement is over, segmentation is performed using the SSFCM algorithm; this makes further analysis of the image easier and identifies the more important information in the digital picture.
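A minimal sketch of this enhancement step, assuming scikit-image's equalize_adapthist as the CLAHE implementation; the tile size and clip limit below are illustrative values rather than the parameters used in this work.

```python
# CLAHE on a synthetic low-contrast slice: local contrast rises while
# the clip limit keeps noise amplification bounded.
import numpy as np
from skimage import exposure

rng = np.random.default_rng(1)
img = rng.uniform(0.4, 0.6, size=(128, 128))     # low-contrast input in [0, 1]

enhanced = exposure.equalize_adapthist(
    img,
    kernel_size=16,     # tile size over which local histograms are built
    clip_limit=0.02,    # contrast limit that suppresses noise amplification
)
print(float(img.std()), float(enhanced.std()))   # spread of intensities grows
```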

 

5.4.4 Segmentation

Image segmentation is the method by which a digital image is partitioned into various regions, either as a whole or as groups of pixels. In a digital image, the boundaries delimit different objects of similar colour and texture. A segmentation is the set of regions that together cover the entire image, along with the contours extracted from it. Each pixel in a particular region is similar with respect to certain characteristics such as colour, intensity, or texture, while adjacent regions differ notably in the same characteristics. In the proposed work, segmentation is carried out by the SSFCM, as discussed in the section that follows.

 

Semi-Supervised Fuzzy C-Means clustering algorithm (SSFCM)

There exist various SSFCM techniques that differ in their measures of similarity and dissimilarity, in their scaling factor, and in their optimization function. In the proposed research, SSFCM is used to perform the clustering of the image. Its objective function has both a supervised and an unsupervised part and is expressed as

 

J = Σᵢ₌₁ᶜ Σⱼ₌₁ᴺ uijᵖ dij² + α Σᵢ₌₁ᶜ Σⱼ₌₁ᴺ (uij − fij bj)ᵖ dij²       (5.5)

 

where uij is the membership value of data pattern j in cluster i, dij is the distance between the centre of cluster vi and data pattern j, c is the number of clusters, fij is the membership value of training data pattern j in cluster i (which never gets updated in the algorithm), bj is a Boolean indicator marking a particular pattern as training (1) or not training (0), p is the fuzzifier parameter, commonly set to 2, and α is a scaling factor that balances the supervised and unsupervised learning components. The scaling factor is usually expressed as the ratio N/M between the number of all data patterns (N) and the number of training data patterns (M).

 

The algorithm used in SSFCM is summarized in the following steps:

  • The initial step determines the initial parameters: the number of clusters c, the membership values of the training data patterns for every cluster, and the initial fuzzy partition matrix U⁰, whose entries are drawn randomly between 0 and 1 subject to the constraint

 

Σᵢ₌₁ᶜ uij = 1 for every pattern j, with uij ∈ [0, 1]                                 (5.6)

 

  • The iterative procedure is started
  • Compute the centers of the clusters V=[vi] (prototypes) as:

 

vᵢ = Σⱼ₌₁ᴺ uijᵖ xⱼ / Σⱼ₌₁ᴺ uijᵖ                                                  (5.7)

 

  • Care is taken that only the training data are used to compute the initial clusters.
  • Update the partition matrix U (for the common choice p = 2):

uij = (1/(1 + α)) · [ (1 + α(1 − bj Σₖ₌₁ᶜ fkj)) / Σₖ₌₁ᶜ (dij/dkj)² + α fij bj ]                 (5.8)

 

  • Finally, the stopping criterion is evaluated: if |Jr − Jr−1| < ε, the iteration is stopped; otherwise, steps 2 to 5 are repeated with U = Ur. The equations for the partition matrix (Equation (5.8)) and the prototypes (Equation (5.7)) are derived by minimizing the objective function (Equation (5.5)) with a suitable optimization technique. This is an iterative process: in each iteration, the membership matrix of the non-training data and the cluster prototypes are computed, and the process is repeated until the variation of the objective function between consecutive iterations meets the termination criterion. The cluster prototypes and the memberships of all samples to the various clusters are taken as the output of the proposed analysis.
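The following NumPy sketch is a simplified version of this loop: standard FCM updates with fuzzifier p = 2, with the memberships of the labelled (training) patterns clamped to their supervised values fij. It mimics the supervision of Eq. (5.5) rather than implementing its exact closed-form update.

```python
# Simplified semi-supervised fuzzy c-means on two Gaussian blobs with
# one labelled pattern per cluster.
import numpy as np

rng = np.random.default_rng(0)

def ssfcm(X, c, F, labelled, iters=50, eps=1e-9):
    """X: (N, d) data; F: (c, N) supervised memberships; labelled: (N,) bool."""
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                     # columns sum to 1, as in (5.6)
    U[:, labelled] = F[:, labelled]        # clamp the training patterns
    for _ in range(iters):
        W = U ** 2                         # fuzzifier p = 2
        V = (W @ X) / W.sum(axis=1, keepdims=True)         # prototypes, (5.7)
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + eps
        U = (1.0 / D**2) / (1.0 / D**2).sum(axis=0)        # FCM membership update
        U[:, labelled] = F[:, labelled]    # re-impose the supervision
    return U, V

X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
F = np.zeros((2, 100)); F[0, 0] = 1.0; F[1, 99] = 1.0
labelled = np.zeros(100, dtype=bool); labelled[[0, 99]] = True
U, V = ssfcm(X, 2, F, labelled)
print(V.round(2))                          # prototypes near (0, 0) and (5, 5)
```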

 

Feature extraction then takes place using the GLCM, which extracts the texture features that serve as high-level features for classification.

 

5.4.5 Feature extraction

Feature extraction characterizes the clusters that are predicted to be infected in the FCM output. The Gray Level Co-occurrence Matrix (GLCM) is a statistical approach for examining texture features that takes the spatial relations among pixels into account. The GLCM is built by counting how often a pixel with intensity i occurs in a particular spatial relationship to a pixel with value j, i.e., how often pairs of pixel values appear separated by a given displacement vector in the image. Consider an input image with M pixels horizontally and N pixels vertically whose gray levels are quantized to Z levels. Let Nx = {1, 2, …, M} and Ny = {1, 2, …, N} be the horizontal and vertical spatial domains, and let G = {0, 1, …, Z − 1} be the set of Z quantized gray levels. For a given distance d and direction θ, the GLCM entry for gray levels i and j is the number of co-occurrences in that direction:

P(i, j │ d, θ) = #{ ((x₁, y₁), (x₂, y₂)) : f(x₁, y₁) = i, f(x₂, y₂) = j, (x₂ − x₁, y₂ − y₁) = (d cos θ, d sin θ) }                                      (5.9)

From the GLCM, five features are considered: contrast, correlation, entropy, energy, and homogeneity.

 

Contrast

Contrast measures the intensity difference between a pixel and its neighbour over the entire image; it is zero for a constant image, and it is also termed the variance.

 

Contrast = Σᵢ Σⱼ (i − j)² P(i, j)                                  (5.10)

Correlation

Correlation measures how strongly a pixel is correlated with its neighbour over the entire image.

 

Correlation = Σᵢ Σⱼ (i − μᵢ)(j − μⱼ) P(i, j) / (σᵢ σⱼ)                                 (5.11)

Entropy

 

Entropy measures the complexity of an image; a texturally complex image yields a large entropy.

 

Entropy = − Σᵢ Σⱼ P(i, j) log₂ P(i, j)                                         (5.12)

Energy

 

Energy is the sum of the squared elements of the GLCM, and it takes the value 1 for a constant image.

 

 

Energy = Σᵢ Σⱼ P(i, j)²                                   (5.13)
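As an illustration, the sketch below computes these texture descriptors with scikit-image (an assumed tool choice): graycomatrix builds the matrix P(i, j │ d, θ) of Eq. (5.9), graycoprops reads off contrast, correlation, energy, and homogeneity, and entropy is computed by hand since graycoprops does not expose it.

```python
# GLCM texture features of a random 8-level test image for d = 1 and
# theta in {0, 90 degrees}.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)   # 8 gray levels

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)

feats = {prop: float(graycoprops(glcm, prop).mean())
         for prop in ("contrast", "correlation", "energy", "homogeneity")}
p = glcm / glcm.sum(axis=(0, 1), keepdims=True)           # normalize each matrix
feats["entropy"] = float(-(p * np.log2(p + 1e-12)).sum(axis=(0, 1)).mean())
print(feats)
```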

 

The texture features are extracted using the GLCM in order to obtain a more efficient classification. The process that follows is dimensionality reduction using the ABC approach.

 

 

 

5.4.6 Dimensionality Reduction

Dimensionality reduction is introduced to increase the detection rate of the classifier and is done using the Artificial Bee Colony (ABC) algorithm. The primary limiting factor in processing data across various fields is its high dimensionality, which can cause great ambiguity in finding the elements essential for the analysis of the data. Dimension reduction is needed in order to separate the data that are not relevant from the data that are desired. The proposed research presents an ABC-based approach for this reduction.

 

Artificial Bee Colony Optimization algorithm

The Artificial Bee Colony (ABC) algorithm simulates the foraging behaviour of honey bees: the bees try to maximize the amount of nectar stored in the hive through an effective self-organized division of labour. Here, the bees are used to reduce the number of dimensions of the texture features. For performing the required task, the ABC comprises three categories of bees: employed bees, onlooker bees, and scout bees. One half of the colony consists of employed bees and the other half of onlooker bees. The employed bees exploit information about texture-feature sources explored earlier and report the quality of the food source they are exploiting to the onlookers inside the hive; the onlooker bees wait and decide on a food source depending on the feature information shared by the employed bees; the scout bees perform a random search for new food sources. Feature subsets are thereby selected and their fitness is computed to find the best subset in each iteration, and this step is repeated for a given number of iterations to find a reduced feature subset. Initially, for every employed bee, whose total equals half the number of food sources, a candidate source is produced with:

 

vij = xij + φij(xij − xkj)                                                                (5.14)

 

where φij is a uniformly distributed real random number within the range [−1, 1], k is the index of a solution chosen randomly from the colony (k = int(rand * N) + 1), j = 1, …, D, and D is the dimension of the problem. After producing vi, this new reduced feature set is compared with the solution xi, and the employed bee exploits the better source. In the second step of the algorithm, an onlooker bee chooses a food source with a probability proportional to its quality and produces a new reduced feature source at the selected site. As for the employed bee, the better source is retained based on the objective value fi. The fitness value to be maximized is given by the following equation:

fitᵢ = 1 / (1 + fᵢ)  if fᵢ ≥ 0,  and  fitᵢ = 1 + |fᵢ|  otherwise                 (5.15)

The probability is calculated from the fitness values using the following equation:

Pᵢ = fitᵢ / Σₙ fitₙ                                                                       (5.16)

where fitᵢ is the fitness of the solution xᵢ and the sum runs over all food sources.

Once all the onlookers have been distributed, the sources are checked to decide which are to be abandoned. If the number of cycles through which a source has not improved is greater than the pre-defined limit, the source is considered exhausted. The employed bee associated with the exhausted source then becomes a scout and performs a random search in the problem domain by:

 

zij = xjᵐⁱⁿ + (xjᵐᵃˣ − xjᵐⁱⁿ) · rand                                           (5.17)

 

Algorithm for Artificial Bee Colony for dimensionality reduction

Input: C, the set of all conditional features, and the set of decision features.

Initialization:
  • Select the initial parameter values for ABC.
  • Initialize the population (xi).
  • Calculate the objective and fitness values.
  • Record the global optimum feature subset.

Repeat, for a maximum number of cycles:
  1. Produce a new feature subset (vi).
  2. Apply greedy selection between xi and vi.
  3. Calculate the fitness and probability values.
  4. Produce solutions for the onlookers.
  5. Apply greedy selection for the onlookers.
  6. Determine the abandoned solutions.
  7. Calculate the best feature subset of the cycle.
  8. Memorize the best optimum feature subset.
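A toy, hedged Python sketch of this loop for feature-subset selection is given below. Solutions are real vectors in [0, 1] thresholded at 0.5 into subsets, and the fitness is a stand-in class-separation score minus a size penalty rather than the objective used in this work.

```python
# ABC-style feature selection on toy data where only the first 3 of 10
# features separate the two classes.
import numpy as np

rng = np.random.default_rng(3)
D, SN, LIMIT, CYCLES = 10, 8, 5, 60       # dims, food sources, limit, cycles

X0 = rng.normal(0.0, 1.0, (40, D))
X1 = rng.normal(0.0, 1.0, (40, D)); X1[:, :3] += 2.0

def fitness(sol):
    sel = sol > 0.5                        # threshold into a feature subset
    if not sel.any():
        return 0.0
    sep = np.abs(X0[:, sel].mean(0) - X1[:, sel].mean(0)).mean()
    return sep - 0.02 * sel.sum()          # reward separation, penalize size

foods = rng.random((SN, D))
fits = np.array([fitness(f) for f in foods])
trials = np.zeros(SN, dtype=int)

def try_move(i):                           # neighbour move, Eq. (5.14)
    k = rng.choice([s for s in range(SN) if s != i])
    j = rng.integers(D)
    v = foods[i].copy()
    v[j] = np.clip(v[j] + rng.uniform(-1, 1) * (v[j] - foods[k, j]), 0, 1)
    fv = fitness(v)
    if fv > fits[i]:                       # greedy selection
        foods[i], fits[i], trials[i] = v, fv, 0
    else:
        trials[i] += 1

for _ in range(CYCLES):
    for i in range(SN):                    # employed bees
        try_move(i)
    p = np.clip(fits, 1e-9, None)          # selection probabilities, Eq. (5.16)
    for i in rng.choice(SN, size=SN, p=p / p.sum()):      # onlooker bees
        try_move(i)
    worst = int(trials.argmax())           # scout re-seeds, Eq. (5.17)
    if trials[worst] > LIMIT:
        foods[worst] = rng.random(D)
        fits[worst] = fitness(foods[worst]); trials[worst] = 0

print("selected features:", np.where(foods[fits.argmax()] > 0.5)[0])
```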

After dimensionality reduction using the ABC algorithm, the ECNN is applied in the last stage to distinguish tumour from non-tumour images.

 

5.4.7 Classification: Ensemble Convolutional Neural Network (ECNN)

In this work, classification is used to differentiate the tumour and non-tumour regions by means of the Ensemble Convolutional Neural Network classifier.

 

Ensemble Convolutional Neural Network (ECNN) classifier

An ensemble combines the outputs of different classifiers into a single output to produce a better and more efficient result. Here, several convolutional neural networks are combined using an ensemble method based on the sum rule. The idea underlying multiple-classifier fusion is to obtain a better estimate of the posterior probability by combining the estimates of the individual members of the ensemble. CNNs are a class of deep feed-forward neural networks. Like most neural networks, CNNs are made up of interconnected neurons with learnable weights, biases, and activation functions. Their neurons are arranged in a 3-D manner comprising width, height, and depth, which means that each layer of the CNN transforms a three-dimensional input volume into a three-dimensional output volume of neuron activations. CNNs comprise five classes of layers: convolution (CONV), activation (ACT), pooling (POOL), followed in the last stage by fully-connected (FC) and classification (CLASS) layers. The CONV layer is the core building block of a CNN and carries most of the computational cost. CONV layers compute outputs that are connected to locally available regions of the input by means of convolution. The spatial extent of this connectivity is a hyperparameter termed the receptive field, and a parameter-sharing scheme is used in CONV layers to control the number of parameters: the parameters of a CONV layer are sets of shared weights with small receptive fields.

 

POOL layers take care of nonlinear down-sampling operations; the most commonly used method is MAX pooling, which partitions the input into a set of non-overlapping rectangles and outputs the maximum of each. This reduces the spatial size and, in parallel, the number of parameters, the overfitting, and the computational complexity. ACT layers apply an activation function, such as the non-saturating ReLU (Rectified Linear Unit) f(x) = max(0, x), the saturating hyperbolic tangent f(x) = tanh(x) or f(x) = |tanh(x)|, or the sigmoid function f(x) = (1 + e⁻ˣ)⁻¹. The final combination of the results from the classifiers is done by averaging, denoted by the following formula.

 

P̄j = (1/K) Σᵢ₌₁ᴷ pij                                                  (5.18)

 

where pij denotes the classification result of the i-th CNN for the j-th sample, with i running over the K classifiers and j over the samples. Finally, the fused classifier is used to separate the tumour and non-tumour regions. A comparison with various other classification methods is discussed in the following section.
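Since the member architectures are not specified here, the following PyTorch sketch is only a hedged illustration: three small CNNs built from the layer classes named above, each ending in a softmax, fused by the average rule of Eq. (5.18).

```python
# A toy 3-member ECNN: each member maps a 1-channel 64 x 64 slice to
# two class posteriors; the ensemble averages the member outputs.
import torch
import torch.nn as nn

def make_member():
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),   # CONV
        nn.ReLU(),                                   # ACT: f(x) = max(0, x)
        nn.MaxPool2d(2),                             # POOL (max pooling)
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),                  # FC
        nn.Softmax(dim=1),                           # CLASS: posteriors
    )

members = [make_member() for _ in range(3)]          # K = 3 members

x = torch.randn(4, 1, 64, 64)                        # 4 toy input slices
with torch.no_grad():
    probs = torch.stack([m(x) for m in members])     # (K, batch, classes)
    fused = probs.mean(dim=0)                        # average rule, Eq. (5.18)
print(fused.argmax(dim=1))                           # 0 = non-tumour, 1 = tumour
```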

 

5.5 RESULTS & DISCUSSION

In this section, various techniques are compared with the proposed method. The dataset used here is the BrainWeb dataset, which consists of full three-dimensional simulated brain MR data obtained using three modalities: T1-weighted, T2-weighted, and proton-density-weighted MRI. The dataset includes a variety of slice thicknesses, noise levels, and levels of intensity non-uniformity. The images used for this analysis are mostly of the T2-weighted modality with 1 mm slice thickness, 3% noise, and 20% intensity non-uniformity; 13 of the 44 images included contain tumour-infected brain tissue. A further dataset collected from expert radiologists consists of 135 images of 15 patients across all modalities; it includes ground-truth images that allow the results of the proposed method to be compared with the manual analysis of radiologists. The results of the various classification techniques and of the proposed method are measured in terms of sensitivity, specificity, and accuracy. The classification techniques used for comparison are K-Nearest Neighbours (K-NN), the Adaptive Neuro-Fuzzy Inference System (ANFIS), the Support Vector Machine (SVM), and the Ensemble Convolutional Neural Network (ECNN). The three measures, sensitivity, specificity, and accuracy, are discussed with a comparison between the classification methods in the following section. Figure 5.2 illustrates the results obtained in the different stages of this work (brain tumour detection).

 


 

Figure 5.2 (a) Input image (b) Pre-processed image (c) CLAHE image (d) Segmented image (e) Feature-extracted image (f) Classified image

 

Sensitivity

Sensitivity (also called the true positive rate, the recall, or probability of detection in some fields) measures the proportion of actual positives that are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition). It can be written as,

 

Sensitivity = TP / (TP + FN)                                                      (5.19)

where TP and FN are the numbers of true positives and false negatives.

 

 

 

Figure 5.3. Comparison of various techniques with sensitivity

Figure 5.3 shows the comparison of the different classification techniques with respect to sensitivity. The proposed ECNN reaches a sensitivity of 98.82%, while the existing K-NN, ANFIS, and SVM classifiers reach 92.34%, 94.35%, and 98%, respectively.

 

 

Specificity

Specificity (also called the true negative rate) measures the proportion of actual negatives that are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition). It can be written as,

Specificity = TN / (TN + FP)                                                      (5.20)

where TN and FP are the numbers of true negatives and false positives.

 

Figure 5.4. Comparison of various techniques with Specificity

Figure 5.4 shows the comparison of the different classification techniques with respect to specificity. The proposed ECNN reaches a specificity of 88.89%, while the existing K-NN, ANFIS, and SVM classifiers reach 61.54%, 69.23%, and 87.12%, respectively.

 

Accuracy

Accuracy is the proportion of the total number of predictions that were correct. It is calculated for the classifier using the equation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)                                        (5.21)

 

Figure 5.5. Comparison of various techniques with accuracy

Figure 5.5 shows the comparison of the different classification techniques with respect to accuracy. The proposed ECNN reaches 97.33% accuracy, while the existing K-NN, ANFIS, and SVM classifiers reach 87%, 90%, and 96.51%, respectively. Hence, the proposed ECNN classifier has higher accuracy, specificity, and sensitivity than all the other existing methods considered.
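For completeness, a small NumPy sketch computing the three measures (5.19)-(5.21) from toy label vectors:

```python
# Sensitivity, specificity and accuracy from predicted vs. true labels.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # 1 = tumour, 0 = normal (toy)
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

print("sensitivity:", tp / (tp + fn))            # Eq. (5.19)
print("specificity:", tn / (tn + fp))            # Eq. (5.20)
print("accuracy   :", (tp + tn) / len(y_true))   # Eq. (5.21)
```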

 

5.6 CONCLUSION

In this work, tumours are detected using an image processing and classification pipeline. Several stages are carried out for the detection of a tumour: preprocessing, segmentation, feature extraction, and classification. Preprocessing consists of filtering, skull masking, and image enhancement by CLAHE for an efficient segmentation process. The key contributions of this work are texture feature extraction using the GLCM and classification using the Ensemble Convolutional Neural Network (ECNN) for the detection of brain tumours. The proposed ECNN classifier is compared with other existing methods, namely K-NN, ANFIS, and SVM, and is shown to achieve higher accuracy, sensitivity, and specificity than the other existing classifiers.

 
