
Abstract – Image segmentation is an important task in image processing and computer vision. Image segmentation is a technique that partitions a digital image into a number of homogeneous regions, or sets of homogeneous pixels. This paper presents a framework for object retrieval using a semi-automatic method for object detection, because fully automatic segmentation is very hard for natural images. To improve the effectiveness of object retrieval, a maximal similarity based region merging and flood fill technique is used. The users only need to roughly indicate the position and main features of the object and background; every region is then assigned to the label region or the non-label region, i.e. object or background, after which the desired object's contour is obtained through the automatic merging of similar regions.

A similarity based region merging mechanism guides the merging process, with initial segmentation provided by the mean shift technique. Two or more regions are merged with their adjacent regions on the basis of the maximal similarity rule. The method automatically merges the regions that are initially segmented by the initial segmentation technique, and then effectively extracts the object contour by merging regions.

Keywords – Image segmentation, maximal similarity based region merging, flood fill, mean shift.


I. INTRODUCTION

Image segmentation is typically used to find objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual features. Image segmentation is a process of partitioning a digital image into multiple unique regions, where a region is a set of similar pixels. Image segmentation is the preprocessing step of pattern detection and recognition.

If R is the set of all image pixels, then segmentation yields distinct unique regions {R1, R2, R3, ..., Rn} which, when combined, form the image R. Pal and Pal [1] provided a review of various image segmentation techniques, from which it follows that there is no single standard procedure for image segmentation. Selection of an appropriate image segmentation technique depends on the type of images and the application. Image segmentation techniques can be classified into four types: thresholding, edge-based, region-based, and hybrid techniques.

Thresholding techniques set two thresholds on the histogram of the image, classify the pixels falling between the two thresholds as one region, and classify the rest as the second region. Edge-based methods assume that pixel properties or image features, such as intensity, color, and texture, should change abruptly between different regions. Region-based methods assume that neighboring pixels within the same region should have similar values.
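As an illustration, the double-threshold scheme described above can be sketched in a few lines. This is a hedged toy example; the function name, the list-of-lists image format and the region labels 1/2 are assumptions, not taken from the paper:

```python
def two_threshold_segment(gray, t1, t2):
    """Label pixels whose intensity lies between t1 and t2 as region 1,
    and all remaining pixels as region 2 (double-thresholding)."""
    return [[1 if t1 <= p <= t2 else 2 for p in row] for row in gray]
```

For instance, `two_threshold_segment([[10, 100, 200]], 50, 150)` yields `[[2, 1, 2]]`: only the middle pixel falls between the two thresholds.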

Hybrid methods combine edge detection and region-based techniques to achieve better image segmentation. The color and texture features in a natural image are very complex, and it is very difficult to obtain a fully segmented natural image automatically; therefore semi-automatic or interactive segmentation methods, which supply the user with means to incorporate his knowledge into the segmentation process, are used. Such interactive segmentation methods incorporating user interactions have been proposed [2], [3], [4], [6] and are becoming more popular. For instance, in the active contour model (ACM), i.e. the snake algorithm [2], a proper selection of the initial curve by the user can lead to good convergence to the true object contour.

Similarly, in the graph cut algorithm [4], the prior information obtained from the users is critical to segmentation performance. Low level image segmentation methods, such as mean shift [5], watershed [6], level set [7] and super-pixel [8], usually divide the image into many small regions. Although they may produce severe over-segmentation, these low level methods provide a good basis for subsequent high level operations, such as region merging. For example, in [9], [10] Li et al. combined graph cut with watershed pre-segmentation for better segmentation outputs, where the regions segmented by watershed, instead of the pixels of the original image, are regarded as the nodes of the graph cut. As a popular segmentation scheme for color images, mean shift [5] produces less over-segmentation than watershed while preserving the edge information of the object well.

In this paper, a similarity region merging method based on an initial mean shift segmentation is presented. The method calculates the similarity of different regions and merges them based on the largest similarity. The object is then extracted from the background when the merging process ends. Although the idea of region merging was first introduced in [11], this paper uses region merging for obtaining the object contour and then extracting the desired object from the image. The key contribution of the method is a novel similarity based region merging technique, which is adaptive to image content and does not require a preset threshold. With the region merging algorithm, the segmented regions are automatically merged and labeled; when the desired object contour is identified and separated from the background, the object contour can be readily extracted. The algorithm is very simple, but it can successfully extract objects from complex scenes. The rest of the paper is organized as follows: Section 2 presents the literature survey.

Section 3 describes the region merging algorithm. Section 4 presents experimental results and analysis. Section 5 concludes the paper.

II. LITERATURE SURVEY

Li Zhang and Qiang Ji [12] proposed a Bayesian Network (BN) model for both automatic (unsupervised) and interactive (supervised) image segmentation. They constructed a multilayer BN from the over-segmentation of an image, which finds object boundaries according to the measurements of regions, edges and vertices formed in the over-segmentation, and models the relationships among the superpixel regions, edge segments, vertices, angles and their measurements. For automatic image segmentation, the segmented image is produced after construction of the BN model and belief propagation.

For interactive image segmentation, if the results are not satisfactory, segmentation is carried out again with active input selection through human intervention. Costas Panagiotakis, Ilias Grinias, and Georgios Tziritas [13] proposed a framework for image segmentation which uses feature extraction and clustering in the feature space, followed by flooding and region merging techniques in the spatial domain, based on the computed features of classes. A new block-based unsupervised clustering method is introduced which ensures spatial coherence using an efficient hierarchical tree equipartition algorithm. They divide the image into blocks based on the computed feature description. The image is partitioned using a minimum spanning tree relationship and the Mallows distance. They then apply a K-centroid clustering algorithm with the Bhattacharyya distance, compute the posterior distributions and distances, and perform initial labelling. A priority multiclass flooding algorithm is applied, and finally regions are merged to produce the segmented image. Jifeng Ning, Lei Zhang, David Zhang and Chengke Wu [14] developed an image segmentation model based on a maximal similarity interactive image segmentation method.

The users only need to roughly indicate the location and region of the object and background using strokes, which are called markers. A novel maximal-similarity based region merging mechanism guides the merging process with the help of the markers. A region R is merged with its adjacent region Q if R has the highest similarity with Q among all Q's adjacent regions. The method automatically merges the regions that are initially segmented by mean shift segmentation, and then effectively extracts the object contour by labeling all the non-marker regions as either background or object. The region merging process is adaptive to the image content and does not need a similarity threshold to be set in advance.

III. SIMILARITY REGION MERGING

An initial segmentation is required to partition the image into homogeneous regions for merging. Any existing low level image segmentation method can be used for this step, e.g.

watershed [6], super-pixel [8], level set [7] or mean shift [5]. In this paper the mean shift method is used for initial segmentation because it produces less over-segmentation and preserves object boundaries well. For the initial segmentation, the mean shift segmentation software of the EDISON system [15] is used to obtain the initial segmentation map. Fig. 1 shows an example of mean shift initial segmentation.
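To give a feel for the mode-seeking idea behind mean shift (not the EDISON implementation used above, which operates on the joint spatial-range features of the full image), here is a hedged one-dimensional toy sketch; the function name, flat kernel, and convergence tolerance are all assumptions:

```python
def mean_shift_modes(values, bandwidth=10.0, iters=50):
    """Toy 1-D mean shift: repeatedly move each sample to the mean of the
    samples within `bandwidth` of it, until it settles on a density mode.
    Samples that converge to the same mode form one cluster (region)."""
    modes = []
    for x in values:
        for _ in range(iters):
            window = [v for v in values if abs(v - x) <= bandwidth]
            new_x = sum(window) / len(window)
            if abs(new_x - x) < 1e-3:  # converged to a mode
                break
            x = new_x
        modes.append(round(x, 1))
    return modes
```

On two well-separated intensity clusters, e.g. `mean_shift_modes([1, 2, 3, 100, 101, 102])`, each sample converges to its cluster mean, giving `[2.0, 2.0, 2.0, 101.0, 101.0, 101.0]`.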

This paper focuses only on object retrieval based on similarity region merging and the flood fill method.

Fig. 1. (a) Initial mean shift segmentation. (b) Initial segmentation result by the mean shift algorithm.

A. Similarity Measure Using Metric Descriptor

After the mean shift initial segmentation, a number of small regions are available. To guide the following region merging process, these regions need to be represented by some descriptor, and a merging rule must be defined. A color descriptor is very useful for representing the object's color features. In the context of region merging based segmentation, the color descriptor is more robust than other feature descriptors, because shape and size features vary a lot while the colors of different regions from the same object have high similarity. Therefore the color histogram is used to represent each region in this paper. The RGB color space is used to compute the color histogram of each region: each color channel is uniformly quantized into 16 levels, and the histogram is then calculated in a feature space of 16 × 16 × 16 = 4096 bins.

B. Merging Rule Using Bhattacharyya Descriptor

The regions are merged based on their color histograms so that the desired object can be extracted.
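The 4096-bin descriptor just described can be computed as follows. This is a minimal sketch; the function name and the pixel-list input format are assumptions:

```python
def rgb_histogram(pixels, levels=16):
    """Uniformly quantize each RGB channel into `levels` bins and build a
    normalized color histogram with levels**3 bins (4096 for 16 levels).
    `pixels` is a list of (r, g, b) tuples, each channel in 0..255."""
    hist = [0.0] * (levels ** 3)
    step = 256 // levels  # width of one quantization level (16 for 16 levels)
    for r, g, b in pixels:
        idx = (r // step) * levels * levels + (g // step) * levels + (b // step)
        hist[idx] += 1.0
    n = float(len(pixels))
    return [h / n for h in hist]
```

In a region merging setting, the function would be called once per initially segmented region, on that region's pixels.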

The key issue in region merging is how to determine the similarity between different segmented regions of the image so that similar regions can be merged by some logic control. Therefore a similarity measure between two regions R and Q is needed to accommodate the comparison between various regions; several well-known statistical metrics exist for this. Here the Bhattacharyya coefficient [16] is used to measure the similarity between two regions R and Q:

ρ(R, Q) = Σ_{u=1}^{4096} √(Hist_R^u · Hist_Q^u)     (1)

where Hist_R and Hist_Q are the normalized histograms of R and Q, respectively, and the superscript u represents the u-th element of them. The Bhattacharyya coefficient is a divergence-type measure which has a straightforward geometric interpretation.

It is the cosine of the angle between the unit vectors

(√Hist_R^1, ..., √Hist_R^4096)^T  and  (√Hist_Q^1, ..., √Hist_Q^4096)^T.

The higher the Bhattacharyya coefficient between R and Q, the higher the similarity between them, i.e. the smaller the angle. This geometric explanation of the Bhattacharyya coefficient actually reflects the perceptual similarity between regions. If two regions have similar content, their histograms will be very similar, and hence their Bhattacharyya coefficient will be very high, i.e. the angle between the two histogram vectors is very small.
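The Bhattacharyya coefficient translates directly into code (a sketch; the function name is an assumption):

```python
from math import sqrt

def bhattacharyya(hist_r, hist_q):
    """Bhattacharyya coefficient of two normalized histograms: the cosine
    of the angle between the vectors of square-rooted bin values."""
    return sum(sqrt(a * b) for a, b in zip(hist_r, hist_q))
```

Identical histograms give a coefficient of 1.0 (zero angle between the vectors), while histograms with no overlapping bins give 0.0 (orthogonal vectors).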

Certainly, it is possible that two perceptually very different regions have very similar histograms.

C. Object and Background Marking

In interactive image segmentation, the users need to specify the object and background conceptually. The users can input interactive information by drawing markers, which can be lines, curves and strokes on the image. The regions that have pixels inside the object markers are called object marker regions, while the regions that have pixels inside the background markers are called background marker regions. Fig. 1(b) shows an example of object and background markers drawn as simple lines: green markers mark the object, while blue markers represent the background.

Note that usually only a small portion of the object regions and background regions will be marked by the user. Actually, the fewer the inputs required from the users, the more convenient and more robust the interactive algorithm is. After object marking, each region will be labeled as one of three kinds of regions: a marker object region, a marker background region, or a non-marker region.

To completely extract the object contour, each non-marker region must automatically be assigned a correct label of either object region or background region. For the convenience of the following development, denote by M_O and M_B the sets of marker object regions and marker background regions, respectively, and denote by N the set of non-marker regions.

D. Similarity Based Merging Rule

After object/background marking, it is still a challenging problem to accurately extract the object contour from the background. The region merging method starts from an arbitrary segmented region and begins an automatic merging process. All regions are gradually labeled as either object region or background region.

The lazy snapping cutout method proposed in [10], which combines graph cut with watershed based initial segmentation, is actually a region merging method. This paper presents an adaptive similarity based technique for merging regions into either foreground or background. Let Q be an adjacent region of R, and denote by S̄_Q = {S_Q^i}, i = 1, 2, ..., q, the set of Q's adjacent regions. The similarities between Q and all of its adjacent regions, i.e. ρ(Q, S_Q^i), i = 1, 2, ..., q, are calculated. Obviously, R is a member of S̄_Q. If the similarity between R and Q is the maximal one among all these similarities, R and Q are merged. The following merging rule is thus defined:

Merge R and Q if ρ(R, Q) = max_{i=1,...,q} ρ(Q, S_Q^i).     (3)

E. The Merging Process

The whole object retrieval process works in two stages. In the first stage, the similar region merging proceeds as follows: the strategy is to start with any randomly selected segmented region and merge it with whichever of its adjacent regions has the highest similarity. Segmented regions are then merged with their adjacent regions: for each region Q, its set of adjacent regions S̄_Q = {S_Q^i} is formed, and if the similarity between Q and R_j = S_Q^j is the maximum for some j, i.e.

ρ(Q, R_j) = max_{i=1,...,q} ρ(Q, S_Q^i),     (4)

then Q and R_j are merged into one region, and the new region is labeled

R_new = Q ∪ R_j.     (5)

The above procedure is implemented iteratively.
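One way the iterative maximal-similarity merging could be realized is sketched below. This is a hedged illustration, not the authors' code: the region-graph data layout, the globally greedy best-pair selection order, and the stopping count `min_regions` are all assumptions made for the sketch.

```python
from math import sqrt

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms (Bhattacharyya coefficient)."""
    return sum(sqrt(a * b) for a, b in zip(h1, h2))

def maximal_similarity_merge(hists, sizes, adjacency, min_regions=1):
    """Repeatedly merge the most similar pair of adjacent regions until
    only `min_regions` regions remain (assumed stopping rule; the paper
    instead checks after each step whether the object contour appeared).
    hists: id -> normalized histogram; sizes: id -> pixel count;
    adjacency: id -> set of adjacent region ids (kept symmetric)."""
    while len(hists) > min_regions:
        best = None  # (similarity, q, r)
        for q, nbrs in adjacency.items():
            for r in nbrs:
                s = bhattacharyya(hists[q], hists[r])
                if best is None or s > best[0]:
                    best = (s, q, r)
        if best is None:
            break  # no adjacent pairs left to merge
        _, q, r = best
        # merge r into q: size-weighted histogram, union of adjacencies
        n = sizes[q] + sizes[r]
        hists[q] = [(sizes[q] * a + sizes[r] * b) / n
                    for a, b in zip(hists[q], hists[r])]
        sizes[q] = n
        moved = adjacency.pop(r) - {q}
        for nb in moved:          # r's other neighbors become q's neighbors
            adjacency[nb].discard(r)
            adjacency[nb].add(q)
        adjacency[q] = (adjacency[q] | moved) - {q, r}
        del hists[r], sizes[r]
    return hists, sizes, adjacency
```

For example, three chained regions where the first two share a histogram collapse into two regions: the identical pair is merged first, since its Bhattacharyya coefficient (1.0) dominates every other adjacent pair.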

Note that at every iterative step it is checked whether the desired object has been retrieved or not. Specifically, the set of segmented regions shrinks, and iteration stops when the desired object is found. After the first stage, i.e. when the full object boundary (or most of it) has appeared, the second stage of the algorithm is applied: an input point on the object is selected and expanded over the four-connected pixels using the well known flood fill method.

F. Object Retrieval Flood Fill Algorithm

Input: (1) the image; (2) the initial mean shift segmentation of the input image.
Output: the desired object(s).

While regions are still being merged, up to extraction of the object contour from the input image:
1. Input the image I and its initial segmentation.
2. Perform one stage of merging of the initially segmented image using the similar-region merging rule.
3. After step 2 the number of regions is reduced; apply the similar-region merging rule again (an iterative procedure).
4. After retrieving the object contour, go to step 5.
5. Apply Region Labeling and then the Flood Fill method on the image resulting from step 4.

Region Labeling (I)   % I: binary image; I(u, v) = 0: background, I(u, v) = 1: foreground %
5.1.  m ← 2
5.2.  for all image coordinates (u, v) do
5.3.    if I(u, v) = 1 then
5.4.      FloodFill(I, u, v, m)
5.5.      m ← m + 1
5.6.  return the labeled image I

% After region labeling, the Flood Fill method is applied using DFS %
6. FloodFill(I, u, v, label)
6.1.  create an empty stack S
6.2.  Push(S, (u, v))
6.3.  while S is not empty do
6.4.    (x, y) ← Pop(S)
6.5.    if (x, y) is inside the image and I(x, y) = 1 then
6.6.      I(x, y) ← label
6.7.      Push(S, (x + 1, y))
6.8.      Push(S, (x, y + 1))
6.9.      Push(S, (x - 1, y))
6.10.     Push(S, (x, y - 1))
6.11. return I

IV. EXPERIMENTAL ANALYSIS

Although the RGB space and the Bhattacharyya distance are used in this method, other color spaces and metrics can also be used. This section presents some examples to verify the performance of the unsupervised region merging and flood fill method in the RGB color space.
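A runnable Python sketch of the Region Labeling and FloodFill pseudocode of Section III-F follows; the function names and the list-of-lists binary-image format are assumptions:

```python
def flood_fill(img, u, v, label):
    """Iterative 4-connected flood fill with an explicit stack (DFS):
    relabel the foreground component containing (u, v) with `label`."""
    h, w = len(img), len(img[0])
    stack = [(u, v)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < h and 0 <= y < w and img[x][y] == 1:
            img[x][y] = label
            stack.extend([(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)])

def region_labeling(img):
    """Assign labels 2, 3, ... to each 4-connected foreground component
    of a binary image (0 = background, 1 = foreground), in place."""
    label = 2  # start at 2 so labels never collide with foreground value 1
    for u in range(len(img)):
        for v in range(len(img[0])):
            if img[u][v] == 1:
                flood_fill(img, u, v, label)
                label += 1
    return img
```

On a small binary image with two components, `region_labeling([[1, 1, 0], [0, 0, 1], [0, 1, 1]])` returns `[[2, 2, 0], [0, 0, 3], [0, 3, 3]]`.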

The similarity based object segmentation model is very simple compared to other existing segmentation methods. The method is less time consuming and provides better results. Because it is an interactive method, the time taken for segmentation depends entirely on the size and the number of super-pixels of the input image. The segmentation speed mainly depends on the complexity of the region merging and flood fill model.

Object extraction time depends entirely on the size and shape of the object of interest. Results show that the approach is flexible enough to segment different types of images. For images with small changes, or with similar colors in foreground and background, the algorithm will not be able to achieve an ideal segmentation. Another kind of error is caused by clutter.

When the background (e.g., a shadow) has a similar appearance to the foreground, the model may not be able to completely separate them, but it still achieves satisfactory results.

A. Experimental Analysis and Results

Fig. 2 shows an example of how the similarity region merging method extracts an object contour in a complex scene. After initial segmentation by mean shift, automatic region merging starts; the merging results are tested after every step, as is the stage of merging after which the flood fill method should be applied.

Fig. 2(a) shows the initially segmented regions, which cover only a small part but contain representative features of the object and background regions. Figure 2 shows the similar region merging steps via iterative implementation.

Fig. 2: Initial segmentation; (a) 1st stage merging; (b) 2nd stage merging; (c) 3rd stage merging; (d) object contour; (e) object.

Fig. 2(a), 2(b), 2(c) and 2(d) show the different steps of extracting the object contour from the image, and Fig. 2(e) shows the object extracted using the two-stage object retrieval method.

B. Comparison with MSRM Method

In this section, the proposed segmentation method is compared with the MSRM segmentation method.

In the MSRM method, the region of interest first has to be selected by capturing the object in a contour, dragging the mouse on the object and marking some object boundaries. Since the MSRM segmentation is a pixel based method, the selection of a region of interest makes the MSRM method a region based method. Figure 3 shows the segmentation results of the two methods on four test images.

The first column shows the input images; the second column shows the results of the MSRM method; the third column shows the results produced by our method.

Figure 3: Comparison between the MSRM method and the MSRM-linked flood fill method.

V. CONCLUSION

In this paper a class-specific object segmentation method using maximal similar region merging and the flood fill algorithm was presented. The image is initially segmented using mean shift segmentation, and the users only need to roughly indicate the main features of the object and background using some strokes. Object regions that have high similarity are merged by applying region merging based on the Bhattacharyya rule. Using the similarity based merging rule, a two-stage iterative merging algorithm was presented to gradually label each non-marker region as either object or background.

Merging starts automatically from any random segmented region, and after each merge it is checked whether the object contour has been obtained; if at some stage of merging the object contour is obtained, the flood fill algorithm is applied and the user clicks with the mouse on the object to be extracted. The method is simple yet powerful, and it is adaptive to image content. In future work, multiple objects will be extracted from an input image by both unsupervised and supervised methods, merging similar regions using some metric. Extensive experiments were conducted to validate the method for extracting a single object from complex scenes. The method efficiently exploits the color similarity of the target. It provides a general region merging framework: it does not depend on the initial mean shift segmentation method, and other color image segmentation methods can also be used. Different object parts may also be appended to obtain the complete object from a complex scene, possibly with some supervised technique.