
Abstract – Image segmentation is an important task in image processing and computer vision. Image segmentation partitions a digital image into a number of homogeneous regions, i.e., sets of similar pixels. This paper presents a framework for object retrieval that uses a semi-automatic method for object detection, since fully automatic segmentation of natural images is very difficult. To improve the effectiveness of object retrieval, a maximal-similarity-based region merging and flood fill technique is used. The user only needs to roughly indicate the position and main features of the object and the background; every region is then labeled as object or background, and the desired object contour is obtained during the automatic merging of similar regions. A similarity-based region merging mechanism guides the merging process, starting from an initial mean shift segmentation. Regions are merged with their adjacent regions on the basis of maximal similarity. The method automatically merges the regions produced by the initial segmentation and then effectively extracts the object contour.

Keywords – Image segmentation, maximal similarity based region merging,
flood fill and mean shift.


I. INTRODUCTION

    Image segmentation is typically used to find objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual features. It partitions a digital image into multiple distinct regions, where each region is a set of similar pixels, and it is a common preprocessing step for pattern detection and recognition. If R is the set of all image pixels, then segmentation produces distinct regions {R1, R2, R3, ..., Rn} which, when combined, form the image R. Pal and Pal [1] provided a review of various image segmentation techniques and concluded that there is no single standard procedure for image segmentation; the selection of an appropriate technique depends on the type of image and the application. Image segmentation techniques can be classified into four types: thresholding, edge-based, region-based, and hybrid techniques. Thresholding techniques set two thresholds on the histogram of the image, classify the pixels that fall between the two thresholds as one region, and classify the remaining pixels as a second region (a minimal sketch is given below). Edge-based methods assume that pixel properties or image features, such as intensity, color, and texture, change abruptly between different regions. Region-based methods assume that neighboring pixels within the same region have similar values. Hybrid methods combine edge detection and region-based approaches to achieve better image segmentation.
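
As a minimal illustration of the thresholding idea (not part of the proposed method), the following Python sketch classifies grayscale pixels lying between two assumed thresholds as one region and all remaining pixels as a second region; the threshold values are purely illustrative.

import numpy as np

def two_threshold_segment(gray, t_low=80, t_high=170):
    """Label pixels whose intensity lies between the two thresholds as
    region 1 and all remaining pixels as region 2 (illustrative values)."""
    gray = np.asarray(gray)
    in_band = (gray >= t_low) & (gray <= t_high)
    return np.where(in_band, 1, 2)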

      The color and texture features in a natural image are very complex, and it is very difficult to obtain a fully automatic segmentation of a natural image. Therefore, semi-automatic or interactive segmentation methods, which supply the user with a means to incorporate his or her knowledge into the segmentation process, have been proposed [2], [3], [4], [6] and are becoming more popular. For instance, in the active contour model (ACM), i.e., the snake algorithm [2], a proper selection of the initial curve by the user can lead to good convergence to the true object contour. Similarly, in the graph cut algorithm [4], the prior information provided by the user is critical to the segmentation performance.

      Low-level image segmentation methods, such as mean shift [5], watershed [6], level set [7], and super-pixel [8], usually divide the image into many small regions. Although they may suffer from severe over-segmentation, these low-level methods provide a good basis for subsequent high-level operations, such as region merging. For example, in [9], [10], Li et al. combined graph cut with watershed pre-segmentation for better segmentation outputs, where the regions produced by watershed, instead of the pixels of the original image, are regarded as the nodes of the graph cut. As a popular segmentation scheme for color images, mean shift [5] produces less over-segmentation than watershed while preserving the edge information of the object well.

     In this paper, a similarity-based region merging method built on an initial mean shift segmentation is presented. The method calculates the similarity of different regions and merges them based on the largest similarity; the object is then extracted from the background when the merging process ends. Although the idea of region merging was first introduced in [11], this paper uses region merging to obtain the object contour and then extract the desired object from the image. The key contribution of the method is a similarity-based region merging technique that is adaptive to the image content and does not require a preset threshold. With the region merging algorithm, the segmented regions are automatically merged and labeled; once the desired object contour is identified and separated from the background, the object contour can be readily extracted. The algorithm is very simple, yet it can successfully extract objects from complex scenes.

     The rest of the paper is organized as follows: Section II presents the literature survey, Section III describes the similarity-based region merging algorithm, Section IV presents the experimental results and analysis, and Section V concludes the paper.

II. LITERATURE SURVEY

 

  Li Zhang and Qiang Ji [12] proposed a Bayesian network (BN) model for both automatic (unsupervised) and interactive (supervised) image segmentation. They constructed a multilayer BN from the over-segmentation of an image, which finds object boundaries according to the measurements of the regions, edges, and vertices formed in the over-segmentation, and which models the relationships among the superpixel regions, edge segments, vertices, angles, and their measurements. For automatic segmentation, the segmented image is produced after construction of the BN model and belief propagation. For interactive segmentation, if the results are not satisfactory, active input selection with human intervention is carried out and segmentation is repeated. Costas Panagiotakis, Ilias Grinias, and Georgios Tziritas [13] proposed a framework for image segmentation that uses feature extraction and clustering in the feature space, followed by flooding and region merging in the spatial domain based on the computed class features. A new block-based unsupervised clustering method is introduced which ensures spatial coherence using an efficient hierarchical tree equipartition algorithm. They divide the image into blocks based on the computed feature descriptions, partition the image using a minimum spanning tree and the Mallows distance, apply a K-centroid clustering algorithm with the Bhattacharyya distance, compute posterior distributions and distances, and perform an initial labeling. A priority multiclass flooding algorithm is then applied, and finally regions are merged to produce the segmented image. Jifeng Ning, Lei Zhang, David Zhang, and Chengke Wu [14] developed a maximal-similarity-based interactive image segmentation method. The user only needs to roughly indicate the location and region of the object and background using strokes, called markers. A maximal-similarity-based region merging mechanism guides the merging process with the help of the markers: a region R is merged with its adjacent region Q if the similarity between R and Q is the highest among the similarities between Q and all of Q's adjacent regions. The method automatically merges the regions produced by an initial mean shift segmentation and then effectively extracts the object contour by labeling all non-marker regions as either background or object. The region merging process is adaptive to the image content and does not require a similarity threshold to be set in advance.

 

III.   SIMILARITY REGION MERGING

 

     An initial segmentation is required to partition the image into homogeneous regions for merging. Any existing low-level image segmentation method, e.g., watershed [6], super-pixel [8], level set [7], or mean shift [5], can be used for this step. In this paper the mean shift method is used for the initial segmentation because it produces less over-segmentation and preserves the object boundaries well. The mean shift segmentation software of the EDISON system [15] is used to obtain the initial segmentation map; Fig. 1 shows an example of mean shift initial segmentation. This paper focuses only on object retrieval based on similarity region merging and the flood fill method.
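
The initial segmentation in this paper comes from the EDISON system. As a rough stand-in for readers without EDISON (an assumption of this sketch, not the authors' implementation), mean shift filtering from OpenCV followed by connected-component labeling of the coarsely quantized filtered colors yields a comparable initial label map; the radii and quantization step below are illustrative.

import cv2
import numpy as np
from skimage.measure import label

def initial_segmentation(image_bgr, spatial_radius=10, color_radius=20):
    """Mean shift filtering followed by connected-component labeling of
    the coarsely quantized filtered colors; a rough stand-in for the
    EDISON initial segmentation, not the EDISON system itself."""
    filtered = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)
    # Coarse quantization so that flat filtered areas share one color code.
    quantized = (filtered // 16).astype(np.int32)
    codes = (quantized[:, :, 0] * 16 + quantized[:, :, 1]) * 16 + quantized[:, :, 2]
    # Connected pixels with the same code form one initial region.
    return label(codes, connectivity=1, background=-1)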

                      

Fig. 1. (a) Initial mean shift segmentation. (b) Initial segmentation result of the mean shift algorithm.

A. Similarity Measure Using Metric Descriptor

 

    After the mean shift initial segmentation, a number of small regions are obtained. To guide the subsequent region merging process, these regions must be represented by some descriptor, and a rule for merging must be defined. A color descriptor is very useful for representing the color features of an object. In the context of region-merging-based segmentation, the color descriptor is more robust than other feature descriptors, because shape and size vary greatly across regions while the colors of different regions from the same object have high similarity. Therefore, the color histogram is used to represent each region in this paper. The RGB color space is used to compute the color histogram of each region: each color channel is uniformly quantized into 16 levels, and the histogram of each region is then calculated in the resulting feature space of 16 × 16 × 16 = 4096 bins.
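
A minimal sketch of this descriptor, assuming an 8-bit RGB image and each region given as a boolean pixel mask (the function name and interface are illustrative, not from the paper):

import numpy as np

def region_histogram(image_rgb, region_mask, levels=16):
    """Normalized 16 x 16 x 16 = 4096-bin RGB color histogram of one
    region, selected by a boolean mask over an 8-bit RGB image."""
    pixels = image_rgb[region_mask].astype(np.int64)     # (n_pixels, 3)
    quantized = pixels * levels // 256                   # each channel -> 0..levels-1
    bins = (quantized[:, 0] * levels + quantized[:, 1]) * levels + quantized[:, 2]
    hist = np.bincount(bins, minlength=levels ** 3).astype(np.float64)
    return hist / max(hist.sum(), 1.0)                   # normalize to sum 1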

 

B. Merging Rule Using Bhattacharyya
Descriptor          

 

 

     Regions are merged based on their color histograms so that the desired object can be extracted. The key issue in region merging is how to determine the similarity between different segmented regions of the image so that similar regions can be merged under some logic control. Therefore, a similarity measure between two regions R and Q needs to be defined to accommodate the comparison between various regions, and several well-known statistical metrics are available for this. Here the Bhattacharyya coefficient [16] is used to measure the similarity ρ(R, Q) between two regions R and Q:

    ρ(R, Q) = Σ_{u=1}^{4096} √( Hist_R^u · Hist_Q^u )                              (1)

where Hist_R and Hist_Q are the normalized histograms of R and Q, respectively, and the superscript u denotes their uth element. The Bhattacharyya coefficient is a divergence-type measure with a straightforward geometric interpretation: it is the cosine of the angle between the unit vectors

    ( √Hist_R^1, ..., √Hist_R^4096 )^T   and   ( √Hist_Q^1, ..., √Hist_Q^4096 )^T.  (2)

The higher the Bhattacharyya coefficient between R and Q, the higher the similarity between them, i.e., the smaller the angle between the histogram vectors. This geometric interpretation reflects the perceptual similarity between regions: if two regions have similar content, their histograms will be very similar, and hence their Bhattacharyya coefficient will be very high. Of course, it is possible for two perceptually very different regions to have very similar histograms.
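
Equation (1) translates directly into code; the sketch below assumes both inputs are normalized 4096-bin histograms as produced by the descriptor sketch above.

import numpy as np

def bhattacharyya(hist_r, hist_q):
    """Bhattacharyya coefficient of Eq. (1) between two normalized
    histograms; values near 1 indicate very similar regions."""
    return float(np.sum(np.sqrt(hist_r * hist_q)))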

C. Object and background marking

     In interactive image segmentation, the user needs to specify the object and background conceptually. The user can provide this interactive information by drawing markers, which can be lines, curves, or strokes on the image. The regions that contain pixels inside the object markers are called object marker regions, while the regions that contain pixels inside the background markers are called background marker regions. Fig. 1(b) shows an example of object and background markers drawn as simple lines; green markers mark the object, while blue markers represent the background. Note that usually only a small portion of the object and background regions will be marked by the user. In fact, the less input required from the user, the more convenient and more robust the interactive algorithm is.

      After marking, each region is labeled as one of three kinds: a marker object region, a marker background region, or a non-marker region. To completely extract the object contour, each non-marker region must be automatically assigned the correct label of either object region or background region. For convenience in the following development, denote by M_O and M_B the sets of marker object regions and marker background regions, respectively, and denote by N the set of non-marker regions.
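
A small illustration of this bookkeeping (the interface is an assumption of this sketch): given the initial label map and boolean masks of the user's object and background strokes, the region labels can be partitioned into M_O, M_B, and N.

import numpy as np

def partition_regions(label_map, object_marker_mask, background_marker_mask):
    """Split region labels into marker object regions M_O, marker
    background regions M_B, and non-marker regions N."""
    all_regions = set(np.unique(label_map).tolist())
    m_o = set(np.unique(label_map[object_marker_mask]).tolist())
    m_b = set(np.unique(label_map[background_marker_mask]).tolist()) - m_o
    n = all_regions - m_o - m_b
    return m_o, m_b, n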

D. Similarity based merging rule  

  
After object/background marking, it is still a challenging problem to accurately extract the object contour from the background. The region merging method starts from an arbitrarily chosen segmented region and proceeds with an automatic region merging process; every region is gradually labeled as either an object region or a background region. The lazy snapping cutout method proposed in [10], which combines graph cut with a watershed-based initial segmentation, is actually a region merging method. This paper presents an adaptive similarity-based technique for merging regions into either the foreground or the background.

    Let Q be an adjacent region of R and denote by S_Q = {S_Q^i}, i = 1, 2, ..., q, the set of Q's adjacent regions. The similarities between Q and all of its adjacent regions, ρ(Q, S_Q^i), i = 1, 2, ..., q, are calculated. Obviously, R is a member of S_Q. If the similarity between R and Q is the maximal one among all the similarities ρ(Q, S_Q^i), then R and Q are merged. The following merging rule is defined:

    Merge R and Q if  ρ(R, Q) = max_{i=1,...,q} ρ(Q, S_Q^i).                       (3)
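
The rule in Eq. (3) can be sketched as follows, reusing the bhattacharyya() function from the earlier sketch and assuming a region adjacency graph stored as a dictionary from region id to the set of adjacent region ids, plus a dictionary of region histograms (both names are illustrative assumptions):

def should_merge(r, q, adjacency, hists):
    """Rule (3): merge R with its adjacent region Q if rho(R, Q) is the
    maximum of rho(Q, S) over all regions S adjacent to Q."""
    rho_rq = bhattacharyya(hists[r], hists[q])
    return all(rho_rq >= bhattacharyya(hists[q], hists[s]) for s in adjacency[q])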

 

E. The merging process

    The whole object retrieval process works in two stages. In the first stage, the similar-region merging process is as follows: the strategy is to start from an arbitrarily selected segmented region, merge it with the adjacent region of highest similarity, and then keep merging segmented regions with their adjacent regions in the same way. Specifically, for each region Q let {R_i}, i = 1, 2, ..., q, be the set of its adjacent regions. If the similarity between Q and R_j is the maximum over all adjacent regions, i.e.

    ρ(Q, R_j) = max_{i=1,...,q} ρ(Q, R_i),                                         (4)

then Q and R_j are merged into one region, and the new region is given the same label as Q:

    Q = Q ∪ R_j.                                                                   (5)

The above procedure is applied iteratively. At every iterative step it is checked whether the desired object has been retrieved; the set of segmented regions keeps shrinking, and the iteration stops when the desired object contour is found. After the first stage, i.e., once the object boundary has fully appeared, the second stage of the algorithm is applied: an input point on the object is selected, and the object region is expanded over four-connected pixels using the well-known flood fill method.
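
One pass of the first-stage merging can be sketched as below, under the same assumed data structures (adjacency sets, histograms, and per-region pixel counts) and reusing bhattacharyya(); the area-weighted histogram update for the merged region is an assumption of the sketch, not something stated in the paper.

def first_stage_pass(adjacency, hists, areas):
    """Merge each surviving region Q with its most similar adjacent
    region R_j, per Eqs. (4)-(5)."""
    for q in list(adjacency):
        if q not in adjacency or not adjacency[q]:
            continue
        # Eq. (4): adjacent region with the highest Bhattacharyya coefficient.
        r_j = max(adjacency[q], key=lambda r: bhattacharyya(hists[q], hists[r]))
        # Eq. (5): Q <- Q union R_j, keeping Q's label; the merged histogram
        # is the area-weighted average of the two normalized histograms.
        w_q, w_r = areas[q], areas[r_j]
        hists[q] = (hists[q] * w_q + hists[r_j] * w_r) / (w_q + w_r)
        areas[q] = w_q + w_r
        neighbours_of_r = adjacency.pop(r_j)
        adjacency[q] = (adjacency[q] | neighbours_of_r) - {q, r_j}
        # Redirect every region that was adjacent to R_j towards Q.
        for s in adjacency:
            if r_j in adjacency[s]:
                adjacency[s].discard(r_j)
                if s != q:
                    adjacency[s].add(q)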

F. Object Retrieval Flood Fill Algorithm

 

Input: (1) the image I; (2) the initial mean shift segmentation of the input image.

Output: the desired object(s).

While regions are still being merged, up to the extraction of the object contour from the input image:

1. The input is the image I and its initial segmentation.

2. Perform a stage of merging of the initially segmented regions using the similarity-based merging rule.

3. After step (2) the number of regions is reduced; apply the similarity-based region merging rule again. This is an iterative procedure.

4. After the object contour has been retrieved, go to step (5).

5. Apply region labeling, and then the flood fill method, to the image obtained after step (4).

Region Labeling (I)
% I: binary image; I(u, v) = 0: background, I(u, v) = 1: foreground %
5.1. Let m ← 2
5.2. For all image coordinates (u, v) do
5.3.     If I(u, v) = 1 then
5.4.         FloodFill(I, u, v, m)
5.5.         m ← m + 1
5.6. Return the labeled image I

% After region labeling, the flood fill method is applied using depth-first search %

6. FloodFill(I, u, v, label)
6.1. Create an empty stack S
6.2. Push(S, (u, v))
6.3. While S is not empty do
6.4.     (x, y) ← Pop(S)
6.5.     If (x, y) is inside the image and I(x, y) = 1 then
6.6.         Set I(x, y) ← label
6.7.         Push(S, (x + 1, y))
6.8.         Push(S, (x, y + 1))
6.9.         Push(S, (x - 1, y))
6.10.        Push(S, (x, y - 1))
6.11. Return
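
A compact Python version of the region labeling and stack-based flood fill above (a sketch; the function names are not from the paper):

import numpy as np

def flood_fill(img, u, v, label):
    """Relabel the 4-connected component of foreground pixels (value 1)
    containing (u, v) with `label`, using an explicit stack (DFS)."""
    h, w = img.shape
    stack = [(u, v)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < h and 0 <= y < w and img[x, y] == 1:
            img[x, y] = label
            stack.extend([(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)])

def region_labeling(binary):
    """Assign labels 2, 3, ... to the 4-connected foreground components
    of a binary image (0 = background, 1 = foreground)."""
    img = np.array(binary, dtype=np.int32, copy=True)
    next_label = 2
    for u in range(img.shape[0]):
        for v in range(img.shape[1]):
            if img[u, v] == 1:
                flood_fill(img, u, v, next_label)
                next_label += 1
    return img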
 

 

IV. EXPERIMENTAL ANALYSIS

 

Although the RGB color space and the Bhattacharyya distance are used in this method, other color spaces and metrics can also be used. This section presents some examples to verify the performance of the unsupervised region merging and flood fill method in the RGB color space. The similarity-based object segmentation model is very simple compared with other existing segmentation methods, is less time consuming, and provides better results. Because it is an interactive method, the time taken by the segmentation depends on the size of the input image and on its number of super-pixels, while the segmentation speed mainly depends on the complexity of the region merging and flood fill model. The object extraction time depends on the size and shape of the object of interest. The results show that the approach is flexible enough to segment different types of images. For images with small intensity changes or with similar foreground and background colors, the algorithm cannot achieve an ideal segmentation. Another kind of error is caused by clutter: when the background (e.g., a shadow) has an appearance similar to the foreground, the model may not be able to separate them completely, but it still achieves satisfactory results.

 

A. Experimental Analysis and Results

    
Fig. 2 shows an example of how the similarity region merging method extracts an object contour in a complex scene. After the initial segmentation by mean shift, automatic region merging starts; after every step the merging result is tested, and it is decided after which stage of merging to apply the flood fill method. Fig. 2(a) shows that the initially segmented regions cover only a small part of the image but contain representative features of the object and background regions. Figure 2 shows the similar-region merging steps of the iterative implementation.

 

                                              

Fig. 2. Initial segmentation; (a) first-stage merging; (b) second-stage merging; (c) third-stage merging; (d) object contour; (e) extracted object.

 

     Figs. 2(a)-2(d) show the different steps of extracting the object contour from the image, and Fig. 2(e) shows the object extracted by the two-stage object retrieval method.

 

B.  Comparison with MSRM Method

 

      In this section, the proposed segmentation method is compared with the MSRM segmentation method. In the MSRM method, the region of interest first has to be selected by capturing the object in a contour, dragging the mouse on the object and marking some object boundaries; this selection of a region of interest makes the MSRM method a region-based method. Figure 3 shows the segmentation results of the two methods on four test images: the first column shows the input image, the second column shows the results of the MSRM method, and the third column shows the results produced by the proposed method.

  
  

      

   

 

   

Fig. 3. Comparison between the MSRM method and the MSRM-linked flood fill method.

 

V. CONCLUSION

 

      In this paper, a class-specific object segmentation method using maximal-similarity-based region merging and the flood fill algorithm has been presented. The image is initially segmented using mean shift segmentation, and the user only needs to roughly indicate the main features of the object and background with a few strokes. Object regions with high similarity are merged by applying region merging based on the Bhattacharyya rule. Using the similarity-based merging rule, a two-stage iterative merging algorithm was presented to gradually label each non-marker region as either object or background. Merging starts automatically from an arbitrary segmented region, and after each merge it is checked whether the object contour has been obtained; once the contour is obtained at some stage of merging, the flood fill algorithm is applied and the user clicks with the mouse on the object to be extracted. The method is simple yet powerful, and it is adaptive to the image content.

    In future work, multiple objects could be extracted from an input image, by both unsupervised and supervised methods, by merging similar regions under some metric. Extensive experiments were conducted to validate the method for extracting a single object from complex scenes; the method efficiently exploits the color similarity of the target. The method also provides a general region merging framework: it does not depend on the initial mean shift segmentation, and other color image segmentation methods can be used for the initial segmentation as well. Different object parts could also be appended to obtain a complete object from a complex scene, possibly with the help of supervised techniques.

 
