A Graph-Based Approach for Image Segmentation
2008, Lecture Notes in Computer Science
https://doi.org/10.1007/978-3-540-89639-5_27…
8 pages
Abstract
We present a novel graph-based approach to image segmentation. The objective is to partition images such that nearby pixels with similar colors or grayscale intensities belong to the same segment. A graph representing an image is derived from the similarity between the pixels and partitioned by a computationally efficient graph clustering method, which identifies representative nodes for each cluster and then expands them to obtain complete clusters of the graph. Experiments with synthetic and natural images are presented. A comparison with the well known graph clustering method of normalized cuts shows that our approach is faster and produces segmentations that are in better agreement with visual assessment on original images.
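The procedure outlined above (build a similarity graph over pixels, pick representative nodes, expand them into clusters) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact coring method: the Gaussian similarity, the neighborhood radius, and the expansion threshold are all assumptions.

```python
import numpy as np

def pixel_graph(img, sigma=10.0, radius=1):
    """Build a similarity graph: nodes are pixels, edges connect pixels
    within `radius`, weighted by a Gaussian of intensity difference.
    (Illustrative only -- the paper's exact construction may differ.)"""
    h, w = img.shape
    edges = {}  # (u, v) with u < v  ->  similarity weight
    for y in range(h):
        for x in range(w):
            u = y * w + x
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                        v = ny * w + nx
                        if u < v:
                            diff = float(img[y, x]) - float(img[ny, nx])
                            edges[(u, v)] = np.exp(-diff**2 / (2 * sigma**2))
    return edges

def expand_from_seeds(edges, n_nodes, seeds, threshold=0.5):
    """Grow clusters from representative (seed) nodes: repeatedly attach
    any unassigned node whose edge into a cluster exceeds `threshold`."""
    adj = {u: [] for u in range(n_nodes)}
    for (u, v), w in edges.items():
        adj[u].append((v, w))
        adj[v].append((u, w))
    label = [-1] * n_nodes
    frontier = []
    for k, s in enumerate(seeds):
        label[s] = k
        frontier.append(s)
    while frontier:
        u = frontier.pop()
        for v, w in adj[u]:
            if label[v] == -1 and w >= threshold:
                label[v] = label[u]
                frontier.append(v)
    return label
```

On a tiny two-region image, seeding one node in each region and expanding recovers the two segments.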

Related papers
Image segmentation is the process of subdividing a digital image into meaningful regions or objects, which is useful in image analysis. In this review paper, we carry out an organized survey of many image segmentation techniques that are flexible, cost effective and computationally efficient. We classify these segmentation methods into three categories: traditional methods, graph theoretical methods, and combinations of both. In the second and third categories, the image is modeled as a weighted, undirected graph. Typically, a pixel or a group of pixels is associated with each node, and the edge weights represent the dissimilarity between neighboring pixels. The graph (i.e., the image) is then partitioned according to a criterion designed to model good clusters. Each partition of the nodes (pixels) output by these algorithms is treated as an object segment in the image. Some of the popular algorithms are thresholding, normalized cuts, iterated graph cut, clustering methods, watershed transformation, minimum cut, grey graph cut, and minimum spanning tree-based segmentation.
The analysis of digital scenes often requires the segmentation of connected components, called objects, in images and videos. The problem consists of defining the whereabouts of a desired object (recognition) and its spatial extension in the image (delineation). Humans outperform computers in recognition, while computers outperform humans in delineation.
Concepts, Methodologies, Tools, and Applications
2008 IEEE 16th Signal Processing, Communication and Applications Conference, 2008
A graph theoretic color image segmentation algorithm is proposed, in which the popular normalized cuts image segmentation method is improved with modifications to its graph structure. The image is represented by a weighted undirected graph whose nodes correspond to over-segmented regions, instead of pixels, which decreases the complexity of the overall algorithm. In addition, the link weights between the nodes are calculated from the intensity similarities of the neighboring regions. The irregular distribution of the nodes, as a result of this modification, causes a bias towards combining regions with a high number of links. This bias is removed by limiting the number of links for each node. Finally, segmentation is achieved by recursively bipartitioning the graph according to the minimization of the normalized cut measure. The simulation results indicate that the proposed segmentation scheme performs considerably faster than the traditional normalized cut methods, while also yielding better segmentation results due to its region-based representation.
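The degree-capping step described above (limiting the number of links per node to remove the bias toward highly connected regions) is simple to express. A minimal sketch, with hypothetical names and an adjacency-list representation assumed for illustration:

```python
def limit_links(adj, max_links):
    """Keep only the `max_links` strongest edges per node, removing the
    bias toward nodes with many links (illustrative sketch).

    `adj` maps a node id to a list of (neighbor, weight) pairs."""
    capped = {}
    for u, nbrs in adj.items():
        # Sort neighbors by weight, strongest first, and truncate.
        capped[u] = sorted(nbrs, key=lambda e: e[1], reverse=True)[:max_links]
    return capped
```

A node with three links and a cap of two keeps only its two strongest connections.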
Computer Vision and Image Understanding, 2004
The goal of this communication is to suggest an alternative implementation of the k-way Ncut approach for image segmentation. We believe that our implementation alleviates a problem associated with the Ncut algorithm for some types of images: its tendency to partition regions that are nearly uniform with respect to the segmentation parameter. Previous implementations have used the k-means algorithm to cluster the data in the eigenspace of the affinity matrix. In the k-means based implementations, the number of clusters is estimated by minimizing a function that represents the quality of the results produced by each possible value of k. Our proposed approach uses the clustering algorithm of Koontz and Fukunaga, in which k is automatically selected as clusters are formed (in a single iteration). We show comparison results obtained with the two different approaches to non-parametric clustering. The Ncut-generated oversegmentations are further suppressed by a grouping stage, also Ncut based, in our implementation. The affinity matrix for the grouping stage uses similarity based on the mean values of the segments.
2013 IEEE International Conference on Image Processing, 2013
Constructing a discriminative affinity graph plays an essential role in graph-based image segmentation, and the choice of features directly influences the discriminative power of the affinity graph. In this paper, we propose a new method based on weighted color patches to compute the edge weights in an affinity graph. The proposed method incorporates both color and neighborhood information by representing pixels with color patches. Furthermore, we adaptively assign both local and global weights to each pixel in a patch in order to alleviate the over-smoothing effect of using patches. The normalized cut (NCut) algorithm is then applied to the resulting affinity graph to find partitions. We evaluate the proposed method on the Prague color texture image benchmark and the Berkeley image segmentation database. Extensive experiments show that our method is competitive with other standard methods across multiple evaluation metrics.
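As a rough illustration of patch-based affinities, an edge weight can compare local neighborhood statistics rather than single pixel values, so that neighborhood information enters the graph. The sketch below uses a patch mean and a Gaussian kernel as stand-ins; the paper's adaptive local/global weighting is omitted, and all names and parameters are assumptions.

```python
import numpy as np

def patch_affinity(img, p, q, patch=1, sigma=10.0):
    """Gaussian affinity between pixels p and q computed from the mean of
    a (2*patch+1)-sized neighborhood around each pixel -- a simplified
    stand-in for full weighted color patches."""
    def patch_mean(y, x):
        h, w = img.shape[:2]
        ys = slice(max(0, y - patch), min(h, y + patch + 1))
        xs = slice(max(0, x - patch), min(w, x + patch + 1))
        return float(img[ys, xs].mean())
    d = patch_mean(*p) - patch_mean(*q)
    return float(np.exp(-d**2 / (2 * sigma**2)))
```

Pixels in a flat region get affinity near 1; pixels separated by a strong step edge get affinity near 0.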
2012 IEEE International Conference on Control System, Computing and Engineering (ICCSCE)
Image segmentation has been widely applied in image analysis for various areas such as biomedical imaging, intelligent transportation systems and satellite imaging. The main goal of image segmentation is to simplify an image into segments that have a strong correlation with objects in the real world. Homogeneous regions of an image are regions containing common characteristics, and these are grouped as a single segment. One of the graph partitioning methods for image segmentation, normalised cuts, has been recognised as producing reliable segmentation results. To date, however, the performance of normalised cuts on images of various sizes has lacked analysis. In this paper, segmentation of synthetic and natural images is studied to examine the performance of the method and the effect of image complexity on the segmentation process. This study provides research findings for effective image segmentation using a graph partitioning method with reduced computational cost: because normalised cuts is computationally expensive, it is unfavourable for segmenting high-resolution images, especially in online image retrieval systems. Thus, a graph-based image segmentation method performed in a multistage approach is introduced here.
International Journal of Image and Graphics, 2015
We present a new segmentation method called weighted Felzenszwalb and Huttenlocher (WFH), an improved version of the well-known graph-based segmentation method of Felzenszwalb and Huttenlocher (FH). Our algorithm uses a nonlinear discrimination function based on the polynomial Mahalanobis distance (PMD) as the color similarity metric. Two empirical validation experiments were performed using ground truths (GTs) from a publicly available source, the Berkeley dataset, as a gold standard, together with an objective segmentation quality measure, the Rand dissimilarity index. In the first experiment, the results were compared against the original FH method. In the second, WFH was compared against several well-known segmentation methods. In both cases, WFH presented significantly better similarity results with respect to the gold standard, and its segmentations showed a reduction in over-segmented regions.
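The PMD metric itself is not reproduced here, but as a baseline intuition, even a linear Mahalanobis distance between colors, with a covariance estimated from sample pixels, already adapts the similarity metric to the color distribution of the data. A minimal sketch under that assumption:

```python
import numpy as np

def mahalanobis_color(c1, c2, samples):
    """Mahalanobis distance between two colors, using a covariance matrix
    estimated from `samples` (an N x 3 array of color values). This is a
    linear stand-in for WFH's polynomial Mahalanobis distance."""
    cov = np.cov(np.asarray(samples, dtype=float), rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    d = np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With samples spread isotropically over the RGB cube, the distance reduces to a scaled Euclidean distance; anisotropic samples stretch or shrink it along the corresponding color directions.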
2012
Graph partitioning has been widely used as a means of image segmentation. One way to partition graphs is the technique known as Normalized Cut, which analyzes the eigenvectors of the graph's Laplacian matrix and uses some of them for the cut. This work proposes the use of Normalized Cut on graphs generated from structures based on the Quadtree and the Component Tree to perform image segmentation. Image segmentation experiments with Normalized Cut on these models are performed, and a specific benchmark compares and ranks the results against those obtained by other graph-conversion techniques proposed in the literature. The results are promising and allow us to conclude that different graph models combined with Normalized Cut can yield better segmentations depending on the characteristics of the images. Keywords: Image Segmentation; Normalized Cut; Quadtree; Component Tree
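Independent of the quadtree or component-tree graph model, the spectral relaxation behind Normalized Cut reduces to thresholding the second-smallest eigenvector (the Fiedler vector) of the normalized Laplacian. A minimal dense-matrix sketch of that standard relaxation (not any particular paper's implementation):

```python
import numpy as np

def ncut_bipartition(W):
    """Split a graph (symmetric affinity matrix W) into two parts by the
    sign of the Fiedler vector of the symmetric normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2} -- the standard Ncut spectral relaxation."""
    d = W.sum(axis=1)
    inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - inv_sqrt @ W @ inv_sqrt
    _, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1] >= 0        # sign pattern of the Fiedler vector
```

On two tight clusters joined by weak edges, the sign pattern separates the clusters.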
Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in computer vision and graphics. In this paper we analyze mainly four techniques: Graph Cuts (GC), Iterative Graph Cuts (IGC), Multi-label Random Walker (RW) and Lazy Snapping (LS). Graph Cut techniques are used for completely automatic high-level grouping of image pixels. The Iterative Graph Cut technique allows some user interaction for extracting objects from a complex background. The Random Walker algorithm requires user-specified labels and produces a segmentation where each segment is connected to a labeled pixel. Lazy Snapping provides instant visual feedback, snapping the cutout contour to the true object boundary efficiently despite the presence of ambiguous or low-contrast edges.

References (8)
- Le, T., Kulikowski, C., Muchnik, I.: Coring method for clustering a graph. Proceedings of the 19th International Conference on Pattern Recognition (2008)
- Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence (2000) 888-905
- Charikar, M.: Greedy approximation algorithms for finding dense components in a graph. Volume 1913 of Lecture Notes in Computer Science, Springer-Verlag (2000) 84-95
- Berkeley segmentation dataset, http://www.cs.berkeley.edu/projects/vision/bsds/
- Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Proc. of the 8th International Conference on Computer Vision (2001) 416-423
- von Luxburg, U.: A tutorial on spectral clustering. Statistics and Computing (2007) 395-416
- Kannan, R., Vempala, S., Vetta, A.: On Clusterings: Good, Bad and Spectral. Proc. 41st Annual Symposium on Foundations of Computer Science (2000) 367-380
Fig. 5. Segmentations and derived boundaries on images from the Berkeley dataset. The same parameter settings b = 3, d = 97% are used for all the images.