EVENT RECOGNITION: IMAGE & VIDEO SEGMENTATION
Abstract
This paper gives an overview of the segmentation process at a basic level. Segmentation is performed at multiple levels, each yielding different results. Segmenting relative motion descriptors gives a clear picture of the segmentation performed on a given input video; the approach is evaluated through relative motion computation and histogram incrementation. We also review related research on how segmentation can be performed for both images and videos.
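The relative motion computation and histogram incrementation mentioned above can be sketched as follows. This is a minimal illustration, assuming a trajectory representation of (x, y) points per frame and an 8-bin orientation histogram; the actual descriptor in the paper may differ.

```python
import math
from collections import Counter

def relative_motion_histogram(traj_a, traj_b, n_bins=8):
    """Quantize the frame-to-frame relative motion between two point
    trajectories into an orientation histogram (illustrative sketch)."""
    hist = Counter()
    for t in range(1, min(len(traj_a), len(traj_b))):
        # Displacement of each point between consecutive frames.
        da = (traj_a[t][0] - traj_a[t-1][0], traj_a[t][1] - traj_a[t-1][1])
        db = (traj_b[t][0] - traj_b[t-1][0], traj_b[t][1] - traj_b[t-1][1])
        # Relative motion vector of point a with respect to point b.
        rel = (da[0] - db[0], da[1] - db[1])
        angle = math.atan2(rel[1], rel[0])  # range [-pi, pi]
        # Map the angle to one of n_bins orientation bins and increment.
        bin_idx = int((angle + math.pi) / (2 * math.pi) * n_bins) % n_bins
        hist[bin_idx] += 1
    return hist
```

A descriptor built this way is invariant to global camera translation, since only the difference of the two displacements enters the histogram.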
Related papers
Pattern Recognition and Image Analysis, 2014
Temporal segmentation of videos into meaningful image sequences containing particular activities is an interesting problem in computer vision. We present a novel algorithm to achieve this semantic video segmentation. The segmentation task is accomplished through event detection in a frame-by-frame processing setup. We propose using one-class classification (OCC) techniques to detect events that indicate a new segment, since they have proved successful in object classification and allow for unsupervised event detection in a natural way. Various OCC schemes have been tested and compared, and an approach based on temporal self-similarity maps (TSSMs) is also presented. Testing was done on a challenging, publicly available thermal video dataset. The results are promising and show the suitability of our approaches for temporal video segmentation.
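The frame-by-frame OCC idea can be sketched as follows. This is an assumed minimal one-class classifier (a distance-to-mean threshold), not the specific OCC schemes compared in the paper; frames that fall outside the boundary learned from recent frames are flagged as segment boundaries.

```python
import math

class SimpleOCC:
    """Minimal one-class classifier: a frame whose feature distance to the
    training mean exceeds mean + k*std of training distances is novel."""
    def __init__(self, k=3.0):
        self.k = k

    def fit(self, features):
        dim = len(features[0])
        self.center = [sum(f[i] for f in features) / len(features)
                       for i in range(dim)]
        dists = [self._dist(f) for f in features]
        mu = sum(dists) / len(dists)
        var = sum((d - mu) ** 2 for d in dists) / len(dists)
        self.threshold = mu + self.k * math.sqrt(var)
        return self

    def _dist(self, f):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, self.center)))

    def is_novel(self, f):
        return self._dist(f) > self.threshold

def segment_boundaries(frame_features, window=10, k=3.0):
    """Frame-by-frame processing: fit the OCC on a sliding window of recent
    frame features; a novel frame is taken to start a new temporal segment."""
    boundaries = []
    for t in range(window, len(frame_features)):
        occ = SimpleOCC(k).fit(frame_features[t - window:t])
        if occ.is_novel(frame_features[t]):
            boundaries.append(t)
    return boundaries
```

Because the classifier is trained only on the recent past, no labelled events are needed, which is what makes the detection unsupervised.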
Advanced Concepts for …, 2010
In this paper, we present an overview of a hybrid approach for event detection from video surveillance sequences that has been developed within the REGIMVid project. This system can be used to index and search video sequences by their visual content. The platform provides moving object segmentation and tracking, high-level feature extraction, and video event detection. We describe the architecture of the system as well as providing an overview of the descriptors supported to date. We then demonstrate the usefulness of the toolbox in the context of feature extraction, event learning, and detection on a large collection of video surveillance data.
Event detection in video sequences: Challenges and perspectives, 2017
The growing need for information and high-quality video cameras has led to the proliferation of video-based systems that perform tasks such as traffic monitoring, surveillance, etc. A basic component in these systems is the visual tracking of objects in a video sequence in order to estimate their paths. Generally, the framework of a video surveillance system includes the following steps: environment modelling; object detection; classification and tracking of moving objects; and description of their behaviours. Indeed, the main purpose of event detection systems is to characterize activities using unsupervised or supervised techniques. In the following paper, we present the state of the art of various event detection methods, covering both the approaches used to extract primitives characterizing the movement and the classification methods applied to them.
2008
Interest from industry and academia has increased dramatically over recent years in the challenging area of event analysis and recognition from various video sources including sports, surveillance, user-generated video, etc. Video event analysis and recognition is a critical task in many applications such as detection of sporting highlights, incident detection in surveillance video, indexing, retrieval and summarization of video databases, and human-computer interaction.
The ability to accurately segment a video into events or scenes without human intervention has been a long-standing challenge in computer vision. In this paper, we present a more accurate way of segmenting a video into events using trajectory discontinuities, which consists of two stages. In the first stage, we estimate possible regions where the scene changed by comparing motion features extracted through edge-detection segmentation; in the second stage, we use the estimated regions to detect trajectory discontinuities using Large Displacement Optical Flow (LDOF), which identifies the actual frame where the scene changed. Our experiments show that this method has a high accuracy rate and is faster than the conventional approach to event video segmentation.
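The two-stage coarse-then-fine idea can be illustrated with a toy sketch. Here a cheap per-frame motion score (mean absolute intensity difference) stands in for the first stage, and a spike test over the running average stands in for the LDOF-based discontinuity check; both the score and the `ratio` threshold are assumptions for illustration only.

```python
def mean_abs_diff(f1, f2):
    """Mean absolute intensity difference between two frames
    (frames given as lists of pixel rows)."""
    total = n = 0
    for r1, r2 in zip(f1, f2):
        for p1, p2 in zip(r1, r2):
            total += abs(p1 - p2)
            n += 1
    return total / n

def detect_scene_changes(frames, ratio=5.0):
    """Two-stage sketch: (1) score motion between consecutive frames with a
    cheap difference measure; (2) flag frames whose score jumps by `ratio`
    over the running average -- a stand-in for the LDOF discontinuity test."""
    changes = []
    scores = [mean_abs_diff(frames[t - 1], frames[t])
              for t in range(1, len(frames))]
    for t in range(1, len(scores)):
        avg = sum(scores[:t]) / t
        if avg > 0 and scores[t] > ratio * avg:
            changes.append(t + 1)  # index of the frame where the scene changed
    return changes
```

The point of the coarse first stage is to avoid running an expensive optical-flow computation on every frame pair; only candidate regions need the fine test.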
2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011
Real-world environments introduce many variations into video recordings, such as changing illumination and object dynamics. In this paper, a technique for abstracting useful spatio-temporal features from graph-based segmentation operations is proposed. A spatio-temporal volume (STV)-based shape matching algorithm is then devised using intersection theory to facilitate the definition and detection of video events. To maintain system efficiency, this research integrates an innovative feature-weight evaluation mechanism that "rewards" or "punishes" recognition outputs based on segmentation quality. Substantial improvements in both the event precision and recall rates and the processing efficiency were observed in the experiments.
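A reward/punish feature-weight update of the kind described can be sketched as a multiplicative rule; the update factor and renormalisation below are illustrative assumptions, not the paper's exact mechanism.

```python
def update_feature_weights(weights, correct, rate=0.1):
    """'Reward or punish' sketch: scale each feature's weight up when its
    recognition output was correct, down otherwise, then renormalise so
    the weights still sum to one. The multiplicative rule is an assumption."""
    new = [w * (1 + rate) if ok else w * (1 - rate)
           for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]
```

Over repeated updates, features whose outputs track segmentation quality well accumulate weight, while unreliable features are gradually discounted.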
We present a system for event detection and analysis from video streams. Our approach is based on a detection and tracking module which extracts moving objects' trajectories from a video stream. These trajectories, together with a rough description of the scene, are then used by the behavior inference module to recognize and classify object motion. The hierarchical tasks are performed on a buffered set of frames in order to provide accurate results by taking into account the temporal coherence of moving objects.
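Behavior inference from a buffered trajectory can be illustrated with a simple speed-based classifier. The position units, frame rate, and the "stationary"/"walking"/"running" thresholds below are hypothetical choices for the sketch, not values from the paper.

```python
import math

def classify_trajectory(points, fps=25.0, walk_max=2.0):
    """Classify an object's motion from its buffered trajectory, given as
    one (x, y) position per frame in metres, by average speed.
    Thresholds are illustrative assumptions."""
    if len(points) < 2:
        return "stationary"
    # Total path length over the buffer.
    dist = sum(math.hypot(points[i][0] - points[i - 1][0],
                          points[i][1] - points[i - 1][1])
               for i in range(1, len(points)))
    speed = dist * fps / (len(points) - 1)  # metres per second
    if speed < 0.2:
        return "stationary"
    return "walking" if speed <= walk_max else "running"
```

Averaging over the whole buffer rather than a single frame pair is what exploits the temporal coherence the abstract mentions: per-frame tracking noise largely cancels out.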
Multimedia Tools and Applications, 2011
Research on methods for detection and recognition of events and actions in videos is receiving increasing attention from the scientific community because of its relevance for many applications, from semantic video indexing to intelligent video surveillance systems and advanced human-computer interaction interfaces. Event detection and recognition requires considering the temporal aspect of video, either at the low level with appropriate features or at a higher level with models and classifiers that can represent time. In this paper we survey the field of event recognition, from interest point detectors and descriptors to event modelling techniques and knowledge management technologies. We provide an overview of the methods, categorising them according to video production methods and video domains, and according to the types of events and actions that are typical of these domains.
