Academia.edu

Normalized Cross Correlation

540 papers
34 followers
About this topic
Normalized Cross Correlation (NCC) is a statistical method used to measure the similarity between two signals or datasets by calculating the correlation coefficient after normalizing the data. It accounts for variations in amplitude and allows for comparison of signals with different scales, providing a standardized measure of their correlation.
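
For concreteness, here is a minimal NumPy sketch of the zero-mean NCC of two equal-length signals (the function name and example values are illustrative, not taken from any paper listed below):

    import numpy as np

    def normalized_cross_correlation(x, y):
        # Zero-mean, unit-norm (Pearson-style) NCC of two equal-length signals.
        x = np.asarray(x, dtype=float) - np.mean(x)
        y = np.asarray(y, dtype=float) - np.mean(y)
        denom = np.linalg.norm(x) * np.linalg.norm(y)
        if denom == 0.0:
            return 0.0  # a constant signal has no well-defined correlation
        return float(np.dot(x, y) / denom)

    # The measure is invariant to amplitude scaling and constant offsets:
    t = np.linspace(0.0, 1.0, 500)
    a = np.sin(2 * np.pi * 5 * t)
    b = 3.0 * a + 0.5            # scaled and shifted copy of a
    print(normalized_cross_correlation(a, b))   # ~1.0

In template matching, the same quantity is evaluated for every candidate window of the search image, and the location with the highest score is taken as the match.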

Key research themes

1. How can normalization improve robust and efficient correlation estimation in noisy or small-sample data?

This theme investigates the role of normalization techniques in enhancing the stability, robustness, and accuracy of correlation measures and associated statistical procedures, especially in contexts of correlated or noisy data, small sample sizes, or challenging signal conditions. Normalization methods are studied both as preprocessing steps (e.g., batch normalization and its variants in neural networks) and as mathematical adjustments to correlation estimators that ensure correct variance estimates, robustness to nonnormality, and improved inference.

Key finding: This work introduces GhostNorm and SeqNorm, normalization layers that independently estimate batch mean and variance on small ghost batches or sequentially over input dimensions. Empirically, GhostNorm and SeqNorm reduce loss... Read more
Key finding: This work develops the 'blocking' method—a renormalization group technique—to rigorously and efficiently estimate statistical errors on averages of correlated data, demonstrating how to correct bias and subjective choices in... Read more
Key finding: Through simulation and numerical analysis of the bivariate lognormal distribution, this study reveals large biases and variances in the sample Pearson correlation coefficient when marginals are skewed and true correlation is... Read more
Key finding: This paper introduces new robust correlation coefficients (Taba, TabWil, TabWil rank) designed to remain accurate in the presence of outliers and heavy-tailed distributions, outperforming classical Pearson and Spearman... Read more

2. What advances in multivariate correlation analysis enable detection of complex dependencies beyond pairwise measures?

Multivariate correlation methods extend beyond pairwise correlations, capturing complex dependencies among multiple variables or datasets. This research theme centers on canonical correlation analysis (CCA) and its variants, kernel or nonlinear extensions, and newly proposed concordance-based techniques. The focus is on methodological developments that generalize correlation measures to extract interpretable multivariate relations and optimize detection of nonlinear or high-dimensional dependencies in diverse data domains such as genomics, neuroscience, and the social sciences.

Key finding: This tutorial systematically presents classical CCA and its modern extensions—regularized, kernel, sparse, Bayesian, and deep CCA—providing optimization techniques, statistical evaluation methods, and interpretation... Read more
Key finding: This paper introduces Canonical Concordance Correlation Analysis (CCCA), which maximizes Lin's concordance correlation coefficient instead of Pearson's correlation, accounting simultaneously for correlation and closeness of... Read more
Key finding: This work develops Orthonormal Canonical Analysis (ORCA), a multivariate technique based on singular value decomposition of the cross-correlation matrix of two data sets, avoiding matrix inversion and thus mitigating... Read more
Key finding: This study proposes algorithms for efficient detection of strong multivariate correlations among multiple variables (3–5 dimensions), supporting four correlation measures and applicable to static and streaming data. It... Read more

3. How can normalized cross correlation and compression-based distances be applied for image matching and neural synchronization measures?

This research theme focuses on specialized applications of normalized cross correlation (NCC) and normalized compression distance (NCD) methods in pattern recognition and neuroscience. It covers algorithmic advances that combine normalization with signal processing and compression to enhance face matching under varying conditions and quantify cortico-muscular synchronization in brain signals.

Key finding: This paper proposes and implements a face matching algorithm that extracts face region templates and performs normalized cross-correlation (NCC) based matching across images taken at different times, viewpoints, or lighting... Read more
Key finding: Using normalized compression distance (NCD) based on lengths of compressed concatenated signals, this study quantifies synchronization between EEG and EMG time-series, finding that NCD sensitively measures cortico-muscular... Read more
Key finding: This work develops a theoretical framework interpreting covariance as an inner product in a vector space of random variables, defining a metric angle to quantify correlations. Applied to climate indices, the authors extend... Read more
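
The inner-product view in the last finding can be written compactly. As a sketch (notation mine), with centered random variables and covariance playing the role of the inner product:

    \[
      \cos\theta_{XY}
        = \frac{\langle X, Y \rangle}{\lVert X \rVert \, \lVert Y \rVert}
        = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \, \sigma_Y}
        = \rho_{XY},
      \qquad
      \theta_{XY} = \arccos \rho_{XY} \in [0, \pi].
    \]

So perfectly correlated variables are parallel vectors (θ = 0), uncorrelated variables are orthogonal (θ = π/2), and anticorrelated variables point in opposite directions (θ = π).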

All papers in Normalized Cross Correlation

Cardiac motion has been tracked using various methods, which vary in their invasiveness and dimensionality. One such noninvasive modality for cardiac motion tracking is ultrasound. Three-dimensional ultrasound motion tracking has been... more
Stereophotogrammetry is finding increased use in clinical breast surgery, both for breast reconstruction after oncological procedures and cosmetic augmentation and reduction. The ability to visualize and quantify morphological features of... more
Data security is essential in today's world of internet and networking. In any organization, information is critical. People are ready to spend thousands and lakhs of money in order to ensure a high level of... more
India, an agriculture-based economy, grows a wide variety of rice along with other crops. These varieties have different commercial values because they differ in their features. It becomes extremely challenging to classify rice... more
This paper describes a methodology for obtaining a high resolution dense point cloud using Kinect (J. Smisek and Pajdla, 2011) and HD cameras. Kinect produces a VGA resolution photograph and a noisy point cloud. But high resolution images... more
Publication in the conference proceedings of EUSIPCO, Bucharest, Romania, 2012
Imaging and Image sensors is a field that is continuously evolving. There are new products coming into the market every day. Some of these have very severe Size, Weight and Power constraints whereas other devices have to handle very high... more
We explore the possibility that the G2 gas cloud falling in toward Sgr A* is the mass-loss envelope of a young T Tauri star. As the star plunges to smaller radius at 1000–6000 km s⁻¹, a strong bow shock forms where the stellar wind is... more
Estimation of fundamental matrices is important in 3D computer vision. It is well known that the estimation of fundamental matrices is sensitive to outliers: even a few imprecise point correspondences may result in an estimated... more
Purpose: Large-area traffic monitoring with high spatial and temporal resolution is a challenge that cannot be met by today's available static infrastructure. Therefore, we present an automatic near real-time traffic monitoring approach... more
In the state-of-the-art PIV, image sensors are growing in size and PIV algorithms are increasing in spatial resolution capabilities. These technological advances allow for the simultaneous measurement of increasingly larger spatial scales... more
In the past, the authors have addressed the tasks of assessing read-out and peak-locking errors separately. In this paper, an improved approach is tested for assessing both errors simultaneously. The rationale is that, generally, these... more
Abstract: In this paper, efficient biometric security techniques for iris recognition system with high performance and high confidence are described. The system is based on an empirical analysis of the iris image and it is split in... more
Medical imaging is a vital component of large number of applications within current clinical settings. Image registration is a fundamental task in medical imaging. It is a process that overlays two or more medical images that are taken... more
The Internet revolution resulted in explosive growth in multimedia applications. The rapid advancement of the internet has made it easier to send data/images accurately and faster to the destination. Watermarking biometric data is still a... more
PPG is a potential tool in clinical applications. Among these, the relationship between respiration and the PPG signal has attracted attention in past decades. In this research, a bivariate AR spectral estimation method was utilized for the... more
A single-chip white light LED is commonly modeled by considering the phosphor coating as a homogeneous Lambertian light source. However, this approach leads to an incorrect optical simulation of phosphor-coated multi-chip LEDs due to the... more
In this paper, we present two different double-talk detection schemes for Acoustic Echo Cancellation (AEC). First, we present a novel normalized detection statistic based on the cross-correlation coefficient between the microphone signal... more
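
As a rough illustration of how such a cross-correlation-based detection statistic can be used in AEC (a generic textbook-style sketch, not the specific statistic of the paper above; the frame variables and threshold are assumptions):

    import numpy as np

    def crosscorr_statistic(mic_frame, echo_estimate_frame):
        # Normalized cross-correlation between the microphone frame and the
        # echo estimated by the adaptive filter; values near 1 suggest the
        # microphone contains mostly far-end echo.
        d = np.asarray(mic_frame, dtype=float)
        y = np.asarray(echo_estimate_frame, dtype=float)
        denom = np.sqrt(np.dot(d, d) * np.dot(y, y))
        return 0.0 if denom == 0.0 else float(abs(np.dot(d, y)) / denom)

    def is_double_talk(mic_frame, echo_estimate_frame, threshold=0.75):
        # When the statistic drops below a tuned threshold, near-end speech is
        # assumed present and adaptation of the echo canceller is frozen.
        return crosscorr_statistic(mic_frame, echo_estimate_frame) < threshold
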
This paper analyzes the performance of sum of squared differences (SSD), sum of absolute differences (SAD), normalized cross correlation (NCC), zero mean normalized cross correlation (ZNCC) and several other proposed modified expressions... more
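
For reference, the four classic block-matching measures named in this entry, as a small NumPy sketch over two equally sized patches (function names are mine; constant patches would need a guard against division by zero):

    import numpy as np

    def ssd(a, b):
        # Sum of squared differences: lower means more similar.
        d = a.astype(float) - b.astype(float)
        return float(np.sum(d * d))

    def sad(a, b):
        # Sum of absolute differences: lower means more similar.
        return float(np.sum(np.abs(a.astype(float) - b.astype(float))))

    def ncc(a, b):
        # Normalized cross-correlation (no mean removal): higher means more similar.
        a = a.astype(float).ravel()
        b = b.astype(float).ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def zncc(a, b):
        # Zero-mean NCC: additionally invariant to a constant brightness offset.
        a = a.astype(float).ravel() - a.mean()
        b = b.astype(float).ravel() - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
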
The Internet as a whole does not use secure links, thus information in transit may be vulnerable to interruption as well. The importance of reducing the chance of the information being detected during transmission is an issue in... more
Present ventricular rate-based arrhythmia detection algorithms lack specificity. Using a training set of 109 endocardial electrogram recordings, a sensitive and specific dual-chamber arrhythmia recognition algorithm has been developed.... more
Automatic detection of lung nodules is an important problem in computer analysis of chest radiographs. In this paper we propose a novel algorithm for isolating lung nodules from spiral CT scans. The proposed algorithm is based on using... more
Automatic detection of lung nodules is an important problem in computer analysis of chest radiographs. In this paper, we propose a novel algorithm for isolating lung abnormalities (nodules) from spiral chest low-dose CT (LDCT) scans. The... more
As automated image processing techniques have been required in multi-temporal/multi-sensor geospatial image applications, the use of an automated but highly invariant image matching technique has been a critical ingredient. Note that there is... more
Motion estimation is the most challenging and time-consuming stage in a block-based video codec. To reduce the computation time, many fast motion estimation algorithms have been proposed and implemented. This paper proposes a quad-tree-based... more
Traditional iris recognition systems require uniformly high-quality human iris images. A cheap image acquisition system has difficulty capturing such images. This paper describes a new feature representation method... more
In this paper, we present a new two-microphone approach that improves speech recognition accuracy when speech is masked by other speech. The algorithm improves on previous systems that have been successful in separating signals based on... more
In this paper we present a new method of signal processing for robust speech recognition using two microphones. The method, loosely based on the human binaural hearing system, consists of passing the speech signals detected by two... more
This paper discusses a new combination of techniques that help in improving the accuracy of speech recognition in adverse conditions using two microphones. Classic approaches toward binaural speech processing use some form of... more
This paper describes an algorithm that efficiently segregates desired speech features from spatially-separated interfering sources in reverberant environments. Although most binaural segregation techniques successfully remove interference... more
This paper describes an algorithm that achieves noise robustness in speech recognition by reconstructing the desired signal from a mixture of two signals using continuously-variable masks. In contrast to current methods which use binary... more
In this paper, we present a new dereverberation algorithm called Temporal Masking and Thresholding (TMT) to enhance the temporal spectra of spectral features for robust speech recognition in reverberant environments. This algorithm is... more
In this paper, we present a new two-microphone approach that improves speech recognition accuracy when speech is masked by other speech. The separation of the target speech source from interfering sources and the effects of reverberation... more
This paper discusses a combination of techniques for improving speech recognition accuracy in the presence of reverberation and spatially-separated interfering sound sources. Interaural Time Delay (ITD), observed as a consequence of the... more
In this paper, the performance of an automatic speech recognition (ASR) system is improved with the help of pitch-dependent features and estimated probability-of-voicing features. The pitch-dependent features are useful for tonal... more
The 3D information of road infrastructures is growing in importance with the development of autonomous driving. In this context, the exact 2D position of road markings as well as height information play an important role in, e.g.,... more
This paper presents an inexpensive framework for 3-D seabed mosaic reconstruction, based on an asynchronous stereo vision system when simplifying motion assumptions are used. In order to achieve a metric reconstruction some knowledge... more
One serious problem is how to authenticate a passport document for its holder. The major factor in this authentication is the correspondence of the passport's photo with its holder. Most passport documents contain a holder's... more
Today, communication devices are evolving towards user-friendly interactivity while permanently eyeing 3D display technologies. As such, 3D face generation, modelling, and animation techniques are at the forefront of designing... more
The Intermediate Significant Bit (ISB) digital watermarking technique is a newly approved technique for embedding a watermark by replacing the original image pixels with new pixels. This is done by ensuring a close connection between the new... more
Impeccability (quality) of the watermarked grayscale image is one of the most important requirements of any watermarking system. In most studies, the watermarking algorithm has to embed the watermark so that this will not... more