The World Wide Web (WWW) is the most popular global information-sharing and communication system, built on three standards, i.e., the Uniform Resource Identifier (URI), the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML). Information is provided over the web in text, image, audio and video formats using HTML, which is considered unsuitable for defining and formalizing the meaning of context. Most of the available information is unstructured, which makes it very difficult to extract concrete information. Although some search engines and screen scrapers have been developed, they are not very efficient and require excessive manual preprocessing, e.g. designing a schema, cleaning raw data, manually classifying documents into a taxonomy, and manual post-processing. To increase integration and interoperability over the web, the concept of the "Web Service" was introduced. Owing to their dynamic nature, Web Services became very popular in industry within a short time, but as the number of services grew heavily, problems of end-to-end service authentication, authorization, data integrity and confidentiality were identified [1]. To cope with these web-based problems, i.e., information filtration, security, confidentiality and the augmentation of meaningful content in markup presentation over the web, a semantics-based solution, the "Semantic Web", was introduced by Tim Berners-Lee. The Semantic Web is an intelligent incarnation and advancement of the World Wide Web that collects, manipulates and annotates information by providing categorization, uniform access to resources and structuring of information in a machine-processable format. To structure information into machine-processable semantic models, the Semantic Web introduced the concept of the "Ontology".
An ontology is a collection of interrelated, semantically modeled concepts based on already defined finite sets of terms and concepts, used in information integration and knowledge management. To obtain the desired results, ontologies are divided into three categories, i.e., Natural Language Ontology (NLO), Domain Ontology (DO) and Ontology Instance (OI). An NLO creates relationships between lexical tokens generated from natural-language statements, a DO contains the knowledge of a particular domain, and an OI generates automatic object-based web pages. Ontology construction is a highly relevant research issue, depending on the extraction of information from the web and the emergence of ontologies. Ontologies are constructed using ontology-supporting languages such as RDF and OWL and are connected to each other in a decentralized manner to express semantic content clearly and arrange semantic boundaries for extracting concrete information. Ontology is contributing heavily in industry by supporting the development of advanced language
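As a minimal illustration, ontology-style statements can be represented and queried as subject-predicate-object triples. The sketch below is plain Python, not RDF/OWL tooling such as rdflib, and the class and instance names in it are hypothetical examples:

```python
class TripleStore:
    """A toy subject-predicate-object store for semantic statements."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching the given pattern (None acts as a wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

store = TripleStore()
store.add("Dog", "subClassOf", "Animal")   # a domain-ontology style axiom
store.add("Rex", "isA", "Dog")             # an ontology instance
print(store.query(predicate="isA"))        # -> [('Rex', 'isA', 'Dog')]
```

A real ontology language adds typed properties, class hierarchies and inference on top of this triple model; the pattern-matching query above corresponds loosely to a single triple pattern in SPARQL.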
Journal of Astronomical Telescopes, Instruments, and Systems, 2022
The third generation South Pole Telescope camera (SPT-3G) improves upon its predecessor (SPTpol) by an order of magnitude increase in detectors on the focal plane. The technology used to read out and control these detectors, digital frequency-domain multiplexing (DfMUX), is conceptually the same as used for SPTpol, but extended to accommodate more detectors. A nearly 5x expansion in the readout operating bandwidth has enabled the use of this large focal plane, and SPT-3G performance meets the forecasting targets relevant to its science objectives. However, the electrical dynamics of the higher-bandwidth readout differ from predictions based on models of the SPTpol system. To address this, we present an updated derivation for electrical crosstalk in higher-bandwidth DfMUX systems, and identify two previously uncharacterized contributions to readout noise. The updated crosstalk and noise models successfully describe the measured crosstalk and readout noise performance of SPT-3G, and suggest improvements to the readout system for future experiments using DfMUX, such as the LiteBIRD space telescope.
Partial discharge (PD) diagnostics is an effective tool for condition monitoring of high-voltage equipment that provides an updated status of the dielectric insulation of the components. The reliability of the diagnostics depends on the quality of the PD measurement techniques and the processing of the measured PD data. Online measured data suffer from various inaccuracies caused by external noise from sources such as power electronic equipment, broadband radio signals and wireless communication. Therefore, the extraction of useful data from on-site measurements is still a challenge. This article presents a discrete wavelet transform (DWT) based adaptive de-noising algorithm and evaluates its performance. The decisive steps in applying DWT-based de-noising to any signal, including the selection of the mother wavelet, the number of levels in the multiresolution decomposition and the criteria for reconstruction of the de-noised signal, are taken by the proposed algorithm and vary from one signal to another without human intervention. Hence, the proposed technique is adaptive. The proposed solution can enhance the accuracy of PD diagnostics for HV power components.
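The core idea of DWT-based de-noising can be illustrated with a one-level Haar transform followed by soft thresholding of the detail coefficients. This is a minimal pure-Python sketch under assumed parameters (Haar wavelet, one level, a fixed threshold); the paper's adaptive algorithm instead selects the mother wavelet, decomposition depth and thresholds per signal:

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def soft_threshold(coeffs, thr):
    """Shrink coefficients toward zero; small, noise-like ones vanish."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

# A nearly constant signal corrupted by one small noise spike:
noisy = [1.0, 1.0, 1.0, 1.2, 1.0, 1.0, 1.0, 1.0]
approx, detail = haar_dwt(noisy)
detail = soft_threshold(detail, thr=0.2)   # threshold value is an assumption
denoised = haar_idwt(approx, detail)
```

In practice the transform is applied over several decomposition levels and the threshold is estimated from the data (e.g. from the noise level of the finest detail band), which is where the adaptivity of the proposed algorithm comes in.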
The paper presents a technique for parsing a speech utterance from its phonetic representation. The technique differs from conventional spoken-language parsing techniques, where a speech utterance is first transcribed at the word level and a syntactic structure is produced from the transcribed words. In a word-level parsing approach, an error caused by the speech recognizer propagates through the parser into the resultant syntactic structure. Furthermore, transcribed speech utterances are sometimes not parseable even when lattices or confusion networks are used. These problems are addressed by the proposed phonetically aided parser. In the phonetically aided parsing approach, parsing is performed from a phonetic representation (phone sequence) of the recognized utterance using joint modeling of probabilistic context-free grammars and an n-gram language model. The technique results in better parsing accuracy than word-level parsing when evaluated on spoken dialogue data.
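The PCFG half of the approach can be illustrated with a toy probabilistic CKY chart over a phone sequence. Everything below is an invented miniature: the grammar, rule probabilities and phone symbols are assumptions, and the paper's model additionally couples the PCFG with an n-gram language model:

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form whose terminals are phone-like symbols.
lexical = {            # (nonterminal, terminal) -> probability
    ("D", "dh"): 1.0, ("N", "ae"): 0.5, ("N", "t"): 0.5,
}
binary = {             # (parent, left, right) -> probability
    ("NP", "D", "N"): 1.0,
}

def cky_best(phones):
    """Return the best probability of each nonterminal spanning the input."""
    n = len(phones)
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    # Fill length-1 spans from lexical rules.
    for i, p in enumerate(phones):
        for (A, term), prob in lexical.items():
            if term == p:
                chart[i][i + 1][A] = max(chart[i][i + 1][A], prob)
    # Combine adjacent spans with binary rules (Viterbi: keep the max).
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            end = i + span
            for split in range(i + 1, end):
                for (A, B, C), prob in binary.items():
                    score = prob * chart[i][split][B] * chart[split][end][C]
                    if score > chart[i][end][A]:
                        chart[i][end][A] = score
    return dict(chart[0][n])

print(cky_best(["dh", "ae"]))   # best probability for NP over the phone pair
```

A phonetically aided parser would run a chart of this kind directly over recognized phones, so a word-level recognition error cannot corrupt the input to the parser.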
Journal of Data Mining in Genomics & Proteomics, 2014
Machine learning aims at facilitating complex-system data analysis, optimization, classification and prediction using different mathematical and statistical algorithms. In this research, we are interested in establishing a process for estimating the best optimal input parameters to train networks. Using WEKA, this paper implements a classifier with back-propagation neural networks and a genetic algorithm for efficient data classification and optimization. The implemented classifier is capable of reading and analyzing a number of populations in given datasets and, based on the identified population, estimates the kinds of species in a population, hidden layers, momentum, accuracy, and correct and incorrect instances.
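The combination of a genetic algorithm with a trainable classifier can be sketched as a hyperparameter search over, say, the number of hidden units and the momentum. This is illustrative only: the fitness function below is a stand-in for a WEKA cross-validation score, and the parameter ranges and GA settings are assumptions:

```python
import random

random.seed(42)

def fitness(params):
    """Stand-in for a cross-validation accuracy score; here it simply rewards
    hidden-unit counts near 3 and momentum near 0.2 (purely illustrative)."""
    hidden, momentum = params
    return -((hidden - 3) ** 2) - ((momentum - 0.2) ** 2)

def mutate(params):
    """Perturb one candidate: nudge the hidden-unit count and momentum."""
    hidden, momentum = params
    return (max(1, hidden + random.choice([-1, 0, 1])),
            min(1.0, max(0.0, momentum + random.uniform(-0.05, 0.05))))

def genetic_search(pop_size=20, generations=30):
    """Toy GA: truncation selection plus mutation over (hidden, momentum)."""
    pop = [(random.randint(1, 10), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = genetic_search()
```

In the actual setup, evaluating `fitness` would mean training the back-propagation network with those parameters and measuring its classification accuracy, which is far more expensive than this toy surrogate.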
Trust, Privacy and Security in Digital Business, 2010
Currently, firewall policies can contain thousands of rules, owing to the enormous size and complex structure of modern networks. These policies therefore require automatic tools providing a user-friendly environment to specify, configure and safely deploy a target policy. Much research work has addressed policy specification, conflict detection and the optimization problem, but very little work has considered policy deployment. Only recently have some researchers proposed deployment strategies for the two important classes of policy editing. In this report, we show that these strategies are flawed and could lead to security breaches. We then provide two correct, efficient and safe algorithms for these policy-editing classes. Our experimental results show that these algorithms are very fast and can be used safely, even for deploying very large policies.
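To make the deployment problem concrete, a naive policy diff can be sketched as below. This is an illustrative baseline, not the safe algorithms the paper proposes: it ignores rule ordering and the transient intermediate states that make naive deployment unsafe, which is exactly the flaw the paper addresses.

```python
def deploy(installed, target):
    """Naive two-phase deployment sketch: first add rules the target policy
    introduces, then delete rules it drops. Firewall rule order matters in
    practice, so intermediate states produced by this diff can briefly
    permit or block traffic that neither policy intends."""
    commands = []
    for rule in target:
        if rule not in installed:
            commands.append(("add", rule))
    for rule in installed:
        if rule not in target:
            commands.append(("del", rule))
    return commands

installed = ["allow tcp 80", "deny ip any"]
target = ["allow tcp 443", "allow tcp 80", "deny ip any"]
print(deploy(installed, target))   # -> [('add', 'allow tcp 443')]
```

A safe deployment algorithm must additionally choose the order and position of each edit so that every intermediate policy stays within the union of behaviours of the initial and target policies.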
Database: The Journal of Biological Databases and Curation, 2014
The composition of stable-isotope-labelled isotopologues/isotopomers in metabolic products can be measured by mass spectrometry and supports the analysis of pathways and fluxes. As a prerequisite, the original mass spectra have to be processed, managed and stored to rapidly calculate, analyse and compare isotopomer enrichments, for instance to study bacterial metabolism in infection. For such applications, we provide here the database application 'Isotopo'. This software package includes (i) a database to store and process isotopomer data, (ii) a parser to upload and translate different data formats for such data and (iii) an improved application to process and convert signal intensities from mass spectra of 13C-labelled metabolites such as tert-butyldimethylsilyl derivatives of amino acids. Relative mass intensities and isotopomer distributions are calculated by applying a partial least squares method with iterative refinement for high-precision data. The data output includes...
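The first processing step, turning raw mass intensities into an isotopomer distribution and an average enrichment, can be sketched in a few lines. This is purely illustrative: Isotopo (and LS-MIDA before it) additionally corrects for natural isotope abundance with an iterative least squares fit, which this toy omits:

```python
def isotopomer_distribution(intensities):
    """Normalise raw M+0..M+n mass intensities to fractional abundances and
    compute the average enrichment, i.e. the mean fraction of labelled
    positions. Assumes intensities are already corrected for natural
    isotope abundance (the harder step handled by the real software)."""
    total = sum(intensities)
    fractions = [x / total for x in intensities]
    n = len(intensities) - 1          # number of labellable positions
    enrichment = sum(i * f for i, f in enumerate(fractions)) / n
    return fractions, enrichment

# Hypothetical corrected intensities for M+0, M+1, M+2 of a 2-carbon fragment:
fractions, enrichment = isotopomer_distribution([50.0, 30.0, 20.0])
```

Here 50% of the molecules carry no label, 30% carry one and 20% carry two, giving an average enrichment of 35% of the carbon positions.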
Soft biometric gender classification using face for real time surveillance in cross dataset environment
INMIC, 2013
Gender classification is a challenging task in surveillance videos due to their relatively low resolution, uncontrolled environment and varying viewing angles of an object. It also has potential applications in visual surveillance and human-computer interaction systems. While a lot of work has considered still face images for soft-biometrics recognition and applied still-image-based methods, recent developments indicate that excellent results can be obtained on moving faces using texture-based spatiotemporal representations to describe and analyze faces in videos. This paper investigates the combination of facial appearance and motion for face analysis in videos. We propose an approach for gender classification in a spatiotemporal environment from videos, using a huge set of training features derived from a rich collection of various datasets. We tested our system with several publicly available videos, which were taken in an uncontrolled environment in terms of background, light, expression, motion, angle and appearance. We also tested our system with several self-recorded surveillance videos. Our extensive cross-dataset experimental analysis clearly assessed the promising performance of our system for gender classification using faces in videos. Another novel part of our current research negates a recent theory, based on experimental results, which claimed that the combination of motion and appearance is only useful for gender analysis of familiar faces.
This paper describes a demonstration of the WinkTalk system, a speech synthesis platform using expressive synthetic voices. With the help of a web camera and facial expression analysis, the system allows the user to control the expressive features of the synthetic speech for a particular utterance with their facial expressions. Based on a personalised mapping between three expressive synthetic voices and the user's facial expressions, the system selects a voice that matches their face at the moment of sending a message. The WinkTalk system is an early research prototype that aims to demonstrate that facial expressions can be used as a more intuitive control over expressive speech synthesis than manual selection of voice types, thereby contributing to an improved communication experience for users of speech generating devices.
The ability to efficiently facilitate social interaction and emotional expression is an important, yet unmet, requirement for speech generating devices aimed at individuals with speech impairment. Using gestures such as facial expressions to control aspects of expressive synthetic speech could contribute to an improved communication experience for both the user of the device and the conversation partner. For this purpose, a mapping model between facial expressions and speech is needed that is high-level (utterance-based), versatile and personalisable. In the mapping developed in this work, the visual and auditory modalities are connected based on the intended emotional salience of a message: the intensity of the user's facial expressions is mapped to the emotional intensity of the synthetic speech. The mapping model has been implemented in a system called WinkTalk that uses estimated facial expression categories and their intensity values to automatically select between three expressive synthetic voices reflecting three degrees of emotional intensity. An evaluation was conducted through an interactive experiment using simulated augmented conversations. The results show that automatic control of synthetic speech through facial expressions is fast, non-intrusive, sufficiently accurate and supports the user in feeling more involved in the conversation. It can be concluded that the system has the potential to facilitate a more efficient communication process between user and listener.
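The selection step, from an estimated expression intensity to one of three expressive voices, can be sketched as a simple threshold mapping. The thresholds and voice labels below are illustrative assumptions, not WinkTalk's calibrated, personalised mapping:

```python
def select_voice(expression_intensity):
    """Map an estimated facial-expression intensity in [0, 1] to one of
    three voices reflecting three degrees of emotional intensity.
    Threshold values are illustrative; a personalised mapping would
    calibrate them per user from example expressions."""
    if not 0.0 <= expression_intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    if expression_intensity < 0.33:
        return "neutral"
    if expression_intensity < 0.66:
        return "moderate"
    return "intense"

print(select_voice(0.8))   # -> intense
```

Personalisation would replace the fixed cut-offs with per-user boundaries learned from that user's own range of facial expressiveness, which is what makes the mapping usable across individuals.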
Background: Knowledge of metabolic pathways and fluxes is important to understand the adaptation of organisms to their biotic and abiotic environment. The specific distribution of stable-isotope-labelled precursors into metabolic products can be taken as a fingerprint of the metabolic events and dynamics through the metabolic networks. An open-source software tool is required that easily and rapidly calculates, from mass spectra of labelled metabolites, derivatives and their fragments, the global isotope excess and isotopomer distribution. Results: The open-source software "Least Square Mass Isotopomer Analyzer" (LS-MIDA) is presented, which processes experimental mass spectrometry (MS) data on the basis of metabolite information such as the number of atoms in the compound, the mass-to-charge ratio (m/e or m/z) values of the compounds and fragments under study, and the experimental relative MS intensities reflecting the enrichments of isotopomers in 13C- or 15N-labelled compounds, in comparison to...
Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to coping with these challenges consists of iterative intertwined cycles of development (the "Butterfly" paradigm) for the key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as independent database/tool development. We validated this approach in our own software development and compared the different design paradigms in various software solutions.
Papers by Zeeshan Ahmed