Papers by Bilal Abul-huda

Application of multi-media database system in detection and expectation of groundwater quality degradation: A case study-North Jordan
Journal of Information and Optimization Sciences, 2000
This research work makes use of a database management system with multimedia features for the detection and prediction of future groundwater quality degradation. Five locations in North Jordan were chosen, and carefully collected samples from each were tested and the results recorded. The recorded data were fed into the designed and implemented multimedia database system (MMDS), which then generated its detection and prediction of future groundwater quality degradation. The system also generated all the required graphs and plots showing the projected trends. Sections of this paper explain the experimental work, the results and discussion, and finally future extensions.
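The abstract does not reproduce the MMDS prediction logic itself; as a rough illustration of the idea only, the sketch below stores per-location readings and extrapolates a water-quality indicator with a least-squares trend. The `Sample` record, the function names, and the nitrate indicator are all hypothetical, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    location: str
    year: int
    nitrate_mg_l: float  # hypothetical quality indicator

def predict_indicator(samples, location, target_year):
    """Least-squares linear trend over one location's readings,
    extrapolated to a future year."""
    pts = [(s.year, s.nitrate_mg_l) for s in samples if s.location == location]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope * target_year + intercept
```

A real system would mine many indicators per location and attach the multimedia records (maps, photos, charts) that the paper describes.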

Time is an essential dimension in many domain-specific problems, such as the medical and financial domains. This research introduces TLEX (Temporal Lexical Patterns), a framework for categorizing temporal data that effectively induces semantic temporal patterns. TLEX is a rule-based classification framework designed to enhance classification accuracy by eliminating outliers and minimizing classification errors. The contributions of this research are (1) formulating semantic temporal patterns as basic classification features, and (2) introducing an induction technique to discriminate between semantic temporal patterns. To illustrate the design, the paper provides a detailed mathematical description that relies on set theory to model the TLEX framework. A detailed description of the proposed algorithms is also given to facilitate implementing and reproducing the results. Further, to evaluate the effectiveness of TLEX, extensive experiments have been performed …
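TLEX's set-theoretic formulation is beyond a short excerpt; the toy sketch below captures only the general flavour of rule-based temporal-pattern classification. The up/down/steady encoding, the function names, and the labels are illustrative assumptions, not the paper's patterns:

```python
from collections import Counter, defaultdict

def to_pattern(series):
    """Symbolic temporal pattern: 'U' up, 'D' down, 'S' steady
    between consecutive readings."""
    return "".join(
        "U" if b > a else "D" if b < a else "S"
        for a, b in zip(series, series[1:])
    )

def induce_rules(labelled_series):
    """Map each observed pattern to its majority class label."""
    votes = defaultdict(Counter)
    for series, label in labelled_series:
        votes[to_pattern(series)][label] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

def classify(rules, series, default="unknown"):
    """Look up the series' pattern in the induced rule set."""
    return rules.get(to_pattern(series), default)
```

An unseen pattern falls back to the default label, a crude stand-in for the outlier handling the abstract mentions.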

Incorporating Uncertainty into Decision-Making: An Information Visualisation Approach
Lecture Notes in Business Information Processing, 2017
Incorporating uncertainty into the decision-making process and exposing its effects are crucial for making informed decisions and maximizing the benefits attained from such decisions. Yet the explicit incorporation of uncertainty into decision-making poses significant cognitive challenges: the decision-maker can be overloaded and thus may not take full advantage of the uncertainty information. In this paper, we present an information visualisation approach, called RiDeViz, to facilitate the incorporation of uncertainty into decision-making. The main intention of RiDeViz is to enable the decision-maker to explore and analyse uncertainty and its effects at different levels of detail. It is also intended to enable the decision-maker to explore cause-and-effect relationships and experiment with multiple “what-if” scenarios. We demonstrate the utility of RiDeViz through an application example of a financial decision-making scenario.

International Journal of Advanced Computer Science and Applications, 2011
The quality of software systems is the most important factor to consider when designing and using these systems. The quality of the database or the database management system is particularly important, as it is the backbone of every system whose data it holds. Many studies have argued that high-quality software leads to an effective and secure system. Software quality can be assessed using software measurements, or metrics. Typically, metrics suffer from several problems: they have no uniform standards, they are sometimes hard to measure, and they are time- and resource-consuming; they also need to be continuously updated. A possible solution to some of these problems is to automate the process of gathering and assessing the metrics. In this research, the metrics that evaluate the complexity of an object-oriented relational database (ORDB) are composed of object-oriented metrics and relational database metrics. The work is based on common theoretical calculations and formulations of ORDB metrics proposed by database experts. A tool is developed that takes an ORDB schema as input and collects several database structural metrics. Based on the proposed and gathered metrics, a study is conducted showing that such metric assessment can be very useful in evaluating database complexity.
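The abstract does not list the concrete metric suite; the sketch below shows the general shape of such a tool under the assumption that an ORDB schema is supplied as nested dictionaries. The metric names here are illustrative, not the paper's:

```python
def ordb_metrics(schema):
    """Collect simple structural complexity metrics from an ORDB schema.

    schema: {table: {attribute: type}}, where a type is either a string
    (a simple column type) or a dict (a nested/composite object type).
    """
    def depth(t):
        # Nesting depth of a composite type; simple types have depth 0.
        if isinstance(t, dict):
            return 1 + max((depth(v) for v in t.values()), default=0)
        return 0

    metrics = {}
    for table, attrs in schema.items():
        simple = sum(1 for t in attrs.values() if isinstance(t, str))
        metrics[table] = {
            "attributes": len(attrs),
            "simple_attributes": simple,
            "complex_attributes": len(attrs) - simple,
            "max_nesting_depth": max((depth(t) for t in attrs.values()), default=0),
        }
    return metrics
```

A real tool would parse the actual schema DDL rather than a dictionary, and would add the relational-side metrics (keys, referential constraints) the paper combines with the object-oriented ones.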

International Journal of Computing and Digital Systems
Twin recognition and identification is one of the important challenges in the field of image processing. The strong similarity between identical twins makes it hard to distinguish a twin from his or her sibling. Similarities come from biometric, geometric, and photometric features: among biometric patterns, fingerprints are found to be identical in some cases; geometrically, the twins' faces rarely differ, which confuses people; and photometric features are very close to each other, so they rarely succeed on their own in twin recognition. We tackle this challenge with a model for twin face recognition (FR) based on deep transfer learning using residual neural networks, including two trained VGG16 networks, which are considered among the powerful deeply learned neural networks. For comparison purposes, we examine other approaches to the twins problem, including iris, fingerprints, and lip corners. The data were collected from Google, which is itself a challenge. The data contain 4 pairs of twins with 17 different positions for each, producing 5×2×17 (170) different images. The collected images were used for comparisons between features. Results show that geometric features gave 85% success while photometric features gave 96%; by hybridizing geometric and photometric features, accuracy reaches 98%. The biometric measures in this research demonstrate the superiority of deep transfer learning over traditional methods. The proposed method could be used to assist authentication systems that fully depend on biometric features.
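The trained VGG16 networks themselves are too heavy to excerpt here; the fragment below illustrates only the score-fusion step implied by the hybrid result. The weights, threshold, and candidate scores are invented for illustration, not taken from the paper:

```python
def fuse_scores(geometric, photometric, w_geo=0.4, w_photo=0.6):
    """Weighted fusion of two per-candidate similarity scores in [0, 1].
    The weights are illustrative; the paper reports that hybridizing the
    two feature families raised accuracy to 98%."""
    return w_geo * geometric + w_photo * photometric

def identify(candidates, threshold=0.5):
    """candidates: {name: (geometric_score, photometric_score)}.
    Return the best-scoring identity above the threshold, else None."""
    best = max(candidates, key=lambda n: fuse_scores(*candidates[n]))
    return best if fuse_scores(*candidates[best]) >= threshold else None
```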

As a new field of study, software engineering teaching and subjects vary from one textbook to another. Although most books cover similar subjects, students' view of the subject is mixed: some students have problems understanding the entire picture, while others have problems connecting concepts with each other. In this research, an overall view of software engineering knowledge is presented from four perspectives: Process, Project, People, and Product, usually referred to in the literature as the 4Ps. The goal is to distinguish the progress made in each area and explore opportunities for further research in any of the four views. Much published research in this field appears to ignore the state of the studied field in industry and focuses on its state in academic arenas. Related work in research papers focuses on published research papers and does not look …

Recognizing human actions and activities from videos has become an important topic in computer vision, machine learning, and pattern recognition applications. Some of these applications include automatic video analysis, human behavior recognition, human-machine interaction, robotics, and security aspects of real-time video surveillance. This paper provides a general review of the most recent advances and approaches in human action and activity recognition over the past several years. It also presents a categorization of human action and activity recognition approaches and methods along with their advantages and limitations. In particular, it divides the recognition process based on (a) the method used to extract features from the input image/video, and (b) the learning classifier techniques. Moreover, it presents an overview of the existing, publicly available human action and activity recognition datasets. This paper also examines the requirements for an ideal human recognition system and …

Data mining is the process of discovering interesting knowledge (patterns) from large amounts of data stored in databases, data warehouses, or other information repositories. Multimedia data mining is the mining of high-level multimedia information and knowledge from large multimedia databases; it is, however, still at an experimental stage. Substantial progress in data mining and data warehousing research has been witnessed in the last few years, and numerous research and commercial systems have been developed for mining knowledge in relational databases and data warehouses (Fayyad et al., 1996). Despite the fact that multimedia has been a major focus for many researchers around the world, data mining from multimedia databases is still in its infancy and remains short on results. Many techniques for representing, storing, indexing, and retrieving multimedia data have been proposed. However, rare are the …

Design patterns detection based on its domain
Software maintenance is an important issue during the software design life cycle, especially in its late stages. Frequent changes in a software system combined with a lack of documentation increase the cost of maintenance. For this reason, an important step in software maintenance is detecting patterns in the source code to provide relevant information that can help in understanding the system design and improving its documentation. Design patterns help maintainers and developers understand the implementation of any software system, which makes the software maintenance process faster and better informed; it also decreases the time and effort needed to learn the software. Detecting patterns is not an easy task because of the nature of design patterns. This research introduces a detection process based on software domains. The work is done by collecting Java open source code from five different domains, and a dynamic analysis approach is used to extract patterns. The experiments …
Ciphering algorithms play a major role in WLAN security systems. However, those algorithms consume a significant amount of computing resources, such as CPU time and packet size. In an attempt to remedy the WLAN security issue, a novel method called the secure WiFi (sWiFi) algorithm has been deployed to secure data transmitted over a wireless network. This paper also provides an evaluation of five encryption algorithms: AES (Rijndael), DES, 3DES, Blowfish, and the proposed algorithm (sWiFi). We examine a method for analyzing the trade-off between efficiency and security. A comparison has been conducted for those encryption algorithms at different settings, such as different sizes of data blocks, different platforms (Windows XP, Windows Vista, and Linux), and different encryption/decryption speeds.
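The paper's benchmark setup is not given in detail; the harness below shows one way such a block-size comparison could be timed. The XOR cipher is only a stand-in so the example stays self-contained: it is not secure and is not the sWiFi algorithm.

```python
import time

def xor_cipher(key, data):
    """Toy stand-in cipher (NOT secure). In practice, plug in AES,
    DES, 3DES, Blowfish, or the sWiFi implementation here."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def benchmark(cipher, key, block_sizes, repeats=5):
    """Time each block size; return {size: mean seconds per call}."""
    results = {}
    for size in block_sizes:
        data = bytes(size)
        start = time.perf_counter()
        for _ in range(repeats):
            cipher(key, data)
        results[size] = (time.perf_counter() - start) / repeats
    return results
```

Running the same harness on each algorithm and platform yields the speed-versus-block-size comparison the abstract describes.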
Investigating the applicability of generating test cases for web applications based on traditional graph coverage
International Journal of Computer Aided Engineering and Technology
The impact of the digital storytelling rubrics on the social media engagements
International Journal of Computer Applications in Technology

International Journal of Information Retrieval Research
Data classification, as one of the main tasks of data mining, plays an important role in many fields. Classification techniques differ mainly in the accuracy of their models, which depends on the method adopted during the learning phase. Several researchers have attempted to enhance classification accuracy by combining different classification methods in the same learning process, resulting in a hybrid classifier. In this paper, the authors propose and build a hybrid classifier technique based on the Naïve Bayes and C4.5 classifiers. The main goal of the proposed model is to reduce the complexity of the NBTree technique, a well-known hybrid classification technique, and to improve the overall classification accuracy. Thirty-six UCI dataset samples were used in the evaluation. Results show that the proposed technique significantly outperforms the NBTree technique and some other classifiers proposed in the literature in terms of classification accuracy. The proposed classifier …
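The hybrid design itself is the paper's contribution and is not reproduced here; as background, a minimal categorical Naïve Bayes (one half of the NB + C4.5 combination; the toy dataset and the smoothing choice are illustrative) can be sketched as:

```python
from collections import Counter, defaultdict

class CategoricalNB:
    """Minimal Naive Bayes over categorical features with Laplace-style
    smoothing. The decision-tree half of the hybrid is omitted."""

    def fit(self, X, y):
        self.class_counts = Counter(y)
        self.feat_counts = defaultdict(Counter)  # (feature idx, class) -> value counts
        for row, label in zip(X, y):
            for i, v in enumerate(row):
                self.feat_counts[(i, label)][v] += 1
        self.n = len(y)
        return self

    def predict(self, row):
        def score(c):
            # Prior times smoothed per-feature likelihoods.
            p = self.class_counts[c] / self.n
            for i, v in enumerate(row):
                counts = self.feat_counts[(i, c)]
                p *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
            return p
        return max(self.class_counts, key=score)
```

In an NBTree-style hybrid, a classifier like this sits at the leaves of a decision tree; the paper's model aims to keep that idea while reducing NBTree's complexity.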

A Novel Secure E-Contents System for Multi-Media Interchange Workflows in E-Learning Environments
International journal of Computer Networks & Communications, 2013
The goal of e-learning is to benefit from the capabilities offered by new information technology (such as remote digital communications, multimedia, the internet, cell phones, and teleconferencing) and to enhance the security of several government organizations, taking into consideration almost all e-learning content: information content covering most queries from citizens, state firms, and corporations. The content provides most, if not all, basic and business services; the communicative link keeps citizens and state agencies connected at all times and provides content security so that everyone on the network works in a secure environment. Access to information is safeguarded as well. The main objective of this research is to build a novel multimedia security system (an encrypting/decrypting system) that will enable e-learning to exchange multimedia data and information more securely.

Keyword extraction has many useful applications, including indexing, summarization, and categorization. In this work we present a keyword extraction system for Arabic documents using term co-occurrence statistical information, an approach used in other systems for English and Chinese. The technique is based on extracting the top frequent terms and building a co-occurrence matrix recording the occurrence of each frequent term. If the co-occurrence distribution of a term is biased, the term is important and likely to be a keyword. The bias of a term's co-occurrence with the set of frequent terms is measured using χ2; terms with high χ2 values are therefore likely to be keywords. The adopted χ2 method is compared with another novel method based on term frequency - inverted term frequency (TF-ITF), tested here for the first time. Two datasets were used to evaluate system performance. Results show that the χ2 method is better than TF-ITF: the precision and recall of χ2 in the first experiment were 0.58 and 0.63 respectively, and in the second experiment the χ2 accuracy was 64%. These experiments show that the χ2 method can be applied to Arabic documents with acceptable performance relative to other techniques.
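A simplified sketch of the χ2 co-occurrence idea could look like the following. The sentence representation and the χ2 normalization here are assumptions for illustration; the paper's exact formulation may differ:

```python
from collections import Counter
from itertools import combinations

def chi2_keywords(sentences, top_frequent=10, top_k=5):
    """Rank terms by how biased their co-occurrence with the frequent
    terms is (a simplified chi-square sketch).

    sentences: list of token lists."""
    freq = Counter(t for s in sentences for t in set(s))
    frequent = [t for t, _ in freq.most_common(top_frequent)]
    cooc = Counter()
    for s in sentences:
        for a, b in combinations(set(s), 2):
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1
    total = sum(freq[g] for g in frequent)
    scores = {}
    for term in freq:
        n_w = sum(cooc[(term, g)] for g in frequent)
        if n_w == 0:
            continue
        chi2 = 0.0
        for g in frequent:
            # Expected co-occurrence if `term` had no bias toward g.
            expected = n_w * freq[g] / total
            if expected:
                chi2 += (cooc[(term, g)] - expected) ** 2 / expected
        scores[term] = chi2
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

For Arabic documents the paper would additionally need language-specific tokenization and stemming before this scoring step.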