Papers by Jordanian Journal of Computers and Information Technology JJCIT

This paper aims to describe the design and implementation of an Unmanned Ground Vehicle (UGV) and a smartphone virtual reality (VR) head-mounted display (HMD) which enables visual situation awareness by giving the operator the feel of "head on rover", while sending the video feeds to a separate operator computer for object detection and 3-D model creation of the UGV's surrounding objects. The main contribution of this paper is threefold: (i) the novel design of the HMD; the paper proposes an alternative to the 3-D interface designs recently used in tele-operated search and rescue (SAR) UGVs. Unlike other designs that automatically move the whole UGV about two axes (pitch and yaw) with the movement of the head, this design lets a separate unit of the UGV move automatically with the movement of the head and provides the user with VR. (ii) the distributed feature; the design allows multiple users to connect to the UGV over a wireless link in a secure way to receive video feeds from three on-board cameras. This feature facilitates cooperative team work in urban search and rescue (USAR) applications (a contemporary research issue in SAR UGVs). (iii) a novel feature of the design is the simultaneous video feeds which are sent to the operator station computer for object detection using the scale-invariant feature transform (SIFT) algorithm and for 3-D model construction of the UGV's surrounding objects from 2-D images of these objects. The design was realized using a smartphone-based HMD, which captures head movements in real time using its inertial measurement unit (IMU) and transmits them to three motors mounted on a rover to provide movement about three axes (pitch, yaw and roll). The operator controls the motors via the HMD or a gamepad. Three on-board cameras provide video feeds which are transmitted to the HMD and the operator computer. Software performs object detection and builds a 3-D model from the captured 2-D images. The realistic design constraints were identified, then the hardware/software functions that meet the constraints were listed. The UGV was implemented in a laboratory environment and tested over soft and rough terrain. Results showed that the UGV has higher visual-inspection capabilities than other existing SAR UGVs. Furthermore, it was found that the maximum speed of 3.3 m/s, the six-wheel differential-drive chassis and the spiked air-filled rubber tires of the rover gave it high manoeuvrability in open rough terrain compared to other SAR UGVs found in the literature. The high visual-inspection capabilities and relatively high speed of the UGV make it a good choice for planetary exploration and military reconnaissance. The three motors and stereoscopic camera can easily be mounted as a separate unit on a chassis that uses a different locomotion mechanism (e.g. legged or tracked) to extend the functionality of a SAR UGV. The design can be used in building disparity maps and constructing 3-D models, or in real-time face recognition, real-time object detection and autonomous driving based on disparity maps.
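The abstract names SIFT as the object-detection method on the operator computer. As a minimal, hedged sketch of that step only, the following uses OpenCV's SIFT implementation to match a reference object image against a camera frame; the library choice, ratio-test threshold and match count are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of SIFT-based object detection between a reference image and a
# camera frame, assuming OpenCV (cv2.SIFT_create is available in OpenCV >= 4.4).
import cv2

def detect_object(query_path, frame_path, min_matches=10):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)   # reference object image
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)   # frame from an on-board camera

    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(query, None)
    kp_f, des_f = sift.detectAndCompute(frame, None)

    # Lowe's ratio test on 2-nearest-neighbour descriptor matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_q, des_f, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    return len(good) >= min_matches, good
```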

To avail of cloud services, namely Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), etc., via an insecure channel, it is necessary to establish a symmetric key between the end user and the remote Cloud Service Server (CSS). In such a provision, both end parties demand proper auditing so that resources are legitimately used and privacy is maintained. To achieve this, there is a need for a robust authentication mechanism. Towards the solution, a number of single-server authenticated key agreement protocols have been reported recently. However, they are vulnerable to many security threats, such as identity compromise, impersonation, man-in-the-middle, replay, byzantine, offline dictionary and privileged-insider attacks. In addition, most of the existing protocols adopt a single-server-based authentication strategy, which is prone to single-point-of-vulnerability and single-point-of-failure issues. This work proposes an efficient password-based two-server authentication and key exchange protocol that addresses the major limitations of the existing protocols. The formal verification of the proposed protocol using Automated Validation of Internet Security Protocols and Applications (AVISPA) proves that it is provably secure. The informal security analysis substantiates that the proposed scheme successfully addresses the existing issues. The performance study shows that the overhead of the protocol is reasonable and comparable with those of other schemes. The proposed protocol can be considered a robust authentication protocol for secure access to cloud services.
In this paper, a UWB antenna with an enhanced bandwidth is proposed. The enhanced ultra-wideband (UWB) antenna consists of a rectangular patch fed by a 50 Ω microstrip feed line and a partial ground plane. The bandwidth enhancement is achieved by making three modifications to the partial ground plane: adding two rectangular sleeves, one rectangular groove and two rectangular slots. The characteristics of this antenna are investigated using the High Frequency Structure Simulator (HFSS). The proposed design achieves a large bandwidth of 3.4–22.4 GHz (147.29%) at a return loss RL ≥ 10 dB. A promising peak gain, good impedance matching and an omni-directional radiation pattern are obtained.
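The reported 147.29% figure follows from the standard fractional-bandwidth definition applied to the 3.4–22.4 GHz band; the worked check below uses that standard definition, which the abstract does not state explicitly.

```latex
% Fractional bandwidth of the reported band, using the standard centre-frequency definition.
\mathrm{FBW} = \frac{f_H - f_L}{(f_H + f_L)/2}\times 100\%
             = \frac{22.4 - 3.4}{(22.4 + 3.4)/2}\times 100\%
             = \frac{19}{12.9}\times 100\% \approx 147.29\%
```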

Opportunistic communication between two encountered nodes is commonly established using a radio technology such as Wi-Fi or Bluetooth. One issue involved in opportunistic communication is the trade-off between connection time and the probability of resource consumption. This paper presents a comprehensive study on density analysis for decentralized distributed opportunistic communication using Wi-Fi technology. In this work, the contact probability and energy efficiency for varying node densities in a particular area are studied and analysed. The contribution of this work is the analysis of the impact of density on the connection probability and on resources, as well as a simulation study framework to analyse contact events from the viewpoint of energy consumption. The study provides detailed contact information, such as contact probability based on node density and transmission range in a particular area, as well as the beacon-exchange process as an element of channel utilization and energy consumption. An evaluation of the influence of the various parameters on each other and, ultimately, on system performance is also presented.
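To make the notion of contact probability versus node density and transmission range concrete, here is a minimal Monte Carlo sketch under a uniform-placement assumption; the paper's exact density and mobility model is not reproduced, and the parameter values are illustrative.

```python
# Monte Carlo sketch: probability that a node has at least one neighbour within
# transmission range r when n nodes are placed uniformly at random in an L x L area.
# The uniform-placement model and parameter values are assumptions for illustration.
import random

def contact_probability(n, L, r, trials=10_000):
    hits = 0
    for _ in range(trials):
        nodes = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(n)]
        x0, y0 = nodes[0]
        if any((x - x0) ** 2 + (y - y0) ** 2 <= r ** 2 for x, y in nodes[1:]):
            hits += 1
    return hits / trials

print(contact_probability(n=20, L=100.0, r=25.0))
```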

Although advancements in hardware solutions are growing exponentially along with the capacity of communication channels, high-quality video encoders for real-time applications are still considered an open area of research. The majority of researchers interested in video encoders direct their investigations towards motion estimation and block matching algorithms. Many algorithms that aim to reduce the total number of required mathematical operations compared to Full Search have been proposed. However, the results often converge to local minima and a significant amount of computation is still required. Therefore, in this research, a hierarchy-based block matching method that facilitates the transmission of high bit-rate videos over standard communication methods is proposed. The proposed algorithm operates in the frequency domain, where it examines the similarities between a chosen subset of frequency components, which significantly reduces the total number of comparisons and the total mathematical computations required per block.
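As a rough illustration of comparing blocks through a frequency-coefficient subset rather than all pixels, the sketch below uses a 2-D DCT and keeps only a small low-frequency corner; the transform choice, subset size and the paper's hierarchical search structure are assumptions made for illustration only.

```python
# Illustrative block comparison over a subset of frequency coefficients: each block is
# summarized by the k x k low-frequency corner of its 2-D DCT and candidate blocks are
# ranked by the distance between these signatures. Not the paper's exact hierarchy.
import numpy as np
from scipy.fft import dctn

def block_signature(block, k=4):
    # Keep only the k x k low-frequency DCT coefficients of the block.
    return dctn(block, norm='ortho')[:k, :k]

def best_match(target_block, search_blocks, k=4):
    sig = block_signature(target_block, k)
    costs = [np.sum(np.abs(sig - block_signature(b, k))) for b in search_blocks]
    return int(np.argmin(costs))          # index of the best-matching candidate block
```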

This study was conducted to explore the role of two color features in improving the performance of existing No-Reference Image Quality Assessment algorithms for Contrast-Distorted Images (NR-IQA-CDI). The color features used were colorfulness and naturalness of color, expressed in the CIELab and CIELuv color spaces. The test images were taken from public benchmark databases that contain contrast-distorted images: TID2013, CID2013 and CSIQ. The exploration was conducted in two stages: a preliminary stage and a comprehensive stage. The results of the preliminary study showed that the colorfulness and naturalness-of-color features can improve the prediction of the human opinion score, which otherwise relies mainly on brightness-only contrast. These results motivated a more comprehensive study in which the Natural Scene Statistics (NSS) of these two features were estimated by modelling the probability distribution function (pdf) of 16,873 test images from a public database called SUN2012. The results, based on k-fold cross-validation with k ranging from 2 to 10, showed that the performance of NR-IQA-CDI can be improved by adding the NSS of these features.
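To show the kind of colour statistic involved, here is a sketch of one widely used colorfulness measure (Hasler–Süsstrunk), computed from RGB opponent channels; the paper expresses its features in CIELab/CIELuv, so this is only a stand-in for illustration, not the exact feature used.

```python
# Sketch of the Hasler-Susstrunk colorfulness measure on an RGB image array.
# Used here only to illustrate a colorfulness statistic; the paper's CIELab/CIELuv
# formulations may differ.
import numpy as np

def colorfulness(rgb):
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    std = np.sqrt(np.std(rg) ** 2 + np.std(yb) ** 2)
    mean = np.sqrt(np.mean(rg) ** 2 + np.mean(yb) ** 2)
    return std + 0.3 * mean
```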
The automatic analysis and recognition of offline Arabic handwritten characters from images is an important problem in many applications. Even with the great progress of recent research in optical character recognition, a few problems remain to be solved, especially for Arabic characters. The emergence of Deep Neural Networks promises a strong solution to some of these problems. We present a deep neural network for the handwritten Arabic character recognition problem that uses convolutional neural network (CNN) models with regularization techniques, such as batch normalization, to prevent overfitting. We applied the deep CNN to the AIA9k and AHCD databases and the classification accuracies for the two datasets were 94.8% and 97.6%, respectively. A study of the network performance on the EMNIST and a form-based AHCD dataset was performed to aid in the analysis.
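As a minimal sketch in the spirit of the described model, the following builds a small CNN with batch normalization for character classification; the input size (32×32 grayscale), filter counts, dropout and the 28-class output are assumptions for illustration and may differ from the paper's architecture and hyper-parameters.

```python
# Minimal CNN-with-batch-normalization sketch for Arabic character classification.
# Architecture details here are illustrative assumptions, not the paper's exact model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes=28):
    return models.Sequential([
        layers.Input(shape=(32, 32, 1)),
        layers.Conv2D(32, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),
    ])

model = build_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```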

In this paper, we propose a framework to analyze and evaluate social networking pages based on usage data with respect to Arab mainstream news media. The paper introduces new metrics, such as Page Penetration and Ranking Index, as well as new evaluation methods. The framework considers the twenty-two Arab countries in addition to seven Facebook pages that belong to seven prominent Arab satellite channels. The proposed framework is used to evaluate countries with respect to their Internet and Facebook penetration rates, as well as the consumption of news through those pages. Results reveal that Arabs give far more credit to natively Arabic news media than to news media that merely broadcast in Arabic. Furthermore, in 65% of the Arab countries, more than 50% of Facebook users are news consumers via Facebook. Additionally, Arab countries that have suffered unrest, civil war or political crises in recent years, such as Yemen, Syria, Egypt and Libya, show higher page penetration rates.

This study reports on the construction of a one-million-word English-Arabic Political Parallel Corpus (EAPPC), which will be a useful resource for research in translation studies, language learning and teaching, bilingual lexicography, contrastive studies, political science studies and cross-language information retrieval. It describes the phases of corpus compilation and explores the corpus, by way of illustration, to discover the translation strategies used in rendering the Arabic and Islamic culture-specific terms takfīr and takfīrī from Arabic into English and from English into Arabic. The corpus consists of 351 English and Arabic original documents and their translations. A total of 189 speeches, 80 interviews and 68 letters, translated by anonymous translators in the Royal Hashemite Court, were selected and culled from King Abdullah II's official website, in addition to the textual material of the English and Arabic versions of His Majesty's book, Our Last Best Chance: The Pursuit of Peace in a Time of Peril (2011). The texts were meta-annotated, segmented, tokenized, English-Arabic aligned, stemmed and POS-tagged. Furthermore, a parallel (bilingual) concordancer was built in order to facilitate exploration of the parallel corpus. The challenges encountered in corpus compilation were the scarcity of freely available machine-readable Arabic-English translated texts and the deficiency of tools that process Arabic texts.
This work implements the Firefly Algorithm (FA) to find the best decision hyper-plane in the feature space. The proposed classifier uses 10-fold cross-validation to partition the data for the training and testing phases of classification. Five binary pattern recognition benchmark problems with different feature vector dimensions are used to demonstrate the effectiveness of the proposed classifier. We compare the FA classifier results with those of other approaches through two experiments. The experimental results indicate that the FA classifier is a competitive classification technique. The FA shows better results in three out of the four tested datasets used in the second experiment.
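The following is a conceptual sketch of optimizing a linear decision hyper-plane with a firefly search: each firefly encodes hyper-plane parameters (w, b), brightness is training accuracy, and dimmer fireflies move toward brighter ones. The population size, attractiveness parameters and the accuracy-based fitness are illustrative assumptions, not the paper's exact formulation.

```python
# Firefly-optimized linear hyper-plane sketch. X: (n_samples, n_features) array,
# y: labels in {-1, +1}. Parameter values are illustrative assumptions.
import numpy as np

def firefly_classifier(X, y, n_fireflies=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0):
    d = X.shape[1] + 1                                   # weights plus bias
    pop = np.random.uniform(-1, 1, (n_fireflies, d))

    def brightness(p):
        pred = np.sign(X @ p[:-1] + p[-1])
        return np.mean(pred == y)                        # training accuracy as fitness

    light = np.array([brightness(p) for p in pop])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:                  # move firefly i toward brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (np.random.rand(d) - 0.5)
                    light[i] = brightness(pop[i])
    return pop[np.argmax(light)]                         # best hyper-plane (w, b)
```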

Many current 10G optical systems need to be upgraded to higher data rates (for example, 40G, 100G, etc.) in order to satisfy the increased demand for higher bandwidth. However, many system providers in third-world countries have limited budgets and cannot simply replace all equipment to upgrade their systems. Thus, it is important to investigate which equipment could still be used in the upgraded system; in other words, which equipment could be used for both 10G and higher data-rate transmitters. The bandwidth of the passive modules is a crucial specification for optical communication systems. Therefore, the effect of multiplexer (MUX) and demultiplexer (DEMUX) bandwidth on the performance of hybrid 10G/40G optical communication systems is investigated in this work. Hybrid optical systems enable adding new channels with higher data rates on current 10G common equipment. Numerical simulations are conducted on eight consecutive dense wavelength division multiplexing (DWDM) channels selected on the 100-GHz ITU grid, each carrying a data rate of 10 Gbps or 40 Gbps. Different loading configurations of wavelengths and data rates are considered in this work. In addition, different MUX/DEMUX bandwidths of 40, 50, 60 and 70 GHz are used to investigate the performance of each selected hybrid system configuration. It is found that the optimal MUX/DEMUX bandwidth for all investigated hybrid configurations is 60 GHz. The hybrid system performance is evaluated for both return-to-zero (RZ) and non-return-to-zero (NRZ) pulse formats. The maximum reach of a selected hybrid configuration is also numerically investigated using a circulating loop configuration for both RZ and NRZ pulse formats. Keywords: effect of MUX/DEMUX bandwidth on signal transmission, hybrid optical fiber communication, upgrading 10G to 40G systems, dispersion and non-linearity interaction in optical fibers.

Information security is becoming more important and attracting much attention nowadays, as the amount of data being exchanged over the Internet increases. There are various techniques to secure data communication, but the best-known and most widely used are cryptography and steganography. Cryptography changes data into another form that is unreadable by anyone except the intended receiver. Steganography hides the existence of secret data in a cover medium, so that no one can detect the hidden data except the authorized receiver. In this paper, we propose a new technique for securing data communication systems by combining cryptography and steganography. The cryptographic algorithm used in this paper is the Modified Jamal Encryption Algorithm (MJEA), a symmetric 64-bit block encryption algorithm with a 120-bit key. For steganography, we designed an enhanced form of the Least Significant Bit (LSB) algorithm with a 128-bit steg-key. The performance of the proposed technique has been evaluated through several experimental tests, such as an imperceptibility test, an embedding capacity test and a security test. For this purpose, the proposed technique was applied to several 24-bit colored PNG cover images. All experimental results proved the strength of the proposed algorithm in securing the transmission of data over insecure channels and protecting it against attack. Furthermore, the simulation results show the superiority of our proposed algorithm over other algorithms in terms of PSNR and embedding capacity.
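For readers unfamiliar with the embedding side, here is a basic LSB embedding sketch that hides ciphertext bits in the least significant bit of each colour channel; the paper's enhanced LSB variant with its 128-bit steg-key, and the MJEA cipher itself, are not reproduced here.

```python
# Basic LSB embedding sketch (illustration only; not the paper's enhanced variant).
# cover: uint8 NumPy image array; payload_bytes: ciphertext bytes to hide.
import numpy as np

def embed_lsb(cover, payload_bytes):
    flat = cover.reshape(-1).copy()
    bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)
```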

Orthogonal frequency division multiplexing (OFDM) is a promising candidate for cognitive radio transmission. OFDM supports high data rates that are robust to channel impairments. However, one of the biggest problems for OFDM transmission is the high out-of-band radiation which results from the sidelobes of the OFDM sub-carriers. These sidelobes are a source of interference to neighbouring transmissions. This paper focuses on reducing out-of-band radiation by reading and extracting the radiated power in the sidelobes. This is done by extending the time-domain OFDM signal with zeros on both sides. The resulting signal is then transformed to the time domain and the extended samples are removed to obtain the N samples of the time-domain signal representing the out-of-band radiated signal. The resulting signal is Fourier transformed and the high-frequency sub-carriers are removed to obtain pilots that are inverted and added to the original OFDM data sub-carriers, thereby reducing the Adjacent Channel Interference (ACI) that affects adjacent systems. The added signal acts as a noise signal to the desired OFDM signal and degrades the BER performance of the desired system, so a weighting factor is applied to the added signal in order to obtain better BER performance with good out-of-band radiation reduction. Matlab/Simulink simulation is adopted to assess the proposed technique with different weighting factors and different frequency separations between the desired signal and the adjacent one. For 0 dB attenuation of the added signal, a 10 dB reduction in out-of-band radiation is obtained, while a 6 dB reduction is obtained when the weighting factor reduces the input signal power by 3 dB. BER performance is improved by performing the reduction technique and depends on the frequency distance between the adjacent signal and the desired one.
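To make the notion of sidelobe (out-of-band) power concrete, here is a sketch of the measurement step only: zero-padding one OFDM symbol in time and inspecting the oversampled spectrum. The paper's cancellation-signal construction and weighting step are not reproduced, and the sub-carrier counts are illustrative assumptions.

```python
# Measure the out-of-band (sidelobe) power of one OFDM symbol with a partially
# occupied sub-carrier grid, by zero-padding the time-domain symbol before the FFT.
import numpy as np

N, M = 64, 512                                   # sub-carriers, zero-padded FFT length
active = np.arange(-16, 16)                      # 32 active sub-carriers around DC
X = np.zeros(N, complex)
X[active % N] = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], active.size)  # QPSK data

x = np.fft.ifft(X)                               # one time-domain OFDM symbol
spectrum = np.fft.fftshift(np.fft.fft(x, M))     # zero-padded -> fine frequency grid
freqs = np.fft.fftshift(np.fft.fftfreq(M)) * N   # frequency axis in sub-carrier units

in_band = (freqs >= -16.5) & (freqs <= 15.5)     # band edges of the active carriers
p = np.abs(spectrum) ** 2
oob_ratio_db = 10 * np.log10(p[~in_band].sum() / p[in_band].sum())
print(f"out-of-band to in-band power ratio: {oob_ratio_db:.1f} dB")
```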

Recently, interest in wireless power transfer (WPT) has significantly increased due to its attractive applications. The power transfer efficiency and communication range of most existing WPT systems are still limited, owing to many technical challenges and regulatory limitations. This requires more research and technical effort to overcome the current limitations and make WPT systems much more efficient and widely used. This paper aims at reviewing recent advances and research progress in the area of WPT in order to address current challenges and future research directions. To this end, an introduction to WPT is provided and the main research themes of WPT in free space and in lossy media are discussed. Additionally, the benefits of using split-ring-resonator WPT in conducting lossy media are investigated. This will be very helpful in boosting WPT in lossy media and inspiring more optimized structures for further improvement.

The cooperative Q-learning approach allows multiple learners to learn independently and then share their Q-values with each other using a Q-value sharing strategy. A main problem with this approach is that the solutions of the learners may not converge to optimality, because the optimal Q-values may not be found. Another problem is that some cooperative algorithms perform very well on single-task problems, but quite poorly on multi-task problems. This paper proposes a new cooperative Q-learning algorithm called the Bat Q-learning algorithm (BQ-learning) that implements a Q-value sharing strategy based on the Bat algorithm. The Bat algorithm is a powerful optimization algorithm that increases the possibility of finding the optimal Q-values by balancing the exploration and exploitation of actions through tuning the parameters of the algorithm. The BQ-learning algorithm was tested using two problems: the shortest path problem (a single-task problem) and the taxi problem (a multi-task problem). The experimental results suggest that BQ-learning performs better than single-agent Q-learning and some well-known cooperative Q-learning algorithms.
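As a minimal sketch of the sharing step in cooperative Q-learning, the code below combines the learners' Q-tables by a weighted average; in the paper's BQ-learning the combination is driven by the Bat algorithm rather than the fixed weights assumed here.

```python
# Q-value sharing sketch: every learner adopts a weighted average of all Q-tables.
# q_tables: list of (num_states x num_actions) arrays learned independently.
# The fixed sharing weights are an assumption; BQ-learning tunes them via the Bat algorithm.
import numpy as np

def share_q_values(q_tables, weights):
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                     # normalize the sharing weights
    shared = sum(w * q for w, q in zip(weights, q_tables))
    return [shared.copy() for _ in q_tables]     # each learner receives the shared table
```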

The globe is generating a high volume of data in all domains, such as social media, industry, stock markets and healthcare systems, and most of this data volume has been generated in the past two years. This massive amount of data can bring benefits and knowledge to individuals, governments and industries and assist in decision making. In healthcare, an enormous volume of data is generated by healthcare providers and stored in digital systems; hence, data are more accessible for reference and future use. The ultimate vision for working with health big data is to support the process of improving the quality of service of healthcare providers, reducing medical mistakes and promoting consultation, in addition to providing answers when needed. This paper provides a critical review of some applications of big data in healthcare, such as the flu-prediction project by the Institute of Cognitive Sciences, which combines social media data with governmental data. The project aims to provide swift responses to flu-related questions and should study human multi-modal representations, such as text, voice and images. Moreover, integrating social media data with governmental health data could create some challenges, because governmental health data are considered more accurate than subjective opinions on social media. Another attempt to utilize big data in healthcare is Google Flu Trends (GFT). GFT collects search queries from users to predict flu activity and outbreaks. GFT performed well for the first two to three years; however, it started to perform worse from 2011 onwards due to changes in people's behaviour, as GFT did not update the prediction model based on new data released by the US Centers for Disease Control and Prevention (CDC). On the other hand, ARGO (Auto Regression with Google) performed better than all previously available influenza models, because it adjusts to changes in people's behaviour and relies on current publicly available data from Google search and the CDC. This research also describes, analyzes and reflects on the value of big data in healthcare. Big data is introduced and defined based on the most widely agreed terms. The paper also presents the big data revenue forecast for the year 2017 and historical revenue in three main domains: services, hardware and software. The big data management cycle is reviewed and the main aspects of big data in healthcare (volume, velocity, variety and veracity) are discussed. Finally, some challenges that face individuals and organizations in utilizing big data in healthcare are discussed, such as data ownership, privacy, security, clinical data linkage, storage and processing issues and skills requirements.

Preparing course timetables for universities is a search problem with many constraints. Exhaustive search techniques could in theory be used to develop course timetables for academic departments, but unfortunately these techniques are computation-intensive, since the search space is very large, and are therefore impractical. In this paper, Genetic Algorithms (GAs) are utilized to build an automated course timetabling system. The system is designed for any academic department. The proposed timetabling system requires minimal effort from the administrative staff to prepare the course timetable. Moreover, the prepared course timetable considers faculty members' preferences, students' needs and the available resources, such as classrooms and laboratories, with optimal utilization. The proposed timetabling process is divided into three stages. The first stage is data collection. In this stage, the administrative staff, usually the head of the department, is responsible for preparing the required data, such as the names of the faculty members and their course and laboratory preferences, ordered by some priority scheme. The number and type of theoretical and practical courses are also fed to the system based on statistics about student numbers and previous course timetable history. The system is also fed with the number of lecture rooms allocated to the department and the number of labs, with information about the theoretical courses they can serve. In the second stage, the program generates an initial set of suggested schedules (chromosomes). Each chromosome represents a solution to the problem, but is usually not satisfactory. Finally, the proposed timetabling system searches for a good solution that best satisfies the interests of the department according to a cost function: the GA searches for a satisfactory course timetable based on a pre-defined criterion, as sketched below. The system has been developed and tested using benchmark datasets from the International Timetabling Competition (ITC2007) and for the Computer Engineering Department at Yarmouk University. In both cases, the algorithm showed very satisfactory results.
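The sketch below shows a highly simplified version of the GA encoding described above: a chromosome assigns each lecture a (timeslot, room) pair and the cost function counts room clashes. The toy cost function, population sizes and operators are assumptions for illustration; the actual system also scores faculty preferences, student needs and lab compatibility.

```python
# Toy GA sketch for course timetabling: chromosome = one (timeslot, room) pair per
# lecture; cost = number of room clashes; elitism + one-point crossover + mutation.
import random

LECTURES, TIMESLOTS, ROOMS = 30, 25, 5

def random_chromosome():
    return [(random.randrange(TIMESLOTS), random.randrange(ROOMS)) for _ in range(LECTURES)]

def cost(chrom):
    # Count pairs of lectures scheduled in the same room at the same time.
    return sum(1 for i in range(LECTURES) for j in range(i + 1, LECTURES)
               if chrom[i] == chrom[j])

def evolve(pop_size=50, generations=200, mutation_rate=0.05):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        nxt = pop[:10]                                    # elitism: keep the 10 best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:25], 2)             # parents from the better half
            cut = random.randrange(1, LECTURES)
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [(random.randrange(TIMESLOTS), random.randrange(ROOMS))
                     if random.random() < mutation_rate else gene for gene in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```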
This paper presents a multi-modal biometric system implemented in MATLAB. The system fuses fingerprint and hand geometry at the matching-score level by applying a proposed modified weighted-sum rule. The fingerprint system was tested using five FVC databases and the hand geometry system was tested using the COEP database. The multi-modal system was then tested by merging each FVC database with the COEP database. The experimental results show significant improvement in the multi-modal system, with an average EER of 3.27%, compared with 8.86% and 8.89% for the fingerprint and hand geometry systems, respectively.
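For illustration of matching-score-level fusion by weighted sum, the sketch below min-max normalizes the two matchers' scores and combines them; the specific weights and the paper's modification of the weighted-sum rule are not reproduced, and the 0.6/0.4 split is an illustrative assumption.

```python
# Weighted-sum score fusion sketch (illustration only; not the paper's modified rule).
import numpy as np

def min_max(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse(fingerprint_scores, hand_geometry_scores, w_fp=0.6, w_hg=0.4):
    # Normalize each modality's scores, then combine with fixed illustrative weights.
    return w_fp * min_max(fingerprint_scores) + w_hg * min_max(hand_geometry_scores)
```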

Research on Arabic Natural Language Processing (NLP) faces many problems due to language complexity, the lack of machine-readable resources and the lack of interest among Arab researchers. One field in which research has started to appear is Question Answering. Although some research has been done in this area, few systems have proved effective in producing exact, relevant answers. One of the issues affecting the accuracy of producing correct answers is the proper tagging of entities and the proper analysis of the user's question. In this research, a set of 60+ tagging rules, 15+ question analysis rules and 20+ question patterns were built to enhance the answer generation for natural language questions posed over corpora collected from different sources. A QA system was built and experiments showed good results, with an accuracy of 78%, a recall of 97% and an F-measure of 87%.
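The reported figures are mutually consistent under the standard F-measure definition if the 78% figure is read as precision, which is an assumption on our part since the abstract calls it accuracy:

```latex
% Standard F-measure; solving for the implied precision from the reported R = 0.97, F = 0.87.
F = \frac{2PR}{P+R}
\quad\Rightarrow\quad
P = \frac{F\,R}{2R - F} = \frac{0.87 \times 0.97}{1.94 - 0.87} \approx 0.79
```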

In this paper, a diamond-shaped pilot arrangement for OFDM channel estimation is investigated. Such an arrangement decreases the number of pilots transmitted over the communication channel, which in turn increases data throughput while maintaining acceptable accuracy of the channel estimation. An adaptive antenna array (AAA) is combined with orthogonal frequency division multiplexing (OFDM) to combat intersymbol interference (ISI) and directional interference. The optimum beamformer weight set is obtained based on the minimum bit error rate (MBER) criterion in diamond-type pilot-assisted 3GPP Long Term Evolution (LTE) OFDM systems under a multipath fading channel. The simulation results show that quadrature phase shift keying signaling based on the MBER technique utilizes the antenna array elements more intelligently than the standard minimum mean square error (MMSE) technique.
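As a toy illustration of why a diamond pilot pattern uses fewer pilots than a rectangular grid, the sketch below staggers the pilot sub-carriers on alternate pilot symbols; the spacings are illustrative assumptions, not the LTE values analysed in the paper.

```python
# Toy diamond-shaped pilot grid: pilots every df sub-carriers on every dt-th symbol,
# shifted by half the frequency spacing on alternate pilot symbols.
# Spacings and grid size are illustrative assumptions.
import numpy as np

def diamond_pilot_mask(num_symbols=14, num_subcarriers=72, dt=4, df=6):
    mask = np.zeros((num_symbols, num_subcarriers), dtype=bool)
    for s in range(0, num_symbols, dt):
        offset = (df // 2) if (s // dt) % 2 else 0        # stagger alternate pilot symbols
        mask[s, offset::df] = True
    return mask

print(diamond_pilot_mask().sum(), "pilot positions in the example grid")
```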