Papers by Yousry AbdulAzeem

Sensors
Terminal neurological conditions affect millions of people worldwide and hinder them from performing their daily tasks and movements normally. A brain-computer interface (BCI) is the best hope for many individuals with motor deficiencies: it can help patients interact with the outside world and handle their daily tasks without assistance. Machine learning-based BCI systems have therefore emerged as non-invasive techniques for reading signals from the brain and interpreting them into commands that help such people perform diverse limb motor tasks. This paper proposes an improved machine learning-based BCI system that analyzes EEG signals obtained from motor imagery to distinguish among various limb motor tasks, based on BCI competition III dataset IVa. The proposed EEG signal-processing pipeline performs the following major steps. The first step uses a meta-heuristic optimization technique, the whale optimization algorithm (WOA), to select...
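To illustrate the kind of wrapper the first step describes, below is a hedged sketch of binary whale optimization driving feature selection. The population size, iteration count, and toy fitness function are assumptions; the paper's actual objective (classifier accuracy on motor-imagery EEG features) is not reproduced here.

```python
import numpy as np

def woa_feature_select(fitness, n_features, n_whales=10, n_iter=30, seed=0):
    """Simplified binary whale optimization for feature selection.

    An illustrative reduction of WOA (shrinking-encircling and spiral moves
    over continuous positions, thresholded at 0.5 into a feature mask) --
    not the paper's exact implementation.
    """
    rng = np.random.default_rng(seed)
    pos = rng.random((n_whales, n_features))       # whale positions in [0, 1]
    mask = lambda p: p > 0.5                       # position -> feature mask
    scores = np.array([fitness(mask(p)) for p in pos])
    best_pos, best_score = pos[scores.argmax()].copy(), scores.max()
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)                 # decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random(n_features) - a
            C = 2 * rng.random(n_features)
            if rng.random() < 0.5:                 # shrinking-encircling move
                ref = best_pos if np.abs(A).mean() < 1 else pos[rng.integers(n_whales)]
                pos[i] = ref - A * np.abs(C * ref - pos[i])
            else:                                  # spiral (bubble-net) move
                l = rng.uniform(-1, 1)
                pos[i] = (np.abs(best_pos - pos[i]) * np.exp(l)
                          * np.cos(2 * np.pi * l) + best_pos)
            pos[i] = np.clip(pos[i], 0.0, 1.0)
            s = fitness(mask(pos[i]))
            if s > best_score:                     # track best mask ever seen
                best_pos, best_score = pos[i].copy(), s
    return mask(best_pos)

# Toy fitness: reward the 3 "informative" features, penalize the rest.
informative = np.zeros(8, dtype=bool)
informative[:3] = True
score = lambda m: float((m & informative).sum()) - 0.2 * float((m & ~informative).sum())
selected = woa_feature_select(score, n_features=8)
```

In a real pipeline the fitness call would wrap cross-validated classifier accuracy on the selected EEG features, which is far more expensive than this toy score.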

PeerJ Computer Science
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is difficult, time-consuming, routine, and repetitive. Medical image analysis can be a useful method for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks, among other technologies, are promising tools for medical image recognition and classification. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and th...

PeerJ Computer Science
Many people worldwide suffer from mental illnesses such as major depressive disorder (MDD), which affect their thoughts, behavior, and quality of life. Suicide is regarded as the second leading cause of death among teenagers when treatment is not received. Twitter is a platform where people express their emotions and thoughts about many subjects. Many studies, including this one, suggest using social media data to track depression and other mental illnesses. Even though Arabic is widely spoken and has a complex syntax, depression-detection methods have not been applied to the language. An Arabic tweet dataset must first be scraped and annotated. This study then proposes a complete framework for categorizing tweet inputs into two classes (such as Normal or Suicide). The article also proposes an Arabic tweet preprocessing algorithm that contrasts lemmatization, stemming, and various lexical analysis methods. Experiments are conducted using Twitter data scraped from the Internet....
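As a sketch of the preprocessing such a framework contrasts, the snippet below normalizes a raw tweet: it strips URLs, mentions, hashtag signs, and diacritics, unifies common Arabic letter variants, and applies a deliberately light, hypothetical prefix-stripping "stemmer". The regexes and the sample tweet are illustrative assumptions, not the paper's exact algorithm.

```python
import re

TASHKEEL = re.compile(r'[\u0617-\u061A\u064B-\u0652]')   # Arabic diacritics
NOISE = re.compile(r'(https?://\S+)|(@\w+)|#')           # URLs, mentions, '#'

def normalize_tweet(text):
    text = NOISE.sub(' ', text)
    text = TASHKEEL.sub('', text)
    # Unify common letter variants (alef forms, teh marbuta, alef maqsura).
    text = (text.replace('أ', 'ا').replace('إ', 'ا').replace('آ', 'ا')
                .replace('ة', 'ه').replace('ى', 'ي'))
    tokens = re.findall(r'[\u0621-\u064A]+', text)        # keep Arabic words only
    # Hypothetical light stemming: strip a definite-article prefix.
    return [t[2:] if t.startswith('ال') and len(t) > 4 else t for t in tokens]

tokens = normalize_tweet('أشعر بالحزن الشديد اليوم @user http://t.co/x #حزن')
```

A full pipeline would then compare this against proper Arabic lemmatizers and stemmers, as the abstract describes, before feeding the tokens to the classifier.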

IEEE Access, 2018
Mathematical models have been ubiquitously employed in various applications. One such application, which arose in the past few decades, is cerebral tumor growth modeling. Simultaneously, medical imaging techniques such as magnetic resonance imaging, computed tomography, and positron emission tomography have witnessed great developments and become the primary clinical procedures in tumor diagnosis and detection. Studying tumor growth via mathematical models built from medical images is an important application that is believed to play a significant role in cancer treatment by predicting tumor evolution, quantifying the response to therapy, and enabling effective planning of chemotherapy and/or radiotherapy. In this paper, we focus on the macroscopic growth modeling of brain tumors, mainly glioma, and highlight the current achievements of state-of-the-art methods. In addition, we discuss some challenges and perspectives that can further promote research in this field. INDEX TERMS Mathematical modeling, cerebral tumors, glioma growth, macroscopic models, diffusive model, biomechanical model, chemotherapy, radiotherapy.
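For reference, the diffusive model that anchors the macroscopic approaches surveyed here is commonly written in the Fisher-Kolmogorov (reaction-diffusion) form below; the notation is the conventional one from that literature, not necessarily this paper's:

```latex
% Reaction-diffusion (proliferation-invasion) glioma growth model
\frac{\partial c}{\partial t}
  = \underbrace{\nabla \cdot \big( D(\mathbf{x}) \, \nabla c \big)}_{\text{invasion (diffusion)}}
  + \underbrace{\rho \, c \, (1 - c)}_{\text{logistic proliferation}}
```

where c(x, t) is the normalized tumor cell density, D(x) a spatially varying diffusivity (typically larger in white matter than in grey matter), and ρ the proliferation rate; biomechanical models couple this equation with tissue mechanics to capture the mass effect.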

Scientific Reports, 2017
Reaction diffusion is the most common growth modelling methodology due to its simplicity and consistency with the biological tumor growth process. However, current extensions of the reaction diffusion model lack one or more of the following: efficient inclusion of treatment effects, accounting for the viscoelasticity of brain tissues, and guaranteed stability of the numerical solution. We propose a new model to overcome these drawbacks. Guided by directional information derived from diffusion tensor imaging, our model relates tissue heterogeneity to the absorption of chemotherapy, adopts the linear-quadratic term to simulate the radiotherapy effect, employs the Maxwell-Weichert model to incorporate brain viscoelasticity, and ensures the stability of the numerical solution. The performance is verified through experiments on synthetic and real MR images. Experiments on 9 MR datasets of patients with low grade gliomas undergoing surgery with different treatment regimens are carried out and validated using the Jaccard score and Dice coefficient. The growth simulation accuracies of the proposed model are in the ranges [0.673, 0.822] and [0.805, 0.902] for Jaccard scores and Dice coefficients, respectively. The accuracies decrease by up to 4% and 2.4% when ignoring treatment effects and the tensor information, while brain viscoelasticity has no significant impact on the accuracies.

Gliomas are primary brain tumors that arise from the glial cells due to disruption of normal brain cell growth. Gliomas make up approximately 30% of brain and central nervous system tumors and 80% of all malignant brain tumors [1]. The World Health Organization (WHO) divides glioma, according to the degree of malignancy and other factors, into four grades, I to IV [2]. Grades I and II (known as low grade glioma, LGG) tend to be less malignant and slow-growing. These tumors account for about 25% of all glioma patients, who may survive for many years (3-8) and have a high quality of life during that period [3]. On the other hand, grades III and IV, known as high grade glioma (HGG), are highly malignant tumors that quickly lead to death. HGG, particularly glioblastoma multiforme, grows very fast and invades surrounding tissue. Unlike LGG, the prognosis of HGG is poor and the tumor is likely to recur after treatment, with an average survival time of 1 year [4]. However, LGG are vulnerable to transformation to grades III and IV after a variable period of time. In a study on the transformation of LGG [5], it was observed that 60% of the patients with LGG progressed to HGG. Generally, glioma treatment comes in the form of surgery, radiotherapy, chemotherapy, or, most likely, a combination of them, with the guidance of medical imaging techniques such as magnetic resonance imaging (MRI),
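The shared reaction-diffusion core of these models can be sketched with a one-dimensional explicit finite-difference simulation. The coefficients below are arbitrary illustrative values; the paper's DTI-guided anisotropy, Maxwell-Weichert viscoelasticity, and treatment terms are omitted. The time step respects the explicit-Euler stability bound, echoing the stability guarantee the model emphasizes.

```python
import numpy as np

# 1-D sketch of the reaction-diffusion core:  dc/dt = D c_xx + rho c (1 - c)
D, rho = 0.05, 0.2            # illustrative diffusivity and proliferation rate
dx = 0.5                      # grid spacing (mm)
dt = 0.4 * dx**2 / (2 * D)    # explicit Euler is stable for dt <= dx^2 / (2D)

c = np.zeros(101)
c[50] = 0.1                   # small seeded tumor at the domain center
for _ in range(100):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2   # discrete Laplacian
    c = np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)
# the seed saturates toward c = 1 and invades outward as a travelling front
```

The full model replaces the scalar D with a DTI-derived tensor field and subtracts chemotherapy and linear-quadratic radiotherapy kill terms from the proliferation term.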

Journal of X-Ray Science and Technology, 2016
Brain tissue segmentation from magnetic resonance (MR) images is an important task for clinical use. The segmentation process becomes more challenging in the presence of noise, grayscale inhomogeneity, and other image artifacts. In this paper, we propose a robust kernelized local information fuzzy C-means clustering algorithm (RKLIFCM). It incorporates local information (both grayscale and spatial) into the segmentation process for more homogeneous segmentation. In addition, the Gaussian radial basis kernel function is adopted as a distance metric to replace the standard Euclidean distance. The main advantages of the new algorithm are: efficient utilization of local grayscale and spatial information, robustness to noise, the ability to preserve image details, freedom from parameter initialization, and high speed, since it runs on the image histogram. We compared the proposed algorithm with 7 soft clustering algorithms that run on both image histograms and image pixels to segment brain MR images. Experimental results demonstrate that the proposed RKLIFCM algorithm is able to overcome the influence of noise and achieve higher segmentation accuracy with low computational complexity.
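A minimal sketch of two of the ingredients named in the abstract, the Gaussian-kernel distance and histogram-based iteration, is given below. The local grayscale/spatial information term of the full RKLIFCM is omitted, and the parameter values and toy bimodal histogram are illustrative assumptions.

```python
import numpy as np

def kfcm_histogram(hist, levels, n_clusters=2, m=2.0, sigma=50.0, n_iter=50):
    """Kernelized fuzzy C-means over a grayscale histogram (sketch).

    Iterates on the 256 histogram bins rather than every pixel, which is
    where the speed advantage comes from. Not the full RKLIFCM: the local
    spatial-information term is left out.
    """
    levels = levels.astype(float)
    v = np.linspace(levels.min(), levels.max(), n_clusters + 2)[1:-1]  # centers
    for _ in range(n_iter):
        K = np.exp(-((levels[None, :] - v[:, None]) ** 2) / sigma ** 2)
        d = np.maximum(1.0 - K, 1e-12)        # kernel-induced distance
        u = d ** (-1.0 / (m - 1))             # fuzzy membership update
        u /= u.sum(axis=0, keepdims=True)
        w = hist[None, :] * (u ** m) * K      # histogram-weighted contributions
        v = (w * levels[None, :]).sum(axis=1) / w.sum(axis=1)
    return v, u

# Toy bimodal "image": dark background near 40, bright tissue near 200.
levels = np.arange(256)
hist = np.exp(-((levels - 40) / 10.0) ** 2) + np.exp(-((levels - 200) / 15.0) ** 2)
centers, memberships = kfcm_histogram(hist, levels)
```

On this toy histogram the two centers settle near the two modes, and each pixel would then be assigned the label of its highest-membership bin.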

Neural Computing and Applications, 2021
In the current decade, advances in health care are attracting widespread interest due to their contributions to longer and healthier lives. Alzheimer's disease (AD) is the most common neurodegenerative and dementing disease. The monetary cost of caring for Alzheimer's disease patients is expected to rise dramatically, so a computer-aided system for early and accurate AD classification becomes crucial. Deep-learning algorithms have notable advantages over traditional machine learning methods. Many recent research studies using brain MRI scans and convolutional neural networks (CNN) have achieved promising results for the diagnosis of Alzheimer's disease. Accordingly, this study proposes a CNN-based end-to-end framework for AD classification. The proposed framework achieved 99.6%, 99.8%, and 97.8% classification accuracies on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for the binary classification of AD and Cognitively Normal (CN). In multi-classification experiments, the proposed framework achieved 97.5% classification accuracy on the ADNI dataset. Keywords: AD classification · Convolutional neural network (CNN) · Magnetic resonance imaging (MRI) · Adaptive momentum estimation (Adam) · Glorot uniform weight initializer

IEEE Access, 2021
Human action recognition techniques have gained significant attention among next-generation technologies due to their specific features and high capability to inspect video sequences in order to understand human actions. As a result, many fields have benefited from human action recognition techniques. Deep learning techniques have played a primary role in many approaches to human action recognition, and transfer learning is driving a new era of learning. Accordingly, this study's main objective is to propose a framework with three main phases for human action recognition: pre-training, preprocessing, and recognition. The framework presents a set of novel techniques that are threefold: (i) in the pre-training phase, a standard convolutional neural network is trained on a generic dataset to adjust weights; (ii) this pre-trained model is then applied to the target dataset to perform the recognition process; and (iii) the recognition phase exploits convolutional neural networks and long short-term memory to apply five different architectures. Three architectures are stand-alone and single-stream, while the other two combine the first three in a two-stream style. Experimental results show that the first three architectures recorded accuracies of 83.24%, 90.72%, and 90.85%, respectively, while the last two achieved accuracies of 93.48% and 94.87%. Moreover, the recorded results outperform other state-of-the-art models in the same field. INDEX TERMS Convolutional neural network (CNN), human action recognition (HAR), long short-term memory (LSTM), spatiotemporal information, transfer learning (TL).

Bulletin of the Faculty of Engineering. Mansoura University, 2020
"Big Data" connects large-volume, complex, and growing data sets with multiple independent sources. Nowadays, big data is rapidly expanding in all science and engineering domains due to the fast evolution of data, data storage, and networked data-collection capabilities. Because of its variability, volume, and velocity, "big data mining" offers the ability to extract constructive information from huge streams of data or datasets. Data mining involves exploring and analyzing large quantities of data in order to discover patterns in big data. Frequent itemset mining is one of the most important tasks for discovering useful and meaningful patterns in large collections of data. Mining association rules from frequent patterns in big data is of interest to many industries, as it can provide guidance in decision-making processes such as cross-marketing, market-basket analysis, and promotion assortment. Techniques for discovering association rules from data have traditionally focused on identifying relationships between items that predict some aspect of human behavior, usually buying behavior. This paper provides a review of different techniques for mining frequent itemsets.
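A classic frequent-itemset miner of the kind such a review covers can be sketched with the level-wise Apriori algorithm; the basket data below are invented for illustration.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori sketch: returns {itemset: support_count}.

    Illustrates level-wise candidate generation and the Apriori pruning
    property (every subset of a frequent itemset is frequent). Real
    big-data miners (e.g. FP-growth, parallel variants) avoid this
    repeated full scan of the transactions.
    """
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}     # level-1 candidates
    frequent, k = {}, 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join frequent k-sets into (k+1)-set candidates, then prune any
        # candidate with an infrequent k-subset.
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        current = {c for c in current
                   if all(frozenset(s) in level for s in combinations(c, k))}
        k += 1
    return frequent

baskets = [{'bread', 'milk'}, {'bread', 'butter'},
           {'bread', 'milk', 'butter'}, {'milk', 'butter'}]
freq = apriori(baskets, min_support=2)
```

From the frequent itemsets, association rules such as {bread} → {milk} are then scored by confidence and lift, which is the decision-support step the abstract mentions.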

IEEE Access, 2019
Service-oriented architecture (SOA) has gained great attention in enterprise information technology environments (EITE) due to its technically adaptable performance and its affordable cost. As part of a successful quality of service (QoS) scenario, providing software development in a service-based conceptual style for business companies has become a vital issue. However, this requires more hardware resources, which increases cost and complexity. The main objective of this study is to introduce a new performance-oriented integration design (POID) framework with five middleware algorithms to reliably achieve SOA constraints. The POID framework provides two features: (i) acting as a decision support system (DSS) to guide software architects and designers in building software architectures with better QoS attributes in terms of scalability and end-to-end performance, and (ii) achieving high accuracy in recommending the best composite services in simple and complex SOA integration contexts. A set of case studies based on real experiments conducted in a telecom environment is demonstrated. The experimental results show that the POID framework achieves better accuracy (97%-98%) and average availability (92.18%-97.89%), and enhances the average response time by 17%. INDEX TERMS Enterprise information technology environments (EITE), quality of service (QoS), service level agreement (SLA), service-oriented architecture (SOA).

IEEE Access, 2019
Wireless sensor networks (WSN) have been investigated as a powerful distributed sensing platform to enhance the efficiency of embedded systems and wireless networking capabilities. Although WSN has offered unique opportunities to set the foundation for ubiquitous and pervasive computing, it suffers from several issues and challenges, such as frequently changing network topology and congestion, which affect not only network bandwidth usage but also performance. The main objective of this study is to introduce a congestion-aware clustering and routing (CCR) protocol to alleviate congestion over the network. The CCR protocol is proposed to decrease end-to-end delay and prolong the network lifetime by choosing suitable primary cluster heads (PCH) and secondary cluster heads (SCH). The experimental results demonstrate the effectiveness of the CCR protocol in satisfying quality of service (QoS) requirements, increasing the network lifetime and the number of packets sent alike. Moreover, CCR outperforms other state-of-the-art techniques in decreasing data overflow, thereby reducing network bandwidth usage. INDEX TERMS Congestion control, clustering protocols, pervasive computing, quality of service (QoS), routing protocols, ubiquitous computing, wireless sensor network (WSN).

Data & Knowledge Engineering, 2014
Distribution and uncertainty are considered among the most important design issues in database applications nowadays. Many ranking, or top-k, query processing techniques have been introduced to address the problems of communication cost and centralized processing. On the other hand, many techniques have also been developed for modeling and managing uncertain databases. Although these techniques are efficient, they do not deal with distributed data uncertainty. This paper proposes a framework that deals with both data distribution and uncertainty based on ranking queries. Within the proposed framework, communication- and computation-efficient algorithms are investigated for retrieving the top-k tuples from distributed sites. The main objective of these algorithms is to reduce the communication rounds utilized and the amount of data transmitted while achieving efficient ranking. Experimental results show that both proposed techniques have a great impact on reducing communication cost. Both techniques are efficient, but in different situations: the first is efficient with a low number of sites, while the other achieves better performance with a higher number of sites.
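The round-reduction idea behind such algorithms can be illustrated in the deterministic case: with horizontally partitioned data, the global top-k is always contained in the union of the sites' local top-k lists, so a single round of k tuples per site suffices. The paper's algorithms additionally handle uncertainty and trade off rounds against data volume; this sketch, with invented tuples, shows only the deterministic core.

```python
import heapq

def local_topk(site_tuples, k):
    # Each site ships only its k highest-scoring tuples to the coordinator.
    return heapq.nlargest(k, site_tuples, key=lambda t: t[1])

def distributed_topk(sites, k):
    # One communication round: merge the per-site candidates, re-rank.
    candidates = [t for site in sites for t in local_topk(site, k)]
    return heapq.nlargest(k, candidates, key=lambda t: t[1])

sites = [
    [('a1', 0.9), ('a2', 0.4), ('a3', 0.1)],
    [('b1', 0.8), ('b2', 0.7)],
    [('c1', 0.3), ('c2', 0.2)],
]
top2 = distributed_topk(sites, 2)   # -> [('a1', 0.9), ('b1', 0.8)]
```

Under tuple-level uncertainty the score alone no longer suffices, since a tuple's rank also depends on the existence probabilities of higher-scoring tuples, which is why the framework needs extra rounds or extra shipped state.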

International Journal of Electrical and Computer Engineering (IJECE), 2014
Distributed data processing is a major aspect of modern applications. Many applications collect and process data from distributed nodes to obtain overall results. Large data-transfer volumes and network delays make centralized data processing difficult, which represents an important problem. A very common way to address this problem is ranking queries. Ranking, or top-k, queries concentrate only on the highest-ranked tuples according to the user's interest. Another issue in most modern applications is data uncertainty. Many techniques have been introduced for modeling, managing, and processing uncertain databases. Although these techniques are efficient, they do not deal with distributed data uncertainty. This paper deals with both data uncertainty and distribution based on ranking queries. A novel framework is proposed for ranking distributed uncertain data, with a suite of novel algorithms for ranking data and monitoring updates. These algorithms help reduce the communication rounds used and the amount of data transmitted while achieving efficient and effective ranking. Experimental results show that the proposed framework has a great impact on reducing communication cost compared to other techniques.

Ranking distributed database in tuple-level uncertainty
Soft Computing, 2014
Ranking in uncertain database environments has gained great importance recently. Many techniques have been introduced to rank uncertain databases, and others to rank distributed certain databases. Unfortunately, few techniques address ranking distributed uncertain databases. This paper proposes a framework that improves ranking processing in the case of uncertain and distributed databases. In the proposed framework, new communication- and computation-efficient algorithms are investigated for retrieving the top-k tuples from distributed sites. These algorithms are applied under tuple-level uncertainty. The main concern of the proposed algorithms is to reduce the communication rounds utilized and the amount of data transmitted while achieving efficient ranking. Experimental results emphasize that both proposed algorithms have a great impact on reducing communication cost. The results also clarify that the first algorithm is efficient with a low number of sites, while the second achieves better performance with a higher number of sites.
A framework for ranking uncertain distributed database


Software Performance Engineering (SPE) has a great impact on the software life cycle, and great effort has been devoted to this area. The degree of automation of the transformation and evaluation process, the degree of standardization, and the range of performance parameters evaluated represent big challenges for SPE. This paper presents an XSLT-based framework to overcome these challenges. The framework transforms Unified Modeling Language (UML) software models into Layered Queuing Network (LQN) performance models, and can be applied to distributed object applications (web services). The distinguishing features of the proposed framework are: 1) the ability to use more than one type of UML diagram when building the software model in the first phase; 2) standardized algorithms, since XSLT/XQuery rules are applied in all framework phases; 3) two new algorithms (SAT and ADLQNT) used to achieve a high degree of automation in the transformation and evaluation process; and 4) the number of performance parameters evaluated in the last phase, such as response time, throughput, and resource utilization, before building the real program. Discussion and analysis of the proposed framework are based on an illustrative example, validating its ability to achieve the proposed goal and evaluating its performance.
Ranking in uncertain distributed database environments
Computer Engineering & …, 2012
Ranking distributed uncertain database systems: Discussion and analysis
… Engineering and Systems …, 2010
Ranking Distributed Uncertain Database Systems: Discussion and Analysis. Ali I. El-Desouky, Hesham A. Ali, Yousry M. Abdul-Azeem. Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt ...

… Systems Design and …, Jan 1, 2010
Large databases with uncertainty have become common in many applications. Ranking queries are essential tools to process these databases and return only the most relevant answers to a query, based on a scoring function. Many approaches have been proposed to study and analyze the problem of efficiently answering such ranking queries. Managing distributed uncertain databases is also an important issue; in fact, ranking queries in such systems are an open challenge. The main objective of this paper is to discuss ranking in distributed uncertain databases along with the problems it raises. Starting with uncertain data representation, query processing and query types in such systems are discussed, together with their challenges and open research areas. The top-k query is presented with its properties, as a ranking technique in uncertain data environments, covering distributed top-k and distributed ranking problems.