Papers by Mohammad Ali Balafar
A novel fuzzy-based algorithm for ONU placement in FiWi broadband access network
Optical Fiber Technology, Oct 1, 2023
Computing semantic similarity of texts by utilizing dependency graph
Journal of Intelligent Information Systems, Dec 27, 2022

Evolving Systems, Jan 12, 2023
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, insufficient data can prevent the model from being properly trained, reducing its generalizability. For example, a model trained on one CT scan dataset and tested on another predicts near-random results. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences among existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar among the sources; in other words, the extracted representations are general and not dependent on any particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when dataset domains differ or when data are insufficient.
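The abstract gives no code; as a minimal sketch of the adversarial multi-source objective it describes (the function names, probabilities, and the λ weighting are all hypothetical, not AMTLDC's actual formulation), the idea is to reward representations that solve the task while *confusing* a domain discriminator:

```python
import math

def cross_entropy(p_true: float) -> float:
    """Negative log-likelihood of the probability assigned to the true class."""
    return -math.log(max(p_true, 1e-12))

def amtldc_objective(task_probs, domain_probs, lam=0.1):
    """Toy multi-source adversarial objective (hypothetical form): minimize the
    task loss while *maximizing* the domain discriminator's loss, so the learned
    representations become source-invariant. Gradient reversal during training
    has the effect of the flipped sign on the domain term."""
    task_loss = sum(cross_entropy(p) for p in task_probs) / len(task_probs)
    domain_loss = sum(cross_entropy(p) for p in domain_probs) / len(domain_probs)
    return task_loss - lam * domain_loss

# A representation the domain discriminator cannot tell apart (p ≈ chance)
# scores better (lower) than one whose source it classifies confidently.
confused = amtldc_objective([0.9, 0.8], [0.5, 0.5])
exposed = amtldc_objective([0.9, 0.8], [0.99, 0.99])
```

With equal task performance, the domain-confused representation attains the lower objective, which is the direction the adversarial training pushes.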

DOAJ (DOAJ: Directory of Open Access Journals), Jul 17, 2021
Background: A timely diagnosis of Alzheimer's disease (AD) is crucial to obtain more practical treatments. In this article, a novel approach using Auto-Encoder Neural Networks (AENN) for early detection of AD is proposed. Method: The proposed method mainly deals with the classification of multimodal data and the imputation of missing data. The data under study involve the Mini-Mental State Examination, magnetic resonance imaging, positron emission tomography, cerebrospinal fluid data, and personal information. The natural logarithm was used for normalizing the data, the Auto-Encoder Neural Network for imputing missing data, principal component analysis for reducing the dimensionality of the data, and a Support Vector Machine (SVM) as the classifier. The proposed method was evaluated using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and 10-fold cross-validation was used to audit its detection accuracy. Results: The effectiveness of the proposed approach was studied under several scenarios considering 705 cases of the ADNI database. In three binary classification problems, that is, AD vs. normal controls (NC), mild cognitive impairment (MCI) vs. NC, and MCI vs. AD, we obtained accuracies of 95.57%, 83.01%, and 78.67%, respectively. Conclusion: Experimental results revealed that the proposed method significantly outperformed most state-of-the-art methods.
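The first two stages of the pipeline above (log normalization, then imputation of missing values) can be sketched as follows; this is a toy stand-in where column means replace the learned auto-encoder fill values, and the feature matrix is invented:

```python
import math

def log_normalize(row):
    # Natural-log normalization as described; assumes positive-valued features.
    return [None if v is None else math.log(v) for v in row]

def impute_column_means(rows):
    """Stand-in for the auto-encoder imputation step: fill each missing entry
    with its column mean (the AENN in the paper learns the fill values instead)."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)] for r in rows]

# Hypothetical 3-subject, 2-feature matrix with one missing measurement.
data = [log_normalize(r) for r in [[2.0, 10.0], [4.0, None], [8.0, 90.0]]]
filled = impute_column_means(data)
```

After this step the completed matrix would go to PCA and then to the SVM classifier, per the abstract.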
EDCWRN: efficient deep clustering with the weight of representations and the help of neighbors
Applied Intelligence, Jul 5, 2022

Gene selection for tumor classification using a novel bio-inspired multi-objective approach
Genomics, 2018
Identifying informative genes has always been a major step in microarray data analysis, and the complexity of various cancer datasets keeps this issue challenging. In this paper, a novel bio-inspired multi-objective algorithm is proposed for gene selection in microarray data classification, specifically in the binary domain of feature selection. The presented method extends the traditional Bat Algorithm with refined formulations, effective multi-objective operators, and novel local search strategies that employ social learning concepts in designing random walks. A hybrid model using the Fisher criterion is then applied to three widely used microarray cancer datasets to explore significant biomarkers, which reveals the effectiveness of the proposed method for genomic analysis. Experimental results unveil new combinations of informative biomarkers that accord with findings in other studies.
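In the binary feature-selection domain named above, a bat's continuous velocity is typically mapped to a gene-selection bit via a sigmoid transfer function. A generic sketch of that one update (the paper's refined formulations and social-learning random walks are not reproduced here):

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gene_selection_mask(velocities, rng):
    """One generic binary Bat Algorithm position update: each gene is kept (1)
    or dropped (0) with probability given by a sigmoid transfer of its velocity.
    Velocities here are made-up values, not output of the paper's method."""
    return [1 if rng.random() < sigmoid(v) else 0 for v in velocities]

rng = random.Random(42)
mask = gene_selection_mask([2.0, -2.0, 0.0, 4.0, -4.0], rng)
```

Strongly positive velocities make a gene very likely to be selected, strongly negative ones very likely to be dropped, which is how the continuous search dynamics drive the binary subset choice.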
Trust-aware and energy-efficient data gathering in wireless sensor networks using PSO
Soft Computing, Feb 7, 2023

Image segmentation is a critical part of clinical diagnostic tools. Medical images mostly contain noise, so their accurate segmentation is highly challenging; yet accurate segmentation of these images is essential for correct diagnosis. We propose a new method for image segmentation based on the dominant grey levels of the image and Fuzzy C-Means (FCM). In the postulated method, the colour image is converted to a grey-level image and a stationary wavelet transform is applied to decrease noise; the image is then clustered using ordinary FCM, after which clusters with error above a threshold are split into two sub-clusters. This process continues until no such erroneous clusters remain. The dominant connected component of each cluster is obtained, if one exists, and from these the n biggest connected components are selected, where n is the considered number of clusters. The averages of the grey levels of the n selected components in the grey-level image are taken as the dominant grey levels and used as cluster centres. Eventually, the image is clustered using the specified cluster centres. Experimental results demonstrate the effectiveness of the new method.
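The core idea of clustering grey levels and taking cluster means as "dominant grey levels" can be illustrated with a tiny 1-D k-means (a crisp stand-in for the FCM clustering in the text; the grey values are invented and the wavelet/connected-component stages are omitted):

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means as a stand-in for FCM: the returned cluster means
    play the role of the 'dominant grey levels' used as cluster centres."""
    # Spread the initial centres across the sorted value range.
    centres = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centres]
        for v in values:
            # Assign each grey value to its nearest centre.
            groups[min(range(len(centres)), key=lambda i: abs(v - centres[i]))].append(v)
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return sorted(centres)

# Hypothetical grey levels: a dark region around 11 and a bright one around 201.
grey = [10, 12, 11, 200, 198, 205]
centres = kmeans_1d(grey, k=2)  # dominant grey levels of the two regions
```

In the actual method these centres would seed a final FCM pass over the image.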

Knowledge graph-based recommendation system enhanced by neural collaborative filtering and knowledge graph embedding
Ain Shams Engineering Journal, Apr 1, 2023
Recommendation systems are an important and undeniable part of modern systems and applications. Recommending items to the users most likely to buy or interact with them is a modern solution for AI-based applications. In this article, a novel architecture is used that utilizes pre-trained knowledge graph embeddings from different approaches. The proposed architecture consists of several stages with various advantages. In the first step, a knowledge graph is created from the data, since multi-hop neighbours in this graph address the ambiguity and redundancy problems. Knowledge graph representation learning techniques are then used to learn low-dimensional vector representations for the knowledge graph components. Next, a neural collaborative filtering framework is used that needs no extra layer weights: it depends only on matrix operations, and learning over these operations uses the pre-trained embeddings and fine-tunes them. Evaluation metrics show that the proposed method is superior to other state-of-the-art approaches. According to the experimental results, recall, precision, and F1-score improved on average by 3.87%, 2.42%, and 6.05%, respectively.
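A matrix-operations-only scoring step of the kind described — no extra weighted layers, just the pre-trained embeddings — reduces to an inner product; the vectors below are hypothetical, not learned embeddings:

```python
def score(user_vec, item_vec):
    """Recommendation score as a plain dot product of pre-trained
    knowledge-graph embeddings, matching the text's claim that the framework
    relies on matrix operations rather than additional layer weights."""
    return sum(u * i for u, i in zip(user_vec, item_vec))

# Made-up 3-dimensional embeddings for one user and two candidate items.
user = [0.2, 0.9, -0.1]
liked_item = [0.3, 0.8, 0.0]   # points in roughly the same direction
other_item = [-0.5, 0.1, 0.9]  # points elsewhere in the embedding space
```

Items whose embeddings align with the user's get the higher score and would be ranked first; fine-tuning then nudges the embeddings to sharpen exactly these scores.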
Stacking ensemble approach in data mining methods for landslide prediction
The Journal of Supercomputing, Dec 21, 2022
Using the integrated application of computational intelligence for landslide susceptibility modeling in East Azerbaijan Province, Iran
Applied Geomatics, Jan 16, 2023
EGPIECLMAC: efficient grayscale privacy image encryption with chaos logistics maps and Arnold Cat
Evolving Systems, Jan 2, 2023
Electronics, Dec 26, 2022

arXiv (Cornell University), May 27, 2022
Nowadays, a tremendous amount of human communication occurs on Internet-based communication infrastructures such as social networks, email, forums, and organizational communication platforms. Automatic prediction or assessment of individuals' personalities from their written or exchanged text would therefore be advantageous for improving their relationships. To this end, this paper proposes KGrAt-Net, a Knowledge Graph Attention Network text classifier. For the first time, it applies a knowledge graph attention network to perform Automatic Personality Prediction (APP) according to the Big Five personality traits. After some preprocessing, it first tries to acquire a thorough representation of the knowledge behind the concepts in the input text by building its equivalent knowledge graph. A knowledge graph collects interlinked descriptions of concepts, entities, and relationships in a machine-readable form; practically, it provides a machine-readable cognitive understanding of concepts and the semantic relationships among them. Then, applying the attention mechanism, the model attends to the most relevant parts of the graph to predict the personality traits of the input text. We used 2,467 essays from the Essays Dataset. The results demonstrate that KGrAt-Net considerably improves personality prediction accuracy (up to 70.26% on average). Furthermore, KGrAt-Net also uses knowledge graph embedding to enrich the classification, which makes it even more accurate (72.41% on average) in APP.
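The "pay attention to the most relevant parts of the graph" step is, at its core, a softmax over per-node relevance scores. A minimal sketch (the scores are made-up node relevances, not KGrAt-Net's actual attention logits):

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over node relevance scores: the mechanism
    by which an attention network concentrates on the most relevant parts of
    the knowledge graph."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical graph nodes; the first is scored as most relevant.
w = attention_weights([2.0, 0.5, -1.0])
```

The weights sum to one, and the highest-scoring node dominates the aggregated graph representation fed to the classifier.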
Active constrained deep embedded clustering with dual source
Applied Intelligence, Jun 22, 2022

Chaos Solitons & Fractals, Oct 1, 2021
The objective of an expert recommendation system is to trace a set of candidates' expertise and preferences, recognize their expertise patterns, and identify experts. In this paper, we introduce a multimodal classification approach for expert recommendation, BERTERS. In our proposed system, the modalities are derived from text (articles published by candidates) and graph (their co-author connections) information. BERTERS converts text into a vector using Bidirectional Encoder Representations from Transformers (BERT), and a graph representation technique called ExEm is used to extract candidate features from the co-author network. The final representation of a candidate is the concatenation of these vectors and other features, and a classifier is built on the concatenated features. This multimodal approach can be used in both the academic community and community question answering. To verify the effectiveness of BERTERS, we analyze its performance on multi-label classification and visualization tasks.
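The fusion step described above is plain concatenation. As a sketch with toy dimensions (real BERT vectors are 768-dimensional; these stand-ins are invented):

```python
def fuse(text_vec, graph_vec, extra):
    """Multimodal fusion by concatenation, as described in the abstract:
    BERT text embedding + ExEm co-author-graph embedding + scalar features.
    All values below are hypothetical placeholders."""
    return list(text_vec) + list(graph_vec) + list(extra)

candidate = fuse([0.1, 0.2], [0.3, 0.4, 0.5], [7.0])
```

The classifier then operates on this single fused vector, so each modality contributes its own coordinates to the decision.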

C_y: Chaotic YOLO for user intended image encryption and sharing in social media
Information Sciences, 2021
Social media is an inseparable part of our daily life, where we post and share photos and media related to our lives, in some cases intending to share them only among specific people. This intended, cherry-picked sharing of media needs a better solution than simply picking users. Some social media platforms do not restrict users from sharing others' timeline posts, meaning one can simply forward a post from another person to a third party with no data preservation applied. In most cases we do not intend to secure the whole medium; only its important parts need to be secured. In this work we propose a novel method based on YOLOv3 object detection and chaotic image encryption to address user-intended data preservation in social media platforms. Our proposed method is capable of automatic image encryption on either the full image or user-selected regions. Statistical and cryptographic analyses show the superiority of our method over other state-of-the-art methods, while keeping speed high enough for online and real-time use cases.
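The chaotic-encryption half of the scheme can be illustrated with a logistic-map keystream XORed over a selected region; the map parameters and region bytes below are illustrative, not the paper's actual design:

```python
def logistic_keystream(n, x0=0.7, r=3.99):
    """Keystream bytes from the logistic map x <- r*x*(1-x). Chaotic-map
    keystreams in this spirit underlie the paper's encryption; x0 and r here
    are arbitrary demo values (r near 4 keeps the map in its chaotic regime)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_bytes(data, ks):
    # XOR is its own inverse, so the same keystream encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, ks))

region = bytes([12, 200, 45, 90])  # a user-selected pixel region (toy data)
ks = logistic_keystream(len(region))
cipher = xor_bytes(region, ks)
plain = xor_bytes(cipher, ks)      # round-trips back to the original region
```

Only the detected or user-selected region needs this treatment; the rest of the image stays in the clear, which is what keeps the scheme fast enough for real-time sharing.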
A Brain MRI Segmentation Method Using Feature Weighting and a Combination of Efficient Visual Features
Chapman and Hall/CRC eBooks, Aug 15, 2023

Complexity, Feb 28, 2023
Detecting communities in complex networks can shed light on the essential characteristics and functions of the modeled phenomena. This topic has attracted researchers from both academia and industry. Among different community detection methods, genetic algorithms (GAs) have become popular. Considering the drawbacks of the currently used locus-based and solution-vector-based encodings for representing individuals, in this article we propose (1) a new node-similarity-based encoding method, MST-based encoding, to represent a network partition as an individual, which avoids the shortcomings of previous encoding schemes. We then propose (2) a new adaptive genetic algorithm for detecting communities in networks, along with (3) a new initial population generation function to improve the convergence time of the algorithm, and (4) a new sine-based adaptive mutation function that adjusts mutations according to the improvement in the fitness value of the best individual in the population pool. The proposed method combines similarity-based and modularity-optimization-based approaches to find communities in complex networks in an evolutionary framework. Besides the fact that the proposed encoding avoids meaningless mutations and disconnected communities, we show that the new initial population generation function and the new adaptive mutation function improve the convergence time of the algorithm. Several experiments and comparisons were conducted to verify the effectiveness of the proposed method using modularity and NMI measures on both real-world and synthetic datasets. The results show that the proposed method can find the communities in a significantly shorter time than other GAs while reaching a better trade-off across the different measures.
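A sine-based adaptive mutation schedule of the kind item (4) describes might look as follows; the exact formula is not given in the abstract, so this shape (and the base rate) is an assumption:

```python
import math

def adaptive_mutation_rate(improvement, base=0.1):
    """Hypothetical sine-shaped adaptive mutation schedule in the spirit of
    the paper: little or no improvement in the best individual's fitness
    pushes the rate up (exploration); steady improvement damps it back toward
    the base rate (exploitation). 'improvement' is normalized to [0, 1]."""
    improvement = min(max(improvement, 0.0), 1.0)
    return base * (1.0 + math.sin(math.pi / 2 * (1.0 - improvement)))

stagnant = adaptive_mutation_rate(0.0)   # search stalled: rate doubles
improving = adaptive_mutation_rate(1.0)  # steady progress: rate at the base
```

Tying the mutation strength to fitness progress is what lets the GA escape plateaus without destroying good partitions once convergence sets in.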

Scientific Reports, Aug 30, 2022
We propose a deep graph learning approach for computing semantic textual similarity (STS) by using semantic role labels generated by a Semantic Role Labeling (SRL) system. SRL system output poses significant challenges for graph neural networks because it does not have a graph structure. To address this, we propose a novel SRL graph built from semantic role labels and dependency grammar. For processing the SRL graph, we propose a Deep Graph Neural Network (DGNN) based on the graph-U-Net model, placed on top of transformers so that a variety of transformers can be used and the representations obtained from them processed. We investigate the effect of the proposed DGNN and SRL graph on the performance of several transformers in computing STS. For evaluation, we use the STS2017 and SICK datasets. Experimental evaluations show that using the SRL graph together with the proposed DGNN increases the performance of the transformers used in the DGNN. The problem of similarity learning is a significant issue in pattern recognition. The goal of similarity learning is to learn a measure that reflects semantic distance according to a specific task [1]. Similarity learning involves looking for similarity patterns to find complicated and implicit semantic patterns. In the text area, similarity learning is studied in the field of STS computation. STS measures the degree of semantic overlap between two texts [2].
The ability to determine the semantic relationship between two texts is an integral part of machines that understand and infer natural language [3]; hence STS is, directly or indirectly, a significant component of many applications, such as information retrieval [4], paraphrase recognition [5], textual entailment [6], question answering [7], text summarization [8], and measuring the degree of equivalence between a machine translation output and a reference translation [9], as well as text summarization evaluation, text classification, document clustering, topic tracking, essay scoring, short answer scoring, etc. STS is also closely related to paraphrase identification and textual entailment recognition. Numerous research studies have been carried out on computing the semantic similarity score between two sentences, with the goal of constructing a system able to predict results that agree as closely as possible with those assigned by human annotators. Due to the limited amount of available annotated data, the variable length of sentences, and the complex structure of natural language, computing semantic similarity remains a hard problem [10]. An effective step was taken by computing word embeddings [11], which have led to valuable results in various Natural Language Processing (NLP) tasks. In recent years, a variety of deep learning models have been proposed. These models have different architectures, so their power to detect the implicit patterns needed for recognizing similarity differs. Some models use a linear structure based on the Recurrent Neural Network (RNN) architecture, including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models [6,10,12-14]; some use a grammar tree accompanying the input text. However, it was not clear how to effectively capture the relationships among the multiple words of a sentence in a way that yields the meaning of the sentence.
Efforts to obtain embeddings for larger chunks of text had not been as successful [15]; the NLP community had not found the best supervised approach for an embedding that captures the semantics of a whole sentence [15]. With the introduction of BERT [16], the design of a new generation of powerful models began. These models are collected under the name of Transformers, such as BERT, RoBERTa [17], etc. Transformers provide general-purpose architectures for natural language understanding and natural language generation [18]. They are trained on large corpora while handling long-range dependencies between input and output sequences, and they can capture the meaning of the
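The key preprocessing move in the Scientific Reports entry above is giving SRL output the graph structure a GNN needs. One plausible construction (the triples and roles are invented examples; the paper's actual SRL graph also incorporates dependency-grammar edges):

```python
def build_srl_graph(triples):
    """Build an adjacency list from (predicate, role, argument) triples: one
    simple way to turn flat SRL output into a graph that a GNN can process."""
    adj = {}
    for pred, role, arg in triples:
        # Each edge is labeled with the semantic role linking predicate to argument.
        adj.setdefault(pred, []).append((role, arg))
    return adj

# Hypothetical SRL output for the sentence "the cat eats fish".
g = build_srl_graph([("eats", "ARG0", "cat"), ("eats", "ARG1", "fish")])
```

Node features (e.g. transformer token embeddings) would then be attached to each vertex before the graph-U-Net layers operate on this structure.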