The demand for pepper keeps increasing as the human population grows. Accurate diagnosis, reliable identification, and early detection of leaf lesions can improve farmers' income. Deep learning (DL) based techniques now assist farmers in identifying plant diseases at low cost and with minimal time complexity. Hence, this study proposes a novel optimized DL model for classifying the presence and absence of pepper leaf disease using an effective feature learning process. The proposed study comprises four major stages: pre-processing, segmentation, feature extraction, and classification. In the pre-processing stage, the input images are first resized, and the Improved Contrast Limited Adaptive Histogram Equalization (ICLAHE) technique is introduced to enhance the quality of the pepper leaf images. Then, the Kernelized Gravity-based Density Clustering (KGDC) technique is employed to segment the diseased portions from the leaf images. Finally, the Gated Self-Attentive Convoluted MobileNetV3 (GSAtt-CMNetV3) technique is proposed to extract features and classify pepper leaf disease accurately. Moreover, a novel osprey optimization algorithm (Os-OA) is introduced to tune the parameters of the proposed DL model to enhance classification performance. The proposed study is implemented on the Python platform, and the publicly available Plant-Village dataset is used for simulation. At an 80% training split, the proposed pepper leaf disease classifier achieves accuracy, precision, and recall of 97.87%, 96.87%, and 97.08%, respectively.
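The paper's ICLAHE, KGDC, and GSAtt-CMNetV3 components are novel and not publicly available, so the sketch below substitutes stock counterparts: OpenCV's plain CLAHE for the contrast-enhancement step and an off-the-shelf MobileNetV3Small backbone with a binary head for the diseased-vs-healthy decision. The file path, tile size, and clip limit are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the pipeline's first and last stages (assumed stand-ins,
# not the paper's ICLAHE or GSAtt-CMNetV3 models).
import cv2
import tensorflow as tf

def preprocess_leaf(path, size=(224, 224), clip_limit=2.0):
    """Resize a leaf image and boost contrast with CLAHE on the L channel."""
    img = cv2.resize(cv2.imread(path), size)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# Binary head on a MobileNetV3 backbone (diseased vs. healthy).
backbone = tf.keras.applications.MobileNetV3Small(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

In the paper's full pipeline, the KGDC segmentation step would sit between these two stages, and Os-OA would tune the model's hyperparameters.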
Brain tumors present a significant medical concern, posing challenges in both diagnosis and treatment. Deep learning has emerged as an evolving technique for automating the diagnostic process for brain tumors. This research paper introduces a novel deep-learning framework designed explicitly for brain tumor diagnosis. The framework encompasses various tasks: tumor detection, classification, segmentation, and survival rate prediction. The framework was applied to the BraTS dataset, an extensive collection of brain tumor images, to evaluate its effectiveness. The proposed workflow begins with data acquisition, followed by enhancement of the data using a Convolutional Normalized Mean Filter (CNMF) during pre-processing. This prepares the data for the multi-class classification performed by the novel DBT-CNN classifier model. The RU-Net2+ model is employed for precise tumor demarcation, yielding segmented regions from which features are subsequently extracted using the Cox model. These extracted features play a pivotal role in the final step, where the survival rate of patients is predicted using a logistic regression model. The experimental results showcased the exceptional performance of the proposed framework, surpassing current benchmarks in classification accuracy, tumor segmentation precision, and survival rate prediction. For high-grade glioma (HGG) tumors, the framework achieved a classification accuracy of 99.51%, while for low-grade glioma (LGG) tumors, the accuracy reached 99.28%. Tumor segmentation accuracy stood at 98.39% for HGG tumors and 99.1% for LGG tumors. The RU-Net2+ algorithm accurately predicts patient survival rates: 85.71% long-term, 72.72% medium-term, and 61.54% short-term, with corresponding Mean Squared Errors of 0.13, 0.21, and 0.31. These results provide valuable insights for medical professionals making brain tumor treatment decisions. Additionally, the framework shows promise for automating brain tumor diagnosis and enhancing patient care.

INDEX TERMS Brain tumor, MRI images, deep learning, machine learning, CNMF, RU-Net2+, DBT-CNN, BraTS.

I. INTRODUCTION
Brain tumors pose a significant health concern and can cause severe patient consequences. It is crucial to diagnose brain tumors promptly and precisely to facilitate effective treatment strategies [1]. Conventional approaches to segmenting, classifying, and predicting the risks associated with brain tumors have encountered limitations in accuracy and efficiency. Deep learning-based models have recently emerged as powerful tools in medical imaging analysis [2]. These models can significantly improve the accuracy and efficiency of brain tumor diagnosis. However, significant challenges hinder their effective deployment in clinical settings. These challenges include data quality and availability, computational complexity, inter-modality variations, model generalization, overfitting, interpretability, temporal dynamics, annotation and labeling issues, integration into clinical workflows, and ethical considerations, including data privacy and biases [3]. In this environment, there is a pressing requirement for an advanced deep learning model that can adeptly and precisely ...
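The CNMF filter and the DBT-CNN/RU-Net2+ models are specific to this paper, so the sketch below uses rough stand-ins: an ordinary normalized mean (box) filter applied to an MRI slice for the pre-processing step, and a multinomial logistic regression mapping tumor features to short-, medium-, and long-term survival classes, mirroring the final stage of the pipeline. The feature matrix and labels are random placeholders.

```python
# Sketch of the pre-processing and survival-prediction stages under assumed
# stand-ins (plain box filter, not the paper's CNMF; toy features).
import numpy as np
from scipy.ndimage import convolve
from sklearn.linear_model import LogisticRegression

def normalized_mean_filter(slice_2d, k=3):
    """Denoise one MRI slice by convolving with a normalized k x k mean kernel."""
    kernel = np.ones((k, k)) / (k * k)
    return convolve(slice_2d.astype(float), kernel, mode="reflect")

# Toy feature matrix: rows = patients, columns = features extracted from the
# segmented tumor (e.g., volume, intensity statistics); values are illustrative.
X = np.random.rand(30, 4)
y = np.random.randint(0, 3, size=30)  # 0 = short-, 1 = medium-, 2 = long-term

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
print(clf.predict(X[:5]))  # predicted survival class for five patients
```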
Pervasive computing, human–computer interaction, human behavior analysis, and human activity recognition (HAR) have grown significantly as fields. Deep learning (DL)-based techniques have recently been used effectively to predict various human actions from time series data collected by wearable sensors and mobile devices. Despite their excellent performance in activity detection, DL-based techniques still find time series data difficult to manage; problems such as heavily biased data and difficult feature extraction remain. For HAR, this research designs an ensemble of Deep SqueezeNet (SE) and bidirectional long short-term memory (BiLSTM) with an improved flower pollination optimization algorithm (IFPOA) to construct a reliable classification model from wearable sensor data. The significant features are extracted automatically from the raw sensor data by the multi-branch SE-BiLSTM. The model can learn both short-term dependencies and long-term ...
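As a rough illustration of the architecture class described (convolutional feature extraction feeding a bidirectional LSTM over windowed sensor data), here is a minimal Keras sketch. The paper's multi-branch SqueezeNet ensemble and IFPOA hyperparameter tuning are not reproduced, and the window length, channel count, and class count are assumptions.

```python
# Generic Conv1D + BiLSTM classifier for windowed wearable-sensor data
# (assumed shapes: 128-sample windows, 9 IMU channels, 6 activity classes).
import tensorflow as tf

WINDOW, CHANNELS, N_CLASSES = 128, 9, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    # Convolutional front end learns short-term motion patterns.
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    # Bidirectional LSTM captures longer-range temporal dependencies.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```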
The smart home culture is growing rapidly across the globe and driving smart home users toward smart appliances. Smart television (TV) is one such appliance embedded with smart technology. Smart TV users have their own interests in programs; however, automatic per-user program recommendation is still under-researched. Several papers have discussed recommendation systems, but for different applications. Although there is some work on recommending programs to smart TV users (single-user and multi-user), it does not discuss a smart TV camera module that captures and validates the user's image for recommending personalized programs. Hence, this paper proposes a convolutional neural network (CNN)-based personalized program recommendation system for smart TV users. To implement the proposed approach, the CNN algorithm is trained on the datasets ‘CelebFaces Attribute Dataset’ and ‘Labeled Faces in the Wild-People’ for feature extraction...
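The paper trains its CNN on the two face datasets above; the sketch below shows only the general mechanism such a system relies on: embed the captured face with a CNN, match it against enrolled users by cosine similarity, and return that user's program profile. MobileNetV2 stands in for the paper's trained network, and the user names, embeddings, and genre profiles are placeholders.

```python
# Face-based user matching for personalized recommendation (assumed stand-ins).
import numpy as np
import tensorflow as tf

embedder = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(160, 160, 3))

def embed(face_img):
    """face_img: float32 array of shape (160, 160, 3), scaled to [-1, 1]."""
    return embedder(face_img[None])[0].numpy()

# Enrolled users and their preferred genres (illustrative placeholders).
enrolled = {"alice": np.random.rand(1280), "bob": np.random.rand(1280)}
profiles = {"alice": ["news", "drama"], "bob": ["sports", "comedy"]}

def recommend(face_img):
    v = embed(face_img)
    # Pick the enrolled user whose embedding is most cosine-similar.
    best = max(enrolled, key=lambda u: np.dot(v, enrolled[u]) /
               (np.linalg.norm(v) * np.linalg.norm(enrolled[u]) + 1e-9))
    return profiles[best]
```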
Activity recognition in unmanned aerial vehicle (UAV) surveillance arises in various computer vision applications such as image retrieval, pose estimation, object detection (in still images, video frames, and videos), face recognition, and video action recognition. In UAV-based surveillance, video segments captured from aerial vehicles make it challenging to recognize and distinguish human behavior. In this research, to recognize single- and multi-human activity from aerial data, a hybrid model of histogram of oriented gradients (HOG), mask regional convolutional neural network (Mask-RCNN), and bidirectional long short-term memory (Bi-LSTM) is employed. The HOG algorithm extracts patterns, Mask-RCNN extracts feature maps from the raw aerial image data, and the Bi-LSTM network exploits the temporal relationship between frames for the underlying action in the scene. This Bi-LSTM network reduces the error rate...
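Of the three stages, the HOG descriptor is easy to show concretely; the sketch below extracts it with scikit-image from a single aerial frame (the file path and parameters are illustrative, not the paper's). The Mask-RCNN and Bi-LSTM stages would consume the frame and the sequence of per-frame features, respectively.

```python
# HOG stage of the hybrid pipeline (illustrative parameters and path).
from skimage import io, color, transform
from skimage.feature import hog

frame = io.imread("aerial_frame.jpg")  # one frame from the UAV video (placeholder)
gray = color.rgb2gray(transform.resize(frame, (256, 256)))

# Gradient-orientation descriptor of the frame; these pattern features complement
# the Mask-RCNN feature maps before the Bi-LSTM models frame-to-frame dynamics.
features, hog_image = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True)
print(features.shape)
```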
Indonesian Journal of Electrical Engineering and Computer Science
The technique of extracting important documents from massive data collections is known as information retrieval (IR). The growth of data collections, coupled with the increasing demand for high-quality retrieval results, has made traditional information retrieval approaches increasingly insufficient for providing high-quality search results. Research has therefore concentrated on information retrieval and interactive query formulation through ontologies, with a specific emphasis on enhancing the connection between information and search queries so as to bring result sets closer to users' research requirements. In the context of document retrieval technologies, IR is a process that assists researchers in extracting documents from data collections. This research discusses how to use ontology-based information retrieval approaches and techniques, taking into account the issues of ontology modelling, processing, ...
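As a toy illustration of ontology-driven query expansion in front of a retrieval step, the sketch below expands query terms through a small hand-written concept map before TF-IDF matching; a production system would query a real OWL/SPARQL ontology instead. All terms and documents are made up.

```python
# Toy ontology (concept -> related terms) used to expand a query before retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ontology = {"car": ["automobile", "vehicle"], "doctor": ["physician", "clinician"]}

def expand(query):
    terms = query.split()
    return " ".join(terms + [t for w in terms for t in ontology.get(w, [])])

docs = ["the physician examined the patient",
        "a new automobile model was unveiled",
        "stock markets closed higher today"]

vec = TfidfVectorizer()
doc_m = vec.fit_transform(docs)
q = vec.transform([expand("doctor visit")])   # "doctor" expands to its synonyms
scores = cosine_similarity(q, doc_m)[0]
print(sorted(zip(scores, docs), reverse=True)[0])  # best match via expanded query
```

Without the expansion step, the query "doctor visit" shares no term with the best-matching document; the ontology bridges that vocabulary gap.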
Big data takes us into a new epoch of data. It poses challenges to researchers through greater velocity, greater variety, and larger volumes, and software engineers are working on a variety of methods to contain the cost of the development process, since commonly used software cannot capture, manage, and process such data within an acceptable elapsed time. Furthermore, new procedures must be discovered for processing large volumes of data for optimization, data mining, and knowledge discovery. These ambitions and motivations drive researchers toward big data analytics and big data mining. Over the past few decades, different procedures have been proposed that use the MapReduce model, which reduces the search space through distributed or parallel computing, for various big data mining and analytics tasks, and these methods provide a cost-effective solution across various software development models using Bayesian approaches. In ...
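The MapReduce pattern the abstract refers to can be shown in-process: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The sketch below counts item occurrences across toy transaction records; on a real cluster the same two functions would run distributed (e.g., on Hadoop or Spark).

```python
# In-process illustration of the MapReduce pattern (toy data).
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Emit (key, 1) pairs: here, item occurrences from a transaction record.
    return [(item, 1) for item in record.split()]

def reduce_phase(key, values):
    return key, sum(values)

records = ["bread milk", "bread butter", "milk butter bread"]
shuffled = defaultdict(list)
for k, v in chain.from_iterable(map_phase(r) for r in records):
    shuffled[k].append(v)        # shuffle: group emitted values by key
counts = dict(reduce_phase(k, vs) for k, vs in shuffled.items())
print(counts)  # {'bread': 3, 'milk': 2, 'butter': 2}
```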
Objectives: To build a cost estimation model for large software products using Bayesian approaches. Methods/Statistical Analysis: A composite strategy for building software models based on a blend of data and expert judgment is tested here. This strategy relies on the well-understood and widely accepted Bayes' theorem, which has been successfully applied in other engineering areas, including to some degree in the software reliability engineering domain. However, the Bayesian approach has not been effectively exploited for building more robust software estimation models that use an adjusted blend of project data and expert judgment. The focus of this paper is to demonstrate the improvement in accuracy of the cost estimation model when the Bayesian approach is used versus the multiple regression approach. Findings: We employed a Bayesian model calibrated on a dataset of 100 datapoints and validated on a dataset of 200 datapoints (sample data); it yields a prediction accuracy of PRED(.30) = 76% (i.e., 152, or 76%, of the 200 datapoints are estimated within 30% of the actuals). The pure regression-based model calibrated on the 100 datapoints, when validated on the same 200-project dataset, yields a poorer accuracy of PRED(.30) = 53.4%.
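The PRED(.30) metric used in the findings, and the precision-weighted way a Bayesian approach blends an expert prior with a data-driven estimate, are both small enough to show directly. The sketch below is a generic normal-normal update, not the paper's calibration; all numbers are illustrative.

```python
# PRED(level) metric and a textbook normal-normal Bayesian combination.
import numpy as np

def pred(actual, estimated, level=0.30):
    """Fraction of projects whose estimate falls within `level` of the actual cost."""
    rel_err = np.abs(estimated - actual) / actual
    return np.mean(rel_err <= level)

def bayes_combine(prior_mean, prior_var, data_mean, data_var):
    """Combine an expert prior and a regression estimate, each weighted by its
    precision (inverse variance); returns the posterior mean and variance."""
    w_prior, w_data = 1 / prior_var, 1 / data_var
    post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
    return post_mean, 1 / (w_prior + w_data)

actual = np.array([100, 250, 80, 400.0])
estimated = np.array([110, 230, 120, 390.0])
print(pred(actual, estimated))             # 0.75 -> PRED(.30) = 75%
print(bayes_combine(1.0, 0.04, 1.2, 0.01)) # posterior pulled toward the data
```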