IJCSIS Papers by Journal of Computer Science IJCSIS
International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 5, September, 2025
Abstract—This paper examines the critical trade-offs between
energy efficiency and performance inherent in choosing between
Bare Metal, Real-Time Operating System (RTOS), and
Full Operating System (Full OS) architectures for Internet of
Things (IoT) devices. We review recent literature, compare
energy consumption mechanisms, analyze real-world examples,
and summarize the trends that influence architectural decisions.
This work also provides comparative diagrams, performance
tables, and case examples to guide IoT designers in selecting
an informed, energy-aware architecture.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 5, September, 2025
Abstract—Optimizing CPU scheduling is crucial for enhancing
the overall performance of operating systems, particularly in environments
that run multiple processes, where responsiveness and
efficient resource utilization are essential. Scheduling algorithms
such as Shortest Job First (SJF) and Shortest Remaining Time
First (SRTF) are theoretically effective in reducing the average
waiting and turnaround times. However, their implementation
in real-world systems is limited due to the need for precise
knowledge of each process’s CPU burst time, which is typically
unavailable during execution. Traditional methods, such as exponential
averaging, often fail to provide accurate or adaptive
predictions for dynamic workloads. This study explores how
Machine Learning (ML) can be applied to accurately predict
CPU burst times, enabling practical use of SJF and SRTF in
real-time systems. This study leverages ML to predict CPU burst
times, using a synthetic dataset mimicking the Grid Workload
Archive (GWA-T-4) to train models including K-Nearest Neighbors
(KNN), Support Vector Machines (SVM), Decision Trees,
Random Forest, XGBoost, and Artificial Neural Networks (ANN).
The ANN and ANN+SVM ensemble achieved superior accuracy
(MAE ≈ 4.12–4.13 ms, CC ≈ 0.885), significantly outperforming
the baseline (MAE = 20.67 ms). These predictions enable the
scheduler to make more informed decisions, leading to reduced
context switching, improved resource allocation, and shorter wait
and turnaround times. Moreover, this approach helps alleviate
issues such as process starvation. By embedding ML into CPU
scheduling, the research offers a practical solution to transform
the theoretical benefits of SJF and SRTF into practical, real-world
applications, contributing to the development of intelligent
and adaptive operating systems.
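
The exponential-averaging baseline the abstract contrasts the ML models with can be sketched in a few lines; the smoothing factor α and the initial estimate below are illustrative choices, not values from the paper.

```python
# Exponential averaging: the classical burst-time predictor,
# tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
def exponential_average(bursts, alpha=0.5, tau0=10.0):
    """Return the predicted burst time before each observed burst."""
    predictions = []
    tau = tau0
    for t in bursts:
        predictions.append(tau)              # prediction made before observing t
        tau = alpha * t + (1 - alpha) * tau  # update with the observed burst
    return predictions

# Example: the estimate drifts toward the recent bursts.
preds = exponential_average([6, 4, 6, 4], alpha=0.5, tau0=10.0)
# preds == [10.0, 8.0, 6.0, 6.0]
```

An ML predictor plays the same role as `exponential_average` here, but conditions on richer process features instead of only the previous estimate.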

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 4, July-August, 2025
Cloud computing has revolutionized how
organizations and individuals store, access, and interact with
data, software, and computing resources. Public cloud platforms
offer scalable, cost-efficient, and globally accessible services,
often characterized by managed infrastructure and multitenancy.
However, this technological advancement faces
escalating cyber threats and sophisticated attack vectors such as
Denial-of-Service (DoS), Man-in-the-Middle (MITM) attacks,
data breaches, and insecure APIs. These threats compromise
data integrity, availability, and confidentiality. This study
employs a descriptive research methodology, supplemented by
desktop analysis, to examine the evolving landscape of security
threats in public cloud computing. It further evaluates mitigation
strategies including encryption, access control, secure API
management, and continuous monitoring, while highlighting
persistent research gaps that demand attention for robust cloud
security frameworks.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 4, July, 2025
The proliferation of consumer Internet of Things (IoT) devices has enhanced digital connectivity in modern homes and workplaces, but it has also introduced critical security risks, especially when legacy firmware is left unpatched. This paper presents a focused security analysis of Apple TV HD (4th Gen) running tvOS 9.0, highlighting vulnerabilities associated with outdated firmware in production environments. Using tools such as Nmap and Metasploit, multiple weaknesses were discovered, including an exploitable AirPlay service port vulnerable to a Slowloris Denial of Service attack (CVE-2007-6750), and unauthorized media casting when 'Require Device Verification' was disabled. Although brute-force attacks on AirPlay passwords failed, likely due to incompatibilities with modern tools, the device still exhibited fragile authentication defaults. The study underscores two main issues: user neglect of security best practices and the continued use of unsupported firmware. These are compounded by weak enforcement mechanisms, absence of update policies, and poor user awareness. While Apple's broader ecosystem remains relatively secure, legacy devices like the Apple TV HD pose significant threats in cyberspace.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Spreading false news online has become a grave
threat to public stability, political life, and trust. Conventional
content moderation measures prove inadequate to contain the
velocity and volume with which disinformation spreads. Machine
Learning (ML) has been an invaluable aid in the automation of
fake news identification by learning patterns and cues from large
amounts of text data. This review paper provides an overview
of the main ML methods employed for fake news detection,
from classical algorithms such as Naive Bayes, Support
Vector Machines, and Decision Trees to recent deep learning
methods such as LSTM, CNN, and transformer-based models
such as BERT. This work also encompasses feature extraction
methods like TF-IDF, word embeddings, and stylometric features,
benchmarking datasets, and evaluation metrics used in the
literature. The work discusses the advantages and limitations
of existing methods and highlights open problems such as
dataset generalizability, model explainability, and adversarial
robustness. Lastly, the review outlines future directions, such as
the development of hybrid models, real-time detection systems,
and ethical considerations. This paper is intended as a starting point
for researchers as well as practitioners in creating improved and
more efficient fake news recognition systems.
Index Terms—Fake News Detection, Machine Learning, Natural
Language Processing, Text Classification, Deep Learning,
Transformer Models, Social Media Misinformation, NLP, News
Verification
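
One of the classical pipelines the review covers, TF-IDF features fed to a Naive Bayes classifier, can be sketched with scikit-learn; the toy corpus and labels below are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: labels are illustrative, not a real fake-news dataset.
texts = [
    "celebrity miracle cure shocks doctors",      # fake
    "you won a free prize click now",             # fake
    "parliament passes new budget bill today",    # real
    "city council approves road repair funding",  # real
]
labels = ["fake", "fake", "real", "real"]

# TF-IDF weights each term by in-document frequency scaled down
# by how common it is across the corpus; NB classifies the vector.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["shocking miracle prize for doctors"])[0])
```

Stylometric features or word embeddings, as discussed in the review, would replace or augment the `TfidfVectorizer` step in the same pipeline shape.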

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Communication barriers for individuals with
hearing impairments persist due to limited assistive resources.
This paper introduces Hybrid_ASL, a novel deep learning model
leveraging cross-domain transfer learning to classify American
Sign Language (ASL) hand gestures with high accuracy. Built on
a transfer learning framework, Hybrid_ASL adapts knowledge
from diverse visual domains to optimize its architecture for ASL
recognition. Trained on a dataset of 87,000 ASL images, the
model underwent iterative fine-tuning to balance accuracy and
computational efficiency. Comparative experiments against state-of-the-art architectures, including convolutional neural networks
and vision transformers, demonstrate that Hybrid_ASL achieves
an exceptional accuracy of 99.98%, with matching precision,
recall, and F1-score, while maintaining low architectural
complexity. These results highlight the efficacy of transfer
learning and model adaptation in developing robust assistive
technologies, paving the way for improved accessibility and
quality of life for the hearing-impaired community.
Index Terms—Cross-Domain Transfer Learning, ASL
Recognition, Hybrid_ASL, Deep Learning, Assistive
Technology, Hand Gesture Classification, Model Adaptation,
Vision Mamba Models, Fine-Tuning, Large-Scale Image
Recognition.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
The growth of subscription-based services has enabled brands
to build lasting customer relationships. But to retain and grow
those customers, businesses need an increasingly personalized
experience that drives repeat business. This is where adaptive
technology, machine learning (ML) combined with collaborative
filtering (CF), comes into the picture. Machine learning is a
method that teaches computer systems to perform certain tasks
by learning from data and then improving over time without
being explicitly programmed.
Recommender systems are a subfield of ML, where CF is
perhaps the most successful approach: items are
recommended to users based on behaviors and preferences of
like-minded people. Subscription businesses can use
machine learning and collaborative filtering to gather better
customer information, create more precise customer profiles,
offer tailored advice or help products, and predict which
subscribers will cancel their subscriptions. Using ML and CF,
tailored recommendations and targeted marketing campaigns
can improve the customer experience, thus boosting
satisfaction and retention. In addition, ML algorithms can
detect patterns in customer data and flag likely churn in
advance, offering proactive solutions that may help retain
at-risk subscribers. Thus,
deploying ML and CF in subscription businesses can lead to
greater personalization, lower customer churn rates, and
increased revenues. These methodologies allow businesses to
better understand and tap into the consumer market, which is
extremely important for today's generation of
subscription-based models.
Keywords
Collaborative Filtering, Boosting Satisfaction, Retention,
Subscription Businesses, Proactive Solutions
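
The user-based collaborative filtering idea described above (recommend items based on the behavior of like-minded users) can be sketched with a toy rating matrix; the matrix, users, and items are invented for illustration.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 = not rated.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],   # tastes similar to user 0
    [1, 0, 5, 4],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(R, user, item):
    """Weight other users' ratings of `item` by their similarity to `user`."""
    sims = np.array([cosine(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    rated = R[:, item] > 0              # only users who rated the item count
    weights = sims * rated
    if weights.sum() == 0:
        return 0.0
    return float(weights @ R[:, item] / weights.sum())

# User 0 has not rated item 2; the most similar user rated it low,
# so the predicted rating stays low.
print(round(predict(R, user=0, item=2), 2))
```

Churn prediction, mentioned in the same abstract, would typically use a separate supervised classifier over customer-behavior features rather than this similarity machinery.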

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Flying Ad Hoc Networks (FANETs) have emerged as a key component of UAV-based communication systems, with applications in surveillance, disaster response, and military operations. The dynamic network topology, high mobility of UAVs, and battery-life constraints make reliable, energy-efficient communication extremely challenging. Traditional handover mechanisms, especially those borrowed from Vehicular Ad Hoc Networks, typically rely on static thresholds and cannot cater to the 3D mobility and energy constraints of a FANET. This motivated the development of QEEH, a Q-Learning based Energy Efficient Handover framework designed for a FANET environment. QEEH uses reinforcement learning to make handover decisions dynamically based on signal strength, node density, and residual energy. The framework incorporates sleep, hibernate, and wake-up modes to reduce energy consumption across the network without compromising connectivity quality. NS3-based simulation results show QEEH outperforming existing protocols such as CLEA-AODV, LFEAR, and PARouting in throughput, packet delivery ratio, delay, energy consumption, and network lifetime. These results show that intelligent, adaptive, and energy-aware handover schemes can significantly enhance the stability and performance of FANETs.
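
The core Q-learning update behind a framework like QEEH can be sketched as follows; the state encoding, actions, reward, and learning parameters below are illustrative assumptions, not the paper's actual design.

```python
# One tabular Q-learning update of the kind applied to handover decisions:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Hypothetical states encode (signal strength, residual energy);
# actions are: keep the current link, or hand over to a neighbour.
Q = {
    ("weak_signal", "low_energy"):  {"stay": 0.0, "handover": 0.0},
    ("strong_signal", "ok_energy"): {"stay": 1.0, "handover": 0.2},
}
# Handing over from a weak, low-energy link earns an illustrative reward 0.5.
q = q_update(Q, ("weak_signal", "low_energy"), "handover", 0.5,
             ("strong_signal", "ok_energy"))
# q = 0.1 * (0.5 + 0.9 * 1.0 - 0.0) = 0.14
```

In a real FANET the state would be continuous (RSSI, node density, battery level), so function approximation or discretization would replace this lookup table.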

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
The early diagnosis of Alzheimer's disease remains a major challenge due to the complexity of magnetic resonance image interpretation and the limitations of existing diagnostic models. Alzheimer's disease, the most common form of dementia, is characterized by slow memory loss and a gradual decline in thinking abilities. Effective early diagnosis is therefore essential to treatment; unfortunately, the traditional diagnostic procedure, which involves analyzing magnetic resonance images, is a complex process and prone to mistakes. This paper aims to merge cognitive models with advanced deep learning techniques to enhance the diagnosis of Alzheimer's disease using a fusion model of 3-dimensional convolutional neural networks and long short-term memory networks. The proposed approach uses three-dimensional convolutional neural networks to extract intricate features from volumetric magnetic resonance images, while long short-term memory networks analyze sequential data to identify key temporal patterns that indicate the progression of Alzheimer's disease. The dataset used in this study is the Alzheimer's Disease Neuroimaging Initiative dataset, which contains magnetic resonance images labeled into four categories: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. The dataset consists of 6,400 magnetic resonance images in total, split into training (70%), validation (15%), and testing (15%) sets. The outcomes demonstrate that the hybrid model improves predictive accuracy significantly over current benchmarks. This study highlights the importance of introducing deep learning models into clinical practice, thereby providing an efficient tool for early-stage Alzheimer's disease diagnosis, ultimately improving patient outcomes through early and accurate intervention.
International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
This paper explores the development of an AI-based framework aimed at optimizing wind turbine siting, design, and operation.
Through the integration of machine learning (ML) algorithms and optimization techniques, the goal is to enhance wind energy
yield, reduce operational and maintenance costs, and improve responsiveness to environmental variability. The paper outlines a
comprehensive strategy that utilizes historical and real-time data—meteorological, geographical, infrastructural, legal, and
economic—to build a robust decision-making system. The proposed framework supports intelligent control systems, predictive
analytics, and automated feedback mechanisms, allowing wind farms to operate with greater efficiency and adaptability.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Cloud data pipelines form the backbone of modern data-driven applications by enabling scalable, real-time data movement, transformation, and analysis. However, ensuring their performance, reliability, and security in distributed environments remains a significant challenge. This survey paper explores current architectures, technologies, and practices used in building cloud data pipelines and identifies key performance bottlenecks such as latency, data inconsistency, security overhead, and lack of observability. In particular, the emerging need for efficient communication mechanisms like gRPC, dynamic credential management via orchestrators, and real-time monitoring capabilities is highlighted. Through this analysis, research gaps in the standardization of interfaces, real-time observability, and secure, scalable integration techniques are identified. The paper concludes by proposing future directions to address these gaps and enhance the performance and resilience of cloud-native data pipelines in production-grade environments.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
The rapid evolution and application of quantum
dot (QD) materials in chemistry, chemical engineering, and
medical science has created a crucial need for robust, accurate
modeling frameworks and interpretable, secure design. Before
physical products are implemented and applied in solar cells and
displays, it is important to model QDs through virtual
representations of the molecules. Even though quantum simulation
and machine learning have accelerated material screening, three
challenges remain unresolved: (1) prediction accuracy of DFT
quantum simulation is low due to noise and systematic errors, (2)
experimental measurements are inconsistent due to outliers, and
(3) most ML models lack interpretability, undermining trust
and scientific insight. To
this end, we propose a cyber-twin framework that combines
graph neural networks, specifically GINEConv, Bayesian Gaussian Process Regression, and advanced anomaly detection based
on Isolation Forests. The proposed model learns chemically
rich representations from simulated quantum dot structures,
while Bayesian inference enables uncertainty-aware predictions
calibrated against experimental variability. We simulate realistic
scenarios including 5% experimental outliers, 10% noise-based
attacks, 7% bias-targeted attacks, and 3% full corruption, and
design multi-feature anomaly scores to detect these threats. The
system achieves over 90% detection accuracy for outliers, with
a predictive R² of 0.95 and strong calibration (Pearson r =
0.87) between uncertainty and absolute error. Additionally, SHAP
analysis identifies key atomic features—such as electronegativity
and bond disorder—as dominant drivers of NIR absorption,
offering explainability aligned with chemical knowledge.
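
The Isolation Forest screening step described above can be sketched with scikit-learn; the synthetic measurements and contamination rate below are illustrative stand-ins, not the paper's data or settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for experimental QD property measurements:
# a tight cluster of consistent values plus two corrupted entries.
rng = np.random.default_rng(0)
normal = rng.normal(loc=1.5, scale=0.05, size=(200, 1))
outliers = np.array([[3.0], [0.1]])
X = np.vstack([normal, outliers])

# Isolation Forest isolates points with short random-partition paths;
# `contamination` sets the expected outlier fraction (illustrative here).
iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = iso.predict(X)   # +1 = inlier, -1 = outlier
print(labels[-2:])        # labels of the two injected outliers
```

In the framework described, the anomaly score would be built from multiple features rather than a single property value, but the fit/predict pattern is the same.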

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
There are many refinery industries in Beaumont,
Texas, and gas pipeline leak detection is a major business
challenge. Worldwide, a pipeline leak occurs roughly every two
minutes, and more than 40 billion dollars are lost to this
problem. Therefore, oil pipeline leak detection and control
is a critical business issue to prevent environmental hazards,
economic losses, and public concerns. This paper presents an oil
pipeline leak detection and control framework that incorporates
optimal filtering and LSTM networks. The proposed technique
leverages sensor data from SCADA/IoT platforms to properly
estimate pipeline states, and provide real-time control commands.
Moreover, the online learning framework is introduced to adapt
the model to evolving pipeline conditions. Numerical simulation
results show the effectiveness of the developed approach. We hope
this framework and analysis will be helpful to education and
industry. In future work, spatial and temporal features associated
with hurricanes and tornadoes will be incorporated into the model
to analyze their impacts.
Index Terms—Kalman Filter, Leak Detection, LSTM Networks,
Hybrid Fault Diagnosis, Pipeline.
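
A minimal scalar Kalman filter of the kind named in the index terms can be sketched as follows; the constant-state model and the noise variances are illustrative assumptions, not the paper's calibrated pipeline model.

```python
# One predict/update cycle of a scalar Kalman filter for a
# constant-state model x_k = x_{k-1} + w, measured as z_k = x_k + v.
def kalman_step(x, P, z, Q=0.01, R=0.5):
    # Predict: state unchanged, uncertainty grows by process noise Q.
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement via the Kalman gain K.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Noisy pressure-like readings around a true value of 10;
# the estimate moves from its initial 0 toward 10 as readings arrive.
x, P = 0.0, 1.0
for z in [9.8, 10.3, 9.9, 10.1, 10.0]:
    x, P = kalman_step(x, P, z)
print(round(x, 2))
```

In the hybrid scheme the abstract describes, such filtered state estimates would feed the LSTM-based leak classifier instead of raw sensor readings.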

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Designing a real-time cyber-twin ecosystem encompassing EV charging stations, smart grids, water distribution
systems, and IoT networks is a challenging task. This paper
presents a comprehensive state-space model for an Industrial
Cyber-Physical System (CPS) Nexus integrating EV charging stations, smart grids, water distribution systems, and IoT networks.
These subsystems communicate with one another through AMQP
(Advanced Message Queuing Protocol). For this large system, some
of the states are unobservable; therefore, we develop an Extended
Kalman Filter framework for state estimation and demonstrate
its performance through numerical simulations. In future work, we
will consider cyber attacks and apply a controller to regulate
the CPS states.
Index Terms—Electric Vehicles, Extended Kalman Filter, HAAC
System, Smart Grids, Water System

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Cybercrimes in the financial sector are increasing
rapidly, particularly in cases of UPI and QR code-based
fraud. With the growing adoption of digital
transactions, the complexity and frequency of these
crimes have also risen, highlighting the urgent need for
effective digital forensic investigation. However,
currently there is no well-structured and standardized
framework available to conduct digital forensic
investigations of financial cybercrimes. This research
paper introduces an incident-based digital forensics
framework designed specifically to investigate financial
cybercrimes. The framework is developed entirely using
open-source digital forensic tools and follows a
structured approach comprising five key phases:
incident identification, evidence preservation, evidence
collection, data analysis, and reporting &
documentation. To evaluate the effectiveness of this
framework, various UPI/QR fraud scenarios were
analyzed using digital forensic tools such as Autopsy
and Wireshark. The results demonstrate that this
framework enables a more systematic and in-depth
analysis of transaction logs, metadata, and network
artifacts compared to traditional investigation methods,
allowing for the generation of detailed forensic reports.
Unlike conventional approaches, the proposed
framework offers a more structured and innovative
digital forensics methodology, enhancing the reliability
of financial cybercrime investigations. It assists law
enforcement agencies in conducting efficient forensic
analysis while also improving the process of presenting
digital evidence in court, making legal proceedings
more effective and timely.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
The increasing connectivity of devices and the fast growth of the Internet of Things (IoT) have increased the threat environment
for cyberattacks, especially Advanced Persistent Threats (APTs). Conventional intrusion detection systems (IDS) tend to be
insufficient in detecting new and advanced attacks. This paper provides an exhaustive review of AI-based IDS methodologies
employing deep learning methods to identify new intrusions in IoT and APT networks. We emphasize recent progress, challenges,
datasets, evaluation criteria, and directions for future research, helping in the creation of strong IDS frameworks for emerging
cyber threats [1], [4], [7]. We further discuss hybrid methods, model deployment problems, and the use of explainable AI (XAI)
to enhance system transparency [10], [13], [18].

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May-June, 2025
Human-machine collaboration presents persistent challenges in aligning human intent with machine comprehension and execution. Large Language Models (LLMs) offer promising solutions by leveraging advanced natural language processing capabilities to bridge this gap. This paper surveys a novel framework that integrates LLMs into a vehicle "Co-Pilot," enabling autonomous systems to interpret and execute driving tasks based on human commands and contextual information. The proposed framework incorporates a robust interaction workflow and a memory mechanism to systematically organize and retrieve task-relevant data. By dynamically selecting appropriate controllers and planning optimal trajectories, the Co-Pilot adapts its operations to fulfill user-defined goals while maintaining safety and efficiency. Simulation experiments demonstrate the framework's ability to understand natural language instructions, plan actions, and execute driving tasks effectively, highlighting both its practical viability and limitations. Furthermore, the study emphasizes the importance of real-time adaptability in addressing complex driving scenarios and explores the concept of human-machine hybrid intelligence. This work illustrates the potential of LLMs to revolutionize autonomous driving by enabling more intuitive and effective human-machine collaboration.

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 2, March-April, 2025
Since synthetic face photos and videos are becoming more and more like real ones, the emergence of fake face technology has presented serious problems in the field of digital media. To protect against false information, identity theft, and social manipulation, as well as to maintain the integrity of online content, it is now imperative to detect these fake face photos. Advanced generative adversarial networks (GANs) have produced fake face images, which have sparked worries because of the possibility of their abuse in identity theft, fraud, and disinformation. Such fake face photos must be detected using advanced methods that make use of deep learning models. This paper presents a thorough review of the state-of-the-art deep learning models and techniques for detecting fake face images. It investigates hybrid models with convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which have proven successful in detecting image manipulation. We also explore specialized networks, including deepfake detection models that utilize pre-trained architectures such as ResNet, VGG, and EfficientNet, which have shown promising results in identifying subtle distortions in fake images. GANs have likewise been analyzed for both generating fake images and detecting them, with models like XceptionNet and EfficientNet emerging as notable for their accuracy. The paper also addresses hybrid and ensemble techniques that integrate several models to enhance detection precision. Finally, this paper focuses on the challenges in fake face detection, including dataset biases, adversarial attacks, and real-time processing.
It also provides a comparison of various detection techniques, highlighting their strengths and weaknesses. The results emphasize the need for continued innovation in addressing these challenges and improving the effectiveness of detection methods against fake face technology.
International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May, 2025
The automation of attendance tracking in educational settings has become increasingly vital to enhance efficiency and reduce manual errors. This paper presents a software-based attendance system utilizing face recognition technology to streamline student attendance management. Developed in Python, the system captures student faces via a webcam, marks attendance in real-time, and provides comprehensive analytics without requiring specialized hardware. Features include SMS notifications, Google Form integration, and detailed attendance reports. Testing over a simulated semester demonstrates its accuracy and scalability, making it a practical solution for modern classrooms.
Keywords—Face Recognition, Machine Learning, Deep Learning, Attendance System, Computer Vision, Automation
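
The matching step of such a face-recognition attendance system can be sketched as a nearest-embedding lookup; the 3-D "embeddings", names, and threshold below are hypothetical stand-ins for real model outputs (typically 128-D vectors).

```python
import numpy as np

# Enrolled students with stand-in face embeddings (real systems would
# compute these with a face-embedding model from webcam frames).
enrolled = {
    "alice": np.array([0.10, 0.90, 0.20]),
    "bob":   np.array([0.80, 0.10, 0.50]),
}

def mark_attendance(detected, enrolled, threshold=0.6):
    """Return the closest enrolled name, or None if no embedding is near enough."""
    name, dist = min(
        ((n, float(np.linalg.norm(detected - e))) for n, e in enrolled.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < threshold else None

print(mark_attendance(np.array([0.12, 0.88, 0.25]), enrolled))  # close to alice
print(mark_attendance(np.array([0.50, 0.50, 0.90]), enrolled))  # unknown face
```

Marking attendance then reduces to appending the returned name and a timestamp to the report; the SMS and Google Form features sit on top of this core lookup.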

International Journal of Computer Science and Information Security (IJCSIS), Vol. 23, No. 3, May, 2025
The Internet of Things (IoT) is transforming many
industries through the integration of the Internet with electronic
and mechanical devices, as well as sensors, to make more intelligent
systems in applications such as healthcare, smart homes,
and public safety. This rapid growth in IoT devices has resulted
in significant security risks, turning these systems into targets for
cybercriminals. To counter these weaknesses, Network Intrusion
Detection Systems are deployed, which monitor network traffic
to identify potential malicious activities. This paper evaluates the
performance of machine learning models in detecting intrusions
within IoT-enabled smart city networks. For this, the UNSW-NB15
dataset is used, which contains realistic network traffic. The
dataset was preprocessed, including handling missing data, one-hot
encoding of categorical variables, and normalizing numerical
features. The paper performs multi-class classification to identify
specific attack types. We tested various machine learning
algorithms, including Decision Tree, K-Nearest Neighbor, and Linear
Regression classifiers, among others. The preprocessed dataset contained
61 attributes with 81,173 entries, which was sufficient for the
models to be thoroughly tested. The results provide substantial
insight into the strengths and weaknesses of various machine
learning techniques in improving the security of IoT networks,
especially in critical applications in smart cities.
Index Terms—Internet of Things, Machine Learning, Security
Risks, Smart Cities
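
The preprocessing steps this paper describes (handling missing data, one-hot encoding categorical variables, normalizing numerical features) can be sketched with pandas; the column names and values below are invented for illustration, not taken from UNSW-NB15 itself.

```python
import pandas as pd

# Toy stand-in for network-traffic records: a categorical protocol
# column with a missing value and a numeric duration column.
df = pd.DataFrame({
    "proto": ["tcp", "udp", "tcp", None],
    "dur":   [0.0, 4.0, 2.0, 0.0],
})

df["proto"] = df["proto"].fillna("unknown")   # handle missing data
df = pd.get_dummies(df, columns=["proto"])    # one-hot encode categoricals
# Min-max normalize the numeric feature into [0, 1].
df["dur"] = (df["dur"] - df["dur"].min()) / (df["dur"].max() - df["dur"].min())

print(sorted(df.columns))   # dur plus one indicator column per protocol value
print(df["dur"].tolist())
```

The resulting all-numeric frame is what classifiers such as Decision Tree or K-Nearest Neighbor consume for the multi-class attack-type prediction.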
Uploads
IJCSIS Papers by Journal of Computer Science IJCSIS
energy efficiency and performance inherent in choosing between
Bare Metal, Real-Time Operating System (RTOS), and
Full Operating System (Full OS) architectures for Internet of
Things (IoT) devices. We review recent literature, compare
energy consumption mechanisms, analyze real-world examples,
and summarize the trends that influence architectural decisions.
This work also provides comparative diagrams, performance
tables, and case examples to guide IoT designers in selecting
an informed, energy-aware architecture.
the overall performance of operating systems, particularly in environments
that run multiple processes, where responsiveness and
efficient resource utilization are essential. Scheduling algorithms
such as Shortest Job First (SJF) and Shortest Remaining Time
First (SRTF) are theoretically effective in reducing the average
waiting and turnaround times. However, their implementation
in real-world systems is limited due to the need for precise
knowledge of each process’s CPU burst time, which is typically
unavailable during execution. Traditional methods, such as exponential
averaging, often fail to provide accurate or adaptive
predictions for dynamic workloads. This study explores how
Machine Learning (ML) can be applied to accurately predict
CPU burst times, enabling practical use of SJF and SRTF in
real-time systems. This study leverages ML to predict CPU burst
times, using a synthetic dataset mimicking the Grid Workload
Archive (GWA-T-4) to train models including K-Nearest Neighbors
(KNN), Support Vector Machines (SVM), Decision Trees,
Random Forest, XGBoost, and Artificial Neural Networks (ANN).
The ANN and ANN+SVM ensemble achieved superior accuracy
(MAE ≈ 4.12–4.13 ms, CC ≈ 0.885), significantly outperforming
the baseline (MAE = 20.67 ms). These predictions enable the
scheduler to make more informed decisions, leading to reduced
context switching, improved resource allocation, and shorter wait
and turnaround times. Moreover, this approach helps alleviate
issues such as process starvation. By embedding ML into CPU
scheduling, the research offers a practical solution to transform
the theoretical benefits of SJF and SRTF into practical, real-world
applications, contributing to the development of intelligent
and adaptive operating systems.
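The exponential-averaging baseline this abstract compares against predicts the next CPU burst as a weighted combination of the last observed burst and the previous estimate. A minimal sketch of that baseline (parameter values are illustrative):

```python
def exponential_average(bursts, alpha=0.5, initial_guess=10.0):
    """Classic burst-time predictor: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = initial_guess
    predictions = []
    for t in bursts:
        predictions.append(tau)              # prediction issued before observing t
        tau = alpha * t + (1 - alpha) * tau  # fold the observed burst into the estimate
    return predictions

# With alpha = 0.5 the estimate halves its error at each step:
print(exponential_average([6, 4, 6, 4]))  # [10.0, 8.0, 6.0, 6.0]
```

An ML regressor replaces this fixed recurrence with a model trained on process features; the reported gain (MAE ≈ 4.1 ms versus 20.67 ms) is measured against exactly this kind of baseline.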
organizations and individuals store, access, and interact with
data, software, and computing resources. Public cloud platforms
offer scalable, cost-efficient, and globally accessible services,
often characterized by managed infrastructure and multitenancy.
However, this technological advancement faces
escalating cyber threats and sophisticated attack vectors such as
Denial-of-Service (DoS), Man-in-the-Middle (MITM) attacks,
data breaches, and insecure APIs. These threats compromise
data integrity, availability, and confidentiality. This study
employs a descriptive research methodology, supplemented by
desktop analysis, to examine the evolving landscape of security
threats in public cloud computing. It further evaluates mitigation
strategies including encryption, access control, secure API
management, and continuous monitoring, while highlighting
persistent research gaps that demand attention for robust cloud
security frameworks.
threat to public stability, political life, and trust. Conventional
content moderation measures prove inadequate to contain the
velocity and volume with which disinformation spreads. Machine
Learning (ML) has been an invaluable aid in the automation of
fake news identification by learning patterns and cues from large
amounts of text data. This review paper provides an overview
of most of the ML methods employed for the detection of
fake news, from simple algorithms such as Naive Bayes, Support
Vector Machines, and Decision Trees to the recent deep learning
methods such as LSTM, CNN, and transformer-based models
such as BERT. This work also encompasses feature extraction
methods like TF-IDF, word embeddings, and stylometric features,
benchmarking datasets, and evaluation metrics used in the
literature. The work provides the advantages and limitations
of existing methods and highlights open problems such as
dataset generalizability, model explainability, and adversarial
robustness. Lastly, the review outlines future directions, such as
the development of hybrid models, real-time detection systems,
and ethical considerations. This paper is intended as a starting point
for researchers as well as practitioners in creating improved and
more efficient fake news recognition systems.
Index Terms—Fake News Detection, Machine Learning, Natural
Language Processing, Text Classification, Deep Learning,
Transformer Models, Social Media Misinformation, NLP, News
Verification
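As a concrete illustration of the simpler end of the surveyed methods, the sketch below implements a multinomial Naive Bayes classifier over bag-of-words counts with Laplace smoothing; the toy corpus and labels are invented for demonstration only:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns class counts, per-class word counts, vocabulary."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(tokens, class_counts, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(word | label), Laplace-smoothed."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy corpus (invented): sensational wording vs. sober reporting.
corpus = [
    (["shocking", "miracle", "cure"], "fake"),
    (["miracle", "weight", "loss"], "fake"),
    (["official", "report", "released"], "real"),
    (["government", "report", "budget"], "real"),
]
model = train_nb(corpus)
print(predict_nb(["miracle", "cure"], *model))  # fake
```

The TF-IDF and embedding features mentioned above would replace the raw counts here; the decision rule stays the same.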
hearing impairments persist due to limited assistive resources.
This paper introduces Hybrid_ASL, a novel deep learning model
leveraging cross-domain transfer learning to classify American
Sign Language (ASL) hand gestures with high accuracy. Built on
a transfer learning framework, Hybrid_ASL adapts knowledge
from diverse visual domains to optimize its architecture for ASL
recognition. Trained on a dataset of 87,000 ASL images, the
model underwent iterative fine-tuning to balance accuracy and
computational efficiency. Comparative experiments against state-of-the-art architectures, including convolutional neural networks
and vision transformers, demonstrate that Hybrid_ASL achieves
an exceptional accuracy of 99.98%, with matching precision,
recall, and F1-score, while maintaining low architectural
complexity. These results highlight the efficacy of transfer
learning and model adaptation in developing robust assistive
technologies, paving the way for improved accessibility and
quality of life for the hearing-impaired community.
Index Terms—Cross-Domain Transfer Learning, ASL
Recognition, Hybrid_ASL, Deep Learning, Assistive
Technology, Hand Gesture Classification, Model Adaptation,
Vision Mamba Models, Fine-Tuning, Large-Scale Image
Recognition.
to build lasting customer relationships. But to keep and grow
those customers, you need an increasingly personalized
experience that drives repeat business. This is where adaptive
technologies such as machine learning (ML) and collaborative
filtering (CF) come into the picture. Machine learning is a method
that teaches computer systems to perform certain tasks using
data and to improve over time automatically without being
explicitly programmed.
Recommender systems are a subfield of ML, where CF is
perhaps the most successful approach: items are
recommended to users based on behaviors and preferences of
similar-minded people. Subscription businesses can use
machine learning and collaborative filtering to gather better
customer information, create more precise customer profiles,
offer tailored advice or help products, and predict which
subscribers will cancel their subscriptions. Using ML and CF,
custom-made recommendations by targeting marketing
campaigns can improve customer experience, thus boosting
satisfaction and retention. On top of that, ML algorithms can
detect patterns in customer data and flag churn in advance,
enabling proactive interventions that may help retain at-risk
subscribers for longer. Thus,
deploying ML and CF in subscription businesses can lead to
greater personalization, lower customer churn rates, and
increased revenues. These methodologies allow businesses to
better decipher and tap into the consumer market, which is
extremely important for today's generation of
subscription-based models.
Keywords
Collaborative Filtering, Boosting Satisfaction, Retention,
Subscription Businesses, Proactive Solutions
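The collaborative-filtering idea described above can be sketched minimally: score unseen items for a target subscriber using the ratings of the most similar users. The user names, items, and ratings below are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts, over their co-rated items."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def recommend(target, ratings, k=2):
    """Rank items the target has not rated by similarity-weighted neighbor ratings."""
    sims = {u: cosine(ratings[target], r) for u, r in ratings.items() if u != target}
    neighbors = sorted(sims, key=sims.get, reverse=True)[:k]
    scores = {}
    for u in neighbors:
        for item, rating in ratings[u].items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sims[u] * rating
    return sorted(scores, key=scores.get, reverse=True)

ratings = {
    "alice": {"plan_a": 5, "addon_x": 4},
    "bob":   {"plan_a": 5, "addon_x": 4, "addon_y": 5},
    "carol": {"plan_a": 2, "addon_z": 3},
}
print(recommend("alice", ratings))  # ['addon_y', 'addon_z']
```

Production recommenders replace this brute-force neighbor search with matrix factorization or approximate nearest neighbors, but the similarity-weighted scoring is the core of user-based CF.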
Through the integration of machine learning (ML) algorithms and optimization techniques, the goal is to enhance wind energy
yield, reduce operational and maintenance costs, and improve responsiveness to environmental variability. The paper outlines a
comprehensive strategy that utilizes historical and real-time data—meteorological, geographical, infrastructural, legal, and
economic—to build a robust decision-making system. The proposed framework supports intelligent control systems, predictive
analytics, and automated feedback mechanisms, allowing wind farms to operate with greater efficiency and adaptability.
dot (QD) materials in chemistry, chemical engineering, and
medical science has created a crucial need for robust, accurate,
interpretable, and secure modeling frameworks. Before physical
products are implemented and applied in solar cells and TVs,
it is important to model QDs through virtual representations of
the molecules. Even though quantum simulation and machine
learning have accelerated material screening, three challenges
remain unresolved: (1) prediction accuracy is low for DFT quantum
simulation due to noise and systematic errors, (2) experimental
inconsistencies due to outliers, and (3) most ML models lack
interpretability, undermining trust and scientific insight. To
this end, we propose a cyber-twin framework that combines
graph neural networks, specifically GINEConv, Bayesian Gaussian Process Regression, and advanced anomaly detection based
on Isolation Forests. The proposed model learns chemically
rich representations from simulated quantum dot structures,
while Bayesian inference enables uncertainty-aware predictions
calibrated against experimental variability. We simulate realistic
scenarios including 5% experimental outliers, 10% noise-based
attacks, 7% bias-targeted attacks, and 3% full corruption, and
design multi-feature anomaly scores to detect these threats. The
system achieves over 90% detection accuracy for outliers, with
a predictive R² of 0.95 and strong calibration (Pearson r =
0.87) between uncertainty and absolute error. Additionally, SHAP
analysis identifies key atomic features—such as electronegativity
and bond disorder—as dominant drivers of NIR absorption,
offering explainability aligned with chemical knowledge.
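The multi-feature anomaly scoring described above can be illustrated with a much simpler stand-in than the paper's Isolation Forests: score each sample by its largest per-feature |z-score| and flag exceedances. The data and threshold below are synthetic and illustrative:

```python
import statistics

def anomaly_scores(samples, threshold=3.0):
    """Score each sample by its largest per-feature |z-score| and flag exceedances.
    (The paper uses Isolation Forests; this z-score rule is a simpler stand-in
    for the same flag-the-outlier idea. The threshold is illustrative.)"""
    n_features = len(samples[0])
    means = [statistics.mean(s[j] for s in samples) for j in range(n_features)]
    # Guard against constant features, whose standard deviation is zero.
    stdevs = [statistics.stdev(s[j] for s in samples) or 1.0 for j in range(n_features)]
    scores = [max(abs(s[j] - means[j]) / stdevs[j] for j in range(n_features))
              for s in samples]
    flags = [sc > threshold for sc in scores]
    return scores, flags

# Eleven nominal measurements plus one corrupted one (values are synthetic).
data = [(0.0, 0.0)] * 11 + [(5.0, 0.0)]
scores, flags = anomaly_scores(data)
print(flags[-1], any(flags[:-1]))  # True False
```

Isolation Forests generalize this by isolating points with short random-partition paths instead of assuming roughly Gaussian features, which matters for the bias-targeted attacks the abstract simulates.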
Texas, and gas pipeline leak detection is a major business challenge.
Worldwide, a pipeline leak occurs roughly every two minutes,
and more than 40 billion dollars have been lost to this
persistent problem. Therefore, oil pipeline leak detection and control
is a critical business issue to prevent environmental hazards,
economic losses, and public concerns. This paper presents an oil
pipeline leak detection and control framework that incorporates
optimal filtering and LSTM networks. The proposed technique
leverages sensor data from SCADA/IoT platforms to properly
estimate pipeline states, and provide real-time control commands.
Moreover, the online learning framework is introduced to adapt
the model to evolving pipeline conditions. Numerical simulation
results show the effectiveness of the developed approach. We hope
this framework and analysis will be of significant help to
both academia and industry. In future work, spatial and temporal
features associated with hurricanes and tornadoes will be
incorporated into the model to analyze their impacts.
Index Terms—Kalman Filter, Leak Detection, LSTM Networks,
Hybrid Fault Diagnosis, Pipeline.
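A common residual-based formulation of the filtering half of such a framework: a scalar Kalman filter tracks expected pressure, and an innovation (measurement minus prediction) exceeding a few standard deviations flags a possible leak. The model and every parameter below are illustrative assumptions, not the paper's:

```python
def kalman_leak_monitor(measurements, q=0.01, r=0.25, x0=100.0, p0=1.0, gate=3.0):
    """Scalar Kalman filter over pressure readings with a residual gate:
    a sample whose innovation exceeds `gate` innovation standard deviations
    is flagged as a possible leak before the update is applied."""
    x, p = x0, p0
    alarms = []
    for k, z in enumerate(measurements):
        p = p + q                        # predict (random-walk pressure model)
        innov = z - x                    # innovation: measurement minus prediction
        s = p + r                        # innovation variance
        if innov * innov > gate * gate * s:
            alarms.append(k)             # residual too large: possible leak
        K = p / s                        # Kalman gain
        x = x + K * innov                # state update
        p = (1.0 - K) * p                # covariance update
    return alarms

# Steady pressure, then a sudden 10-unit drop at sample 10.
print(kalman_leak_monitor([100.0] * 10 + [90.0]))  # [10]
```

In the paper's hybrid scheme an LSTM would model the nonlinear temporal dynamics that this constant-pressure sketch deliberately omits.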
systems, and IoT networks are challenging tasks. This paper
presents a comprehensive state-space model for an Industrial
Cyber-Physical System (CPS) Nexus integrating EV charging stations, smart grids, water distribution systems, and IoT networks.
These subsystems communicate with one another through AMQP
(Advanced Message Queuing Protocol). For this large system, some
of the states are unobservable; we therefore develop an Extended
Kalman Filter framework for state estimation and demonstrate
its performance through numerical simulations. In future work, we
will consider cyber attacks and apply a controller to regulate
the CPS states.
Index Terms—Electric Vehicles, Extended Kalman Filter, HAAC
System, Smart Grids, Water System
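The distinguishing step of an Extended Kalman Filter is linearizing a nonlinear model via its Jacobian at the current estimate. A toy scalar sketch of one predict/update cycle (the dynamics and sensor model are invented for illustration, not the paper's CPS equations):

```python
def ekf_step(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle of a scalar EKF for the toy model
    x_k = x_{k-1} (random walk) observed through z_k = x_k**2 + noise."""
    # Predict
    x_pred = x                      # identity dynamics
    p_pred = p + q
    # Update: linearize h(x) = x**2 around the prediction
    H = 2.0 * x_pred                # Jacobian dh/dx
    s = H * p_pred * H + r          # innovation covariance
    K = p_pred * H / s              # Kalman gain
    x_new = x_pred + K * (z - x_pred ** 2)
    p_new = (1.0 - K * H) * p_pred
    return x_new, p_new

# Track the constant true state 3.0 from noisy squared measurements.
x, p = 2.5, 1.0
for z in [9.1, 8.9, 9.05, 8.95, 9.0]:
    x, p = ekf_step(x, p, z)
print(round(x, 1))  # 3.0
```

For the multi-subsystem CPS in the abstract, x, p, H, and K become vectors and matrices, but each AMQP-delivered measurement is folded in by the same two-step cycle.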
rapidly, particularly in cases of UPI and QR code-based
fraud. With the growing adoption of digital
transactions, the complexity and frequency of these
crimes have also risen, highlighting the urgent need for
effective digital forensic investigation. However,
currently there is no well-structured and standardized
framework available to conduct digital forensic
investigations of financial cybercrimes. This research
paper introduces an incident-based digital forensics
framework designed specifically to investigate financial
cybercrimes. The framework is developed entirely using
open-source digital forensic tools and follows a
structured approach comprising five key phases:
incident identification, evidence preservation, evidence
collection, data analysis, and reporting &
documentation. To evaluate the effectiveness of this
framework, various UPI/QR fraud scenarios were
analyzed using digital forensic tools such as Autopsy
and Wireshark. The results demonstrate that this
framework enables a more systematic and in-depth
analysis of transaction logs, metadata, and network
artifacts compared to traditional investigation methods,
allowing for the generation of detailed forensic reports.
Unlike conventional approaches, the proposed
framework offers a more structured and innovative
digital forensics methodology, enhancing the reliability
of financial cybercrime investigations. It assists law
enforcement agencies in conducting efficient forensic
analysis while also improving the process of presenting
digital evidence in court, making legal proceedings
more effective and timely.
for cyberattacks, especially Advanced Persistent Threats (APTs). Conventional intrusion detection systems (IDS) tend to be
insufficient in detecting new and advanced attacks. This paper provides an exhaustive review of AI-based IDS methodologies
employing deep learning methods to identify new intrusions in IoT and APT networks. We emphasize recent progress, challenges,
datasets, evaluation criteria, and directions for future research, to help in the creation of strong IDS frameworks for emerging
cyber threats [1], [4], [7]. We further discuss hybrid methods, model deployment problems, and the use of explainable AI (XAI)
to enhance system transparency [10], [13], [18].
Keywords—Face Recognition, Machine Learning, Deep Learning, Attendance System, Computer Vision, Automation
industries through the integration of the Internet with electronic
and mechanical devices, as well as sensors, to make more intelligent
systems in applications such as healthcare, smart homes,
and public safety. This rapid growth in IoT devices has resulted
in significant security risks, turning these systems into targets for
cybercriminals. To counter these weaknesses, Network Intrusion
Detection Systems are deployed, which monitor network traffic
to identify potential malicious activities. This paper evaluates the
performance of machine learning models in detecting intrusions
within IoT-enabled smart city networks. For this, the UNSW-NB15
dataset is used, which contains realistic network traffic. The
dataset was preprocessed, including handling missing data, one-hot
encoding of categorical variables, and normalizing numerical
features. The paper performs multi-class classification to identify
specific attack types. We tested various machine learning
algorithms, including Decision Tree, K-Nearest Neighbor, and
Linear Regression classifiers, among others. The preprocessed
dataset contained 61 attributes and 81,173 entries, which was
sufficient for the models to be thoroughly tested. The results
provide considerable insight into the strengths and weaknesses of various machine
learning techniques in improving the security of IoT networks,
especially in critical applications in smart cities.
Index Terms—internet of things, machine learning, security
risks, smart cities
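Two of the preprocessing steps named above, one-hot encoding of categorical variables and numeric normalization, can be sketched without any ML library. The column contents here are invented examples, not actual UNSW-NB15 fields:

```python
def one_hot(values):
    """One-hot encode a categorical column; categories are sorted for determinism."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    rows = [[1 if index[v] == i else 0 for i in range(len(categories))] for v in values]
    return rows, categories

def min_max(values):
    """Scale a numeric column to [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

rows, cats = one_hot(["tcp", "udp", "tcp", "icmp"])
print(cats)                       # ['icmp', 'tcp', 'udp']
print(rows[0])                    # [0, 1, 0]
print(min_max([0.0, 5.0, 10.0]))  # [0.0, 0.5, 1.0]
```

In practice the category list and min/max are fitted on the training split only and reused on the test split, so that evaluation reflects unseen traffic.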