Papers by International Journal of Scientific Research in Computer Science, Engineering and Information Technology IJSRCSEIT

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023
In the rapidly evolving digital economy, global enterprises require immediate access to actionable insights to remain competitive and responsive. Real-time data integration has become the cornerstone of Operational Business Intelligence (OBI), enabling organizations to monitor, analyze, and act upon business events as they occur. Unlike traditional business intelligence systems that rely on batch processing, OBI demands architectures capable of handling high-velocity data from diverse, distributed sources with minimal latency. This paper explores the doctrinal foundations and technological frameworks of real-time data integration architectures that support OBI in global enterprises. It discusses architectural models such as federated systems, event-driven frameworks, and data mesh approaches that ensure scalability, compliance, and interoperability across international boundaries. The paper also examines the convergence of cloud computing, AI, and edge technologies with real-time data processing, highlighting their collective impact on enterprise agility. Legal and ethical considerations—including data privacy, governance, and algorithmic transparency—are integrated into the analysis to provide a comprehensive view of implementing real-time systems responsibly. The research concludes by proposing a unified, scalable, and legally compliant framework tailored to the needs of globally distributed enterprises aiming for real-time operational intelligence.
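The abstract names event-driven frameworks among the candidate architectures without prescribing code. As a minimal sketch of the event-driven pattern, assuming an invented `order.created` event type (not from the paper), a dispatcher might look like this:

```python
import queue
from dataclasses import dataclass

@dataclass
class BusinessEvent:
    kind: str      # e.g. "order.created" (hypothetical event type)
    payload: dict

handlers: dict = {}  # event kind -> list of callbacks

def subscribe(kind, fn):
    handlers.setdefault(kind, []).append(fn)

def dispatch(bus: queue.Queue):
    # Handle each event the moment it arrives, rather than accumulating
    # records for a nightly batch window as traditional BI would.
    while not bus.empty():
        event = bus.get()
        for fn in handlers.get(event.kind, []):
            fn(event)

# Example: a running revenue metric updated as each order lands.
revenue = {"total": 0.0}
subscribe("order.created",
          lambda e: revenue.update(total=revenue["total"] + e.payload["amount"]))

bus = queue.Queue()
bus.put(BusinessEvent("order.created", {"amount": 19.99}))
dispatch(bus)
print(revenue)  # {'total': 19.99}
```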

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2025
This article presents a comprehensive examination of a transformative project that reimagined marketplace payment onboarding through the implementation of a unified global payment framework. The initiative addressed critical challenges faced by marketplace platforms operating across borders, including fragmented user experiences, technical debt accumulation, inconsistent security controls, operational inefficiencies, and limited scalability. Through a carefully architected solution built on React JS, Spring Boot microservices, and Kubernetes orchestration, the initiative implemented key innovations including universal payment provider integration, enhanced multi-factor authentication, and dynamic compliance management. The phased deployment strategy minimized disruption while validating system performance, resulting in dramatic improvements in seller onboarding time, support requirements, completion rates, and fraud prevention. The article examines technical challenges overcome during implementation, including integration complexity, data migration, and scalability under load, while detailing the multi-layered security approach that protected sensitive financial information through encryption, tokenization, behavioral analysis, and automated compliance controls. This work advances industry understanding of how unified payment frameworks can enable marketplace scalability in an increasingly global commercial landscape.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2018
Quantum Key Distribution (QKD) is a groundbreaking solution to secure communication that uses fundamental quantum mechanics principles to allow for the exchange of cryptographic keys with provable security. BB84 and E91, two of the first and most significant QKD systems, presented two different approaches to using quantum characteristics to prevent eavesdropping and maintain confidentiality. A brief literature review of these two fundamental protocols is presented in this paper. Bennett and Brassard created the BB84 protocol in 1984; it uses polarized photons and detects interception using the uncertainty principle. On the other hand, Bell's theorem and entangled photon pairs are used in the E91 protocol, first presented by Ekert in 1991, to guarantee the integrity of the shared key. The theoretical foundations, implementation methods, and relative benefits of each protocol are explored, along with their influence on contemporary quantum cryptography systems. By examining the similarities and differences between BB84 and E91, this paper aims to give a basic understanding of QKD and its early advancements.
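BB84's mechanics are easy to see in a toy classical simulation. The following sketch is our illustration, not the paper's: random basis choices stand in for photon preparation and measurement, and mismatched bases yield random outcomes, mimicking the uncertainty principle.

```python
import random

def bb84_sift(n_bits=1000):
    # Alice prepares random bits in randomly chosen bases
    # ('+' rectilinear or 'x' diagonal); Bob measures in random bases.
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases   = [random.choice("+x") for _ in range(n_bits)]

    # Matching bases reproduce Alice's bit; mismatched bases give a
    # uniformly random result (the uncertainty principle, in toy form).
    bob_bits = [a if ab == bb else random.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: bases are compared publicly, and only positions where they
    # agree are kept; the bit values themselves are never disclosed.
    key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key  # ~n_bits/2 shared secret bits on average

print(len(bb84_sift()))  # roughly 500
```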

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
Heterogeneous cloud-edge computing environments present unique challenges in resource allocation due to their distributed nature, varying computational capabilities, and dynamic workload patterns. This paper presents a comprehensive analysis of machine learning approaches for optimizing resource allocation in these environments. I categorize and evaluate various ML techniques including reinforcement learning, deep learning, and federated learning approaches, highlighting their strengths and limitations. A comparative analysis of these techniques demonstrates that hybrid approaches combining reinforcement learning with deep neural networks achieve 18-22% better resource utilization and 15% lower latency compared to traditional heuristic methods. I also propose a novel adaptive resource allocation framework that dynamically adjusts allocation policies based on changing network conditions and application requirements, demonstrating superior performance in real-world testbeds.
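The paper reports results for hybrid RL approaches without listing an algorithm here. A toy tabular Q-learning allocator conveys the basic idea; the state, action, and reward definitions below are our assumptions for illustration, not the paper's.

```python
import random
from collections import defaultdict

# Toy Q-learning allocator: state = coarse load level of each node,
# action = which node receives the next task. Reward (negative load on
# the chosen node) is an assumed stand-in for measured latency.
NODES, ALPHA, GAMMA, EPS = 3, 0.1, 0.9, 0.1
Q = defaultdict(float)  # (state, action) -> value

def choose(state):
    if random.random() < EPS:                                   # explore
        return random.randrange(NODES)
    return max(range(NODES), key=lambda a: Q[(state, a)])       # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in range(NODES))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

loads = [0, 0, 0]
for _ in range(10_000):
    state = tuple(min(l // 5, 3) for l in loads)   # discretized load levels
    a = choose(state)
    loads[a] += 1                                  # assign task to node a
    reward = -loads[a]                             # penalize hot nodes
    loads[a] = max(0, loads[a] - random.randint(0, 2))  # tasks complete
    update(state, a, reward, tuple(min(l // 5, 3) for l in loads))
```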

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2025
Our work explores combining AI-powered anomaly detection with microservices segregated by Multi-Protocol Kinematics (MPK) to strengthen security in optical networks. We found that traditional detection methods simply could not keep pace with the vulnerabilities these networks face. Using a large dataset of everyday traffic interspersed with odd, unexpected spikes, we built a system that speeds up real-time detection and response. One standout result: the combination improves anomaly detection rates by nearly 30% over older techniques and cuts false alerts by about 25%, making the whole operation more trustworthy. This matters especially in settings like healthcare, where optical networks are not just transferring data but safeguarding sensitive patient information. Keeping these data streams secure builds trust in digital health systems and improves overall patient safety. The approach may also serve as a rough blueprint for future security measures in other sectors that rely on optical networks, nudging our digital infrastructure toward greater security.
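The AI models used in the paper are not specified in this abstract. As a much simpler baseline sketch of real-time anomaly flagging on traffic volumes, a rolling z-score detector (our stand-in, not the paper's method) could look like this:

```python
from collections import deque
from statistics import mean, stdev

class TrafficMonitor:
    """Flags readings more than `threshold` standard deviations from
    the recent mean; a crude baseline, not the paper's AI detector."""
    def __init__(self, window=100, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        is_anomaly = False
        if len(self.history) >= 10:          # need some history first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

mon = TrafficMonitor()
readings = [100 + i % 7 for i in range(200)] + [900]  # sudden spike at the end
flags = [mon.observe(r) for r in readings]
print(flags[-1])  # True: the spike is flagged
```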

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2020
Efficient training data caching is a critical aspect of enhancing deep learning performance within edge computing networks, where computational resources and data bandwidth are often constrained. This paper investigates innovative methodologies for optimizing data caching mechanisms to address challenges associated with latency, data redundancy, and resource utilization in distributed edge systems. The exponential growth in data generation, coupled with the increasing demand for real-time learning and deployment, necessitates advanced techniques to manage and cache training datasets effectively. Traditional caching methods, designed for centralized cloud environments, are inherently unsuitable for the decentralized and resource-constrained nature of edge computing. This study presents a detailed exploration of adaptive caching strategies, data prioritization techniques, and compression algorithms tailored for edge systems, emphasizing their integration with deep learning workflows to ensure minimal delay and optimal performance.
The research introduces a comprehensive framework for managing training data across distributed edge nodes, leveraging predictive caching models that incorporate reinforcement learning and statistical optimization to anticipate data needs dynamically. These models adapt to varying workload patterns, data access frequencies, and network conditions, thus enhancing cache hit rates and reducing computational overhead. Furthermore, the paper examines techniques for minimizing data redundancy, such as deduplication and data partitioning, which are crucial for optimizing storage and bandwidth in edge networks. The integration of these approaches with edge-based deep learning systems enables efficient data sharing and collaborative model training, fostering improved scalability and robustness in distributed environments.
The proposed solutions are evaluated through rigorous experimental setups, including real-world edge computing scenarios, to analyze their effectiveness in reducing latency, improving model training times, and optimizing resource utilization. The results demonstrate that adaptive caching mechanisms and data-aware scheduling significantly enhance the performance of deep learning applications in edge networks. Additionally, the study addresses the trade-offs between computational efficiency and data consistency, highlighting strategies to balance these competing objectives in edge systems.
This research contributes to the growing body of knowledge on edge computing by providing actionable insights and practical guidelines for deploying efficient data caching systems tailored to deep learning tasks. The findings underscore the potential of intelligent caching to bridge the gap between the increasing computational demands of modern deep learning models and the limited resources available in edge networks. Moreover, the paper discusses the implications of these advancements for emerging applications, such as autonomous vehicles, smart cities, and industrial IoT, where real-time decision-making and low-latency processing are paramount. By presenting a unified approach to managing training data in edge computing environments, this work lays the foundation for future research into optimizing deep learning workflows in decentralized systems.
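As a rough illustration of the adaptive-caching idea, consider the sketch below. The paper's predictive models use reinforcement learning; this simpler frequency-and-recency policy is only a stand-in, with invented shard names.

```python
from collections import OrderedDict

class ShardCache:
    """Caches training-data shards on an edge node, evicting the
    least-frequently-hit entry when capacity is reached. A simplified
    stand-in for the paper's RL-based predictive caching."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()   # shard_id -> (data, hit_count)

    def get(self, shard_id, fetch):
        if shard_id in self.store:
            data, hits = self.store.pop(shard_id)
            self.store[shard_id] = (data, hits + 1)   # refresh recency
            return data
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.store[k][1])
            del self.store[victim]   # evict the coldest shard
        data = fetch(shard_id)       # e.g., pull from a peer node or cloud
        self.store[shard_id] = (data, 1)
        return data

cache = ShardCache()
batch = cache.get("shard-17", fetch=lambda s: f"<tensor data for {s}>")
```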

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2025
…roadmap encompassing technical and organizational considerations, this article serves as an essential resource for technology leaders and architects tasked with modernizing mission-critical mainframe systems while ensuring minimal disruption to core business functions.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023
This paper explores the integration of artificial intelligence (AI) and cloud technologies in the hospitality industry to enhance security and privacy. It examines AI applications such as facial recognition for secure room access, intelligent surveillance, and fraud detection in online transactions. Additionally, the paper discusses cloud-based systems for encrypted data storage, management, and disaster recovery. Key challenges, including privacy concerns and compliance with regulations like GDPR and CCPA, are addressed alongside future trends like quantum encryption. Case studies and comparative analyses provide practical insights into mitigating digital risks while ensuring seamless guest experiences.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
This comprehensive article explores the symbiotic relationship between digital transformation and innovation in driving business growth. It examines the key components of digital transformation, including cloud computing, big data analytics, AI, IoT, cybersecurity, and digital platforms. The article discusses how digital transformation catalyzes innovation through data-driven decision-making, fostering experimentation, enhancing collaboration, and accelerating time-to-market. It outlines the competitive advantages of digital transformation, such as improved customer experience, operational efficiency, new revenue streams, agility, and talent attraction. Case studies of successful digital transformations are presented, along with an analysis of the challenges and considerations organizations face in their digital transformation journeys.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
The rapid expansion of the Internet of Things (IoT) introduces significant security challenges, given the resource-constrained nature of most IoT devices. To address these challenges, elliptic curve cryptography (ECC) has emerged as a promising solution due to its ability to deliver high levels of security with shorter key lengths, making it highly efficient for devices with limited computational power and memory. This paper focuses on a comparative analysis of three widely used ECC-based cryptographic algorithms: Elliptic Curve Digital Signature Algorithm (ECDSA), Elliptic Curve Integrated Encryption Scheme (ECIES), and Elliptic Curve Diffie-Hellman (ECDH). Through a detailed evaluation of performance metrics such as key generation speed, execution time, and overall efficiency, the study identifies the strengths and limitations of each algorithm in securing IoT environments. The results reveal that ECDH excels in public key generation speed, making it suitable for applications requiring frequent key exchanges. ECDSA demonstrates the fastest overall execution time, providing an efficient option for digital signatures and authentication. Conversely, ECIES, while slower, offers robust encryption capabilities ideal for scenarios demanding enhanced confidentiality. This comparative study highlights the importance of aligning algorithm selection with specific IoT application requirements, balancing factors like security, performance, resource constraints, and operational complexity. The findings underscore the suitability of ECC-based algorithms in addressing the unique challenges of IoT security.
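As an illustration of why ECDH suits frequent key exchanges, here is a minimal key-agreement sketch using the widely used Python `cryptography` package. The curve choice and key-derivation parameters are our assumptions, not the paper's benchmark configuration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral EC key pair. SECP256R1 is our
# illustrative curve choice, not necessarily the paper's.
device_key  = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from their own private key
# and the peer's public key; the secret never travels over the wire.
secret_device  = device_key.exchange(ec.ECDH(), gateway_key.public_key())
secret_gateway = gateway_key.exchange(ec.ECDH(), device_key.public_key())
assert secret_device == secret_gateway

# Stretch the raw shared secret into a fixed-length symmetric session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"iot-session").derive(secret_device)
```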

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
In today’s digital landscape, organizations are increasingly required to manage their data and access control mechanisms in alignment with cybersecurity frameworks and standards such as the NIST Cybersecurity Framework, ISO 27001, and the General Data Protection Regulation (GDPR). Identity Governance and Administration (IGA) is a critical component in achieving both compliance and security objectives. This paper examines the role of identity governance solutions (IGS) in enhancing cybersecurity compliance by integrating identity lifecycle management, role-based access control (RBAC), and auditing mechanisms into the design of cybersecurity frameworks. We discuss the challenges organizations face when designing such solutions, including scalability, automation, and integration with existing enterprise systems. Additionally, we explore common IGA tools available in the market and their effectiveness in meeting compliance objectives. A case study is used to demonstrate the practical implementation of identity governance solutions, revealing how they mitigate security risks and streamline compliance reporting. Our findings suggest that a well-designed IGS not only enhances security posture but also improves operational efficiency while ensuring adherence to regulatory standards.
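A minimal sketch of the RBAC-plus-auditing idea follows; user, role, and permission names are hypothetical, and production IGA suites layer lifecycle workflows, certification campaigns, and enterprise connectors on top of this core.

```python
import datetime

# Hypothetical role and assignment data for illustration only.
ROLE_PERMISSIONS = {
    "finance-analyst": {"ledger:read"},
    "finance-admin":   {"ledger:read", "ledger:write"},
}
USER_ROLES = {"alice": {"finance-analyst"}}
AUDIT_LOG = []  # every access decision is recorded for compliance reporting

def check_access(user: str, permission: str) -> bool:
    granted = any(permission in ROLE_PERMISSIONS.get(role, set())
                  for role in USER_ROLES.get(user, set()))
    AUDIT_LOG.append({
        "ts": datetime.datetime.utcnow().isoformat(),
        "user": user, "permission": permission, "granted": granted,
    })
    return granted

check_access("alice", "ledger:write")   # False: analyst role lacks write
print(AUDIT_LOG[-1]["granted"])         # the denial is logged regardless
```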

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2018
As software programs grow in complexity and scope, the difficulties of testing them rise in tandem, and innovative, durable testing techniques are required. Software testing is a crucial step in the software development process. Unfortunately, despite testing efforts, defects still plague many projects, and testing still consumes a large amount of time and money. Software testing offers a way to lower a system's total cost and error rate, and a variety of testing approaches, strategies, and tools are available to improve software quality. Given its importance in both the early and late phases of development, software validation is an essential component of the software development life cycle and should be supported by improved and effective processes and procedures. This article offers a brief overview of software testing, including its goals and fundamentals. It also answers questions about the basic skills needed by software testers, or by those who want to pursue a career in testing, and focuses on the fundamentals considered when planning and creating test cases. Writing effective test cases, one of the crucial elements of testing, is also covered.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2020
This paper highlights various data integration techniques that I applied when implementing cross-platform analysis, including ETL, API integration, and real-time integration techniques. Drawing on examples from Netflix, Amazon, and Uber, it shows how these techniques enable organisations to aggregate information from various sources for decision-making, customisation, and business operations. The paper concludes with an evaluation of the potential of effective data integration, presenting it as a crucial factor in achieving competitiveness and innovation.
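As a minimal, stdlib-only sketch of the extract-transform-load pattern discussed here (source data and schema are invented for illustration):

```python
import csv, io, sqlite3

# Extract: a stub CSV source stands in for an upstream system's export.
raw_csv = "user_id,amount\n1,19.99\n2,5.00\n"
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: cast types and normalize currency to integer cents.
transformed = [(int(r["user_id"]), round(float(r["amount"]) * 100))
               for r in rows]

# Load: write into the analytical store (in-memory SQLite here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (user_id INTEGER, amount_cents INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", transformed)
print(db.execute("SELECT SUM(amount_cents) FROM orders").fetchone())  # (2499,)
```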

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2022
Generative AI is a game-changer in the creative industries, especially in music and art, since using machines to produce content autonomously has become a reality. This paper reviews the use of generative AI in these fields, with particular emphasis on the techniques and models conducive to innovation. Examining state-of-the-art methods such as GANs and RNNs, the paper explains how these technologies are leveraged to generate music and artwork, and provides case studies showing AI's potential in developing new music and artworks. The difficulties of incorporating AI into creative workflows, including ethical questions and overcoming the uncanny valley, are also discussed. The study concludes that while generative AI can produce colossal returns, efforts should be made to minimize its weaknesses and to strike a balance between artificial intelligence as a creative tool and as a merely scripted one.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2022
Generative AI has become a change-maker in many fields, spanning text, image, and voice generation. One profound sub-area within this domain is the optimal use of learning systems with minimal information through zero- and few-shot learning. Zero-shot learning lets models operate on novel classes or tasks for which they have no training samples, while few-shot learning allows models to learn from only a handful of examples. Such approaches are useful when obtaining data is difficult or expensive, offering a direction for developing accurate AI models when data are lacking. This paper explains the background, applications, and challenges of generative AI models that use zero- and few-shot learning, outlining how these techniques set new paradigms and open innovative horizons for AI systems.
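A toy illustration of the few-shot idea, assuming a nearest-centroid classifier over stand-in embeddings (our example, not a technique attributed to the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Support set: only 3 labelled examples per class. The random vectors
# stand in for a real encoder's embeddings.
support = {
    "cat": rng.normal(0, 1, (3, 16)),
    "dog": rng.normal(3, 1, (3, 16)),
}
# Average the few examples per class into one prototype each.
centroids = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def classify(embedding):
    # Assign a new point to the nearest class prototype.
    return min(centroids, key=lambda c: np.linalg.norm(embedding - centroids[c]))

query = rng.normal(3, 1, 16)   # drawn near the "dog" cluster
print(classify(query))         # 'dog'
```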

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2024
Federated learning (FL) on edge devices has emerged as a promising approach for decentralized model training, enabling data privacy and efficiency in distributed networks. However, the complexity of these models presents significant challenges in terms of transparency and interpretability, which are critical for trust and accountability in real-world applications. This paper explores the integration of explainable AI techniques to enhance model interpretability within federated learning systems. By incorporating computational geometry, we aim to optimize model structure and decision-making processes, providing clearer insights into how models generate predictions. Additionally, we examine the role of advanced database architectures in managing the complexity of federated learning models on edge devices, ensuring efficient data handling and storage. Together, these approaches contribute to a more transparent, efficient, and scalable framework for federated learning on edge networks, addressing key challenges in both model explainability and performance optimization. This review highlights recent advancements and suggests future directions for research at the intersection of federated learning (FL), edge computing, explainability, and computational techniques.
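As a minimal sketch of the federated averaging step at the heart of FL (size-weighted averaging is standard FedAvg; the weight vectors and dataset sizes below are invented):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    # The server averages client model weights, weighted by local dataset
    # size, without ever seeing the clients' raw training data.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three edge devices hold models trained on 100, 50, and 10 local samples.
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.4])]
sizes = [100, 50, 10]
global_model = fed_avg(clients, sizes)
print(global_model)   # size-weighted average of the three weight vectors
```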

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, Aug 30, 2021
This paper discusses AI and ML in practice to analyse how these technologies affect business operations across industries. Artificial intelligence and machine learning, long confined to theory, are now essential for improving organizational performance, reducing the burden of various tasks, and optimizing decision-making. Reviewing a vast amount of the relevant literature, the author explains the advantages of AI and ML, including higher productivity, lower costs, and higher customer satisfaction, alongside the disadvantages, including poor data quality, employee adaptation, and ethical issues. The paper covers industries such as finance, healthcare, retail, and manufacturing to demonstrate how AI and ML disrupt conventional approaches and unveil innovative sources of competitiveness. It also considers the future of AI and the existing hurdles that inhibit the adoption of these technologies in business.

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023
This comprehensive research paper explores cutting-edge debugging techniques for multi-processor communication in 5G systems. As 5G networks continue to evolve and expand, the complexity of multi-processor communication introduces unique challenges in system debugging and optimization. This study examines various advanced debugging methodologies, including distributed tracing, time-travel debugging, AI-assisted anomaly detection, and hardware-assisted techniques. The research also delves into real-time debugging protocols, security considerations, and performance analysis of these debugging solutions. By synthesizing current literature and industry practices, this paper provides valuable insights into the state-of-the-art debugging approaches for 5G systems and outlines future research directions in this critical field.
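As a minimal sketch of the context-propagation idea behind distributed tracing (hand-rolled here purely for illustration; real 5G deployments would use OpenTelemetry or vendor tooling, and the stage names are invented):

```python
import time, uuid

def traced(stage):
    """Decorator that times a processing stage and tags it with the
    trace ID carried in the shared context."""
    def wrap(fn):
        def inner(ctx, *args):
            span = {"trace": ctx["trace_id"], "stage": stage,
                    "start": time.time()}
            try:
                return fn(ctx, *args)
            finally:
                span["ms"] = (time.time() - span["start"]) * 1000
                print(span)   # in practice: export to a trace collector
        return inner
    return wrap

@traced("baseband-decode")
def decode(ctx, frame):
    return frame[::-1]

@traced("core-forward")
def forward(ctx, frame):
    return decode(ctx, frame)   # context (and trace ID) flows downstream

ctx = {"trace_id": uuid.uuid4().hex}  # one ID follows the packet end to end
forward(ctx, "payload")
```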

International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2020
As this research paper demonstrates, integrating cloud-related architectures into SAP landscapes has revolutionized the approach to optimization. Old approaches to SAP landscape optimization are compared with new cloud-based approaches in terms of scalability, cost-effectiveness, and flexibility. Important new concepts such as virtualization, containerization, and serverless architecture are discussed with regard to their performance characteristics and operational improvements. The paper also identifies issues of implementation strategy and integration and offers recommendations for organizations on sustaining the transition. Projections of forthcoming features of cloud-based SAP environments are considered, giving readers a view of emerging trends and technologies. The paper concludes that implementing cloud-based strategies prepares organizations to take advantage of these developments.