In this paper, we discuss how the DMAIC cycle in Six Sigma can be used to raise testing standards in software companies, reduce defects, and keep processes improving. An organization can apply DMAIC (Define, Measure, Analyze, Improve, Control) to define problems, measure defects, analyze root causes, implement corrective actions, and sustain the resulting improvements. Applying DMAIC to software testing allows teams to discover and address bugs more efficiently, making the software more reliable and customers more satisfied. The paper draws on published reports and worked examples to show how DMAIC resolves software quality problems in complex enterprise systems, and it stresses that data-driven decisions and repeated passes through the improvement cycle raise productivity and reduce mistakes. The paper gives software testers and quality managers practical methods for using Six Sigma to achieve fewer defects and continuous improvement, and a visual, diagrammatic approach is used to explain each DMAIC phase for software testing.
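To make the Measure phase concrete, the short Python sketch below computes defects per million opportunities (DPMO) and an approximate sigma level from test results; the function names and example counts are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of the Measure step in DMAIC for software testing:
# compute defects per million opportunities (DPMO) and an approximate
# sigma level from test results. Example numbers are hypothetical.
from scipy.stats import norm

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Approximate process sigma using the conventional 1.5-sigma shift."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

if __name__ == "__main__":
    # e.g. 380 defects found across 1,200 test cases, 25 checks per case
    d = dpmo(defects=380, units=1_200, opportunities_per_unit=25)
    print(f"DPMO: {d:.0f}, sigma level: {sigma_level(d):.2f}")
```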
International Journal of Computer Engineering and Technology (IJCET), 2016
With the onset of digital automation, the Software Development Life Cycle (SDLC) has been transformed from manual scripting to intelligent workflows and automated test pipelines. This article addresses emerging trends in the adoption of tools that automate testing and releasing to improve efficiency and reduce errors. It highlights the benefits of continuous integration and continuous delivery (CI/CD) pipelines, which enable faster, more reliable software deployment. By reviewing research published prior to 2016 and real-world examples, this study shows that orchestrated workflows enhance collaboration, reduce manual overhead, and provide consistency throughout the SDLC. The article also discusses the challenges of embracing automation and outlines the way forward, including the future of robotic process automation (RPA) and advanced orchestration techniques. Graphics convey the evolution from manual to automated processes, highlighting the increasing relevance of intelligent workflows in contemporary software development.
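As a rough illustration of the orchestration idea (not a tool prescribed by the article), the following Python sketch runs test, build, and deploy stages in order and stops at the first failure; the stage commands are hypothetical placeholders.

```python
# Minimal sketch of an orchestrated CI/CD-style pipeline: run stages in
# order and stop at the first failure. Stage commands are hypothetical
# placeholders, not tools prescribed by the article.
import subprocess
import sys

STAGES = [
    ("unit tests", ["pytest", "-q"]),
    ("build", ["python", "-m", "build"]),
    ("deploy", ["bash", "scripts/deploy.sh", "--env", "staging"]),
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting pipeline")
            return result.returncode
    print("pipeline finished successfully")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```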
International Journal of Computer Science and Information Technology Research (IJCSITR), 2019
To ensure robust software delivery, security needs to be integrated into the CI/CD pipeline in agile environments. The paper investigates DevSecOps tools and methods that embed security continuously, including Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST). It shows how policy-as-code helps enforce compliance and the responsibility developers carry to follow secure coding practices. Automated processes make it faster and safer to deliver changes by finding vulnerabilities early and requiring less manual effort, which lowers risk and strengthens the system. Analysis of several industry cases shows that by adding security to agile pipelines, businesses can comply with regulations and speed up development without compromising security. The findings point practitioners to best practices for using DevSecOps to build secure, successful, and compliant software.
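A minimal policy-as-code sketch of the kind of gate described above is shown below, assuming a generic JSON report of SAST findings; the report path and field names are illustrative, not a specific scanner's format.

```python
# Minimal policy-as-code sketch: fail the pipeline if a SAST report
# contains findings above a configured severity. The report path and
# JSON field names are assumptions for illustration, not a specific
# scanner's format.
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_path: str, max_allowed: str = "medium") -> int:
    with open(report_path) as fh:
        findings = json.load(fh)  # expected: [{"id": ..., "severity": ...}, ...]
    threshold = SEVERITY_ORDER[max_allowed]
    blocking = [f for f in findings
                if SEVERITY_ORDER.get(f.get("severity", "low"), 1) > threshold]
    for f in blocking:
        print(f"blocked by finding {f.get('id')} (severity={f.get('severity')})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "sast-report.json"))
```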
International Journal of Computer Science and Engineering Research and Development (IJCSERD), 2018
Adopting a microservices architecture in software development supports modular design, helps applications scale better, and shortens release cycles. This work reviews how Spring Boot supports building microservice back-end systems in the cloud. It begins by comparing monolithic and distributed systems and highlights the limitations that microservices overcome. Drawing on Spring Boot, Spring Cloud, and Spring Security from the Spring ecosystem, the study shares practical methods, proven patterns, and strategies for deploying with Docker and Kubernetes. It examines DevOps adoption with particular attention to continuous integration and delivery pipelines. Special attention is given to modular API design, the ability to handle load changes, and fault isolation. Case studies and scholarly work are presented to explain how Spring Boot makes development faster, maintenance easier, and production scaling better, showing developers and architects how to design adaptable and stable systems with Spring Boot microservices.
European Journal of Advances in Engineering and Technology, 2023
Artificial intelligence (AI) is transforming how we create data visualizations, but a major limitation remains: most AI tools produce generic visuals that ignore cultural differences in interpretation. Colors, symbols, layouts, and even how data is presented can mean different things across cultures, leading to misunderstandings or exclusion. Our research explores how cultural background affects how people understand AI-generated visuals and introduces a new approach to designing adaptive visual analytics systems that respect cultural diversity. Using a combination of methods, including cross-cultural user testing, computational analysis of AI-generated visuals, and designer interviews, we uncover cultural biases in current tools (such as Western-centered color meanings or left-to-right flow assumptions). We then develop and evaluate a prototype AI model that customizes visual elements (like color schemes or legend placement) based on a user's cultural background. Our results show that culturally adapted visuals significantly enhance comprehension and decision-making, especially for non-Western users in critical fields like public health and international business. This paper provides three contributions: (1) it shows that there are cultural barriers in AI visualization tools, (2) it offers a practical way to find and fix cultural bias in automated designs, and (3) it gives clear guidance on developing AI-driven visual analytics that are more inclusive. This approach helps ensure that data is shared fairly and effectively in a globalized society by integrating AI automation with cultural understanding.
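To illustrate the adaptation mechanism only, a minimal Python sketch of locale-driven visual defaults might look like the following; the specific mappings are hypothetical assumptions, not the paper's model.

```python
# Illustrative sketch (not the paper's model): choose chart defaults from a
# user's cultural profile via a simple lookup. The specific mappings are
# hypothetical assumptions used only to show the adaptation mechanism.
from dataclasses import dataclass

@dataclass
class VisualDefaults:
    positive_color: str
    negative_color: str
    reading_direction: str  # "ltr" or "rtl"
    legend_position: str

CULTURAL_DEFAULTS = {
    "en-US": VisualDefaults("green", "red", "ltr", "right"),
    "zh-CN": VisualDefaults("red", "green", "ltr", "bottom"),   # red treated as positive
    "ar-SA": VisualDefaults("green", "red", "rtl", "left"),
}

def defaults_for(locale: str) -> VisualDefaults:
    """Fall back to a neutral default when the locale is unknown."""
    return CULTURAL_DEFAULTS.get(locale, VisualDefaults("blue", "orange", "ltr", "right"))

print(defaults_for("zh-CN"))
```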
International Journal of Cloud Computing (ISCSITR-IJCC), 2022
Businesses today need real-time analytics, but traditional on-premises data warehouses, built for batch processing, often struggle to keep up. In this study, we compare Snowflake's cloud-native streaming (powered by Snowpipe and dynamic scaling) with on-premises systems like Oracle and SQL Server, focusing on latency-sensitive use cases. Through controlled experiments simulating high-speed data streams (such as IoT sensors and financial transactions), we evaluate query latency, throughput, and resource efficiency across different workloads. Our early findings show that Snowflake dramatically cuts latency for real-time processing compared to batch-optimized on-premises solutions, though at higher cost during peak demand. Interestingly, we also pinpoint scenarios where on-premises systems still outperform Snowflake, particularly in predictable, large-scale batch operations. This research offers practical guidance for companies transitioning from legacy batch systems to cloud-based real-time analytics, helping them choose the right architecture for their needs.
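A generic sketch of the kind of latency measurement used in such comparisons is shown below; the workload is a placeholder callable, not an actual Snowflake or Oracle client call.

```python
# Generic latency-benchmark sketch: time a pluggable query function over many
# iterations and report percentiles. The query function is a placeholder,
# not a specific Snowflake or on-premises client call.
import statistics
import time
from typing import Callable, List

def benchmark(run_query: Callable[[], None], iterations: int = 200) -> dict:
    latencies_ms: List[float] = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_query()
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()
    return {
        "p50_ms": statistics.median(latencies_ms),
        "p95_ms": latencies_ms[int(0.95 * len(latencies_ms)) - 1],
        "max_ms": latencies_ms[-1],
    }

if __name__ == "__main__":
    # Stand-in workload; replace with a real client call in an experiment.
    print(benchmark(lambda: sum(i * i for i in range(50_000))))
```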
International Journal of Computer Science and Engineering Research and Development (IJCSERD), 2022
Small and medium-sized enterprises (SMEs) are increasingly adopting Amazon Web Services (AWS) for its operational benefits, but its broader influence on innovation is still not well understood. This study explores how AWS affects SME innovation in two key ways: through measurable performance gains (like faster product launches and more frequent updates) and through less tangible, yet equally important, cultural shifts (such as improved teamwork and creative problem-solving). Using a combination of surveys, financial data, and interviews with SME leaders, we found that AWS doesn't just cut costs and speed up development; it also encourages a more agile and experimental workplace culture. Our findings suggest that while AWS delivers clear efficiency improvements, its deeper value may lie in transforming how SMEs approach innovation. The study offers a practical framework for SMEs to evaluate cloud computing's role in their growth, helping them make strategic decisions about technology adoption.
Background: The adoption of AI in healthcare is hindered by data silos and privacy constraints, with heterogeneous sources like EHRs, wearables, and IoT devices operating in isolated ecosystems. Traditional centralized AI models fail to address interoperability while complying with regulations like HIPAA and GDPR. Federated learning (FL) has emerged as a promising decentralized alternative, but its adaptation for cross-platform healthcare data harmonization remains underexplored. Objective: This study proposes an adaptive FL framework to bridge interoperability gaps in healthcare AI, enabling secure collaboration across disparate data sources without centralized data pooling. We aim to (1) design a privacy-preserving FL architecture for heterogeneous medical data, (2) validate its interoperability using real-world EHR and wearable datasets, and (3) quantify compliance with regulatory requirements. Methods: The framework combines adaptive federated multi-task learning (FMTL) with differential privacy (DP) across EHR, wearable, and IoT-health data. We evaluate the system using two public datasets: (i) MIMIC-III (EHRs) and (ii) PPG-DaLiA (wearable photoplethysmography), simulating a multi-device environment. Interoperability is tested via schema mapping tools (e.g., FHIR standards), while privacy guarantees are audited using GDPR-specific metrics (e.g., data minimization, user consent logs). Results: The framework achieved 88.7% harmonization accuracy (vs. 72.3% for centralized baselines) across EHR-wearable data fields, reducing semantic heterogeneity by 41% through adaptive FMTL. DP noise injection (ε ≤ 1.5) maintained model utility (AUC drop < 3.2%) while ensuring compliance with HIPAA's de-identification criteria. Comparative analysis showed 23% faster convergence than conventional FL, attributed to dynamic client selection for heterogeneous devices [1]. Regulatory audits confirmed adherence to GDPR's Article 35 (DPIA) and HIPAA's Safe Harbor rule. Conclusion: Adaptive FL can effectively harmonize fragmented healthcare data while preserving privacy and regulatory compliance.
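For readers unfamiliar with the aggregation step, the following generic Python sketch shows one federated-averaging round in which clipped client updates are averaged and Gaussian noise is added to the aggregate; it illustrates privacy-preserving aggregation under assumed clipping and noise parameters, and is not the paper's FMTL framework.

```python
# Generic sketch of one federated-averaging round: client updates are
# clipped, averaged, and perturbed with Gaussian noise before being applied
# to the global model. Clipping norm and noise scale are illustrative
# assumptions, not the paper's settings.
import numpy as np

def clip_update(update, max_norm):
    """Scale an update down so its L2 norm does not exceed max_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def federated_round(global_weights, client_weights, clip_norm=1.0, noise_std=0.1, seed=0):
    """Average clipped client updates, add noise, and apply to the global model."""
    rng = np.random.default_rng(seed)
    updates = [clip_update(w - global_weights, clip_norm) for w in client_weights]
    mean_update = np.mean(updates, axis=0)
    noised = mean_update + rng.normal(0.0, noise_std, size=mean_update.shape)
    return global_weights + noised

if __name__ == "__main__":
    g = np.zeros(4)
    clients = [g + np.array([0.2, -0.1, 0.05, 0.3]),
               g + np.array([0.1, 0.0, 0.1, 0.25])]
    print(federated_round(g, clients))
```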
International Journal of Scientific Research in Artificial Intelligence and Machine Learning (ISCSITR-IJSRAIML), 2025
Automated feature engineering (AutoFE) has become a cornerstone of efficient machine learning (ML), yet its potential to perpetuate or amplify bias remains underexplored. This paper proposes a fairness-aware framework for feature transformation, addressing how AutoFE tools, while optimizing for model performance, may inadvertently encode discriminatory patterns into derived features. Drawing on the work of Ferrario et al. (2022) on bias propagation in ML pipelines and the foundational discrimination-aware data mining methods of Kamiran & Calders (2019), we first demonstrate that common AutoFE techniques (e.g., feature synthesis, aggregation) can systematically marginalize underrepresented groups by reinforcing spurious correlations. We then introduce FairFeature, a novel framework that integrates bias metrics (e.g., demographic parity, equalized odds) directly into the feature generation process. Unlike post-hoc fairness adjustments (e.g., adversarial debiasing), FairFeature proactively constrains feature transformations using fairness-aware optimization, ensuring that engineered features meet both predictive utility and equity criteria. Empirical evaluations on real-world datasets (e.g., UCI Adult, COMPAS) reveal that AutoFE without fairness constraints increases disparity by up to 22% in model outcomes, while FairFeature reduces bias by 35-60% with accuracy trade-offs under 5%. Our work bridges critical gaps between data engineering and algorithmic fairness, offering practitioners a scalable tool to mitigate hidden biases at the feature level. We further release an open-source library implementing FairFeature to foster adoption.
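As a simplified illustration of constraining feature generation with a fairness metric (not FairFeature's actual optimization), the sketch below screens a candidate feature by the demographic parity gap of a crude thresholded rule built from it.

```python
# Minimal sketch of a fairness screen during feature generation: keep a
# candidate feature only if the demographic parity gap of a simple
# thresholded rule built from it stays within a tolerance. This shows the
# idea of constraining transformations, not FairFeature's actual method.
import numpy as np

def demographic_parity_gap(pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rate between two groups."""
    rate_a = pred[group == 0].mean()
    rate_b = pred[group == 1].mean()
    return abs(rate_a - rate_b)

def accept_feature(feature: np.ndarray, group: np.ndarray, max_gap: float = 0.1) -> bool:
    pred = (feature > np.median(feature)).astype(int)  # crude proxy decision rule
    return demographic_parity_gap(pred, group) <= max_gap

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    group = rng.integers(0, 2, size=1000)
    biased_feature = rng.normal(loc=group * 0.8, scale=1.0)  # correlates with group
    print("accepted:", accept_feature(biased_feature, group))
```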
International Journal of AI, Big Data, Computational and Management Studies, 2024
The increasing emphasis on data-centric AI has highlighted the need for systematic approaches to manage evolving datasets in machine learning (ML) pipelines. While ML experiment tracking tools like MLflow and Weights & Biases (W&B) excel at versioning models and hyperparameters, they lack robust mechanisms for tracking dataset iterations, such as corrections, augmentations, and subset selections, that are critical in data-centric workflows. This paper bridges this gap by proposing a framework that extends existing ML experiment tracking paradigms to support data versioning, enabling reproducibility, auditability, and iterative refinement in data-centric AI. We draw inspiration from two key works: (1) "Dataset Versioning for Machine Learning: A Survey" (2023), which formalizes the challenges of dataset evolution tracking, and (2) "DataFed: Towards Reproducible Deep Learning via Reliable Data Management" (2022), which introduces a federated data versioning system for large-scale ML. Our framework adapts these principles to integrate seamlessly with popular ML tracking tools, introducing data diffs (fine-grained change logs), provenance graphs (to track transformations), and conditional triggering (to automate pipeline stages based on data updates). We evaluate our approach on three real-world case studies: (a) a financial fraud detection system where transaction datasets are frequently revised, (b) a medical imaging pipeline with iterative label corrections, and (c) a recommendation engine with dynamic user feedback integration. Results show that our method reduces dataset reproducibility errors by 62% compared to ad-hoc versioning (e.g., manual CSV backups) while adding minimal overhead (<5% runtime penalty) to existing ML workflows. Additionally, we demonstrate how our framework enables data debugging by tracing model performance regressions to specific dataset changes, a capability absent in current model-centric tools. This work contributes: (1) a methodology for adapting ML experiment trackers to handle dataset versioning, (2) an open-source implementation compatible with MLflow and W&B, and (3) empirical validation of its benefits across diverse domains. Our findings advocate for treating data as a first-class artifact in ML pipelines, aligning with the broader shift toward data-centric AI.
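A minimal sketch of the data-diff idea is shown below, assuming row-level hashing with pandas; the field names are illustrative, not the framework's actual schema.

```python
# Minimal sketch of a "data diff" between two dataset versions: hash each
# row, compare the sets of row hashes, and build a summary an experiment
# tracker could store alongside a run. Field names are illustrative only.
import hashlib
import pandas as pd

def row_hashes(df: pd.DataFrame) -> set:
    return set(pd.util.hash_pandas_object(df, index=False).astype(str))

def dataset_diff(old: pd.DataFrame, new: pd.DataFrame) -> dict:
    old_h, new_h = row_hashes(old), row_hashes(new)
    fingerprint = hashlib.sha256("".join(sorted(new_h)).encode()).hexdigest()[:12]
    return {
        "rows_added": len(new_h - old_h),
        "rows_removed": len(old_h - new_h),
        "new_version_fingerprint": fingerprint,
    }

if __name__ == "__main__":
    v1 = pd.DataFrame({"amount": [10, 25, 40], "label": [0, 1, 0]})
    v2 = pd.DataFrame({"amount": [10, 25, 45], "label": [0, 1, 1]})  # one row corrected
    print(dataset_diff(v1, v2))
```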
As artificial intelligence (AI) and machine learning (ML) systems increasingly drive real-time decision-making in industries such as finance, healthcare, and autonomous systems, the need for robust yet agile governance mechanisms has become critical. Traditional compliance frameworks often struggle to keep pace with the dynamic nature of real-time ML pipelines, leading to either regulatory gaps or performance bottlenecks. This paper explores the viability of metadata-driven automation as a way to enforce governance without compromising the speed and efficiency of AI/ML workflows. Drawing on recent advancements in automated metadata management, we analyze two pivotal studies from the past five years: (1) "Automating Data Lineage and Compliance in Machine Learning Pipelines" (Zhang et al., 2021), which proposes a real-time metadata tracking system to enforce GDPR and HIPAA compliance without manual intervention, and (2) "Dynamic Policy Enforcement for Streaming ML Models" (Kumar et al., 2023), which introduces an adaptive governance layer that adjusts access controls and bias mitigation strategies based on live data streams. Our research synthesizes findings from these works to evaluate whether metadata automation can effectively balance regulatory demands with computational efficiency. Key challenges include latency introduced by runtime policy checks, scalability across distributed systems, and the interpretability of automated governance decisions. We also examine emerging solutions such as federated metadata repositories and lightweight cryptographic auditing to minimize overhead. The paper concludes with a framework for implementing smart governance in real-world ML pipelines, offering best practices for industries requiring both high-speed inference and strict compliance. Empirical evidence suggests that metadata-driven automation can reduce governance-related latency by up to 40% compared to traditional methods, though its success depends on careful architectural integration.
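To show what a runtime metadata policy check might look like in practice, here is a hedged Python sketch; the tag names and rules are assumptions for illustration, not drawn from the cited studies.

```python
# Minimal sketch of a metadata-driven policy check at inference time: the
# pipeline consults record metadata (consent flag, region, sensitivity tags)
# before a model is allowed to score the record. Rules and tag names are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RecordMetadata:
    consent_given: bool
    region: str
    tags: set = field(default_factory=set)

def policy_allows(meta: RecordMetadata) -> bool:
    if not meta.consent_given:
        return False                      # GDPR-style consent requirement
    if meta.region == "EU" and "profiling" in meta.tags:
        return False                      # example region-specific restriction
    return True

def governed_predict(model, features, meta: RecordMetadata):
    if not policy_allows(meta):
        raise PermissionError("blocked by governance policy; decision logged for audit")
    return model(features)

if __name__ == "__main__":
    toy_model = lambda x: sum(x) > 1.0
    ok = RecordMetadata(consent_given=True, region="US")
    print(governed_predict(toy_model, [0.4, 0.9], ok))
```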
Journal of Scientific and Engineering Research, 2023
As AI-powered surveillance becomes increasingly ubiquitous in retail security, it is important to understand how people from different cultures come to trust these systems. Previous research has mostly examined rules and ethics, but little has examined how cultural values shape how people feel about AI monitoring in stores. This study looks at how cultural variations, such as attitudes toward privacy and individualism versus collectivism, affect confidence in AI surveillance systems. Combining large-scale surveys in the U.S., Germany, Japan, and China with in-depth interviews, we examine how trust levels differ and which cultural characteristics make people more or less likely to accept monitoring. Our results demonstrate striking differences: collectivist societies are more likely to accept AI surveillance but worry about data misuse, while individualist cultures want openness and consent. We also discuss how stores might adapt AI security procedures to cultural norms, giving them practical guidance for earning customers' trust. This study helps design more culturally conscious AI strategies in global retail security by linking AI ethics to how people behave in different cultures.
International Journal of Science and Research (IJSR), 2021
AI models used for fraud detection are constantly updated to tackle new threats, but their explanation methods often stay static, leading to outdated or misleading interpretations. This research explores how adaptive explainable AI (XAI) can generate real-time, accurate explanations that evolve alongside the models they describe. We introduce a framework for self-updating narrative generation, combining retrieval-augmented generation (RAG) and meta-learning to ensure explanations stay aligned with the latest model behavior and emerging fraud patterns. Testing on real-world transaction data, we compare adaptive narratives against traditional static explanations, measuring robustness, response time, and user understanding. Our results show that adaptive XAI not only preserves transparency in fast-changing fraud environments but also builds stronger trust among users, auditors, and regulators. This work offers a practical solution for real-time interpretability in AI-driven fraud detection, a critical need for deployable, trustworthy systems.
International Journal of Science and Research (IJSR), 2022
Today's distributed systems depend heavily on machine learning (ML) to predict and recover from faults, but the "black-box" nature of many ML models makes them hard to trust and understand. To tackle this, we present a new approach that blends interpretable ML (IML) methods, such as SHAP, LIME, and rule-based models, into adaptive fault tolerance systems. Unlike traditional methods that focus only on accuracy, our framework not only predicts failures effectively but also explains why they happen in a way humans can grasp. We built a hybrid system that pairs real-time ML fault detection with explainable decision-making, helping system operators trust and act on AI-driven insights. Testing on the Parallel Distributed Task Infrastructure (PDTI), our method cuts false alarms by 30% compared to deep learning models while maintaining over 95% recovery accuracy across different failure scenarios. We also explore the balance between explainability and computational cost, giving practical advice for using explainable AI (XAI) in time-sensitive systems. This research closes the gap between fully automated resilience and human oversight, making distributed systems more transparent and reliable, especially in large-scale, dynamic environments.
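The following sketch illustrates SHAP-based explanation of a fault predictor trained on synthetic node-health features; the feature names and data are hypothetical, and the paper's PDTI integration is not reproduced here.

```python
# Illustrative sketch of explainable fault prediction: train a random forest
# on synthetic "node health" features and use SHAP to show which features
# drive failure predictions. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FEATURES = ["cpu_load", "mem_pressure", "disk_errors", "heartbeat_gap_s"]

# Synthetic training data: failures correlate with disk errors and missed heartbeats.
X = rng.normal(size=(2000, len(FEATURES)))
y = ((X[:, 2] > 0.8) | (X[:, 3] > 1.2)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])
# Older shap versions return a list per class; newer ones return a single array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
if vals.ndim == 3:                      # (samples, features, classes)
    vals = vals[:, :, 1]

importance = np.abs(vals).mean(axis=0)
for name, score in sorted(zip(FEATURES, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {score:.3f}")
```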
International Journal of Science and Research (IJSR), 2021
Distributed computing frameworks like Apache Spark and Kubernetes face constant challenges in dynamic, failure-prone environments, yet most fault tolerance approaches remain rigid and tailored to specific platforms. Recent innovations, such as the Parallel Distributed Task Infrastructure (PDTI), have introduced adaptive fault tolerance using real-time monitoring and machine learning, but their effectiveness across different systems is still unclear. In this paper, we explore how PDTI's adaptive fault tolerance can be extended to major distributed frameworks like Spark and Kubernetes. We identify the key architectural and algorithmic adjustments needed for smooth integration and propose a cross-platform adaptation layer. This layer retains the core advantages of dynamic failure prediction and task redistribution while adapting to each framework's unique scheduling, communication, and recovery models. Through extensive experiments on Spark (for batch processing) and Kubernetes (for container orchestration), we assess performance, resilience, and overhead. Our results show up to 40% faster fault recovery and 15% higher throughput compared to native fault tolerance methods, without significant resource costs. These findings pave the way for universally adaptable fault tolerance in heterogeneous distributed systems, bridging the gap between specialized and general-purpose resilience solutions.
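A hypothetical sketch of such an adaptation layer is given below as a common Python interface with stub Spark and Kubernetes adapters; the method names and metrics are assumptions for illustration, not PDTI's actual API.

```python
# Hypothetical sketch of a cross-platform adaptation layer: a common
# interface for health monitoring and task redistribution, with
# framework-specific adapters behind it. Method names are assumptions.
from abc import ABC, abstractmethod
from typing import Dict, List

class FaultToleranceAdapter(ABC):
    @abstractmethod
    def collect_metrics(self) -> Dict[str, float]:
        """Return framework-specific health metrics in a normalized form."""

    @abstractmethod
    def redistribute(self, tasks: List[str], healthy_nodes: List[str]) -> None:
        """Move the given tasks onto healthy nodes using native mechanisms."""

class SparkAdapter(FaultToleranceAdapter):
    def collect_metrics(self) -> Dict[str, float]:
        return {"executor_failures": 0.0, "stage_retry_rate": 0.02}  # stub values

    def redistribute(self, tasks, healthy_nodes) -> None:
        print(f"re-scheduling {len(tasks)} Spark tasks on {healthy_nodes}")

class KubernetesAdapter(FaultToleranceAdapter):
    def collect_metrics(self) -> Dict[str, float]:
        return {"pod_restarts": 1.0, "node_pressure": 0.1}  # stub values

    def redistribute(self, tasks, healthy_nodes) -> None:
        print(f"re-binding {len(tasks)} pods to nodes {healthy_nodes}")

def failure_risk(metrics: Dict[str, float]) -> float:
    """Placeholder for the ML-based failure predictor."""
    return sum(metrics.values()) / (len(metrics) or 1)

for adapter in (SparkAdapter(), KubernetesAdapter()):
    m = adapter.collect_metrics()
    if failure_risk(m) > 0.1:
        adapter.redistribute(["task-a", "task-b"], ["node-3", "node-7"])
```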
European Journal of Advances in Engineering and Technology, 2023
Streaming pipelines must handle datasets whose contents and schemas change quickly in today's fast-paced, data-driven world. When unexpected schema changes occur, traditional schema management methods that rely on static rules or manual updates typically fail, which can break pipelines or corrupt data. To solve this problem, we present AutoSchema, a self-supervised learning framework that detects and repairs schema drift in real-time data streams without human intervention. AutoSchema employs a two-part neural approach: (1) a drift detection module that identifies schema anomalies via contrastive learning, substantially reducing false alarms compared to older rule-based systems, and (2) a dynamic schema adapter that uses graph-based metadata learning to rebuild and validate new schema mappings on the fly, preventing pipeline failures. We applied AutoSchema to a variety of streaming datasets, including IoT sensors, financial transactions, and log analytics, and found that it detects schema drift with 98.3% accuracy (22% better than rule-based methods), adapts in under a second to keep data flowing, and recovers 70% faster than the best existing schema evolution tools. Our results show that AutoSchema makes pipelines more resilient to schema changes while keeping costs low, making it well suited to large-scale applications. This study fills a significant gap in autonomous data management by giving businesses a dependable way to handle dynamic streaming environments.
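As a rule-based stand-in that shows the problem shape (not AutoSchema's contrastive-learning detector), the sketch below infers a field-to-type signature per batch of JSON records and diffs it against the last known schema.

```python
# Simple illustrative sketch of schema-drift detection on a stream of JSON
# records: infer a field->type signature per batch and diff it against the
# last known schema. A rule-based stand-in, not AutoSchema's neural approach.
from typing import Any, Dict, List

def infer_schema(records: List[Dict[str, Any]]) -> Dict[str, str]:
    schema: Dict[str, str] = {}
    for rec in records:
        for key, value in rec.items():
            schema[key] = type(value).__name__
    return schema

def detect_drift(reference: Dict[str, str], batch: List[Dict[str, Any]]) -> Dict[str, list]:
    current = infer_schema(batch)
    return {
        "added": sorted(set(current) - set(reference)),
        "removed": sorted(set(reference) - set(current)),
        "type_changed": sorted(k for k in set(current) & set(reference)
                               if current[k] != reference[k]),
    }

if __name__ == "__main__":
    reference = {"device_id": "str", "temp_c": "float"}
    new_batch = [{"device_id": "a1", "temp_c": "21.5", "fw_version": "3.2"}]  # temp arrived as a string
    print(detect_drift(reference, new_batch))
```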
European Journal of Advances in Engineering and Technology, 2024
Artificial intelligence (AI) is revolutionizing cybersecurity, enabling faster threat detection and proactive defense mechanisms. However, the adoption of AI-driven security solutions varies dramatically between public-sector institutions (e.g., government agencies, universities) and private-sector organizations (e.g., financial technology firms, e-commerce platforms). These differences stem from contrasting priorities, regulatory landscapes, and infrastructural capabilities. This study examines the key challenges each sector faces when integrating AI into its security frameworks. For private businesses, speed and scalability are critical, yet budget limits and legal concerns (such as GDPR and PCI-DSS) complicate adoption; public institutions, meanwhile, contend with bureaucratic inertia, legacy IT systems, and the need for AI decision-making to be transparent. Using a mixed-methods approach that includes real-world case studies, interviews with cybersecurity experts, and performance benchmarking, we identify sector-specific problems and suggest solutions suited to each. Our results show a clear split: private companies adopt AI more quickly but typically lack adequate governance safeguards, while public organizations put accountability first and therefore take longer to implement. To bridge this gap, we present an adaptive framework that tailors AI security solutions to the needs of each sector: for private companies, modular, cloud-based AI technologies that are affordable and can grow with their needs; for the public sector, incremental modernization and policy-driven AI governance that maintains public trust while improving safety. This study gives cybersecurity professionals, policymakers, and IT administrators practical guidance for dealing with the problems that arise when integrating AI. By understanding these sector differences, organizations can build security systems that are more effective and context-aware, strengthening defenses in a world where threats are increasingly AI-driven.
European Journal of Advances in Engineering and Technology, 2024
As blockchain technology becomes more widely used for document verification, its high energy consumption, especially in traditional Proof-of-Work (PoW) systems, poses a major sustainability challenge. While blockchain ensures tamper-proof records, its environmental footprint cannot be ignored. This study investigates hybrid consensus models that merge Proof-of-Stake (PoS) with off-chain verification to create a more sustainable yet secure document authentication system. We introduce a "Green Blockchain" framework that drastically cuts energy use by relying on PoS for on-chain security while shifting resource-heavy verification tasks to off-chain networks. Our work examines the balance between energy efficiency, decentralization, and resistance to tampering, comparing hybrid approaches with standard PoW and pure PoS systems. We also assess real-world feasibility in industries like finance, healthcare, and legal documentation, where security and compliance are non-negotiable. Key contributions of this research include: (1) a comparative study of energy consumption in hybrid versus traditional blockchain verification systems; (2) a security evaluation of PoS-based document authentication against forgery and Sybil attacks; and (3) a scalability analysis using off-chain batch processing to ease on-chain congestion and lower energy demands. Our experiments show that the hybrid model reduces energy usage by over 70% compared to PoW-based systems while keeping security intact. These findings help organizations adopt eco-conscious blockchain solutions without sacrificing verification reliability, enabling greener digital trust infrastructures that link sustainability with decentralization.
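To illustrate the off-chain batching idea, the sketch below builds a Merkle tree over a batch of document hashes so that only the root would need on-chain anchoring, while individual documents are checked off-chain with a short proof; it is a generic construction assumed for illustration, not the paper's framework.

```python
# Generic sketch of off-chain batch verification: hash a batch of documents
# into a Merkle tree so only the root needs anchoring on-chain; individual
# documents are verified off-chain with a short membership proof.
import hashlib
from typing import List, Tuple

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves: List[bytes], index: int) -> Tuple[bytes, List[Tuple[bytes, bool]]]:
    """Return the Merkle root and a proof (sibling hash, sibling_is_right) for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof: List[Tuple[bytes, bool]] = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])            # duplicate last node on odd levels
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf: bytes, proof: List[Tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

docs = [b"contract-001", b"invoice-042", b"deed-007", b"policy-113"]
root, proof = merkle_root_and_proof(docs, index=1)
print("valid:", verify(b"invoice-042", proof, root))
print("tampered:", verify(b"invoice-042-altered", proof, root))
```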
Journal of Scientific and Engineering Research, 2023
As digital payment systems evolve quickly, governments throughout the world are introducing rules to make them safer, foster innovation, and gain customers' trust. But do these rules work the same way in every market? This study examines how digital payment regulation plays out differently in bank-led markets (like the EU and the US) and fintech-led markets (like China and Kenya). We examine how traditional banks and fintech companies react to regulatory changes using case studies, regulatory impact assessments, and adoption data. The results reveal that bank-led markets adapt to regulation more slowly but more stably, while fintech-led markets adapt quickly but may struggle with compliance risks and regulatory loopholes. Our research argues for a regulatory framework that takes market structure into account and adjusts policy accordingly: flexible licensing and innovation-friendly policies would suit fintech-driven markets, while phased compliance approaches would help traditional banking institutions. Policymakers can better balance innovation and stability in digital payments by matching their rules to how the market actually works. Keywords: digital payments, fintech regulation, financial ecosystems, market structure, adaptive policy, regulatory efficacy.
Journal of Scientific and Engineering Research, 2024
As more businesses use generative artificial intelligence (AI) in business-to-business (B2B) sales, important ethical issues arise, especially around bias, transparency, and fairness. Current ethical guidelines offer general advice but do not always account for the specific problems faced by different fields, such as healthcare, manufacturing, and technology. This study presents a customizable, industry-specific way to create ethical guidelines for using generative AI in B2B sales that balances fairness with organizational demands. Using a mixed-methods approach that includes a thorough literature review, cross-industry case studies, and expert interviews, we identify key ethical hazards and tailored mitigations for different sectors. Our research shows that industries differ in their regulations, privacy concerns, and sales negotiation practices, which means AI governance techniques need to be flexible. We offer a practical, practitioner-validated framework for embedding ethical AI in B2B sales. This study is useful for policymakers, AI engineers, and sales professionals who want to use AI responsibly, since it links theoretical ethical principles to real-world situations.