International Research Journal of Engineering and Technology, 2024
In an era where cyber threats are escalating in sophistication and frequency, the need for robust and responsive security measures has never been greater. This paper presents an innovative cyber security detecting and alerting device designed to provide a comprehensive approach to threat detection and mitigation. Our integrated system leverages advanced machine learning algorithms, real-time data analysis, and automated response mechanisms to identify and neutralize potential threats before they can inflict damage. By combining anomaly detection, behavioural analysis, and signature-based techniques, the device ensures multi-layered protection against a wide range of cyber threats. Key features include rapid threat detection, real-time alerts, and automated mitigation processes, all tailored to adapt to evolving security landscapes. The system's effectiveness is demonstrated through rigorous testing in various scenarios, highlighting its capability to safeguard critical infrastructure and sensitive information. This device represents a significant advancement in cyber security, offering enhanced protection and peace of mind for organizations and individuals alike.
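The anomaly-detection layer mentioned in the abstract can be illustrated with a minimal sketch. The function below flags traffic samples whose z-score exceeds a threshold; this is a deliberately simple stand-in (the paper's device uses learned models), and the function name, threshold, and traffic values are illustrative assumptions, not taken from the paper.

```python
import statistics

def detect_anomalies(samples, threshold=2.5):
    """Flag values whose z-score exceeds the threshold.

    A toy stand-in for a learned anomaly detector: real systems would
    combine this with behavioural and signature-based checks.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # all samples identical: nothing can be anomalous
    return [x for x in samples if abs(x - mean) / stdev > threshold]

# Steady request rates with one obvious spike
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 950]
print(detect_anomalies(traffic))  # the spike at 950 is flagged
```

Note that a single extreme outlier inflates the standard deviation itself, which caps the achievable z-score near sqrt(n-1) for small samples; that is why the threshold here is 2.5 rather than the textbook 3.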
Journal of Emerging Technologies and Innovative Research, 2024
The increasing reliance on machine learning models has prompted growing concerns regarding the privacy of sensitive information used in the training process. As a result, using differential privacy techniques has become a viable paradigm for attaining strong privacy preservation without sacrificing the models' usefulness. This study investigates and summarises important approaches in the field of machine learning differential privacy. Adding controlled noise at various points in the machine learning pipeline is the first class of approaches. To avoid unintentionally revealing private information, Laplace and Gaussian noise are deliberately added to training data, predictions, and model parameters. By employing strategies like randomised response mechanisms, data perturbation can enhance privacy for each individual without compromising the model's quality. Collaborative model training is made easier by privacy-preserving aggregation techniques like Secure Multi-Party Computation (SMPC), which protects raw data. By adding noise to gradients during training, Differential Privacy Stochastic Gradient Descent (DP-SGD) provides privacy guarantees during the optimization stage. When differential privacy is combined with federated learning, it allows for decentralized model training across devices while maintaining the security and localization of sensitive data. By allowing computations on encrypted data or safely aggregating model updates, advanced cryptographic approaches like homomorphic encryption and secure aggregation protocols give another degree of privacy. When taken as a whole, these methods add to a thorough framework for machine learning differential privacy that strikes a balance between the need to protect individual privacy and the drive to create accurate models.
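The Laplace noise addition described above can be sketched concretely. The standard Laplace mechanism releases a query result plus noise scaled to sensitivity/epsilon; the function name, seed, and count value below are illustrative assumptions, not from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-differential privacy.

    sensitivity: the most any single individual can change the query result.
    Smaller epsilon -> larger noise scale -> stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_count = 1000  # e.g. number of records matching a query
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # close to 1000, but any one individual's contribution is masked
```

The same scale = sensitivity/epsilon calibration underlies DP-SGD, where the noise is added to clipped per-example gradients rather than to a final query answer.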
Journal of Emerging Technologies and Innovative Research, Dec 2023
In an era of increasing reliance on data-driven insights, the need to balance the pursuit of knowledge with the protection of privacy has become ever more important. This paper surveys the field of privacy-preserving data analytics, in which data obfuscation techniques play a central role in achieving this delicate balance. We present a comprehensive overview of established data perturbation techniques, including Randomized Response, Homomorphic Encryption, and Secure Multi-Party Computation, each designed to obfuscate sensitive data while facilitating meaningful analysis. A comparative analysis highlights the inherent advantages and disadvantages of these privacy protection methods, considering factors such as the level of privacy protection, ease of implementation, impact on data accuracy, and scalability.
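Randomized Response, the first technique listed above, is small enough to sketch end to end: each respondent answers honestly only with some probability, so no individual answer is incriminating, yet the true population rate can be recovered from the aggregate. The function names, probabilities, and 30% true rate below are illustrative assumptions.

```python
import random

def randomized_response(truth, p_truth=0.5, rng=random):
    """Warner-style randomized response for a yes/no question.

    With probability p_truth the respondent answers honestly;
    otherwise they answer with a fair coin flip, giving each
    individual plausible deniability.
    """
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(answers, p_truth=0.5):
    """Debias the aggregate: E[yes] = p_truth * p + (1 - p_truth) * 0.5."""
    observed = sum(answers) / len(answers)
    return (observed - (1 - p_truth) * 0.5) / p_truth

rng = random.Random(42)
true_answers = [i < 300 for i in range(1000)]  # true rate: 30%
reported = [randomized_response(a, rng=rng) for a in true_answers]
print(round(estimate_true_rate(reported), 2))  # estimate near the 30% true rate
```

This is the privacy-versus-accuracy trade-off the comparative analysis refers to: lowering p_truth strengthens each individual's deniability but widens the variance of the debiased estimate.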
International Journal of Research in Advent Technology, 2013
The internet world today stands on the pillars of security principles and cryptography. It is very important to be able to preserve the privacy and confidentiality of critical data. In this paper we address the privacy preservation problem against unauthorized secondary use of information. To do so, we study various data perturbation and reconstruction-based techniques which ensure that the mining process will not violate privacy up to a certain degree of security. This is done by perturbing the data: adding randomization, and transforming the data by translating it, rotating it, and adding noise.
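The translate/rotate/add-noise pipeline described above can be sketched for 2-D numeric attributes. Rotation and translation preserve pairwise Euclidean distances, which is what lets distance-based mining still work on the masked data; the function name, angle, shift, and sample records below are illustrative assumptions, not from the paper.

```python
import numpy as np

def geometric_perturbation(data, angle, shift, noise_scale, rng):
    """Perturb 2-D numeric attributes by rotation, translation, and noise.

    A sketch of the rotation/translation/noise pipeline: the rotation
    and shift hide the raw values, the Gaussian noise adds randomization.
    """
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s],
                         [s,  c]])
    perturbed = data @ rotation.T + shift          # rotate, then translate
    perturbed += rng.normal(scale=noise_scale, size=data.shape)  # add noise
    return perturbed

rng = np.random.default_rng(1)
records = np.array([[30.0, 50000.0],   # e.g. (age, salary) rows
                    [45.0, 72000.0],
                    [28.0, 41000.0]])
masked = geometric_perturbation(records, angle=np.pi / 6,
                                shift=np.array([10.0, -500.0]),
                                noise_scale=1.0, rng=rng)
# Pairwise distances are nearly preserved, so classification structure survives
d_orig = np.linalg.norm(records[0] - records[1])
d_mask = np.linalg.norm(masked[0] - masked[1])
print(d_orig, d_mask)
```

The noise scale controls the privacy/utility trade-off: with noise_scale=0 the perturbation is an exact isometry and distances are preserved perfectly, at the cost of being invertible by anyone who learns the angle and shift.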
International Journal of Engineering Development and Research, 2014
It is very important to be able to extract useful information from huge amounts of data. In this paper we address the privacy problem against unauthorized secondary use of information. To do so, we introduce a family of Geometric Data Transformation Methods (GDTMs) which ensure that the mining process will not violate privacy up to a certain degree of security. We focus primarily on privacy-preserving data classification methods. Our proposed methods distort only sensitive numerical attributes to meet privacy requirements, while preserving general features for classification analysis. Our experiments demonstrate that our methods are effective and provide acceptable values in practice for balancing privacy and accuracy. This paper focuses on Geometric Data Perturbation to analyse large data sets.
International Journal of Computer Trends and Technology, Nov 2013
It is very important to be able to extract useful information from huge amounts of data. In this paper we address the privacy problem against unauthorized secondary use of information. To do so, we introduce a family of Geometric Data Transformation Methods (GDTMs) which ensure that the mining process will not violate privacy up to a certain degree of security. We focus primarily on privacy-preserving data classification methods. Our proposed methods distort only sensitive numerical attributes to meet privacy requirements, while preserving general features for classification analysis. Our experiments demonstrate that our methods are effective and provide acceptable values in practice for balancing privacy and accuracy. This paper focuses on Geometric Data Perturbation to analyze large data sets. Keywords — data mining; privacy preserving; data perturbation; randomization; cryptography; Geometric Data Perturbation
Papers by Keyur Dodiya