Papers by Claudia Feregrino


Software radios are communication devices whose configurations can be changed, enabling them to operate in different communication networks. Considering the OSI model, the main development effort in these radios is focused on the lower layers, which are implemented in hardware. Security is a key element for software radios because they can join different wireless networks and use the air as the transmission medium, leaving data transmissions vulnerable to attacks. Several security architectures have been standardized for different networks, such as IEEE 802.11i-2004 for WLANs (Wireless Local Area Networks) and IEEE 802.16e-2005 for WMANs (Wireless Metropolitan Area Networks), both operating at the MAC (Medium Access Control) sublayer. In this work, hardware implementations of these architectures are evaluated in terms of FPGA implementation costs and performance, to be considered in a reconfigurable hardware platform that supports both security architectures, working on the MAC...
Lecture Notes in Business Information Processing, 2019
False ownership claims are carried out through additive and invertibility attacks and, as far as we know, current relational watermarking techniques are not always able to resolve the ownership doubts raised by the latter attacks. In this paper, we focus on additive attacks. We extend a conventional image-based relational data watermarking scheme by creating a non-colluded backup of the data owner's marks, the so-called secondary mark positions. The technique we propose is able to identify the data owner beyond any doubt.

Information hiding techniques have long been used to pass secret messages unnoticed, but nowadays they also serve to prove ownership of digital assets. The growth of internet services has made illegal or unauthorized copies of datasets easy to obtain, so piracy is rampant. With watermarking emerging as a tool for ownership proof, traitor tracing, and related tasks, several techniques have been proposed for multimedia data, but far fewer for relational data. Because of the differences between these data types, a different approach is needed when conceiving watermarking schemes for relational data, one that also deals with the new problems that have emerged. With our research, we seek to develop a robust technique, based on meaningful signals, for watermarking relational data. The watermark must be resilient against common updates, but it must also be resilient against bit-level attacks that try to destroy it.

PLOS ONE, 2020
Several areas, such as the physical and health sciences, require the use of matrices as fundamental tools for solving various problems. Matrices are used in real-life contexts such as control, automation, and optimization, where results are expected to improve as computational precision increases. However, special attention should be paid to ill-conditioned matrices, which can produce unstable systems; inadequate handling of precision might worsen results, since the solution found for data with errors might be far from the one for data without errors, besides increasing other costs in hardware resources and critical paths. In this paper, we make a wake-up call, using 2 × 2 matrices to show how ill-conditioning and precision can affect system design (resources, cost, etc.). We first demonstrate some examples of real-life problems where ill-conditioning is present in matrices obtained from the discretization of the operational equations (ill-posed in the sense of Hadamard) that model these problems. If these matrices are not handled appropriately (i.e., if ill-conditioning is not considered), large errors can result in the solutions computed for the systems of equations in the presence of errors. Furthermore, we illustrate the effect on the calculation of the inverse of an ill-conditioned matrix when its elements are approximated by truncation. We present two case studies to illustrate the effects on calculation errors caused by increasing or reducing precision to s digits. To illustrate the costs, we implemented the adjoint matrix inversion algorithm on different field-programmable gate arrays (FPGAs), namely Spartan-7, Artix-7, Kintex-7, and Virtex-7, using the full-unrolling hardware technique.
The implemented architecture is useful for analyzing trade-offs when precision is increased; it also helps analyze performance, efficiency, and energy consumption. By means of a detailed description of the trade-offs among these metrics, concerning precision and ill-conditioning, we conclude that the need for resources appears to grow nonlinearly as precision is increased. We also conclude that, if error is to be reduced below a certain threshold, it is necessary to determine an optimal precision point. Otherwise, the system becomes more sensitive to measurement errors and a better alternative would be to choose precision
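As a small illustration of the truncation effect described above (the matrix and digit counts below are hypothetical examples, not the paper's case studies), a few lines of Python show how a roughly 0.01% truncation error in one entry of a near-singular 2 × 2 matrix shifts the adjoint-formula inverse by about 20%:

```python
from math import floor, log10

def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the adjoint (adjugate) formula."""
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def truncate(x, s):
    """Keep s significant digits by truncation (no rounding)."""
    if x == 0:
        return 0.0
    k = s - 1 - floor(log10(abs(x)))
    return floor(abs(x) * 10 ** k) / 10 ** k * (1 if x > 0 else -1)

# Hypothetical near-singular matrix: truncating 1.0012 to four digits (1.001)
# perturbs one entry by ~0.01% but moves the leading entry of the inverse ~20%.
a, b, c, d = 1.0, 1.0, 1.0, 1.0012
exact = inv2x2(a, b, c, d)                                  # exact[0][0] ~ 834.3
approx = inv2x2(*(truncate(v, 4) for v in (a, b, c, d)))    # approx[0][0] ~ 1001
print(round(exact[0][0], 1), round(approx[0][0], 1))
```

The amplification factor is the condition number of the matrix; the closer the determinant is to zero, the more digits of precision the inverse demands.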

IEEE Access, 2020
Relational data watermarking techniques using virtual primary key schemes try to avoid compromising watermark detection due to the deletion or replacement of the relation's primary key. Nevertheless, these techniques face limitations stemming from the high redundancy of the generated set of virtual primary keys, which often compromises the quality of the embedded watermark. As a solution to this problem, this paper proposes double fragmentation of the watermark, exploiting the existing redundancy in the set of virtual primary keys. In this way, we guarantee correct identification of the watermark despite the deletion of any of the relation's attributes. The experiments carried out to validate our proposal show an increase of between 81.04% and 99.05% in detected marks with respect to previous solutions found in the literature. Furthermore, we found that our approach takes advantage of the redundancy present in the set of virtual primary keys. Concerning the computational complexity of the solution, we performed a set of scalability tests that show the linear behavior of our approach with respect to process runtime and the number of tuples involved, making it feasible to use regardless of the amount of data to be protected.
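For readers unfamiliar with virtual primary keys, the sketch below shows one common construction of this kind: a keyed hash over the most significant part of selected attributes stands in for the real primary key. The attribute names, secret and parameters are illustrative assumptions, not the authors' exact scheme, but the collision in the output is precisely the redundancy problem discussed above:

```python
import hashlib
import hmac

SECRET = b"owner-secret"   # hypothetical watermark key known only to the data owner

def vpk(row, attrs, msb_chars=2):
    """Build a virtual primary key from the leading characters of chosen attributes."""
    material = "|".join(str(row[a])[:msb_chars] for a in attrs)
    return hmac.new(SECRET, material.encode(), hashlib.sha256).hexdigest()[:8]

rows = [
    {"name": "Alice", "city": "Puebla", "age": 30},
    {"name": "Alina", "city": "Puebla", "age": 41},   # same leading chars as Alice
    {"name": "Bruno", "city": "Oaxaca", "age": 25},
]
keys = [vpk(r, ["name", "city"]) for r in rows]
print(keys[0] == keys[1])   # True: redundant VPKs, the problem the paper mitigates
print(keys[0] == keys[2])   # False
```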

PLOS ONE, 2018
Self-recovery schemes identify and restore tampering, using as a reference a compressed representation of the signal embedded into itself. In addition, audio self-recovery must comply with a transparency threshold adequate for applications such as online music distribution or speech transmission. In this manuscript, an audio self-recovery scheme is proposed. Auditory masking properties of the signals are used to determine the frequencies that best mask the embedding distortion. Frequencies in the Fourier domain are mapped to the intDCT domain for embedding and extraction of the reference bits used for signal restoration. The contribution of this work is the use of auditory masking properties for the frequency selection and the mapping to the intDCT domain. Experimental results demonstrate that the proposed scheme satisfies a threshold of −2 ODG, suitable for audio applications. The efficacy of the scheme, in terms of its restoration capabilities, is also shown.
Expert Systems with Applications, 2019
This work proposes metrics that allow precise measurement of the quality of the Virtual Primary Keys (VPK) generated by any VPK scheme proposed so far, without requiring the watermark embedding to be performed, so time is not wasted on embeddings doomed to low-quality detection. We also analyze the main aspects of designing an ideal VPK scheme, seeking the generation of high-quality VPK sets while adding robustness to the process. Finally, a new scheme is presented along with the experiments carried out to validate it and to compare the results with the rest of the schemes proposed in the literature.

Journal of Intelligent Information Systems, 2017
Frequent Itemsets Mining has been applied in many data processing applications with remarkable results. Recently, data stream processing has been gaining a lot of attention due to its practical applications. Data in streams are transmitted at high rates and cannot be stored for offline processing, making it impractical to apply traditional data mining approaches (such as Frequent Itemsets Mining) to data streams directly. In this paper, two single-pass parallel algorithms based on a tree data structure for Frequent Itemsets Mining on data streams are proposed. The presented algorithms employ the Landmark and Sliding Window models for window handling. As in other reviewed works, if the number of frequent items in the data stream is low, the proposed algorithms perform an exact mining process; on the contrary, if the number of frequent patterns is large, the mining process is approximate, with no false positives produced. Experiments conducted demonstrate that the presented algorithms outperform, in processing time, the hardware architectures reported in the state of the art.
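The sliding-window model used by the proposed algorithms can be illustrated with a minimal single-pass counter. This toy sketch handles single items rather than itemsets, and omits the paper's tree structure and parallelism; it only shows how counts stay exact while storage is bounded by the window:

```python
from collections import Counter, deque

def frequent_items(stream, window_size, min_support):
    """Single pass over a stream: after each arrival, yield the items whose
    count within the current sliding window meets min_support. Nothing is
    stored beyond the window itself."""
    window = deque()
    counts = Counter()
    for item in stream:
        window.append(item)
        counts[item] += 1
        if len(window) > window_size:
            expired = window.popleft()      # evict the oldest arrival
            counts[expired] -= 1
            if counts[expired] == 0:
                del counts[expired]
        yield {x for x, c in counts.items() if c >= min_support}

stream = ["a", "b", "a", "c", "a", "b", "b", "b"]
snapshots = list(frequent_items(stream, window_size=4, min_support=2))
print(snapshots[-1])   # last window is [a, b, b, b] -> {'b'}
```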
In this chapter, we propose a new algorithm for mining frequent itemsets, named AMFI (Algorithm for Mining Frequent Itemsets). It compresses the data while maintaining the semantics necessary for the frequent itemset mining problem and, for this task, it is more efficient than other algorithms that use traditional compression algorithms. AMFI's efficiency is based on a compressed vertical binary representation of the data and on a very fast support count. AMFI introduces a novel way to use equivalence classes of itemsets by performing a breadth-first search through them and by storing the class prefix support in compressed arrays. We compared our proposal with an implementation that uses the PackBits algorithm to compress the data.
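The vertical binary representation idea can be sketched as follows (an illustrative, uncompressed version; AMFI's compressed arrays and prefix-support storage are not reproduced): each item maps to a bitset over the transactions, and the support of an itemset is the popcount of the AND of its bitsets:

```python
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def vertical(db):
    """Item -> integer bitset; bit i is set iff the item occurs in transaction i."""
    bitmaps = {}
    for i, t in enumerate(db):
        for item in t:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << i)
    return bitmaps

def support(itemset, bitmaps, n_transactions):
    """Support = popcount of the AND of the itemset's bit vectors."""
    acc = (1 << n_transactions) - 1          # all-ones mask
    for item in itemset:
        acc &= bitmaps.get(item, 0)
    return bin(acc).count("1")

bm = vertical(transactions)
print(support({"bread", "milk"}, bm, len(transactions)))   # 2 (transactions 0 and 2)
```

Intersecting bitsets replaces scanning the transaction list, which is why the vertical layout makes the support count so fast.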

2014 International Conference on ReConFigurable Computing and FPGAs (ReConFig14), 2014
Feature selection in pattern recognition is a problem whose space complexity grows exponentially with the number of attributes in a dataset. There are several hardware implementations of algorithms for overcoming this complexity. These hardware architectures rely on a software component for filtering irreducible feature subsets, which is a computationally complex task. In this paper, a new hardware module for the filtering process is presented. The main advantage of this new architecture is that no additional hardware execution time is required, while the software component is no longer needed. Experimental results show that, in some cases, the runtime for software is of the same order of magnitude as for hardware. The proposed architecture is algorithm independent and may lead to smaller hardware realizations than previous architectures.
Proceedings of the Fifth Mexican International Conference in Computer Science, 2004. ENC 2004.
We present a hardware architecture for an elliptic curve cryptography system performing the three basic cryptographic schemes: DH key generation, encryption and digital signature. The architecture is described using hardware description languages, specifically Handel-C and VHDL. Because of the sequential nature of the cryptographic algorithms, they are written in Handel-C. The critical part of the cryptosystem is the module performing the scalar multiplication operation; this module has been written in VHDL to allow further improvements. The points of the elliptic curve are represented in projective coordinates, working over a characteristic-two finite field using a polynomial basis. A prototype of this hardware architecture is implemented on a Xilinx Virtex-II FPGA device.
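Polynomial-basis arithmetic over a characteristic-two field, the core operation under the scalar multiplier, can be sketched in software as a carry-less multiply followed by reduction. The tiny field GF(2^4) and its irreducible polynomial below are illustrative choices, not the parameters of the implementation:

```python
M = 4
IRRED = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def gf2m_mul(a, b):
    """Multiply two GF(2^m) elements in polynomial basis: carry-less multiply
    (XOR instead of addition), then reduce modulo the irreducible polynomial."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a        # add (XOR) shifted copies of a
        a <<= 1
        b >>= 1
    # reduction: fold every bit above degree m-1 back down using IRRED
    for bit in range(2 * M - 2, M - 1, -1):
        if prod & (1 << bit):
            prod ^= IRRED << (bit - M)
    return prod

# x * x^3 = x^4, and x^4 = x + 1 modulo x^4 + x + 1
print(bin(gf2m_mul(0b0010, 0b1000)))   # 0b11
```

In hardware this maps naturally to XOR trees with no carry chains, which is one reason binary fields are popular for FPGA elliptic curve designs.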
2009 Mexican International Conference on Computer Science, 2009
This paper presents an original approach to watermarking digital images, using Iterated Function Systems (IFS) to generate the position maps used by the Least Significant Bit (LSB) method. The new approach exploits the main feature of fractals (generated by IFS): infinite magnification. The map generated by a single IFS can be used in images of different sizes. Furthermore, to avoid image distortion during the embedding process, the data are inserted in non-homogeneous regions; to obtain this behavior, the Harris feature detector was modified. The result is a watermarking scheme robust to visual attacks.
IEICE Transactions on Information and Systems, 2008
Ignacio Algredo-Badillo, Claudia Feregrino-Uribe, René Cumplido and Miguel Morales-Sandoval. MD5 is a cryptographic algorithm used for authentication. When implemented in hardware, performance is limited by the data dependency of the iterative compression function. In this paper, a new functional description is proposed with the aim of achieving higher throughput by reducing the critical path and latency. This description can be applied to similarly structured hash algorithms, such as SHA-1, SHA-2 and RIPEMD-160, which have comparable data dependencies. The proposed MD5 hardware architecture achieves a high throughput/area ratio; results of implementation on an FPGA are presented and discussed, as well as comparisons against related works.

Computers & Electrical Engineering, 2008
Modern cellular networks allow users to transmit information at high data rates, have access to IP-based networks deployed around the world, and access sophisticated services. In this context, not only is it necessary to develop new radio interface technologies and improve existing core networks, but guaranteeing confidentiality and integrity during transmission is a must. The KASUMI block cipher lies at the core of both the f8 data confidentiality algorithm and the f9 data integrity algorithm for Universal Mobile Telecommunications System networks. KASUMI implementations must reach high performance and have low power consumption in order to be adequate for network components. This paper describes a specialized processor core designed to perform the KASUMI algorithm efficiently. Experimental results show a performance improvement of two orders of magnitude over software-only implementations. We describe the design technique used, which can also be applied to implement other Feistel-like ciphering algorithms. The proposed architecture was implemented on an FPGA; results are presented and discussed.
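The Feistel structure that such a design technique targets can be sketched generically. The round function below is a toy placeholder (KASUMI's FO/FI functions and key schedule are not reproduced), but it shows why encryption and decryption can share one datapath: decryption is the same network with the round keys in reverse order:

```python
def feistel(block, round_keys, round_fn):
    """Generic balanced Feistel network over a (left, right) pair of halves."""
    left, right = block
    for k in round_keys:
        left, right = right, left ^ round_fn(right, k)
    return right, left                      # undo the final swap

def toy_round(half, key):
    """Placeholder round function on 16-bit halves (illustrative only)."""
    return ((half * 31 + key) ^ (half >> 3)) & 0xFFFF

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
pt = (0x1234, 0x5678)
ct = feistel(pt, keys, toy_round)
print(feistel(ct, keys[::-1], toy_round) == pt)   # True: reversed keys decrypt
```

Note that the round function never needs to be invertible; the XOR-and-swap structure alone guarantees invertibility, which is what lets one hardware datapath serve both directions.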

Computers & Electrical Engineering, 2010
Applications of wireless communication networks are emerging continuously. To offer a good level of security in these applications, new standards for wireless communications propose solutions based on cryptographic algorithms working in special modes of operation. This work presents a custom hardware architecture for the AES-CCM Protocol (AES-CCMP), which is the basis of the security architecture of the IEEE 802.11i standard. AES-CCMP is based on the AES-CCM algorithm, which performs the Advanced Encryption Standard (AES) in CTR with CBC-MAC mode (CCM mode), plus specialized data formatting modules, providing different security services through iterative and complex operations. Results of implementing the proposed architecture on FPGA devices are presented and discussed. A comparison against similar works shows significant improvements in terms of both throughput and efficiency.
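The CCM composition (CBC-MAC for integrity, CTR for confidentiality) can be sketched structurally. A toy 16-bit keyed function stands in for AES, and AES-CCMP's nonce and flag formatting blocks are omitted, so this illustrates only the shape of the mode, not the standard:

```python
def toy_cipher(block, key):
    """Placeholder 16-bit keyed function standing in for AES-128 (illustrative)."""
    return ((block * 2654435761 + key) ^ (block >> 7)) & 0xFFFF

def cbc_mac(blocks, key):
    """CBC-MAC: chain each block through the cipher; the final output is the tag."""
    mac = 0
    for b in blocks:
        mac = toy_cipher(mac ^ b, key)
    return mac

def ccm_seal(blocks, key, nonce):
    """Authenticate with CBC-MAC, then encrypt message and tag in CTR mode."""
    tag = cbc_mac(blocks, key)
    ct = [b ^ toy_cipher(nonce ^ (i + 1), key) for i, b in enumerate(blocks)]
    return ct, tag ^ toy_cipher(nonce, key)   # counter-0 keystream covers the tag

def ccm_open(ct, enc_tag, key, nonce):
    """Decrypt in CTR mode, then verify the CBC-MAC before releasing plaintext."""
    pt = [b ^ toy_cipher(nonce ^ (i + 1), key) for i, b in enumerate(ct)]
    if enc_tag ^ toy_cipher(nonce, key) != cbc_mac(pt, key):
        return None                           # integrity failure
    return pt

msg = [0x1111, 0x2222, 0x3333]
ct, tag = ccm_seal(msg, key=0xBEEF, nonce=0x0042)
print(ccm_open(ct, tag, key=0xBEEF, nonce=0x0042) == msg)   # True
```

Because the MAC pass and the encryption pass each invoke the block cipher once per block, a hardware CCM engine must either instantiate two cipher cores or iterate one core twice, which is the kind of throughput/area trade-off the architecture above addresses.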
On the Design and Implementation of an FPGA-based Lossless Data Compressor
ccc.inaoep.mx

PloS one, 2018
Security is a crucial requirement in the envisioned applications of the Internet of Things (IoT), where most of the underlying computing platforms are embedded systems with reduced computing capabilities and energy constraints. In this paper we present the design and evaluation of a scalable low-area FPGA hardware architecture that serves as a building block to accelerate the costly operations of exponentiation and multiplication in [Formula: see text], commonly required in security protocols relying on public-key encryption, such as key agreement, authentication and digital signature. The proposed design can process operands of different sizes using the same datapath, which exhibits a significant reduction in area without loss of efficiency compared to representative state-of-the-art designs. For example, our design uses 96% less standard logic than a similar design optimized for performance, and 46% fewer resources than another design optimized for area. Even using fewer area re...
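The exponentiation such an accelerator speeds up is typically the square-and-multiply loop: one squaring per exponent bit, plus a multiplication when the bit is set (the operands below are small illustrative values, not cryptographic sizes):

```python
def modexp(base, exp, mod):
    """Left-to-right square-and-multiply modular exponentiation."""
    result = 1
    for bit in bin(exp)[2:]:               # scan exponent bits MSB -> LSB
        result = (result * result) % mod   # always square
        if bit == "1":
            result = (result * base) % mod # multiply only on set bits
    return result

print(modexp(7, 560, 561) == pow(7, 560, 561))   # True: matches the built-in
```

The loop's cost is dominated by the modular multiplications, which is why a single shared multiplication datapath, as in the architecture above, determines both the area and the throughput of the whole exponentiation.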

FPGA implementation and performance evaluation of an RFC 2544 compliant Ethernet test set
International Journal of High Performance Systems Architecture, 2009
With the constant and rapid advances in microelectronics and networking technology, network service providers' need to tune up services in order to attract more subscribers has become more important. Ethernet technology has improved in terms of communication speed and has established itself as a standard, more recently enabling throughput rates in the range of 1-100 Gbps. However, the need for quality services requires Ethernet testers not only to be standard compliant, but also to meet performance criteria as specified by the standard. Performance criteria are difficult to prove and typically cannot be met in software due to the limitations of the underlying general-purpose hardware as well as the existence of many software layers. In this paper, we propose the design, implementation and performance verification of an Ethernet tester compliant with the throughput and latency tests specified by RFC 2544 for 10/100 Mbps Ethernet networks. The results showed that the designed device achieved the performance criteria defined by the RFC while implemented on a commercial off-the-shelf (COTS) low-cost FPGA board. Its performance was compared to an existing software implementation, and the results showed that the usual limitations added by several hardware and software layers can be overcome by implementing a frame generator, monitor and media access control (MAC, layer 2) directly in an FPGA device.