This paper presents a method that can be used for the efficient detection of small maritime objects. The proposed method employs aerial images in the visible spectrum as inputs to train a categorical convolutional neural network for the classification of ships. A subset of those filters that make the greatest contribution to the classification of the target class is selected from the inner layers of the CNN. The gradients with respect to the input image are then calculated on these filters, which are subsequently normalized and combined. Thresholding and a morphological operation are then applied in order to obtain the localization. One of the advantages of the proposed approach with regard to previous object detection methods is that only a few images need to be labelled with bounding boxes of the targets in order to train for localization. The method was evaluated with an extended version of the MASATI (MAritime SATellite Imagery) dataset. This new dataset has more than 7,000 images, 4,157 of which contain ships. Using only 14 training images, the proposed approach achieves better results for small targets than other well-known object detection methods, which also require many more training images. Index terms: artificial neural networks, learning systems, object detection, remote sensing.
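The normalize-combine-threshold step described above can be sketched as follows. This is a minimal illustration assuming the per-filter gradient maps have already been computed with a deep learning framework upstream; the function name, the combination by element-wise maximum, and the threshold value are illustrative assumptions, and the morphological operation is omitted here.

```python
import numpy as np

def localize_from_gradients(grad_maps, thresh=0.5):
    """Combine per-filter input-gradient maps into a binary localization
    mask: normalize each map to [0, 1], combine them, then threshold.
    Illustrative sketch; gradient computation is assumed upstream."""
    combined = np.zeros_like(grad_maps[0])
    for g in grad_maps:
        g = np.abs(g)
        rng = g.max() - g.min()
        if rng > 0:
            g = (g - g.min()) / rng       # normalize each map to [0, 1]
        combined = np.maximum(combined, g)  # combine by element-wise max
    return combined >= thresh               # binary localization mask

# Two toy gradient maps, each highlighting a different region.
g1 = np.zeros((8, 8)); g1[2, 2] = 4.0
g2 = np.zeros((8, 8)); g2[5, 5] = 0.1   # weak map, rescaled by normalization
mask = localize_from_gradients([g1, g2], thresh=0.5)
```

Per-map normalization is what lets a weak but consistent filter response contribute as much as a strong one before thresholding.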
The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a robotic manipulator system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoF) is then estimated for a robot arm with a gripper from the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and employed in order to provide the gripper with a sense of touch. This work uses two low-cost vision-based tactile sensors at the fingertips. One of them addresses the detection of contact (obtained from tactile images) between the gripper and solid waste, while the other has been designed to detect slippage in order to prevent the grasped objects from falling. Our proposal was successfully tested through extensive experimentation with objects varying in size, texture, geometry and material in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). Our system achieved an average score of 94% for detection and Collection Success Rate (CSR) in terms of overall performance, and of 80% for the collection of litter items at the first attempt.
The International Journal of Advanced Manufacturing Technology, 2023
The paper industry manufactures corrugated cardboard packaging, which is unassembled and stacked on pallets to be supplied to its customers. Human operators usually classify these pallets according to the physical features of the cardboard packaging. This process can be slow, causing congestion on the production line. To optimise the logistics of this process, we propose a visual recognition and tracking pipeline that monitors the palletised packaging while it is moving inside the factory on roller conveyors. Our pipeline has a two-stage architecture composed of Convolutional Neural Networks, one for oriented pallet detection and recognition, and another with which to track identified pallets. We carried out an extensive study using different methods for the pallet detection and tracking tasks and discovered that the oriented object detection approach was the most suitable. Our proposal recognises and tracks different configurations and visual appearances of palletised packaging, providing statistical data in real time with which to assist human operators in decision-making. We tested the precision and performance of the system at the Smurfit Kappa facilities. Our proposal attained an Average Precision (AP) of 0.93 at 14 Frames Per Second (FPS), losing only 1% of detections. Our system is, therefore, able to optimise and speed up the process of logistic distribution.
Informatics in Control, Automation and Robotics, 2019
The task of robotic grasping brings together several challenges. Among them, we focus on the calculation of where the gripper plates should be placed on an object's surface in order to grasp it. To do this, we have developed a method based on visual information. The main goal is to geometrically analyse a single 3D point cloud in which the object is present, in order to find the best pair of contact points so that a gripper can perform a stable grasp of the object. Our proposal is to find these points near a cutting plane that is perpendicular to the object's main axis and passes through its centroid. We have found that this method yields promising experimental results, being fast and accurate enough to be used on real service robots.
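The geometric idea above can be sketched in a few lines of numpy: take the cloud's main axis via its principal component, keep the points lying in a thin slab around the perpendicular plane through the centroid, and pick a contact pair from that slab. The function name, the slab width, and the farthest-pair selection are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def grasp_contact_points(points, slab_width=0.01):
    """Sketch: find a pair of contact points near the plane that is
    perpendicular to the object's main axis and passes through its
    centroid. Names and parameters are illustrative."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Main axis = first principal component of the cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Keep points within a thin slab around the perpendicular plane.
    dist_to_plane = np.abs(centered @ axis)
    slab = points[dist_to_plane < slab_width]
    # Choose the most antipodal pair in the slab (farthest apart).
    d = np.linalg.norm(slab[:, None, :] - slab[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return slab[i], slab[j]

# Toy example: an elongated box-like cloud along the x-axis.
rng = np.random.default_rng(0)
pts = rng.uniform([-0.1, -0.02, -0.02], [0.1, 0.02, 0.02], size=(2000, 3))
p1, p2 = grasp_contact_points(pts, slab_width=0.005)
```

Restricting the search to the slab reduces the candidate pairs dramatically, which is one way a method like this stays fast on a single cloud.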
IEEE Transactions on Geoscience and Remote Sensing, 2018
This paper presents a system for the detection of ships and oil spills using side-looking airborne radar (SLAR) images. The proposed method employs a two-stage architecture composed of three pairs of convolutional neural networks (CNNs). Each pair of networks is trained to recognize a single class (ship, oil spill, and coast) by following two steps: a first network performs a coarse detection, and then, a second specialized CNN obtains the precise localization of the pixels belonging to each class. After classification, a postprocessing stage is performed by applying a morphological opening filter in order to eliminate small look-alikes, and removing those oil spills and ships that are surrounded by a minimum amount of coast. Data augmentation is performed to increase the number of samples, owing to the difficulty involved in obtaining a sufficient number of correctly labeled SLAR images. The proposed method is evaluated and compared to a single multiclass CNN architecture and to previous state-of-the-art methods using accuracy, precision, recall, F-measure, and intersection over union. The results show that the proposed method is efficient and competitive, and outperforms the approaches previously used for this task.
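The morphological opening in the postprocessing stage, erosion followed by dilation, is what removes blobs smaller than the structuring element. A pure-numpy sketch for binary masks (a production system would use e.g. scipy.ndimage or OpenCV; the kernel size here is an illustrative assumption):

```python
import numpy as np

def binary_opening(mask, k=3):
    """Erosion followed by dilation with a k x k square structuring
    element: connected blobs smaller than the element are discarded,
    which is how small look-alikes are eliminated. Pure-numpy sketch."""
    pad = k // 2
    def _window_stack(m):
        # Stack all k*k shifted views of the zero-padded mask.
        p = np.pad(m, pad, mode="constant")
        return np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                         for i in range(k) for j in range(k)])
    eroded = _window_stack(mask).all(axis=0)   # erosion: all neighbours on
    opened = _window_stack(eroded).any(axis=0)  # dilation: any neighbour on
    return opened

# A 1-pixel speckle disappears; a solid 5x5 block survives.
img = np.zeros((12, 12), dtype=bool)
img[1, 1] = True               # small look-alike
img[5:10, 5:10] = True         # plausible detection
out = binary_opening(img, k=3)
```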
This article analyses different teaching experiences whose aim is the learning of robotics at university. These experiences take the form of several courses and subjects on robotics taught at the Universidad de Alicante. To develop these courses, the authors have used several educational platforms, some implemented in-house and others freely distributed and open source. The objective of these courses is to teach the design and implementation of robotic solutions to a variety of problems, ranging from the control, programming and manipulation of industrial robot arms to the construction and/or programming of educational mini-robots. On the one hand, state-of-the-art teaching tools such as simulators and virtual laboratories are used to make the use of robot arms more flexible; on the other, competitions and contests are used to motivate students by having them put into practice...
Latest trends in robotic grasping combine vision and touch for improving the performance of systems at tasks like stability prediction. However, tactile data are only available during the grasp, limiting the set of scenarios in which multimodal solutions can be applied. Could we obtain it prior to grasping? We explore the use of visual perception as a stimulus for generating tactile data so the robotic system can "feel" the response of the tactile perception just by looking at the object.
2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2019
The goal of this paper is to predict 3D object shape in order to improve the visual perception of robots in grasping and manipulation tasks. The planning of image-based robotic manipulation tasks depends on the recognition of the object's shape. Manipulator robots usually use a camera in an eye-in-hand configuration, which limits the computation of the grasp to the visible part of the object. In this paper, we present a 3D Deep Convolutional Neural Network that predicts the hidden parts of objects from a single view, thus recovering their complete shape. We have tested our proposal with both previously seen objects and novel objects from a well-known dataset.
This paper presents an AI system applied to the location and robotic grasping of waste. The experimental setup is based on a parameter study used to train a deep learning network based on Mask R-CNN to perform waste location in indoor and outdoor environments, using five different classes and generating a new waste dataset. The AI system first obtains RGBD data of the environment, followed by the detection of objects using the neural network. The 3D object shape is then computed using the network result and the depth channel. Finally, the shape is used to compute a grasp for a robot arm with a two-finger gripper. The objective is to classify the waste into groups in order to improve a recycling strategy.
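The step that combines the detection mask with the depth channel to obtain a 3D shape amounts to back-projecting the masked depth pixels through a pinhole camera model. A generic sketch under assumed intrinsics; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def masked_depth_to_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels of one detected object into a 3D
    point cloud (camera frame) using a pinhole model. Illustrative
    sketch of the 'mask + depth channel -> 3D shape' step."""
    v, u = np.nonzero(mask)            # pixel rows/cols inside the mask
    z = depth[v, u]
    valid = z > 0                      # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) points

# Toy example: a flat object 1 m away covering a 4x4 pixel mask.
depth = np.zeros((480, 640)); depth[100:104, 200:204] = 1.0
mask = depth > 0
cloud = masked_depth_to_cloud(depth, mask, fx=500.0, fy=500.0,
                              cx=320.0, cy=240.0)
```

The resulting per-instance cloud is exactly the kind of input a downstream grasp-computation stage consumes.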
This work was funded by the Spanish MCYT project “Diseño, implementación y experimentación de escenarios de manipulación inteligentes para aplicaciones de ensamblado y desensamblado automático” (“Design, implementation and experimentation of intelligent manipulation scenarios for automatic assembly and disassembly applications”, DPI2005-06222).
The presence of some objects sometimes hinders the observation of other neighbouring objects. This occurs because part of the surface of one object partially occludes the surface of another, increasing the complexity of the recognition process. The information acquired from the scene to describe the objects is therefore often incomplete and depends a great deal on the viewpoint of the observation. Thus, when any real scene is observed, the regions and boundaries that delimit objects and dissociate them from one another are not easily perceived. In this paper, we present a method with which to distinguish objects from one another, delimiting where the surface of each object begins and finishes. Specifically, we seek to detect the overlapping and occlusion zones of two or more objects that interact with each other in the same scene. This is very useful, on the one hand, to distinguish some objects from others when features such as texture, colour and geometric form are not sufficient to separate them...
2019 International Joint Conference on Neural Networks (IJCNN), 2019
Tactile sensors provide useful contact data during the interaction with an object, which can be used to learn to accurately determine the stability of a grasp. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images, using them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability by leveraging graph-like representations of tactile data, which preserve the actual spatial arrangement of the sensor's taxels and their locality. In our experimentation, we trained a Graph Neural Network to classify grasps as stable or slippery. To train this network and prove its predictive capabilities for the problem at hand, we captured a novel dataset of ∼5,000 three-fingered grasps across 41 objects for training and 1,000 grasps with 10 unknown objects for testing. Our experiments prove that this novel approach can be effectively used to predict grasp stability.
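A graph-like representation of a tactile array can be built by making each taxel a node and connecting spatially adjacent taxels, so the graph edges encode exactly the locality that flat feature vectors discard. The sketch below assumes a rectangular taxel grid with 4-neighbour adjacency; the paper's actual sensor layout may differ.

```python
import numpy as np

def taxel_grid_graph(rows, cols):
    """Build an edge list for a rows x cols tactile array: one node per
    taxel, undirected edges between 4-neighbouring taxels. Returned in
    the 2 x E edge-index format used by typical GNN libraries."""
    idx = np.arange(rows * cols).reshape(rows, cols)
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append((idx[r, c], idx[r, c + 1]))  # horizontal
            if r + 1 < rows:
                edges.append((idx[r, c], idx[r + 1, c]))  # vertical
    return np.array(edges, dtype=np.int64).T

edge_index = taxel_grid_graph(4, 4)   # 16 taxels, fixed connectivity
```

Per-taxel pressure readings then become node features, and the same fixed edge index is reused for every grasp sample.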
The International Journal of Advanced Manufacturing Technology, 2021
In this paper, we present a robotic workcell for the automation of footwear manufacturing tasks such as sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, which exploits the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGBD camera and with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases and does not require a previous recognition of the sole. It relies on sole contour extraction using concave hulls and on measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with different...
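One simple way to measure curvature along an extracted contour is the turning angle at each vertex, i.e. the angle between the incoming and outgoing contour segments. The sketch below is a generic estimator of this kind; the paper's exact curvature measure is not specified here, and the function name is illustrative.

```python
import numpy as np

def contour_turning_angle(contour):
    """Curvature proxy for a closed 2D contour (N x 2 array of ordered
    vertices): the absolute turning angle at each vertex. Flat contour
    regions give angles near 0; corners give large angles."""
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v1 = contour - prev_pts            # incoming segment at each vertex
    v2 = next_pts - contour            # outgoing segment at each vertex
    ang1 = np.arctan2(v1[:, 1], v1[:, 0])
    ang2 = np.arctan2(v2[:, 1], v2[:, 0])
    # Wrap the turning angle into (-pi, pi] before taking magnitude.
    return np.abs((ang2 - ang1 + np.pi) % (2 * np.pi) - np.pi)

# Unit square: every vertex is a 90-degree corner.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
angles = contour_turning_angle(square)
```

Thresholding such a signal separates the flat stretches of a sole contour, which are natural candidates for antipodal finger placement, from its highly curved toe and heel regions.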
Papers by Pablo Gil