Papers by Pierre Grussenmeyer

Several scientific communities concentrate their efforts on photogrammetric techniques in the field of modelling urban areas. On the other side, with access to high-resolution satellites, remote sensing in urban areas has also been present for a few years. At the same time, the shift from analytical to digital photogrammetry is now effective. Therefore, remote sensing and photogrammetry have become closer than ever before, and this tendency is also noticeable in the corresponding software. Remote sensing is an invaluable tool in fields where spatial and spectral information is required. For topics like land use evaluation or change detection at regional scales, image processing techniques used for optical satellite imagery have proved their potential. Familiar with the problem of modelling urban areas using photogrammetric techniques, our team extended its research topics by integrating knowledge resulting from satellite imagery processing. For this purpose, investigations with ph...
Marcigny C., Dujardin L., Grussenmeyer P., Guillemin S., Burens A., Mazet S., Carozza L. et Vipard L., 2024 - Les archéologues peuvent-ils produire de nouvelles sources pour les historiens des conflits récents ? L’exemple de la carrière-refuge de la brasserie Saingt à Fleury-sur-Orne (Calvados), in: Billard C., Carpentier V., Jacquemont S., Landolt M., Legendre J.P. et Marcigny C., 2024 – Archéologie des conflits contemporains, Méthodes, apports et enjeux d’une archéologie en construction, Actes des Colloques de Verdun à Caen (2018-2019), RAO, supplément 13, Presses universitaires de Rennes, Rennes, p. 407-430.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Feb 14, 2024
In recent years, novel 3D reconstruction methods have been developed to improve conventional image-based point cloud generation techniques. These novel methods generally attempt to address challenges encountered in conventional methods, namely the reconstruction of reflective surfaces and the amount of processing time required, both of which are major bottlenecks in heritage documentation, especially for large and complex objects. In this paper, we identified three types of image-based 3D reconstruction techniques and tested their usage on heritage datasets, namely (1) conventional multi-view stereo (MVS), (2) learning-based MVS, and (3) neural radiance fields (NeRF). The aim of this study is to determine the capabilities of these methods in the reconstruction of three heritage-related datasets with different challenges. Our results show that conventional MVS is nowadays a reliable solution for 3D reconstruction, in many instances recording good results relative to the reference terrestrial laser scans (TLS) when properly deployed. When applied to a challenging, highly reflective scene, conventional MVS fared well using the PatchMatch algorithm (reaching an object completeness rate of 99.05%), while NeRF's best performance was 99.98%. However, NeRF suffered from noisy data, some of which may stem from its radiance field-to-point cloud conversion method. The results show that there is great potential in using specific methods for specific cases, and research on combining them may yield interesting results in the future.

Journal of Cultural Heritage, 2019
Three-dimensional (3D) models are a major form of cultural heritage documentation. In most cases, the properties of digital artefacts (e.g. readability, coverage) are affected by the acquisition procedure (e.g. device, workflow, conditions) and the characteristics of the physical artefact (e.g. shape, size and materials). In this paper, we study how to combine two acquisition techniques to acquire detailed 3D models of large physical objects. Specifically, we combine two laser scanning instruments: a Terrestrial Laser Scanner (TLS) and a Structured Light Scanner (SLS). TLS provides millimeter-scale resolution with a large field of view, while SLS provides sub-millimeter resolution over a limited field of view. This paper focuses on the registration of SLS and TLS point clouds, a critical step which aims at aligning the acquired point clouds in a common frame. Existing registration systems mostly rely on manual post-processing or marker-based alignment. Manual registration is, however, time-consuming and tedious, while markers increase the complexity of scanning and are not always acceptable in cultural site documentation. Therefore, we propose an automated markerless registration and fusion pipeline for point clouds. Firstly, we replace the marker-based coarse alignment by an automated registration of SLS and TLS point clouds; secondly, we refine the alignment of SLS point clouds on TLS data using the Iterative Corresponding Point algorithm; finally, we seamlessly stitch the SLS and TLS point clouds by globally regularizing the registration error for all the point clouds at once. Our experiments show the efficiency of the proposed approach on two real-world cases, with detailed point clouds correctly aligned without requiring markers or manual tuning.
This paper provides an operational process reference for automated markerless registration of multi-source point clouds.
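The refinement stage described in this abstract alternates between matching corresponding points and solving a least-squares rigid alignment. As an illustration only (not the authors' implementation), a minimal NumPy sketch of that inner alignment step, assuming correspondences are already given, could look like this:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points.
    Returns a rotation R (3x3) and translation t (3,) such that
    R @ src[i] + t best matches dst[i] (Kabsch algorithm, no scale).
    In a full ICP loop, correspondences would be recomputed by a
    nearest-neighbor search at each iteration.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

This closed-form SVD solution is the standard building block behind most ICP variants; the global regularization over all point clouds mentioned in the paper goes beyond this pairwise step.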

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Over the past decade, the use of machine learning and deep learning algorithms to support 3D semantic segmentation of point clouds has significantly increased, and their impressive results have led to the application of such algorithms for the semantic modeling of heritage buildings. Nevertheless, such applications still face several significant challenges, caused in particular by the large amount of data required during training, by the lack of specific data in heritage building scenarios, and by the time-consuming data collection and annotation operations. This paper aims to address these challenges by proposing a workflow for synthetic image data generation in heritage building scenarios. Specifically, the procedure allows for the generation of multiple rendered images from various viewpoints based on a 3D model of a building. Additionally, it enables the generation of per-pixel segmentation maps associated with these images. In the first part, the procedure is tested by generating a synthetic simulation of a real-world scenario using the case study of the Spedale del Ceppo. In the second part, several experiments are conducted to assess the impact of synthetic data during training. Specifically, three neural network architectures are trained using the generated synthetic images, and their performance in predicting the corresponding real scenarios is evaluated.

Science And …, 2007
The airborne laser scanning technique is broadly the most appropriate way to rapidly acquire high-density 3D data over a city. Once the 3D Lidar data are available, the next task is automatic data processing, with the major aim of constructing 3D building models. Among the numerous automatic reconstruction methods, techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, the Hough transform, and the Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogeneously applied, this paper focuses only on the Hough transform and the RANSAC algorithm. Their principles, their pseudocode (rarely detailed in the related literature) as well as their complete analyses are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitations encountered in both methods, the RANSAC algorithm is more efficient than the Hough transform. Among other advantages, its processing time is negligible even when the input data size is very large. On the other hand, the Hough transform is very sensitive to the values of the segmentation parameters. Therefore, the RANSAC algorithm has been chosen and extended to overcome its limitations. Its major limitation is that it seeks to detect the best mathematical plane among the 3D building points, even if this plane does not always represent a roof plane. The proposed extension thus harmonizes the mathematical aspect of the algorithm with the geometry of a roof. Finally, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity.
Therefore, once the roof planes are successfully detected, the automatic building modelling can be carried out.
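As an illustration of the RANSAC paradigm discussed in this abstract (a generic textbook sketch, not the paper's extended roof-aware variant), dominant-plane detection in a point cloud can be written in a few lines of NumPy:

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.05, seed=0):
    """Detect the dominant plane in a 3D point cloud with RANSAC.

    points: (N, 3) array. Returns (normal, d, inlier_mask) for the
    plane normal . x + d = 0 supported by the largest number of
    inliers. `threshold` is the inlier distance in the cloud's units.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        # 1. Minimal sample: three points define a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        # 2. Consensus: count points close to the candidate plane.
        dist = np.abs(points @ n + d)
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

The extension described in the paper would add a geometric check on top of step 2, rejecting mathematically optimal planes that do not correspond to plausible roof geometry.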

Remote Sensing of Environment, Jul 1, 2020
The first national product of Surface Water Dynamics in France (SWDF) is generated on a monthly temporal scale and a 10-m spatial scale using an automatic rule-based superpixel (RBSP) approach. Current surface water dynamics products from high-resolution (HR) multispectral satellite imagery are typically analyzed to determine the annual trend and related seasonal variability. Annual and seasonal time series analyses may fail to detect the intra-annual variations of water bodies. Sentinel-2 allows us to investigate water resources with both high spatial and high temporal resolution. We propose a new automatic RBSP approach on the Google Earth Engine platform. The RBSP method employs combined spectral indices and superpixel techniques to delineate the surface water extent; this approach avoids the need for training data and benefits large-scale, dynamic and automatic monitoring. We used the proposed RBSP method to process Sentinel-2 monthly composite images covering a two-year period and generate the monthly surface water extent at the national scale, i.e., over France. Annual occurrence maps were further obtained based on the pixel frequency in the monthly water maps. The monthly dynamics provided in SWDF products are evaluated against HR satellite-derived water masks at the national scale (JRC GSW monthly water history) and at local scales (over two lakes, i.e., Lake Der-Chantecoq and Lake Orient, and 200 random sampling points). The monthly trends between SWDF and GSW were similar, with a correlation coefficient of 0.94. The confusion matrix-based metrics based on the sample points were 0.885 (producer's accuracy), 0.963 (user's accuracy), 0.932 (overall accuracy) and 0.865 (Matthews correlation coefficient).
The annual surface water extents (i.e., permanent and maximum) are validated against two HR satellite image-based water maps and an official database at the national scale, and against small water bodies (ponds) at the local scale in Loir-et-Cher. The results show that the SWDF results are closely correlated with the previous annual water extents, with a coefficient > 0.950. The SWDF results are further validated for large rivers and lakes, with extraction rates of 0.929 and 0.802, respectively. SWDF also outperforms GSW in small water body extraction (taking 2498 ponds in Loir-et-Cher as an example), with an extraction rate improved by approximately 20%. Thus, the SWDF method can be used to study interannual, seasonal and monthly variations in surface water systems. The monthly dynamic maps of SWDF improve the land surface coverage of France by 25% on average compared with GSW, which is the only other product that provides monthly dynamics. Further harmonization of Sentinel-2 and Landsat 8 and the introduction of an enhanced cloud detection algorithm could fill some of the no-data gaps.
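The RBSP method combines several spectral indices with superpixel techniques; as a much-simplified illustration of the index-thresholding idea only (not the RBSP algorithm itself), a water mask from the McFeeters NDWI on Sentinel-2 green (B3) and near-infrared (B8) reflectances could be sketched as:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Water mask from the Normalized Difference Water Index (McFeeters).

    NDWI = (green - nir) / (green + nir); open water typically has
    NDWI > 0 because it reflects green light and absorbs near-infrared.
    green, nir: reflectance arrays (e.g. Sentinel-2 bands B3 and B8).
    The default threshold of 0 is illustrative; operational methods
    tune it or combine several indices.
    """
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):
        ndwi = (green - nir) / (green + nir)
    ndwi = np.nan_to_num(ndwi)  # undefined (0/0) pixels -> 0, i.e. non-water
    return ndwi > threshold
```

A per-pixel threshold like this is noisy at 10-m resolution; grouping pixels into superpixels before the rule-based decision, as the abstract describes, is one way to regularize the result.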

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Feb 23, 2017
Complete documentation and conservation of a historic timber roof requires the integration of geometric modelling, attribute and dynamic information management, and the results of structural analysis. The recently developed as-built Building Information Modelling (BIM) technique has the potential to provide a uniform platform that integrates traditional geometric modelling, parametric element management and structural analysis. The main objective of the project presented in this paper is to develop a parametric modelling tool for a timber roof structure whose elements form a leaning and crossing beam frame. Since Autodesk Revit, a typical BIM software, provides the platform for parametric modelling and information management, an API plugin able to automatically create the parametric beam elements and link them together with strict relationships was developed. The plugin under development is introduced in the paper; it can derive the parametric beam model via the Autodesk Revit API from total station points and terrestrial laser scanning data. The results show the potential of automating parametric modelling through interactive API development in a BIM environment. It also integrates the separate data processing steps and different platforms into the uniform Revit software.
Functional hydromorphological restoration of a Rhine anastomosing channel: temporal trajectory, pre-restoration monitoring, modelling (Rohrschollen Nature Reserve, Bauerngrundwasser)
HAL (Le Centre pour la Communication Scientifique Directe), Dec 4, 2014
Functional restoration of a Rhine anastomosing channel: temporal trajectory, initial state, post-restoration monitoring, modelling (Upper Rhine, France, Rohrschollen island)
HAL (Le Centre pour la Communication Scientifique Directe), Jun 22, 2015
Scientific monitoring of the restoration project for the alluvial habitat dynamics of Rohrschollen island: pre- and post-restoration functioning, prospects for interdisciplinary modelling
HAL (Le Centre pour la Communication Scientifique Directe), Nov 26, 2014

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Jun 15, 2016
In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation holding all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and the non-architectural information that is necessary for the construction and management of buildings. GIS has strengths in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create an enriched model from the complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling are discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why to construct different levels of detail? By addressing these problems, the conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.
Online surveying course

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, May 30, 2022
Mangrove forests play an important role in the balance of biodiversity. However, they are threatened by agriculture, aquaculture, urbanization and global warming. That is why it is imperative to monitor this ecosystem and understand how it evolves in the face of these threats in order to better preserve it. Traditional methods are invasive and time-consuming. Besides, it is often difficult to get into mangroves because of the particular structure of some species, so measurements cannot be taken in those areas. That is why it is very interesting to use aerial data provided by unmanned aerial vehicle (UAV) photos or airborne laser scanning (ALS) systems. Moreover, some representative elements of mangroves, such as pneumatophores, are only a few tens of centimeters high, and measuring them traditionally would take far too long. In this case, it is interesting to use terrestrial laser scanning (TLS) systems to measure and monitor them. A research project began in 2021 to try to understand how urban mangroves develop in semi-arid regions, using remote sensing techniques (photogrammetry, airborne and terrestrial laser scanning). The purpose of this paper is first to present the project and the issues of monitoring mangrove forests. It then proposes a state of the art of the methodologies used to record mangroves. Finally, it presents the different acquisitions made as well as the first results of species classification based on photogrammetric point cloud processing. The assessment based on ground truth already shows promising results.

Solid images for geostructural mapping and key block modeling of rock discontinuities
Computers & Geosciences, Apr 1, 2016
Rock mass characterization is obviously a key element in rock fall hazard analysis. Managing risk and determining the most suitable reinforcement method require a proper understanding of the considered rock mass. Description of discontinuity sets is therefore a crucial first step in the reinforcement work design process. The on-field survey is then followed by structural modeling in order to extrapolate the data collected at the rock surface to the inner part of the massif. Traditional compass surveys and manual observations can undoubtedly be surpassed by dense 3D data such as LiDAR or photogrammetric point clouds. However, although the acquisition phase is quite fast and highly automated, managing, handling and exploiting such a great amount of collected data is an arduous task, especially for non-specialist users. In this study, we propose a combined approach using both 3D point clouds (from LiDAR or image matching) and 2D digital images, gathered into the concept of the "solid image". This product connects the advantages of classical true-color 2D digital images, accessibility and interpretability, with the particular strengths of dense 3D point clouds, i.e. geometrical completeness and accuracy. The solid image can be considered as the information support for carrying out a digital survey of the outcrop surface without being affected by traditional deficiencies (lack of data and sampling difficulties due to inaccessible areas, safety risks in steep sectors, etc.). The computational tools presented in this paper have been implemented into one standalone software package through a graphical user interface helping operators with the completion of a digital geostructural survey and analysis.
3D coordinate extraction, 3D distance and area measurement, planar best-fit for discontinuity orientation, directional roughness profiles, block size estimation, and other tools have been tested on a calcareous quarry in the French Alps. A combined approach using both 3D point clouds and 2D digital images is discussed. The standalone software helps with the completion of a digital geostructural survey, the computational tools for solid image exploitation are tested on a case study, and the results show the value of combining 3D data and 2D imaging for structural analysis.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, May 30, 2022
Dense point clouds acquired with mobile laser scanning (MLS) devices have become usual raw data for different surveying applications: topographic maps, 3D models, road inventories, and risk assessment of vegetation on roads or railroads. Thanks to important evolutions in technology, MLS devices have become powerful and very popular. In the meantime, the need for automatic point cloud processing tools is growing. However, the available tools have not yet reached a sufficient level of maturity. Using MLS point clouds to produce topographic maps, BIM models or other deliverables very often requires manual vectorization (or digitization) work. In the road context, the transition from point cloud to road map, which consists in delineating curbs or road edges, road markings, poles, trees, facades, etc., is currently performed manually. To reduce these time-consuming operations, several solutions have been proposed in the literature. In this paper we present the first results of a method for vectorizing urban point cloud scenes. The originality of this work is to propose a global approach aiming to detect and vectorize multiple objects simultaneously. The developed algorithm uses cross-section analysis to detect road curbs and vertical objects. The first results are promising, since an F-score higher than 80% has been reached, even before applying road logic rules or additional knowledge. The detection and extraction of vertical objects, including facades, trees, and poles, is more challenging, but these detections also present a recall greater than 85%.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Dec 8, 2022
Depth information is a key component that allows a computer to reproduce human vision in plenty of applications, from manufacturing to robotics and autonomous driving. The Microsoft Kinect brought depth sensing to another level, resulting in a large number of low-cost, small form factor depth sensors. Although these sensors can efficiently produce data over a wide dynamic range of sensing applications and within different environments, most of them are rather suited to indoor applications. Operating in outdoor areas is a challenge because of undesired illumination, usually strong sunlight, or surface scattering, which degrades measurement accuracy. Therefore, after presenting the different working principles of existing depth cameras, our study aims to evaluate where two very recent sensors, the AD-FXTOF1-EBZ and the flexx2, stand with respect to outdoor environments. In particular, measurement tests are performed on different types of materials subjected to various illumination conditions in order to evaluate the potential accuracy of such sensors.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, May 30, 2018
The Building Information Modelling (BIM) technique has been widely utilized in heritage documentation, leading to the general term Historic/Heritage BIM (HBIM). Current HBIM projects mostly employ the scan-to-BIM process to manually create the geometric model from the point cloud. This paper explains how it is possible to shape the model from mesh geometry with reduced human involvement during the modelling process. Aiming at unbuilt heritage, two case studies are handled in this study: a ruined Roman stone structure and a severely damaged abbey. The pipeline consists of solid element modelling based on documentation data using Autodesk Revit, a common BIM platform, and the successive modelling from these geometric primitives using Autodesk Dynamo, a visual programming plugin built into Revit. The BIM-based reconstruction enriches the classic visual model from computer graphics approaches with measurement, semantic and additional information. Dynamo is used to develop a semi-automated function that reduces the manual process and builds the final BIM model directly from segmented parametric elements. The level of detail (LoD) of the final models depends strongly on the manual involvement in element creation. The proposed outline also presents two open issues in the ongoing work: combining ontology semantics with the parametric BIM model, and introducing the proposed pipeline into the as-built HBIM process.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Dec 8, 2022
Nowadays, Mobile Laser Scanning (MLS) systems are more and more used to realize extended topographic surveys of roads. Most of them provide, for each measured point, an attribute corresponding to the return signal strength, the so-called intensity value. This value makes uncolored MLS data easier to interpret, as it helps to differentiate materials based on their albedo. In a road context, this intensity information allows one to distinguish, among other things, the main subject of this paper: road markings. However, this task is challenging. Road marking detection from dense MLS point clouds is widely studied by the research community. It concerns road management and diagnosis, intelligent traffic systems, high-definition maps, and location and navigation services. Dense MLS point clouds provided by surveyors are not processed online and are thus not directly applicable to autonomous driving, but such dense and precise data can, for instance, be used for the generation of HD reference maps. This paper presents a review of the different processing chains published in the literature. It underlines their contributions and highlights their potential limitations. Finally, a discussion and some suggestions for improvement are given. The extraction step (2.3) is followed by a refinement of the results (2.4). Next, a classification of the resulting markings is performed (2.5) before a final vectorization and export of the results (2.7). Deep-learning approaches are presented separately (2.6). Finally, a summary table is presented at the end of the paper (Table 1). 2.1 Pre-processing. Unlike images, point clouds are unstructured data, i.e. no spatial relationship between two points of the cloud can be assumed without prior calculations. Detection of road markings generally belongs to a larger processing chain leading to road modeling.
For computational purposes, massive MLS point clouds are segmented into blocks of small areas. This decomposition is not without consequences, since the spatial continuity of the scanned surfaces is broken. To overcome this problem, a certain overlap between the blocks is kept. Mi et al. (2021) propose an overlap distance corresponding to the longest expected marking. Another drawback of this necessary decomposition is that the results of the different blocks must be merged afterwards. To reduce the data volume from the start, Soilán et al. (2017) remove all points farther than 10 m from the sensor, considering them irrelevant. On top of that, and in the more general case,
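The block decomposition with overlap described above can be sketched as follows. This is a hypothetical helper, not taken from any of the reviewed papers; `block_size` and `overlap` are illustrative parameters:

```python
import numpy as np

def split_into_blocks(points, block_size=50.0, overlap=5.0):
    """Split a point cloud into square XY blocks with overlap.

    Each block also keeps points within `overlap` of its borders, so
    that objects crossing block boundaries (e.g. long road markings)
    are not cut; detections duplicated in the overlap zones must be
    merged afterwards. Returns a list of index arrays, one per
    non-empty block.
    """
    xy_min = points[:, :2].min(axis=0)
    xy_max = points[:, :2].max(axis=0)
    nx, ny = np.maximum(1, np.ceil((xy_max - xy_min) / block_size).astype(int))
    blocks = []
    for i in range(nx):
        for j in range(ny):
            lo = xy_min + np.array([i, j]) * block_size - overlap
            hi = xy_min + np.array([i + 1, j + 1]) * block_size + overlap
            mask = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
            if mask.any():
                blocks.append(np.flatnonzero(mask))
    return blocks
```

Returning index arrays rather than copies keeps memory usage low on massive clouds and makes it straightforward to map per-block detections back to the original points when merging the results.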

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, May 30, 2018
Terrestrial and airborne laser scanning, photogrammetry and, more generally, 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open source software as well as programs from the academic community are available. Due to shortcomings in terms of computational transparency and quality assessment in these solutions, it was decided to develop the open source algorithm presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, involving the possibility of combining 3D data from different sources. Parallel to the presentation of the global registration methodology employed, the aim of this paper is to confront the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm for the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study, together with a reflection on these criteria, completes the comparison of the obtained results. The final aim of this paper is to validate the current efficiency of the proposed method through these comparisons.