Automatically and accurately reconstructing railway overhead wires from airborne laser scanning (ALS) data is an efficient way of monitoring railways to ensure stable and safe transportation services. However, due to the complex structure of the overhead wires, it is challenging to extract them using existing methods. This work proposes a workflow for railway overhead wire reconstruction that uses deep learning for wire identification in collaboration with the RANdom SAmple Consensus (RANSAC) algorithm for wire reconstruction. First, data augmentation and ground point down-sampling are performed to mitigate the issues caused by insufficient and non-uniform LiDAR points. Then, a network incorporating the PointNet model is proposed to segment wire, pylon and ground points. The proposed network is composed of a Geometry Feature Extraction (GFE) module and a Neighborhood Information Aggregation (NIA) module. These two modules are introduced to encode and describe local geometric features, enhancing the capability of the model to discriminate geometric details. Finally, a wire individualization and multi-wire fitting algorithm is proposed to reconstruct the overhead wires. A number of experiments were conducted using ALS point cloud data of railway scenarios. The results show that the accuracy and MIoU for wire identification are 96.89% and 82.56%, respectively, demonstrating better performance than existing methods. The overall reconstruction accuracy is 96% over the study area. Furthermore, the presented strategy also demonstrated its applicability to high-voltage powerline scenarios.
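The abstract does not specify the wire model used in the fitting stage, but the RANSAC principle it relies on can be sketched in a few lines. The sketch below is a simplified 2D line-fitting stand-in (the function name, parameters and thresholds are illustrative, not from the paper): it repeatedly samples a minimal point set, fits a model, and keeps the hypothesis with the most inliers.

```python
import random

def ransac_line(points, n_iter=200, thresh=0.1, seed=0):
    """Fit a 2D line ax + by + c = 0 (with a^2 + b^2 = 1) by RANSAC
    and return the inlier set of the best hypothesis."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        a, b = y2 - y1, x1 - x2                     # normal of the line through them
        norm = (a * a + b * b) ** 0.5
        if norm == 0:                               # degenerate sample, skip
            continue
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # Inliers lie within thresh of the hypothesized line.
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

In the multi-wire setting, such a loop would typically be run repeatedly, removing the inliers of each accepted wire before fitting the next.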
Plot-level reconstruction of 3D tree models for aboveground biomass estimation
Guangpeng Fan, Zhenyu Xu, Jinhu Wang, Liangliang Nan, Huijie Xiao, and
2 more authors
The complexity of forest structure is an important factor contributing to uncertainty in aboveground biomass estimates. In this study, we present a new method for reducing uncertainty in forest aboveground biomass (AGB) estimation based on plot-level terrestrial laser scanner (TLS) point cloud reconstruction. The method estimates the total AGB of plots with complex structures after automatically performing ground point filtering, single-tree segmentation, and three-dimensional (3D) structure reconstruction. We used plot data from temperate and tropical forest ecosystems to verify the effectiveness of the method, reconstructing a 1300 m² temperate plantation plot and a 5000 m² mingled forest plot, respectively. The total biomass of 153 trees in the plantation plot was overestimated by 17.12%, and the total biomass of 61 trees in the mingled forest plot was underestimated by 10.88%. We found that the uncertainty of aboveground biomass estimation in tropical forests with more complex structures is not necessarily greater than in plantations. Therefore, in large-scale remote sensing observations of forest biomass, the number or area of plots can be increased to reduce the uncertainty caused by complex structure. The focus of this study is to explore TLS point cloud modeling methods that reduce the uncertainty in AGB estimation caused by the complexity of forest structures, and to provide reference cases for plot-level point cloud reconstruction methods. Forest ecologists can use this method to regularly observe forest growth and obtain indicators related to forest ecology without destroying trees.
Road marking detection and extraction method based on neighborhood density and Kalman Filter
Xiaoyu Li, Mei Zhou, Jinhu Wang, and Qiangqiang Yao
Journal of University of Chinese Academy of Sciences 2022
This paper presents a methodology for the detection and extraction of road markings from mobile laser scanning data based on neighborhood point density and a Kalman filter, in three steps: 1) segmenting road point clouds into scan lines; 2) generating a convolution kernel based on neighborhood point density to extract road marking contour points; 3) fitting contour lines using the least-squares algorithm and completing omitted road markings using a Kalman filter. A quantitative validation shows that the average deviations of the center points and orientation are 0.04 m and 0.04, respectively, and the average completeness of the results is 99.69%. The proposed method reduces the influence of unevenly distributed points on road marking extraction and improves the overall extraction completeness.
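As a rough illustration of how a Kalman filter can complete omitted markings, the sketch below runs a hypothetical constant-velocity filter along one coordinate of the marking sequence (state, noise values and the function itself are illustrative assumptions, not the paper's implementation): when a detection is missing, the prediction step alone fills the gap.

```python
def kalman_track(zs, dt=1.0, q=1e-3, r=0.05):
    """1D constant-velocity Kalman filter. zs is a list of detected
    positions; None marks a missed marking, filled in by prediction."""
    p, v = zs[0], 0.0                      # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]           # 2x2 state covariance
    track = []
    for z in zs:
        # Predict with the constant-velocity model x' = F x, P' = F P F^T + Q.
        p, v = p + dt * v, v
        a, b, c, d = P[0][0], P[0][1], P[1][0], P[1][1]
        P = [[a + dt * (b + c) + dt * dt * d + q, b + dt * d],
             [c + dt * d,                         d + q]]
        if z is not None:
            # Update with the detected position (we observe position only).
            S = P[0][0] + r                # innovation covariance
            k0, k1 = P[0][0] / S, P[1][0] / S
            y = z - p                      # innovation
            p, v = p + k0 * y, v + k1 * y
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        track.append(p)
    return track
```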
2021
Methodology for extraction of tunnel cross-section using dense point cloud data
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density and high-accuracy point clouds in a few minutes, which offers promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at angles of 0, 30 and 150 degrees. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a Faro X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
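The least-squares circle fitting mentioned above can be illustrated with the algebraic (Kåsa) formulation, which linearizes the circle equation x² + y² = Ax + By + C and solves a 3×3 normal system. This is only one common least-squares variant and may differ from the paper's exact implementation.

```python
import math

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit for 2D points:
    solve x^2 + y^2 = A*x + B*y + C, then recover centre and radius."""
    n = len(pts)
    sx = sum(x for x, y in pts); sy = sum(y for x, y in pts)
    sxx = sum(x * x for x, y in pts); syy = sum(y * y for x, y in pts)
    sxy = sum(x * y for x, y in pts)
    sz = sxx + syy
    sxz = sum(x * (x * x + y * y) for x, y in pts)
    syz = sum(y * (x * x + y * y) for x, y in pts)

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Normal equations of the linearized problem, solved by Cramer's rule.
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    d = det3(M)
    sol = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r_ in range(3):
            Mi[r_][i] = rhs[r_]
        sol.append(det3(Mi) / d)
    A, B, C = sol
    cx, cy = A / 2.0, B / 2.0
    r = math.sqrt(C + cx * cx + cy * cy)
    return cx, cy, r
```

Convergence analysis then amounts to comparing fitted radii (or radial distances at chosen angles) between epochs.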
2019
Automatic extraction of power lines from UAV LiDAR point clouds using a novel spatial feature
Mei Zhou, Kuangyu Li, Jinhu Wang, Chuanrong Li, Geer Teng, and
6 more authors
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2019
UAV LiDAR systems have a unique advantage in acquiring 3D geo-information of targets at very reasonable expense; therefore, they are suitable for the security inspection of high-voltage power lines. Several methods already exist for power line extraction from LiDAR point cloud data. However, the existing methods either introduce classification errors during point cloud filtering, or are occasionally unable to detect multiple power lines in a vertical arrangement. This paper proposes and implements an automatic power line extraction method based on 3D spatial features. Different from existing power line extraction methods, the proposed method processes the LiDAR point cloud vertically, so the possible locations of power lines in the point cloud can be predicted without filtering. Next, segmentation is conducted on power line candidates using a 3D region growing method. Then, linear point sets are extracted by a linear discriminant method. Finally, power lines are extracted from the candidate linear point sets based on extension and direction features. The effectiveness and feasibility of the proposed method were verified on real UAV LiDAR point cloud data from Sichuan, China. The average correct extraction rate of power line points is 98.18%.
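One common way to realize a linear discriminant on point sets, sketched here under the assumption that linearity is judged from the covariance eigenvalues (the paper's exact criterion is not given), is to compare the variance along the principal axis with the total variance: power-line candidates score close to 1, scattered clutter much lower.

```python
def linearity(points, iters=50):
    """Dimensionality cue for a linear-discriminant style filter:
    ratio of variance along the principal axis to total variance.
    The principal axis is found by power iteration on the 3x3 covariance."""
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]      # centroid
    q = [[p[i] - c[i] for i in range(3)] for p in points]      # centred points
    cov = [[sum(r[i] * r[j] for r in q) / n for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):                                     # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(3)) for i in range(3))
    total = cov[0][0] + cov[1][1] + cov[2][2]                  # trace = sum of eigenvalues
    return lam / total
```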
Analysis of influencing factors of curve matching based geometric calibration for ZY3-02 altimeter data
Mei Zhou, Linshen Chen, Jinhu Wang, Geer Teng, Chuanrong Li, and
2 more authors
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2019
High-precision on-orbit geometric calibration of spaceborne laser altimetry data is essential to its effective application. First, the existing calibration methods for laser altimeter data are analyzed. Then, a geometric calibration method based on curve matching is proposed. Compared to the existing methods, the proposed method does not rely on a ground calibration field and is therefore efficient in both expense and time. Notably, three factors that significantly affect the calibration results are analyzed: the matching method, the initial control point selection, and the matching step size. The analysis was validated on original laser altimetry data obtained by the ZY3-02 satellite. According to the results, the following conclusions can be drawn preliminarily: (1) Both the maximum correlation coefficient (COR) criterion and the minimum mean square error (MSD) criterion in the curve matching can be used to correct the systematic error in altimetry data. (2) The initial control points of the selected track should show a significant change trend, and the slope within the laser footprints should be less than 15°. (3) The current experimental data show that the best step size for the matching search is 10 m. These conclusions can provide a reference for research on the geometric calibration and data processing of the same type of laser altimetry satellite.
Book Chapter
Laser Scanning: An Emerging Technology in Structural Engineering
Roderik Lindenbergh, Sylvie Soudarissanane, Jinhu Wang, Abdul Nurunnabi, Adriaan Natijne, and
1 more author
The last two decades have witnessed increasing awareness of the potential of terrestrial laser scanning (TLS) in forest applications in both public and commercial sectors, along with tremendous research efforts and progress. It is time to inspect the achievements of, and the remaining barriers to, TLS-based forest investigations, so that further research and applications are clearly oriented toward operational uses of TLS. In this context, the international TLS benchmarking project was launched in 2014 by the European Spatial Data Research Organization and coordinated by the Finnish Geospatial Research Institute. The main objectives of this benchmarking study are to evaluate the potential of applying TLS in characterizing forests, to clarify the strengths and weaknesses of TLS as a means of forest digitization, and to reveal the capability of recent algorithms for tree-attribute extraction. The project is designed to benchmark the TLS algorithms by processing identical TLS datasets for a standardized set of forest attribute criteria and by evaluating the results through a common procedure respecting reliable references. The benchmarking results reflect large variances in estimation accuracies, which were unveiled through the 18 compared algorithms and through the evaluation framework, i.e., forest complexity categories, TLS data acquisition approaches, tree attributes and evaluation procedures. The evaluation framework includes three new criteria proposed in this benchmarking, and the algorithm performances are investigated by combining two or more criteria (e.g., the accuracy of the individual tree attributes is inspected in conjunction with plot-level completeness) in order to reveal the algorithms' overall performance. The results also reveal some of the best available forest attribute estimates at this time, which clarify the status quo of TLS-based forest investigations.
Some results are well expected, while some are new, e.g., the variances of estimation accuracies between single-/multi-scan approaches, the principles of the algorithm designs and the possibility of a computer outperforming human operation. With single-scan data, i.e., one hemispherical scan per plot, most of the recent algorithms are capable of achieving stem detection with approximately 75% completeness and 90% correctness in the easy forest stands (easy plots: 600 stems/ha, 20 cm mean DBH). The detection rate decreases when the stem density increases and the average DBH decreases, i.e., 60% completeness with 90% correctness (medium plots: 1000 stems/ha, 15 cm mean DBH) and 30% completeness with 90% correctness (difficult plots: 2000 stems/ha, 10 cm mean DBH). The application of the multi-scan approach, i.e., five scans per plot at the center and four quadrant angles, is more effective in complex stands, increasing the completeness to approximately 90% for medium plots and to approximately 70% for difficult plots, with almost 100% correctness. The results of this benchmarking also show that the TLS-based approaches can provide estimates of the DBH and the stem curve at a 1-2 cm accuracy, close to what is required in practical applications, e.g., national forest inventories (NFIs). In terms of algorithm development, a high level of automation is a commonly shared standard, but a bottleneck occurs at stem detection and tree height estimation, especially in multilayer and dense forest stands. The greatest challenge is that even with the multi-scan approach, it is still hard to completely and accurately record the stems of all trees in a plot due to the occlusion effects of the trees and bushes in forests. Future development must address the redundant yet incomplete point clouds of forest sample plots and recognize trees more accurately and efficiently.
It is worth noting that TLS currently provides the best quality terrestrial point clouds in comparison with all other technologies, meaning that all the benchmarks labeled in this paper can also serve as a reference for other terrestrial point clouds sources.
Validating a workflow for tree inventory updating with 3D point clouds obtained by mobile laser scanning
Urban trees are an important component of our environment and ecosystem. Trees help combat climate change, clean the air and cool the streets and the city. Tree inventory and monitoring are of great interest for biomass estimation and change monitoring. Conventionally, tree parameters are manually measured and documented in situ, which is inefficient in terms of labour and costs. Light Detection And Ranging (LiDAR) has become a well-established surveying technique for the acquisition of geo-spatial information. Combined with automatic point cloud processing techniques, this in principle enables the efficient extraction of geometric tree parameters. In recent years, studies have investigated to what extent it is possible to perform tree inventories using laser scanning point clouds. Given the availability of the city of Delft open data tree repository, we are now able to present, validate and extend a workflow to automatically obtain tree data, from tree location to tree species. The results of a test over 47 trees show that the proposed methods in the workflow are able to delineate individual urban trees. The tree species classification results based on the extracted tree parameters show that only one tree was wrongly classified using k-means clustering.
Scalable individual tree delineation in 3D point clouds
Manually monitoring and documenting trees is labour-intensive. Lidar provides a possible solution for automatic tree-inventory generation. Existing approaches for segmenting trees from original point cloud data lack scalable and efficient methods that separate individual trees sampled by different laser-scanning systems with sufficient quality under all circumstances. In this study, a new algorithm for efficient individual tree delineation from lidar point clouds is presented and validated. The proposed algorithm first resamples the points using cuboid (modified voxel) cells. Consecutively connected cells are accumulated by vertically traversing cell layers. Trees in close proximity are identified based on a novel cell-adjacency analysis. The scalable performance of this algorithm is validated on airborne, mobile and terrestrial laser-scanning point clouds. Validation against ground truth demonstrates an improvement from 89% to 94% relative to a state-of-the-art method, while computation time is similar.
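The cell-accumulation idea can be approximated by a standard flood fill over occupied voxel cells. The sketch below uses plain cubic cells and 26-connectivity rather than the paper's cuboid cells and layer-wise adjacency analysis, so it is a simplified stand-in only.

```python
from collections import deque

def delineate(points, cell=1.0):
    """Cluster 3D points by flood-filling 26-connected occupied voxel cells."""
    cells = {}
    for p in points:                       # voxelize: map each point to a cell key
        key = tuple(int(c // cell) for c in p)
        cells.setdefault(key, []).append(p)
    seen, clusters = set(), []
    for start in cells:
        if start in seen:
            continue
        seen.add(start)
        queue, cluster = deque([start]), []
        while queue:                       # BFS over occupied neighbouring cells
            cur = queue.popleft()
            cluster.extend(cells[cur])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
                        if nb in cells and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
        clusters.append(cluster)
    return clusters
```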
2017
Change analysis in structural laser scanning point clouds: The baseline method
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
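The baseline comparison itself reduces to comparing pairwise distances between corresponding feature points across the two epochs; since baseline lengths are invariant to rigid motion, no registration is needed. A minimal sketch, assuming the point correspondences are already established (function name and tolerance are illustrative):

```python
def baseline_changes(pts_t0, pts_t1, tol=0.005):
    """Compare the lengths of all baselines (pairwise distances between
    corresponding feature points) in two epochs; report pairs whose
    length changed by more than tol (same units as the coordinates)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    changes = []
    n = len(pts_t0)
    for i in range(n):
        for j in range(i + 1, n):
            d0 = dist(pts_t0[i], pts_t0[j])
            d1 = dist(pts_t1[i], pts_t1[j])
            if abs(d1 - d0) > tol:
                changes.append((i, j, d1 - d0))  # which baseline moved, and by how much
    return changes
```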
PhD Thesis
Scalable information extraction from point cloud data obtained by mobile laser scanner
The rise of intelligent transportation, autonomous driving and 3D virtual cities demands highly accurate and regularly updated 2D and 3D maps. However, traditional surveying and mapping techniques are inadequate, as they are labour-intensive and cost-inefficient. Mobile Laser Scanning (MLS) systems, which combine Light Detection and Ranging (LiDAR) with navigation techniques, are able to acquire highly accurate 3D measurements of road environments.
SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds
Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time-consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced and embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant-eigenvector-based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions on a 4 km stretch of road. Six types of lamp poles and four types of road signs were selected as objects of interest. Ground truth validation showed that the overall accuracy for the 170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
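The descriptor-matching step can be loosely illustrated as binning eigenvector directions and comparing the resulting histograms. The sketch below substitutes a coarse 26-direction set for the icosahedron triangles of the real SigVox descriptor and cosine similarity for its matching measure, so it is an approximation of the idea only, not the published algorithm.

```python
import itertools, math

# Coarse direction set standing in for the 20 icosahedron faces (assumption:
# the real SigVox maps eigenvectors onto icosahedron triangles instead).
DIRS = [d for d in itertools.product((-1, 0, 1), repeat=3) if any(d)]
DIRS = [tuple(c / math.sqrt(sum(x * x for x in d)) for c in d) for d in DIRS]

def direction_histogram(vectors):
    """Bin unit vectors by their nearest reference direction. The sign is
    ignored, since an eigenvector and its negation describe the same axis."""
    hist = [0] * len(DIRS)
    for v in vectors:
        best = max(range(len(DIRS)),
                   key=lambda i: abs(sum(a * b for a, b in zip(v, DIRS[i]))))
        hist[best] += 1
    return hist

def similarity(h1, h2):
    """Cosine similarity between two direction histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

A candidate object would be accepted as an instance of a training object when the similarity of their (multi-scale) histograms exceeds a threshold.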
2016
A concealed car extraction method based on full-waveform LiDAR data
Chuanrong Li, Mei Zhou, Menghua Liu, Lian Ma, and Jinhu Wang
The extraction of concealed cars from point cloud data acquired by airborne laser scanning has gained popularity in recent years. However, due to the occlusion effect, the number of laser points on cars concealed under trees is insufficient, which makes their extraction difficult and unreliable. In this paper, a 3D point cloud segmentation and classification approach based on full-waveform LiDAR is presented. This approach first employs the autocorrelation G coefficient and the echo ratio to determine concealed car areas. Then the points in these areas are segmented with regard to the elevation distribution of concealed cars. Based on the previous steps, a strategy integrating backscattered waveform features and the view histogram descriptor is developed to train sample data of concealed cars and generate the feature pattern. Finally, concealed cars are classified by pattern matching. The approach was validated on full-waveform LiDAR data, and the experimental results demonstrate that the presented approach can extract concealed cars with an accuracy of more than 78.6% in the experimental areas.
High-precision 3D geolocation of persistent scatterers with one single-Epoch GCP and LIDAR DSM data
Mengshi Yang, Prabu Dheenathayalan, Ling Chang, Jinhu Wang, Roderik C. Lindenbergh, and
2 more authors
In Proceedings of Living Planet Symposium 2016 Aug 2016
In persistent scatterer (PS) interferometry, the relatively poor 3D geolocalization precision of the measurement points (the scatterers) is still a major concern. It makes it difficult to attribute the deformation measurements unambiguously to (elements of) physical objects. Ground control points (GCPs), such as corner reflectors or transponders, can be used to improve geolocalization, but only in the range-azimuth domain. Here, we present a method that uses only one GCP, visible in only one single radar acquisition, in combination with digital surface model (DSM) data to improve the geolocation precision, and to achieve an object snap by projecting the scatterer position to the intersection with the DSM, in the metric defined by the covariance matrix (i.e., the error ellipsoid) of each scatterer.
Coarse point cloud registration by EGI matching of voxel clusters
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often multiple scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e., coarse registration followed by fine registration. In this study, an automatic marker-free coarse registration method for pair-wise scans is presented. First, the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The RANdom SAmple Consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. The new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two presented tests resulted in mean distances of 7.6 mm and 9.5 mm, respectively, which are adequate for fine registration.
2015
Automated large scale parameter extraction of road-side trees sampled by a laser mobile mapping system
In urbanized Western Europe, trees are considered an important component of the built-up environment, which means there is an increasing demand for tree inventories. Laser mobile mapping systems provide an efficient and accurate way to sample the 3D road surroundings, including notable roadside trees. Indeed, at, say, 50 km/h such systems collect point clouds consisting of half a million points per 100 m. Methods exist that extract tree parameters from relatively small patches of such data, but a remaining challenge is to operationally extract roadside tree parameters at the regional level. For this purpose, a workflow is presented in which the input point clouds are consecutively down-sampled, retiled, classified, segmented into individual trees and up-sampled to enable automated extraction of tree location, tree height, canopy diameter and trunk diameter at breast height (DBH). The workflow is implemented to work on a laser mobile mapping data set sampling 100 km of road in Sachsen, Germany, and is tested on a 7 km long stretch of road. Along this road, the method detected 315 trees that were considered well detected, and 56 clusters of tree points where no individual trees could be identified. Using voxels, the data volume could be reduced by about 97% in a default scenario. Processing the results of this scenario took 2500 seconds, corresponding to about 10 km/h, which is getting close to, but is still below, the acquisition rate, estimated at 50 km/h.
Evaluating voxel enabled scalable intersection of large point clouds
Laser scanning has become a well-established surveying solution for obtaining 3D geo-spatial information on objects and the environment. Nowadays, scanners acquire up to millions of points per second, which makes point clouds huge. Laser scanning is widely applied from airborne, carborne and static platforms, resulting in point clouds obtained at different attitudes and with different extents. Working with such different large point clouds makes the determination of their overlapping area necessary but often time-consuming. In this paper, a scalable voxel-based method for determining the intersection of point clouds is presented. The method takes two overlapping point clouds as input and consecutively resamples them according to a preset voxel cell size. For all non-empty cells, the center of gravity of the points they contain is computed. Consecutively, for those centers it is checked whether they are in a voxel cell of the other point cloud. The same process is repeated after interchanging the roles of the two point clouds. The quality of the results is evaluated by the distance to the points from the other data set. Computation time and quality of the results are also compared for different voxel cell sizes. The method is demonstrated on determining the intersection between an airborne and a carborne laser point cloud, and the results show that the proposed method takes 0.10%, 0.15%, 1.26% and 14.35% of the computation time of the classic method when using cell sizes of 10, 8, 5 and 3 meters, respectively.
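The centre-of-gravity test described above can be sketched as follows, for one direction only (the paper repeats the test with the roles of the two clouds swapped); the function name and default cell size are illustrative.

```python
def cloud_overlap(cloud_a, cloud_b, cell=5.0):
    """Approximate the overlap of two 3D point clouds: keep the points of A
    whose cell's centre of gravity falls in an occupied cell of B."""
    def occupied(cloud):
        cells = {}
        for p in cloud:
            cells.setdefault(tuple(int(c // cell) for c in p), []).append(p)
        return cells

    cells_a, cells_b = occupied(cloud_a), occupied(cloud_b)
    overlap = []
    for key, pts in cells_a.items():
        n = len(pts)
        # Centre of gravity of the points in this non-empty cell of A.
        cog = tuple(sum(p[i] for p in pts) / n for i in range(3))
        if tuple(int(c // cell) for c in cog) in cells_b:
            overlap.extend(pts)
    return overlap
```

Because only a hash lookup per non-empty cell is needed, the cost scales with the number of occupied cells rather than with all point pairs, which is the source of the reported speed-up.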
IQPC 2015 Track: Tree separation and classification in mobile mapping LiDAR data
The European FP7 project IQmulus yearly organizes several processing contests, where submissions are requested for novel algorithms for point cloud and other big geodata processing. This paper describes the set-up and execution of a contest having the purpose to evaluate state-of-the-art algorithms for Mobile Mapping System point clouds, in order to detect and identify (individual) trees. By the nature of MMS these are trees in the vicinity of the road network (rather than in forests). Therefore, part of the challenge is distinguishing between trees and other objects, such as buildings, street furniture, cars etc. Three submitted segmentation and classification algorithms are thus evaluated.
2014
Geometric road runoff estimation from laser mobile mapping data
Mountain roads are the lifelines of remote areas but are often situated in complicated settings and prone to landslides, rock fall, avalanches and damage due to surface water runoff. The impact and likelihood of these types of hazards can be partly assessed by a detailed geometric analysis of the road environment. Field measurements in remote areas are expensive, however. A possible solution is the use of a Laser Mobile Mapping System (LMMS), which, at a high measuring rate, captures dense and accurate point clouds. This paper presents an automatic approach for the delineation of both the direct environment of a road and the road itself into local catchments, starting from an LMMS point cloud. The results enable a user to assess where on the road most water from the surroundings will assemble, and how water will flow over the road after, e.g., heavy snowmelt or rainfall. To arrive at these results, the following steps are performed. First, outliers are removed and the point cloud data is gridded at a uniform width. The local surface normal and gradient of each grid point are determined. The relative smoothness of the road is used as a criterion to identify the road's outlines. The local gradients are input for running the so-called D8 method, which simply exploits the fact that surface water follows the direction of steepest descent. This method first enables the identification of sinks on the roadside, i.e., the locations where water flow accumulates and potentially enters the road. Moreover, the method divides the road's direct neighbourhood into catchments, each consisting of all grid cells having runoff to the same sink. In addition, the method is used to analyse the road surface itself. The new method is demonstrated on a 153 m long stretch of Galician mountain road sampled by LMMS data.
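The D8 step can be sketched directly: each grid cell drains to the neighbour offering the steepest descent, with diagonal neighbours penalized by their longer distance. The function below is a minimal illustration on a height grid, not the paper's implementation.

```python
def d8_direction(dem, r, c):
    """D8 flow direction: surface water leaves cell (r, c) toward the
    steepest-descent neighbour; returns None for a sink (no lower neighbour)."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dem) and 0 <= nc < len(dem[0]):
                dist = (dr * dr + dc * dc) ** 0.5     # diagonals are farther away
                drop = (dem[r][c] - dem[nr][nc]) / dist
                if drop > best_drop:
                    best, best_drop = (nr, nc), drop
    return best
```

Following these directions cell by cell traces the runoff path; cells whose paths end at the same sink form one local catchment.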
2013
Automatic estimation of excavation volume from laser mobile mapping data for mountain road widening
Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS) which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points, which enables the estimation of the road centerline and road boundaries. In the final step, the left and right side of the road points are sliced in 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing each sliced volume enables the estimation of the required excavation for a widening of the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways: first, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road, both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC. The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as Total Station or GPS profiling. A drawback is that an LMMS system can only sample what is within the view of the system from the road.
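The slice-and-sum volume estimation can be sketched as follows, assuming roadside points are already expressed as (distance-from-road-edge, height-above-planned-surface) pairs for one strip of road; the function and its parameters are illustrative, not the paper's code.

```python
def excavation_volume(points, road_edge_y=0.0, max_dist=4.0, slice_w=1.0, strip_len=1.0):
    """Sum per-slice volumes for one roadside strip. Each point is (y, z):
    y is the signed distance from the road edge, z the height above the
    planned road surface. A slice contributes mean-height * slice_w * strip_len."""
    n_slices = int(max_dist / slice_w)
    volume = 0.0
    for i in range(n_slices):
        lo, hi = road_edge_y + i * slice_w, road_edge_y + (i + 1) * slice_w
        zs = [z for y, z in points if lo <= y < hi and z > 0]  # material to remove
        if zs:
            volume += (sum(zs) / len(zs)) * slice_w * strip_len
    return volume
```

Summing this over consecutive strips along the road, separately for the left and right side, yields the totals that are compared in the abstract.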
2012
A comparison of two different approaches of point cloud classification based on full-waveform LiDAR data
Jinhu Wang, Chuanrong Li, Lingli Tang, Mei Zhou, and Jingmei Li
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Aug 2012
In this paper, two different point cloud classification approaches were applied to full-waveform LiDAR data. First, after pre-processing for noise detection and waveform smoothing, we decomposed the backscattered pulse waveform and extracted each component in the waveform. From the time flag of each component acquired in the decomposition procedure, we calculated the three-dimensional coordinates of the component. The waveform properties of the components, including amplitude, width and cross-section, were then normalized and combined to form an Amplitude/Width/Section space. Two different approaches were then applied to classify the points. In the first, we selected certain targets, trained the parameters, and segmented the points in the study area by supervised classification. In the second, we applied the IHSL colour transform to the above space to obtain a new RGB colour space, which has uniform distinguishability among the parameters and contains the whole information of each component in the Amplitude/Width/Section space. The fuzzy C-means algorithm was then applied to the derived RGB space to complete the LiDAR point classification procedure. By comparing the two different classification results, which may be of substantial importance for further target detection and identification, a brief discussion and conclusions are presented for further research.
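The fuzzy C-means step can be illustrated in one dimension. The sketch below alternates the standard FCM membership and centre updates; the initialization, fuzzifier m and iteration count are illustrative choices, not taken from the paper.

```python
def fuzzy_cmeans_1d(xs, k=2, m=2.0, iters=50):
    """Fuzzy C-means in 1D: soft memberships u[j][i] of point j in cluster i
    and cluster centres cs are updated in turn until (approximate) convergence."""
    lo, hi = min(xs), max(xs)
    cs = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]   # spread initial centres
    u = [[0.0] * k for _ in xs]
    for _ in range(iters):
        # Membership update: u_ji = 1 / sum_k (d_i / d_k)^(2/(m-1)).
        for j, x in enumerate(xs):
            d = [abs(x - c) + 1e-12 for c in cs]              # guard against d = 0
            for i in range(k):
                u[j][i] = 1.0 / sum((d[i] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # Centre update: weighted mean with weights u^m.
        for i in range(k):
            num = sum((u[j][i] ** m) * x for j, x in enumerate(xs))
            den = sum(u[j][i] ** m for j in range(len(xs)))
            cs[i] = num / den
    return cs, u
```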