Machine learning classification of 3D UAV-SfM point clouds at the University of KwaZulu-Natal (Howard College)
dc.contributor.advisor | Forbes, Angus Mcfarlane. | |
dc.contributor.author | Ntuli, Simiso Siphenini. | |
dc.date.accessioned | 2022-10-19T08:24:08Z | |
dc.date.available | 2022-10-19T08:24:08Z | |
dc.date.created | 2020 | |
dc.date.issued | 2020 | |
dc.description | Master's degree. University of KwaZulu-Natal, Durban. | en_US |
dc.description.abstract | Three-dimensional (3D) point clouds derived using cost-effective and time-efficient photogrammetric technologies can provide information for decision-making in engineering, the built environment, and related fields. This study focuses on the use of machine learning to automate the classification of points in a heterogeneous 3D scene at the University of KwaZulu-Natal's Howard College campus sports field. The state of the camera mounted on the unmanned aerial vehicle (UAV) was evaluated through camera calibration. Nadir aerial images captured by the UAV were used to generate a 3D point cloud with the structure-from-motion (SfM) photogrammetric technique, and the resulting point cloud was georeferenced using natural ground control points (GCPs). Supervised and unsupervised classification approaches were used to assign points to three classes: ground, high vegetation, and building. The supervised classification algorithm used a multi-scale dimensionality analysis to classify points. A georeferenced orthomosaic was used to generate random points for cross-validation, and classification accuracy was evaluated through both qualitative and quantitative analysis. The camera calibration results showed negligible discrepancies between the derived camera-lens parameters and the manufacturer's specifications; the camera was therefore in an excellent state for use as a measuring device. Site visits and ground-truth surveys were conducted to validate the classified point cloud. An overall root-mean-square (RMS) error of 0.053 m was achieved from georeferencing the 3D point cloud, and an RMS error of 0.032 m from georeferencing the orthomosaic. The multi-scale dimensionality analysis classified the point cloud with an overall accuracy of 81.3% and a Kappa coefficient of 0.70. Good results were also obtained from the qualitative analysis. The classification results indicate that a heterogeneous 3D scene can be classified into different land-cover categories, showing that the classification of 3D UAV-SfM point clouds provides a helpful tool for mapping and monitoring complex 3D environments. | en_US |
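The accuracy metrics quoted in the abstract (overall accuracy and Cohen's Kappa coefficient) are standard derivations from a cross-validation confusion matrix. The sketch below shows how both are computed; the 3x3 matrix for the classes ground, high vegetation, and building is purely illustrative, not the study's actual cross-validation data.

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference class, columns:
# predicted class) for ground, high vegetation, building.
# Counts are illustrative only, not the thesis data.
cm = np.array([
    [50,  5,  2],
    [ 6, 40,  4],
    [ 3,  5, 35],
])

n = cm.sum()

# Overall accuracy: proportion of correctly classified points (P_o).
observed = np.trace(cm) / n

# Chance agreement (P_e): sum over classes of
# (row marginal * column marginal) / n^2.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2

# Cohen's Kappa: agreement corrected for chance.
kappa = (observed - expected) / (1 - expected)

print(f"overall accuracy: {observed:.3f}")
print(f"kappa: {kappa:.3f}")
```

With a Kappa around 0.7, agreement is conventionally read as "substantial", which is consistent with the 81.3% overall accuracy reported for the multi-scale dimensionality analysis.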
dc.identifier.uri | https://researchspace.ukzn.ac.za/handle/10413/20959 | |
dc.language.iso | en | en_US |
dc.subject.other | Photogrammetric technologies. | en_US |
dc.subject.other | Heterogeneous 3D scene. | en_US |
dc.subject.other | Unmanned aerial vehicle. | en_US |
dc.subject.other | Georeferenced orthomosaic. | en_US |
dc.subject.other | Root-Mean-Square. | en_US |
dc.title | Machine learning classification of 3D UAV-SfM point clouds at the University of KwaZulu-Natal (Howard College) | en_US |
dc.type | Thesis | en_US |