
Open Access Research Article

Assessment of Factors Affecting the Use of Drones in Map Production

Omar Alkhalil *

Faculty of Civil Engineering, Department of Topographic Engineering, Tishreen University, Latakia, Syrian Arab Republic

Correspondence: Omar Alkhalil

Academic Editor: Zed Rengel

Special Issue: Hyperspectral Remote Sensing

Received: May 23, 2022 | Accepted: July 29, 2022 | Published: August 11, 2022

Adv Environ Eng Res 2022, Volume 3, Issue 3, doi:10.21926/aeer.2203029

Recommended citation: Alkhalil O. Assessment of Factors Affecting the Use of Drones in Map Production. Adv Environ Eng Res 2022; 3(3): 029; doi:10.21926/aeer.2203029.

© 2022 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is properly cited.

Abstract

Multi-rotor and fixed-wing drones are extensively used to collect the data needed for producing large-scale topographic maps and plans. Several types of drone products are available, and the most important one for surveyors is the orthophoto. Flight planning, the quality of the control data, the assessment of drone products, and the image processing software need to be considered when using these drones. In this study, we explained these concepts and discussed their theoretical and practical importance in providing standards and tools that might help drone users, surveyors, and others obtain the products they desire. We also proposed a detailed methodology for evaluating the accuracy of drone image-processing products such as aerial triangulation and orthophotos. We discussed the importance of the Ground Sampling Distance (GSD) and the ground control points in assessing the absolute accuracy of the aerial triangulation of the images, and the importance of performing statistical tests before evaluating the absolute horizontal accuracy of the orthophoto. Checkpoints measured using methods more accurate than drone photogrammetry were used for the assessment. Before applying international standards for determining the absolute horizontal accuracy of the orthophoto, we tested whether the differences between the coordinates of these points and the coordinates of the corresponding points measured on the orthophoto followed a normal distribution and whether they were correlated. An orthophoto covering 6.57 hectares with a horizontal accuracy of 0.362 m was produced. This accuracy is suitable for producing a 1/1500 scale map that can be used to extract features with sub-decimeter accuracy.

Keywords

Drone; aerial triangulation; orthophoto; flight planning; control points; statistical testing

1. Introduction

Drones equipped with high-resolution digital cameras can record accurate data that might be used for various photogrammetric tasks [1]. Drones are used as an alternative to traditional surveying methods for updating or making large-scale topographic maps for regional planning, real estate applications, monitoring urban expansion, and creating digital surface models (DSM) [2].

While using drones, the location and orientation of the camera are determined by the Inertial Measurement Unit (IMU) and the airborne GPS unit. These data are then used in the image orientation process [3].

Drones carry different types of remote sensing sensors. Drones dedicated to remote sensing and mapping missions are usually equipped with one or several sensors, including electro-optical sensors (cameras), infrared sensors, and laser sensors. Laser ranging combined with auxiliary sensors, such as GPS and an Inertial Measurement Unit (IMU), forms a laser-based terrain mapping system called Light Detection and Ranging (LiDAR); the GPS and IMU provide the precise geolocation of the resulting point cloud for terrain mapping.

For surveyors, the orthophoto is the most important cartographic product of drones because it is corrected for distortions caused by camera tilt, lens aberrations, and relief displacement due to elevation differences in the photographed scene. Thus, it can be used as a map [4]. Producing an orthophoto requires inputs such as a vertical aerial photograph, the interior and exterior orientation parameters of the camera, and a digital terrain model (DTM) of the region.

When using drones for generating maps, the applied procedures are very similar to those used with conventional aerial photographs regarding flight planning, ground control work, image processing, and assessing the accuracy of aerial triangulation [5]. Here, normal case conditions were applied during image acquisition (images were captured from different camera poses so that the optical axes of the camera systems were perpendicular to the base vector between them and parallel to each other), as well as during the use of control data (airborne GPS or ground control points) in the aerial triangulation process [5].

The advent of drones led to the development of new concepts regarding the inputs (digital images), the control data, the forms of the products (dense point clouds, digital surface models, and orthophotos), and their evaluation mechanisms. In this study, we discussed some of these concepts and explained them from a theoretical and practical perspective to help drone users, surveyors, and others obtain the products they desire.

In this study, we discussed the following points:

1. The factors affecting flight planning that help to achieve the required accuracy
2. The digital sensors used in image acquisition
3. The control data used in image processing (aerial triangulation)
4. The SfM-based (Structure from Motion) processing software used for geometric data acquisition from images in the form of point clouds
5. The nature of the resulting products and the mechanisms used for evaluating their accuracy

We aimed to clarify these points before proceeding with any photogrammetric survey for map production, because many non-specialists have started using drones to produce maps and may find such guidelines helpful in their work.

To evaluate the feasibility of using drones for producing topographic plans, the results were compared to a topographic plan produced by traditional surveying, which is expensive and time-consuming.

2. Materials and Methods

In the following sections, the theoretical principles applied in drone mission planning, the control data for aerial triangulation of drone imagery, the drone imaging sensor, and the assessment of drone products are discussed. We then applied these principles to our project.

2.1 Drone Mission Planning

Theoretically, the inputs to this process are generally the following [6]: the specifications of the drone's camera, the required scale of the map, the size of the studied area, and the proportions of the forward lap and side lap of images.

Regarding the camera specifications, we recorded the size of the captured images in pixels, the size of the image in millimeters, and the focal length of the camera.

For the flight plan, the following information is required:

  • The size of the CCD in pixels (n × m).
  • The size of the pixel in millimeters ($\text{pixel}_\text{size}$).
  • The size of the CCD in millimeters (w × h).
  • The focal length of the camera lens f. 
  • The size and shape of the area to be photographed (length × width).
  • The extent of the forward lap and side lap (PL (%) × PS (%))
  • The scale of the required map (1/Nmap).

A flight plan for an aerial imagery mission must be developed to obtain the appropriate flight altitude, the number of flight lines, the air base (distance between two consecutive photos), the number of images in each flight line, and the total number of images.

2.1.1 Flight Altitude Computations

Flight altitude is the altitude above a certain datum at which the drone flies during data acquisition. The two main datums used are the average (mean) ground elevation and the mean sea level. The flight altitude relative to the mean sea level is calculated using the focal length of the lens and the image scale as follows:

\[ \frac{1}{N_{\text{image}}}=\frac{f}{H-h_{\text{avg}}} \rightarrow H=f \times N_{\text{image}}+h_{\text{avg}} \tag{1} \]

Here, $h_{\text{avg}}$ indicates the mean ground level. Before applying Equation (1), the value of the image scale must be determined as a function of the map scale, based on the principle that the scale of the map obtained from the image must be equal to five times the scale of the image [6], that is:

\[ N_{\text{photo}}=N_{\text{map}} \times 5 \]

2.1.2 Image Ground Coverage

The ground coverage of an image is the area on the ground covered by a single image. This coverage can be determined by calculating the distances covered by the image width (w) and the image height (h) on the ground (parallel and perpendicular to the flight direction, respectively) using the following equations:

\[ D_{1}=w \times N_{\text{photo}} \]

\[ D_{2}=h \times N_{\text{photo}} \tag{2} \]

Where,

\[ w=\text{pixel}_{\text{size}}(\text{mm}) \times n \]

\[ h=\text{pixel}_{\text{size}}(\text{mm}) \times m \tag{3} \]

2.1.3 Flight Lines and Image Number Computations

Before starting the computations of the flight lines and image numbers, it should be noted that for a rectangular project, the flight lines are laid out across the smallest dimension of the project area. This results in fewer flight lines and fewer turns between flight lines.

To cover the project area, the air base (the distance between two consecutive photos) is first computed using the following equation:

\[ B=\frac{D_{1} \times(100-PL)}{100} \tag{4} \]

Then, the flight line spacing is evaluated:

\[ W=\frac{D_{2} \times(100-PS)}{100} \tag{5} \]

The number of flight lines is computed as:

\[ N_{S}=\frac{\text{Width}}{W}+1 \tag{6} \]

The number of flight lines is always rounded up. The number of images per flight line is calculated as:

\[ N_{L}=\frac{\text{Length}}{B}+2+2 \tag{7} \]

The number of images is rounded up; two images are added at the beginning of the flight line before entering the project area, and two more are added upon exiting it [6]. This ensures continuous stereo coverage. Finally, the total number of images needed for the project is calculated as:

\[ N_L \times N_S \tag{8} \]
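To make the planning workflow concrete, the computations in Equations (1) through (8) can be chained in a few lines of code. The following Python sketch is illustrative only: the function name, argument names, and unit conventions (focal length and pixel size in millimeters, distances in meters) are our own assumptions rather than part of the cited procedure [6].

```python
import math

def flight_plan(n, m, pixel_size_mm, f_mm, length_m, width_m, PL, PS, N_map, h_avg_m=0.0):
    # Scale rule: the photo scale number is five times the map scale number.
    N_photo = 5 * N_map
    # Eq. (1): flight altitude above mean sea level (f converted from mm to m).
    H = (f_mm / 1000.0) * N_photo + h_avg_m
    # Eq. (3): image width and height on the sensor, in mm.
    w = pixel_size_mm * n
    h = pixel_size_mm * m
    # Eq. (2): ground coverage parallel (D1) and perpendicular (D2)
    # to the flight direction, in m.
    D1 = (w / 1000.0) * N_photo
    D2 = (h / 1000.0) * N_photo
    # Eq. (4): air base; Eq. (5): flight-line spacing.
    B = D1 * (100 - PL) / 100.0
    W = D2 * (100 - PS) / 100.0
    # Eq. (6): number of flight lines, rounded up.
    N_S = math.ceil(width_m / W) + 1
    # Eq. (7): images per line, with two extra images at each end of the line.
    N_L = math.ceil(length_m / B) + 4
    # Eq. (8): total number of images.
    return {"H": H, "B": B, "W": W, "N_S": N_S, "N_L": N_L, "N_total": N_S * N_L}
```

For the camera and overlaps used later in this study (0.00156192 mm pixels, f = 3.61 mm, PL = 75, PS = 45, N_map = 6000), and taking n = 4000 pixels as the image dimension along the flight direction, the sketch yields a flight height of roughly 108 m above the mean ground level and an air base of about 47 m.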

Many software applications enable the drone to plan and execute the flight automatically, including Litchi, Autopilot, Dronelink, Drone Harmony, DJI Pilot, and Pix4Dcapture (Figure 1). Using such software, surveyors and photographers can exploit the full potential of their drones. However, implementing the plan proposed by these programs requires a good GPS signal, which is not always available.


Figure 1 Pix4Dcapture, a free drone flight-planning app. (Source: https://www.pix4d.com/product/pix4dcapture)

2.2 Drone Imaging Sensor

In a drone survey mission, the choice of the appropriate imaging sensor depends heavily on the exact application. Satisfactory results can be obtained using a high-resolution RGB camera combined with proper mission planning and post-processing. Ground sample distance (GSD) refers to the spatial resolution of the camera. This might be considered to be the ‘accuracy limit’ of aerial surveying using the camera [7]. GSD can be computed by the following equation:

\[ GSD=\text{Pixel}_{\text{size}} \times N_{\text{photo}} \tag{9} \]

It can also be calculated as a function of image width in millimeters (w), flight height (H), focal length (f), and the width of the image in pixels (n) as follows:

\[ GSD=\frac{w \times H}{f \times n} \tag{10} \]

The GSD is very important because it can be used to evaluate the accuracy of aerial triangulation, as we will explain later.
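As a quick cross-check of the two formulas, both can be computed directly. The function names below are ours, and the units follow the conventions above (pixel size and focal length in mm, flight height in m):

```python
def gsd_from_scale(pixel_size_mm, n_photo):
    # Eq. (9): GSD in mm on the ground, from pixel size and photo scale number.
    return pixel_size_mm * n_photo

def gsd_from_flight(w_mm, H_m, f_mm, n_px):
    # Eq. (10): the same GSD from the image width in mm, the flight height
    # above ground (converted from m to mm), the focal length, and the
    # image width in pixels.
    return (w_mm * H_m * 1000.0) / (f_mm * n_px)
```

Because $w = \text{pixel}_{\text{size}} \times n$ and $H \approx f \times N_{\text{photo}}$ above the ground, the two functions return the same value.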

It is preferable to calibrate the camera before image acquisition. Camera calibration recovers the interior orientation parameters and lens distortion parameters of the camera, which are needed to correct the image coordinates before digital image processing [8]. These distortions affect the accuracy of the final products derived from drone images. Calibration is performed by capturing convergent images of a test field consisting of several control points and processing them with stand-alone camera calibration software. Camera self-calibration during image processing is also possible in most SfM photogrammetry software [9].

A mechanical shutter can also improve the accuracy of aerial triangulation because it avoids the image distortion caused by the rolling-shutter effect.

2.3 Control Data for Aerial Triangulation of Drone Imagery

Aerial Triangulation (AT) is the mathematical process of establishing precise and accurate relationships between the individual image coordinate systems and a defined datum and projection (the ground). Generally, ground control points measured by conventional surveying methods or with GPS are used for AT. Aerial triangulation helps to refine the exterior orientation parameters of each image and achieve the desired accuracy in the generated products (i.e., orthophotos and DSMs). When using drones, the exterior orientation parameters are provided by the onboard Inertial Measurement Unit (IMU) and Global Positioning System (GPS). These parameters are stored as EXIF data in the image metadata and are used by the processing software for AT [10,11].

In drone photogrammetry, the following types of reference data are usually available [8]:

  1. Control data, measured by the onboard GPS in the RTK (real-time kinematic) mode. This mode of operation allows the drone to receive corrections for the real-time GPS positions from GPS correction services. 
  2. Control data, measured by the onboard GPS in the PPK (post-processed kinematic) mode. This mode of operation does not require corrections for the real-time GPS positions as the acquired GPS data can be post-processed later.

Studies have shown that [9,10]:

  1. Ground control points need to be used for computing AT when the exterior orientation parameters are determined in the RTK mode (no post-corrections are applied, and the real-time GPS signals are affected by obstructions and weather conditions). Several studies have shown that the absolute accuracy of AT and drone products improves significantly when ground control points are used.
  2. Ground control points are less critical when the exterior orientation parameters are determined in the PPK mode (post-corrections are applied to the GPS data). In this case, the absolute accuracy of AT and drone products is not affected much by the use of control points.
  3. Achieving a high level of positional accuracy is impossible when using a drone with a non-metric camera and without any ground control points.

2.4 Drone Image Processing Software

Drone images are processed using SfM-based software to automatically determine the location of the images and extract geometric data from images in the form of point clouds (DSM), 3D models, and orthophotos [12].

SfM-based software packages differ in the flexibility of importing control points and measuring them on images, the capability to operate on the resulting point clouds and generate 3D models in vector format, the availability of cloud-editing tools, and the contents of the processing report [13]. Despite these differences, all SfM-based software follow the common workflow shown in Figure 2 [14].


Figure 2 The general workflow for SfM-based software.

Several SfM-based software packages are available for 3D modeling and map production from drone images. Many studies have evaluated their effectiveness, and most recommend the Pix4DMapper software [15,16].

2.5 Assessment of Drone Products

The accuracy of the two main types of products of interest to surveyors, aerial triangulation and orthophotos, needs to be assessed.

2.5.1 Accuracy Assessment of Aerial Triangulation (AT)

Relative accuracy is not sufficient to evaluate the accuracy of AT. Hence, ground control points must be used to compute AT and evaluate its absolute accuracy. In this case, control points can be reprojected and considered as checkpoints. After performing aerial triangulation, the coordinates of the checkpoints are determined. For each checkpoint, the differences between the actual ground coordinates and the triangulated ones are determined in the x, y, and z directions. The accuracy can then be assessed using the RMSE (Root Mean Square Error). If the horizontal accuracy of AT is $RMSE_{XY}$ (including both the x and y directions) and its vertical accuracy is $RMSE_{Z}$, the following equations can be used to assess the absolute accuracy of AT [17]:

\[ RMSE_{XY} \leq(4 \rightarrow 6) \times GSD \]

\[ RMSE_{Z} \leq \frac{H}{B} \times \sigma_{XY} \tag{11} \]

Here, B indicates the base distance between two consecutive camera positions, H indicates the flight height above the datum, GSD indicates the ground sampling distance, and $\sigma_{XY}$ denotes the horizontal precision of AT.
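A minimal sketch of this check, assuming the checkpoint differences are already available as plain lists; the function and parameter names are ours, and the `factor` argument spans the 4-6 range of Equation (11):

```python
import math

def rmse(diffs):
    # Root mean square error of the checkpoint differences.
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def at_within_tolerance(rmse_xy, rmse_z, gsd, H, B, sigma_xy, factor=6):
    # Eq. (11): the horizontal tolerance is 4-6 x GSD ('factor' spans that
    # range); the vertical tolerance scales the horizontal precision by the
    # height-to-base ratio H/B.
    return rmse_xy <= factor * gsd and rmse_z <= (H / B) * sigma_xy
```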

2.5.2 Accuracy Assessment of Orthophoto

To assess the horizontal accuracy of the orthophoto, a set of checkpoints is first measured on the orthophoto. The coordinates of the same points are then obtained from a more accurate source (e.g., ground measurements or large-scale topographic plans). For each checkpoint, the differences between the two sets of coordinates are determined in the x and y directions. Finally, the horizontal accuracy is assessed using $RMSE_X$ and $RMSE_Y$ computed from these differences. Assuming that the differences follow a normal distribution and are uncorrelated in the X and Y directions, the factor 2.4477 can then be used to calculate the horizontal accuracy of the orthophoto at a 95% confidence level [18].

\[ \text{Accuracy}_{r}=\frac{2.4477 \cdot \sqrt{RMSE_{X}^{2}+RMSE_{Y}^{2}}}{2} \tag{12} \]
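Equation (12) can be transcribed directly into code; `dx` and `dy` below are hypothetical lists of checkpoint differences in meters, not values from this study:

```python
import math

def orthophoto_accuracy(dx, dy):
    # RMSE of the checkpoint differences in each direction.
    rmse_x = math.sqrt(sum(d * d for d in dx) / len(dx))
    rmse_y = math.sqrt(sum(d * d for d in dy) / len(dy))
    # Eq. (12): horizontal accuracy at the 95% confidence level.
    return 2.4477 * math.sqrt(rmse_x ** 2 + rmse_y ** 2) / 2
```

With the combined RMSE of 0.296 m obtained in Section 4.2, this expression reproduces the ±0.362 m figure reported there.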

3. Results

In the following sections, the previously mentioned theoretical principles were practically applied in map production using drone images.

3.1 Flight Planning and Data Acquisition

Aerial images (n = 28), with a forward overlap of 75% and a side overlap of 45%, were taken of an area in the countryside of Damascus. The aerial sensor was a 12-megapixel (3000 pixels × 4000 pixels) camera (FC330) with a focal length of 3.61 mm, installed on a DJI Phantom 4 Pro system. The pixel size of the captured images was 0.00156192 mm, and the average scale of these images was 1/30,000. The study area was rectangular, covered 6.57 hectares (Figure 3), and was located approximately 800 m above mean sea level.


Figure 3 The study area.

The flight was planned before image acquisition. The elements of the flight plan are summarized in Table 1.

Table 1 The elements of the flight plan.

3.2 Map Scale, GSD, and Control Data

Since the scale of the map obtained from the image is equal to five times the scale of the image [6], the scale of the map was 1/6000. The GSD was calculated by applying Equation (9). We found that $ \text{GSD} = 4.76 \ cm/pixel $.

The control data used in our example were the exterior orientation parameters of the images measured in the RTK mode; no ground control points were available. The availability of control points can improve the accuracy of the results; however, in many cases, it is not possible to obtain them (for example, in hard-to-reach areas, marine environments, or dangerous areas). Thus, the absolute accuracy of AT could not be assessed. However, we determined the relative accuracy of AT by evaluating the differences between the measured values of the exterior orientation parameters of the images and their computed values.

3.3 Image Processing

Pix4DMapper was used to process the images. Although the software's default workflow has various parameters that can be changed manually as per user requirements, we used the default settings, assuming that many users would choose them and that the manufacturer selected them because they provide more consistent results.

The images were imported with their exterior orientation parameters measured by the onboard GPS. The coordinate system WGS84/UTM zone 37N, corresponding to the study area, was selected. Then, AT was performed using the default processing level, based on the exterior orientation parameters of the images and the extracted keypoints.

In the second step, the key points were densified to produce a dense point cloud (Figure 4). This was performed using the default processing level (a low level that uses a quarter of the resolution of the original images). The processing results are summarized in Table 2.


Figure 4 A 3D view of the dense point cloud of the study area, with camera locations (in blue) and image frames (in green).

Table 2 The results of image processing using Pix4DMapper.

To evaluate the accuracy, the software recalculated the exterior orientation parameters of the images and computed the differences between these parameters and the measured (GPS) values, along with the corresponding RMSEs. These RMSEs are shown in Table 3.

Table 3 The RMSEs of the exterior orientation parameters of the images.

Finally, using the dense points, the orthophoto of the study area was generated (Figure 5) with a geometric resolution equal to the GSD.


Figure 5 The orthophoto of the study area.

4. Discussion

4.1 Assessing the Accuracy of AT

The RMSEs of AT shown in Table 3 can be used to evaluate only the relative accuracy of AT; they are not sufficient to assess its absolute accuracy. A set of control points would have to be measured in the study area to assess the absolute accuracy of AT by applying Equation (11). Unfortunately, we could not assess the absolute accuracy due to the lack of control points in the study area. However, all recalculated exterior orientation parameters had sub-meter RMSEs in the x, y, horizontal, and vertical directions.

4.2 Assessing the Accuracy of the Orthophoto

The obtained orthophoto and a large-scale topographic plan (1/1000) of the study area were used to locate 33 checkpoints distributed over the entire orthophoto (Figure 6). These points were then used to assess the horizontal accuracy of the orthophoto. Using the coordinates of the checkpoints digitized on the topographic plan as a reference and the coordinates of the corresponding checkpoints digitized on the orthophoto as observed data, the errors in the x and y directions were calculated (Table 4).


Figure 6 The checkpoints measured on the orthophoto and the topographic plan.

Table 4 The coordinates of the checkpoints measured on the orthophoto and the topographic plan and their differences.

Before using Equation (12) to evaluate the accuracy of the orthophoto, we ensured that the coordinate differences were not correlated and were normally distributed. These statistical tests were performed using the SPSS software.
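For readers without SPSS, equivalent tests can be run in Python with scipy. The sketch below uses hypothetical placeholder arrays in place of the Table 4 differences; note that SPSS reports a Lilliefors-corrected Kolmogorov-Smirnov test, which scipy's plain `kstest` on standardized data only approximates:

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder data; replace with the 33 checkpoint
# differences (DX, DY) listed in Table 4.
rng = np.random.default_rng(42)
dx = rng.normal(0.0, 0.2, 33)
dy = rng.normal(0.0, 0.2, 33)

# Pearson's correlation between DX and DY (cf. Table 5).
r, p_corr = stats.pearsonr(dx, dy)
print(f"Pearson r = {r:.3f} (p = {p_corr:.3f})")

# Normality tests for each direction (cf. Tables 6 and 7).
for name, d in (("DX", dx), ("DY", dy)):
    sw_stat, sw_p = stats.shapiro(d)
    # K-S test against a normal distribution fitted to the sample.
    z = (d - d.mean()) / d.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")
    print(f"{name}: Shapiro-Wilk p = {sw_p:.3f}, K-S p = {ks_p:.3f}")
```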

4.2.1 Test of Correlation Between Coordinates' Differences

Pearson’s correlation coefficient was computed using the SPSS software to measure the strength and direction of the linear relationship between the two studied variables DX and DY. The results of the correlation test are summarized in Table 5.

Table 5 Pearson’s correlation test of differences.

Pearson's correlation coefficient was found to be –0.059 (Table 5), indicating a very weak inverse relationship between the differences in the X and Y directions. Thus, the hypothesis that they are uncorrelated can be accepted.

4.2.2 Normality Test for Differences in the X-Coordinate

The results of the normality test for differences in the X-coordinate are shown in Table 6.

Table 6 The results of the normality test for the differences in the X-coordinate.

We found that the value of "sig" in the Kolmogorov-Smirnov test was 0.200, which was greater than the significance level of 0.05 (Table 6). We also found that the value of "sig" in the Shapiro-Wilk test was 0.831, which was also greater than 0.05. Both values fell in the acceptance region; therefore, we retained the null hypothesis that the data were normally distributed [19]. The distribution histogram for the differences in the X-coordinate is shown in Figure 7.


Figure 7 The distribution histogram for the differences in the X-coordinate (errors on the coordinates in the X-axis).

4.2.3 Normality Test for the Differences in the Y-Coordinate

The results of the normality test for the differences in the Y-coordinate are shown in Table 7.

Table 7 The results of the normality test for the differences in the Y-coordinate.

We found that the value of "sig" in the Kolmogorov-Smirnov test was 0.145, which was greater than the significance level of 0.05 (Table 7). We also found that the value of "sig" in the Shapiro-Wilk test was 0.283, which was also greater than 0.05. Both values fell in the acceptance region; therefore, we retained the null hypothesis that the data were normally distributed. The distribution histogram for the differences in the Y-coordinate is shown in Figure 8.


Figure 8 The distribution histogram for the differences in the Y-coordinate (errors on the coordinates in the Y-axis).

Based on the previous results, we applied Equation (12) to determine the horizontal accuracy of the resulting orthophoto. The equation yielded the following result:

\[ \text{Accuracy}_{r}=\frac{2.4477 \times 0.296}{2}=\pm 0.362 \ \text{m} \]

The horizontal accuracy was suitable for producing a 1/1500 scale map from the orthophoto [20].

5. Conclusion

Drones are powerful tools in the mapping and surveying industry, but surveyors need to understand the new concepts that have emerged with this new technology. In this study, the most important concepts were highlighted theoretically and practically.

Theoretically, we showed that any drone surveying operation needs precise flight planning to acquire the required dataset, suitable software to process the captured imagery, and GPS data and ground control points to perform precise aerial triangulation and obtain the orthophoto. Ground control points need to be used for assessing the absolute accuracy of aerial triangulation.

In this study, we found that the assessment of the absolute horizontal accuracy of the drone-based orthophoto needs the collected data to be compared to data of higher accuracy (a topographic map in our case). Before applying the accuracy standards, statistical tests need to be conducted to ensure that the coordinate differences follow a normal distribution and are not correlated.

The results showed that the orthophoto generated by drone surveys can be considered an accurate alternative to topographic plans produced by traditional surveying. We found that the horizontal accuracy of the orthophoto obtained in this study was suitable for producing a 1/1500 scale map.

Author Contributions

The author did all the research work of this study.

Competing Interests

The author has declared that no competing interests exist.

References

  1. Colomina I, Blázquez M, Molina P, Parés ME, Wis M. Towards a new paradigm for high-resolution low-cost photogrammetry and remote sensing. Proceedings of the ISPRS XXI Congress; 2008 July 3-11; Beijing, China.
  2. Nex F, Remondino F. UAV for 3D mapping applications: A review. Appl Geomat. 2014; 6: 1-15. [CrossRef]
  3. Irschara A, Kaufmann V, Klopschitz M, Bischof H, Leberl F. Towards fully automatic photogrammetric reconstruction using digital images taken from UAVs. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2010; XXXVIII: 65-70.
  4. Anderson K, Westoby MJ, James MR. Low-budget topographic surveying comes of age: Structure from motion photogrammetry in geography and the geosciences. Prog Phys Geogr. 2019; 43: 163-173. [CrossRef]
  5. Laporte-Fauret Q, Marieu V, Castelle B, Michalet R, Bujan S, Rosebery D. Low-cost UAV for high-resolution and large-scale coastal dune change monitoring using photogrammetry. J Mar Sci Eng. 2019; 7: 63. [CrossRef]
  6. Wolf PR, Dewitt BA, Wilkinson BE. Elements of photogrammetry with applications in GIS. 4th ed. New York: McGraw-Hill Education; 2014.
  7. d’Oleire-Oltmanns S, Marzolff I, Peter KD, Ries JB. Unmanned aerial vehicle (UAV) for monitoring soil erosion in Morocco. Remote Sens. 2012; 4: 3390-3416. [CrossRef]
  8. Remondino F, Fraser C. Digital camera calibration methods: Considerations and comparisons. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2006; XXXVI: 266-272.
  9. Murtiyoso A, Grussenmeyer P, Börlin N, Vandermeerschen J, Freville T. Open source and independent methods for bundle adjustment assessment in close-range UAV photogrammetry. Drones. 2018; 2: 3. [CrossRef]
  10. Shahbazi M, Sohn G, Théau J, Menard P. Development and evaluation of a UAV-photogrammetry system for precise 3D environmental modeling. Sensors. 2015; 15: 27493-27524. [CrossRef]
  11. Yuan X, Fu J, Sun H, Toth C. The application of GPS precise point positioning technology in aerial triangulation. ISPRS J Photogramm Remote Sens. 2009; 64: 541-550. [CrossRef]
  12. Burns JHR, Delparte D. Comparison of commercial structure-from-motion photogrammetry software used for underwater three-dimensional modeling of coral reef environments. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2017; XLII: 127-131. [CrossRef]
  13. Haala N, Cramer M, Rothermel M. Quality of 3D point clouds from highly overlapping UAV imagery. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2013; XL: 183-188. [CrossRef]
  14. Alkhalil O, Ali A. Analyzing the effectiveness of commercial low-cost photogrammetry systems in the 3D modeling of historical monuments. Tishreen Uni J Eng Sci Ser. 2021; 43: 31-48.
  15. Sona G, Pinto L, Pagliari D, Passoni D, Gini R. Experimental analysis of different software packages for orientation and digital surface modelling from UAV images. Earth Sci Inform. 2014; 7: 97-107. [CrossRef]
  16. Alidoost F, Arefi H. Comparison of UAS-based photogrammetry software for 3D point cloud generation: A survey over a historical site. ISPRS Ann Photogramm Remote Sens Spatial Inf Sci. 2017; IV: 55-61. [CrossRef]
  17. Casella V, Chiabrando F, Franzini M, Manzino AM. Accuracy assessment of a photogrammetric UAV block by using different software and adopting diverse processing strategies. Proceedings of GISTAM 2019; 2019 May 3-5; Heraklion, Greece. [CrossRef]
  18. American Society for Photogrammetry and Remote Sensing. ASPRS accuracy standards for large-scale maps. Photogramm Eng Remote Sensing. 1990; 56: 1068-1070.
  19. Denis DJ. SPSS data analysis for univariate, bivariate, and multivariate statistics. Hoboken: John Wiley & Sons; 2018. [CrossRef]
  20. Li L, Qiang Y, Zheng Z, Zhang JY. Research on the relationship between the spatial resolution and the map scale in the satellite remote sensing cartographies. Adv Intell Syst Res. 2019; 168: 194-199. [CrossRef]