1. Introduction
Bridge inspection plays an important role in the construction and infrastructure sector, since it is required to maintain the safe operation of structures, extend their useful life and ensure reliability through sustainable processes that guarantee the efficient use of resources [1]. Inspections consist of periodically checking bridges to perform Structural Health Monitoring (SHM) with Non-Destructive Testing (NDT) techniques, which seek to detect failures and discontinuities in structural materials and components without physically affecting the examined components, so that the bridge can be repaired and refurbished if required, ensuring its continuous operation under safety standards.
Currently, SHM of bridges is mostly accomplished through visual inspection supported by sensors and specialized cameras. Its objective is to obtain data and images that allow accurately determining variations in physical characteristics, possible defects, and discontinuities in the structural components of bridges. These inspections are performed by inspectors using manual techniques, accessing the bridge with ladders, scaffolding, vehicles with lifts, or by climbing with ropes and harnesses, activities that create potentially unsafe conditions for people. According to [1,2], the exhaustive studies and detailed evaluations of bridge conditions performed during visual inspections are expensive, technically complex, and time-consuming, with image acquisition and data processing being the most demanding activities. The purpose of visually inspecting a bridge is to detect defects such as cracks, fractures, corrosion, pores and delaminations, among others. High-resolution images are required to detect these defects precisely. The images must be taken at a distance determined by the specifications and characteristics of the cameras and sensors used, also considering the geometric and physical characteristics of the bridge’s structure. These aspects, according to [2], occasionally restrict, hinder, or prevent inspectors’ access or approach to specific parts of the bridge, affecting the quantity, quality and clarity of the images an inspector can obtain. Such a scenario introduces subjectivity into the results, leading to decision-making based on the analysis of one or a few images. Given these limitations, Unmanned Aircraft Systems (UAS) are well suited as platforms for observation and for acquiring data and high-resolution images, making them an innovative, simple, inexpensive, efficient and safe choice for inspecting and monitoring bridge conditions. These are significant advantages compared to traditional methods and to using manned aircraft. This review provides useful information for better understanding the use of UAS, contributing elements for developing future research projects, academic processes, equipment selection and appropriate techniques for inspecting and evaluating the structural conditions of bridges. In the last decade, the use of UAS for structure inspections has increased, but significant technological development has not been evident. In addition, the technical and scientific literature covering the subject is scarce compared with that of traditional techniques, which makes this field a relevant subject for researching and analyzing potential applications.
According to [3], the most common name for these vehicles is drones, in reference to the drone bee, whose particular sound resembles that of the airborne vehicles. Over the past 30 years, the terminology has evolved from Unmanned Aerial Vehicle (UAV) to more precise terms, such as Remotely Piloted Aircraft System (RPAS) and Unmanned Aircraft System (UAS), terms and acronyms embraced by the scientific and academic community, government aviation regulators [4,5,6] and companies dedicated to manufacturing or servicing these vehicles. A UAS is considered a system because it integrates three subsystems: i) the unmanned aircraft, ii) the ground control station, and iii) the communications link between the aircraft and the ground station [7-9]. These subsystems are synergistically linked to achieve autonomous, controlled and stable flight. A UAS can be remotely controlled by a human on the ground, fly autonomously under the control of a computer, or operate through a combination of both methods. This leads to systems with different degrees of automation and operational autonomy that, according to the National Highway Traffic Safety Administration (NHTSA) of the United States [10], can be classified into six levels: i) level 0, the pilot has full manual control of vehicle navigation; ii) level 1, there is a certain degree of automation applied to two flight modes, the first corresponding to holding altitude during dynamic flight and the second to static, sustained flight; iii) level 2, the UAS navigates based on several flight modes programmed by the pilot, maintaining its route autonomously if there are no unexpected changes in the flight environment; iv) level 3, unlike the previous level, the UAS recognizes changes in the flight environment and adjusts the flight modes to navigate in the new environment; v) level 4, the UAS can adapt and react when there is an anomaly in the system, an accident or a sudden collision with an object; and vi) level 5, the UAS can navigate autonomously in all environments and situations. A more detailed classification by levels is proposed in [11], as follows: i) remotely operated vehicle, ii) vehicle with the capacity to complete a mission, iii) robust real-time response to failures or events, iv) vehicle that adapts during failures or events, v) real-time coordination between vehicles, vi) real-time cooperation between vehicles and vii) fully autonomous aircraft.
UAS are also classified into two types according to their takeoff and landing features. The first type is Horizontal Takeoff and Landing (HTOL), characterized by fixed wings, long range and high speed. The second type is Vertical Takeoff and Landing (VTOL), characterized by one or more rotary wings and the ability to perform sustained, stable static flight [12,13], which is considered an advantage for bridge inspection. VTOLs are slower than HTOLs, but they are smaller, lighter, and cheaper. According to their configuration and number of engines, VTOLs are divided into helicopters and multi-rotors. The latter are widely used for civil purposes due to their good maneuverability, good controllability and lower acquisition cost. They are called tricopters if equipped with three engines, quadcopters with four, hexacopters with six, and octocopters with eight [14].
Multi-rotors have five basic components [15,16]: i) a frame that can be made of plastic, carbon fiber, wood, or aluminum; ii) a motor-propeller assembly, in which fixed-pitch propellers (pitch being the distance traveled through the air during one complete 360-degree rotation of the propeller) are coupled to brushless electric motors located on the arms of the frame; iii) an Electronic Speed Controller (ESC) that manages current flow to the motor according to the RPM required by the multi-rotor’s operation, with the ESC controlled by Pulse Width Modulation (PWM); iv) a Flight Controller (FC), considered the brain of the drone, which is in charge of sending control signals to the ESCs. These signals are generated based on commands received by the FC through a signal receiver (Rx) transmitted by the ground station (Tx), and on the signals received from various types of sensors, internal and/or external to the FC [17]. The sensors are generally devices such as gyroscopes, accelerometers, barometers, and magnetometers, which allow the FC to determine the attitude, altitude, speed and position of the aircraft, supported by a satellite navigation system such as GPS or GLONASS. v) A Lithium Polymer (LiPo) battery with high electrical power and energy density that feeds the UAV’s electronic components [18].
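To make the FC-to-ESC relationship concrete, the following is a minimal sketch in Python of the classic quadcopter "X" motor mixer: the FC combines throttle, roll, pitch and yaw commands into one PWM pulse width per ESC. The PWM range, sign convention and example values here are illustrative assumptions, not figures from any cited study.

```python
PWM_MIN, PWM_MAX = 1000, 2000  # pulse width in microseconds (assumed hobby-ESC range)

def mix(throttle, roll, pitch, yaw):
    """Map normalized commands (throttle in [0, 1], others in [-1, 1])
    to PWM values for motors M1..M4 in an X layout.
    Sign convention (assumed): +roll rolls right, +pitch pitches nose up,
    +yaw rotates clockwise seen from above."""
    m_front_left  = throttle + roll + pitch + yaw
    m_front_right = throttle - roll + pitch - yaw
    m_rear_left   = throttle + roll - pitch - yaw
    m_rear_right  = throttle - roll - pitch + yaw

    def to_pwm(u):
        u = min(max(u, 0.0), 1.0)  # clamp so the ESC never sees an invalid command
        return int(PWM_MIN + u * (PWM_MAX - PWM_MIN))

    return [to_pwm(m) for m in
            (m_front_left, m_front_right, m_rear_left, m_rear_right)]

# Hover throttle with a small correction against a right-roll disturbance
print(mix(throttle=0.55, roll=0.05, pitch=0.0, yaw=0.0))
```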
2. Materials and methods
This article was created through a systematic review, as described in [19]. SCOPUS and Google Scholar were used as search tools. On SCOPUS, the search was performed over two time windows: articles published during the last five years (January 2015 to February 2020) and articles published before 2015. The keywords used for the search were: i) drones, ii) unmanned aircraft vehicle, iii) unmanned aircraft system, iv) remotely piloted aircraft system and v) bridge inspection. A total of 257 articles were found, which were reduced to 112 after filtering the results by relevance (determined by number of citations). The guiding questions described in [19] were used to perform the analysis. As a result, 55 articles obtained from the SCOPUS search were selected. This group of papers was called the Academic Relevant Space (ARS) [19]. The same keywords were used for Google Scholar, and a total of 11 references were chosen. Added to the ARS from SCOPUS, these references comprise the 66 references considered for writing this review.
3. Studies performed with unmanned aircraft systems for bridge inspection
Bridges are structures of vital importance for the transportation of people and goods. For this reason, their periodic inspection is pertinent and necessary to ensure continuous and safe operation, as well as to extend their useful life. This is especially important for bridges subject to structural degradation, aging and mechanical damage caused by load-induced fatigue, thermal expansion and/or contraction, delamination in concrete, and cracks. Cracking is considered one of the most important parameters for monitoring and evaluating structural conditions [20].
Although there is a wide variety of bridges, they are usually divided into three major sections: i) foundation, ii) substructure and iii) superstructure. The foundation contains the piles that provide support and a solid base for the bridge and transmit its weight and loads to the terrain. The caps, which are made of concrete and contribute to transferring loads to the ground, are located above the piles. The abutments are found in the substructure; these are vertical walls located at the ends of the bridge, which retain the soil around it [21]. If the bridge is composed of several sections, as shown in Fig. 1, piers and pier caps are located at the ends of each section to support the sections and disperse vibrations produced by traffic crossing the bridge. The decks that directly support the traffic loads are located in the superstructure. These elements are attached to the pier caps through bearings, which transfer loads from the decks to the substructure.
Inspections performed manually on bridges by means of visual techniques are expensive, risky, time-consuming and require the expertise of highly qualified inspectors, which introduces a high degree of subjectivity into the data analysis process and into decision-making for maintenance. Additionally, equipment such as ladders, ropes and lifting baskets mounted on land or water vehicles is required to inspect areas that are difficult to access [3,21,33]. Due to the large size of certain structures, there is a high risk derived from working at heights. For this reason, [20] identifies the need for an intelligent and precise technique for performing SHM studies. Methods based on emerging technologies and digital techniques, combined with UAS equipped with cameras and sensors of various types, allow evaluating and monitoring bridges' structural conditions through image-based approaches.
The use of multirotor UAS for bridge inspections has developed significantly in the last 10 years, since these vehicles are smaller and more maneuverable than manned and unmanned fixed-wing aircraft. Additionally, they offer a certain degree of trajectory control and flight autonomy, facilitating their use for inspecting complex areas that are difficult to access. Moreover, a variety of inspection equipment and sensors can be mounted on UAS as payload, such as high-resolution digital cameras, thermographic cameras [22,23], Light Detection and Ranging (LIDAR) devices for terrain characterization [24], radiation detectors [25] and humidity and temperature sensors [26], among others. These elements allow inspectors to obtain the information needed to detect and analyze various types of defects and discontinuities in bridge structural components and materials.
3.1 Methodology to perform bridge inspection with UAS
A detailed methodology for autonomously acquiring data from images obtained using UAS is presented in [20]. The proposed methodology consists of ten macro-processes, as follows: i) task definition, ii) criteria assessment, iii) mission preparation and control, iv) flight path generation, v) data acquisition, vi) photogrammetry and 3D reconstruction, vii) 3D modeling and visualization, viii) anomaly detection, ix) mechanical interpretation and x) structural condition assessment. Inspection parameters and criteria are defined in processes i and ii, determining which properties and quantities must be obtained, such as structure geometry, anomalies, defects and discontinuities to be detected and evaluated. The requirements and criteria for rejecting and accepting these anomalies are also established. Legal and safety specifications are defined during process iii, such as the minimum approach distance between the object and the drone, as well as minimum and maximum flight heights. The technical specifications of the equipment on the drone, such as flight systems, cameras, lenses and sensors, are also considered. Processes iv and v include planning the flight path along determined waypoints; camera orientation is set to obtain high-resolution images with a given percentage of overlap between them. These processes are also described in [22,28]. At this point, a flight is performed to obtain images of the component to be analyzed. As a result of processes iv and v, the flight path has been optimized with the specific points to be analyzed, and a set of raw images is obtained together with the camera’s real position and orientation and the time stamp of each image. Processes vi and vii consist of pre-processing the images acquired with the UAS through radiometric and geometric enhancement. A 3D model is built to perform a georeferenced photogrammetric analysis using a simple meshing of the inspected component. 3D reconstruction is performed using the Structure from Motion (SFM) technique [20,33], which obtains 3D models from 2D images, and georeferencing is accomplished by adjusting the gathered data with Dense Stereo Matching [27]. As a result of processes vi and vii, a dense cloud of georeferenced 3D points is obtained [17,24,28]. Process viii focuses on executing an automated analysis of the images, identifying and sizing the anomalies through their metrics and characterization, and quantifying variables such as pixel size and object resolution. In parallel, a mesh and texture are created to build a 3D surface model, achieving the data integration needed to map the anomalies. Process ix includes analyzing the point cloud and recording the geometric changes in the structure and the changes in the location and dimensions of the anomalies. The anomalies are interpreted based on the evaluation criteria and the mechanical properties of the materials that make up the inspected component. Finally, the structural evaluation of the bridge’s condition is completed in process x.
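As an illustration of the SFM step in processes vi and vii, the sketch below uses OpenCV in Python to recover the relative camera pose from two overlapping inspection images and triangulate a sparse 3D point set. The file names and camera intrinsic matrix are placeholder assumptions, not values from [20]; a production pipeline would add many more views, bundle adjustment and dense matching.

```python
import cv2
import numpy as np

# Two overlapping images of the inspected component (placeholder names)
img1 = cv2.imread("pier_view_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("pier_view_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two views
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics (focal length and principal point in pixels)
K = np.array([[2400.0, 0.0, 2736.0],
              [0.0, 2400.0, 1824.0],
              [0.0, 0.0, 1.0]])

# Relative pose from the essential matrix, robust to outliers via RANSAC
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# Triangulate matched points into a sparse 3D cloud (up to scale)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
print(f"reconstructed {len(pts3d)} sparse 3D points")
```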
3.2 Equipment, software and techniques used in bridge inspection with UAS
The research developed in [28] analyzed the abutments, piers and pylons of an aged bridge to detect, characterize and quantify cracks. An Inspire 2 quadcopter equipped with a 20.8-megapixel Zenmuse X5S camera was used to do so. The methodology consisted of the following steps: i) acquiring the images using the camera mounted on the drone; ii) generating a point cloud to build a damage map or 3D inspection map using the commercial software Pix4D Mapper [20,28], a process that took 150 min; iii) detecting cracks through deep learning methods [29], such as region analysis with a Convolutional Neural Network (CNN) [30,31], a deep learning algorithm that takes an image as input and weighs the importance of various aspects or objects in the image, differentiating them from each other; iv) quantifying cracks through image processing, binarizing the image to convert a Red-Green-Blue (RGB) image [22,28,38,39] into a binary one with AutoCAD 2017 software, a process that took 30 min and included noise filtering to clean the image; and v) displaying the images on an inspection map, using the Sobel algorithm [32] for edge and contour detection. This methodology was applied to the region of interest (ROI) of the bridge, in this case the lateral part of the decks and the pier caps. The UAS was operated manually at a distance of two meters to avoid losing the GPS signal and to ensure a sufficient camera Field of View (FOV). A total of 384 images were obtained, revealing twelve cracks between 0.55 mm and 1.92 mm thick and between 8.32 mm and 78.43 mm long. Comparing these results to an analysis of the same cracks by traditional methods showed an error of between 1 and 2%.
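The binarization and edge-detection steps described above can be sketched in a few lines of Python with OpenCV. This is a generic illustration of the technique rather than the exact pipeline of [28] (which used AutoCAD for binarization); the file name and threshold parameters are assumptions.

```python
import cv2
import numpy as np

# Close-up RGB image of a concrete surface (placeholder name)
img = cv2.imread("pier_cap_roi.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarization: adaptive thresholding separates dark crack pixels
# from the lighter concrete background despite uneven illumination
blur = cv2.GaussianBlur(gray, (5, 5), 0)
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 51, 10)

# Noise filtering: morphological opening removes isolated speckles
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Sobel gradients highlight crack edges and contours
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.hypot(gx, gy))

cv2.imwrite("crack_mask.png", clean)
cv2.imwrite("crack_edges.png", edges)
```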
An inspection of the Placer River Bridge in Alaska was conducted in [33]. The structure is 85 meters long and has a wooden superstructure. The research compared a traditional inspection developed with the LIDAR method, applied manually by an inspector, to a hybrid autonomous method performed with a UAS. During the study, the effectiveness and efficiency of the method were validated by comparing the number of points, point density and noise level in the images. The pictures were gathered using a DJI S800 hexacopter equipped with a 24.3-megapixel SONY NEX-7 camera and a GoPro Hero 3 camera. Data acquisition and flight path planning were performed using the Mission Planner software [34] from 3D Robotics, allowing the researchers to obtain 2626 images and 20 videos. Subsequently, the team created the 3D reconstruction of the bridge from the images using the Dense Structure from Motion (DSFM) technique [35] jointly with the Hierarchical Point Cloud Generation (HPCG) technique described in detail in [36]. With this hybrid method, 1,412,060,890 points were obtained, with a density of 5,656,185 points per cubic meter and an image noise level (distortion) of 4.5 mm. Using the traditional LIDAR method, 202,790,259 points were obtained, with a density of 1,478,099 points per cubic meter and a noise level of 1.8 mm. The greater number of points in the hybrid technique increases the density and, consequently, the geometric resolution of the image, which benefits the detectability of possible defects or discontinuities in the ROI or damage region (DR). Nevertheless, the increase in the total number of processed pixels increases the complexity and processing time of creating the model.
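Point densities of the kind reported in [33] can be estimated directly from a reconstructed cloud by counting points per unit volume. The short Python sketch below illustrates the idea with a synthetic stand-in cloud; in practice the array would come from the photogrammetry or LIDAR output.

```python
import numpy as np

# Stand-in for a reconstructed cloud: N x 3 array of points in meters
rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0], [85.0, 6.0, 4.0], size=(500_000, 3))

# Count points in each occupied 1 m^3 voxel, then average
voxels = np.floor(points).astype(np.int64)
_, counts = np.unique(voxels, axis=0, return_counts=True)
print(f"occupied voxels: {len(counts)}, "
      f"mean density: {counts.mean():,.0f} points/m^3")
```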
The objective of the study performed in [37] was to inspect a steel bridge to detect damage through fracture critical member (FCM) analysis. Two aspects were analyzed and compared: i) the Maximum Crack to Camera distance (MCC), or maximum distance for detection, and ii) the Achievable Crack to Platform distance (ACP). Two experiments were conducted: i) an external field inspection on a 120 m long bridge specimen simulating a section of a steel bridge and ii) a real inspection of a bridge in Utah. The following equipment was used: i) a DJI MAVIC quadcopter equipped with a 12-megapixel camera, ii) a 3DR IRIS quadcopter equipped with a 12-megapixel GoPro Hero 4 camera and iii) a quadcopter assembled by the researchers, equipped with a 16-megapixel Nikon COOLPIX L830 camera. The measurements were made under the lighting conditions generally found during bridge inspections: i) dark conditions under the bridge on a cloudy day, ii) intermediate lighting conditions under the bridge on a clear day and iii) artificial lighting conditions using electric lamps. The results revealed that cracks can be detected at a greater MCC distance as illumination increases. From darkness to artificial lighting, the MCC distance increased from 0.2 m to 0.6 m with the GoPro and from 0.4 m to 1.10 m with the DJI camera, while the Nikon camera went from a 0.3 m MCC in dark conditions to 1 m in artificial lighting conditions. Another aspect that affects crack-detection efficiency is the camera's ability to increase ISO sensitivity, which raises the sensor's gain to compensate for the limited light passing through the lens in low-light conditions. ISO values ranged between 280 and 1600 for the Nikon camera and between 480 and 1600 for the DJI Mavic camera, while the GoPro was fixed at 400 because its ISO does not vary as lighting conditions change, making GoPros less suitable for crack detection. In the absence of a GPS signal, the DJI MAVIC quadcopter, unlike the other two multirotors, used an alternate positioning system based on stereovision and sonar to maintain altitude, which guaranteed a 0.25 m ACP and a 0.25 m MCC with clear crack detection both in real time and in post-flight image analysis.
Another objective of [37] was to determine the effects of wind on detecting cracks and fractures with UAS. To do so, four beams, the abutments and two girders of a bridge over the Fall River near Ashton, Idaho, were inspected. The procedure was performed near midday under wind speeds between 7 m/s and 11 m/s. With the drone assembled by the researchers, it was not possible to achieve control or maneuverability within this wind speed range. With the IRIS drone, an ACP distance of 0.6 m was achieved but with no real-time crack detection; cracks were only found later during post-flight image analysis. In contrast, with the DJI MAVIC drone, an ACP distance of 0.25 m was achieved and cracks were detected both in real time and during post-flight analysis at speeds near 7 m/s under all lighting conditions; however, at speeds near 11 m/s, no cracks were detected. In addition, it was possible to detect other defects and anomalies of interest in real time, such as corrosion at the bottom of the south beam, efflorescence, cracks in the concrete, possible delamination in the abutment and minor corrosion in the splice plate of one of the beams.
[37] also analyzed the effectiveness, in terms of probability of detection (POD) of cracks, on a 120-meter bridge test specimen located at Purdue University. The specimen is used to train inspectors because it contains many previously characterized cracks. The results of inspecting the specimen with three drones (DJI Mavic, DJI Inspire 1 and DJI Phantom 3) were compared to the average results obtained by 30 human inspectors. Effectiveness was evaluated by quantifying the number of hits during an inspection versus the actual number of cracks in the specimen. The procedure was performed at a wind speed of 4 m/s and, for evaluation purposes, the following parameters were considered: i) number of cracks reported (calls), ii) number of true positives (hits), iii) number of false positives (fallouts), iv) number of false negatives (misses), v) hit/call ratio, vi) true positive ratio (TPR), calculated by dividing the hits by the sum of hits and misses, and vii) false positive ratio (FPR), calculated by dividing the false positives by the number of calls. The evaluation showed that UAS-assisted inspections took 1.5 to 3 times longer than real-time human inspections, and approximately 2 times more calls were made in UAS inspections than in human inspections, although this difference was not significant in post-flight image analysis. In the TPR index, there was only a difference of approximately 10% between UAS and human inspection, and UAS inspections produced between 10% and 20% fewer false positives than human inspections. Based on the results of all the experiments, the study concluded that UAS performance in FCM inspection is of similar quality to that of inspections performed by human inspectors.
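The evaluation parameters above map directly to a few lines of code. The following Python sketch computes the hit/call ratio, TPR and FPR exactly as defined in [37]; the counts in the example are hypothetical.

```python
def inspection_metrics(hits, misses, fallouts):
    """POD-style metrics as defined in [37].

    hits     -- true positives (real cracks reported)
    misses   -- false negatives (real cracks not reported)
    fallouts -- false positives (reported cracks that do not exist)
    """
    calls = hits + fallouts  # total cracks reported
    return {
        "calls": calls,
        "hit_call_ratio": hits / calls,
        "TPR": hits / (hits + misses),  # true positive ratio
        "FPR": fallouts / calls,        # false positive ratio
    }

# Hypothetical counts for one inspection run
print(inspection_metrics(hits=18, misses=4, fallouts=6))
```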
In [38], the team conducted a study on the deck and girders of a bridge in Idaho. The structure has a length of 675 m, a deck approximately 8 m wide, and an area greater than 2,787 square meters. The inspection was performed by manual flight in First Person View (FPV) mode with a DJI Phantom 3 quadcopter equipped with a 12-megapixel camera. The approach distance was 2 to 3 meters, which allowed recording 4K video. The procedure's objective was to inspect the connections between components to evaluate the condition of bolts and rivets and the possible presence of rust on them. The bearings were also inspected for misalignment, bulging or tearing, as well as leaks, concrete spalling, steel loss and cracks in the joints. The project faced a limitation due to the loss of the GPS signal while approaching the structure or flying under it, so it was not possible to establish a flight path with viewpoints and/or waypoints. To overcome this limitation, [39] proposed a UAS navigation system based on an Ultrasonic Beacon System (UBS). This system was developed to provide high-precision positioning based on ultrasonic sensors, enabling applications in environments without GPS. It can be considered an alternative for generating a mapping and localization system with centimeter accuracy, and it is easy to integrate into UAS through low-cost hardware. Additionally, a CNN was used to detect cracks in the concrete of a bridge [40,41] and locate them accurately through a method called geotagging. Three drones were used during the research: two manufactured by the team itself and equipped with a Sony FDR-X3000 camera, and a commercial Parrot Bebop 2 drone. The flight plan was made using Mission Planner. The study consisted of the following steps: i) manufacturing two multirotors instead of using a commercial one, since commercial multirotors do not allow modifying the source code for autonomous navigation; ii) installing a mobile beacon on the drones to determine their 3D location; iii) modifying the source code of the flight controller firmware; iv) integrating the UBS with the autonomous flight controller; and v) replacing the GPS coordinates in the image data with the beacon-derived positions for geotagging. The flight was precise, although there were some fluctuations in altitude. Cracks in the concrete were detected with an accuracy of 96.6%. Comparing the results from the UAS images to those obtained by manual collection demonstrated that both were highly accurate.
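As a generic illustration of the kind of CNN crack detector referenced above (not the specific network of [40,41]), the following PyTorch sketch classifies small image patches as crack or no-crack; the architecture, patch size and random input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrackCNN(nn.Module):
    """Minimal patch classifier: crack vs. no-crack on 96x96 RGB patches."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: [no-crack, crack]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CrackCNN()
patch = torch.randn(1, 3, 96, 96)  # stand-in for a real image patch
probs = torch.softmax(model(patch), dim=1)
print(f"P(crack) = {probs[0, 1]:.3f}")
```

Because each patch retains the position metadata of the image it was cut from, a positive detection can be geotagged back onto the inspection map, which is the role the beacon-derived coordinates play in [39].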
In [22], a multispectral UAS detection system was used to evaluate bridge decks and detect internal delamination in the concrete. A quadcopter assembled on an F550 frame was used to inspect a 31 ft. × 13 ft. × 8 in. concrete deck specimen. The payload comprising the multispectral system consisted of a GoPro Hero3 RGB camera and a FLIR TAU 2 thermographic camera with an operating range of -40 °C to 80 °C. The IR-RGB multispectral system was used to locate subsurface delamination by analyzing the thermograms obtained with the thermographic camera and to locate surface cracks through high-resolution RGB images. During the study, the researchers detected regions of subsurface delamination appearing as hot spots in the thermograms, which display temperature gradients by means of thermal contrast on a scale that maps temperature values to a color gradient.
Another research article that addresses the use of infrared thermography (IRT) with a UAS for bridge inspection is [23]. Its main objective was to assess the reliability of using a multirotor UAS equipped with an on-board thermographic camera to determine the condition of the reinforced concrete decks of a bridge in the city of London. The technique is based on evaluating certain properties of concrete, such as density, thermal conductivity and specific heat. The study was conducted in the following sequence: i) determining the UAS’ capacity for acquiring thermal images, ii) developing a procedure focused on image analysis, iii) creating a mosaic thermogram of the entire deck and iv) producing a condition map with the geometry and dimensions of the detected delaminations. These objectives were pursued through the following methodological steps: i) applying passive IRT to evaluate two deteriorated decks, ii) improving the thermal contrast of the images by means of ImageJ software, iii) overlaying the images using Matlab software to produce the thermal gradient map of the decks, iv) identifying defects through the thermal contrasts achieved in step ii, which are caused by the interruption of heat flow in the concrete, and v) quantifying the delaminated areas through the thermal contrasts of the images. This methodology is consistent with the one described in ASTM D4788-03 [42], which defines the standard procedure and equipment required to conduct a passive infrared thermography test for detecting delamination in concrete bridge decks. The researchers used an Inspire 1 Pro drone equipped with a Vue Pro thermal camera. The experiments were conducted 6 hours after sunrise, under the following conditions: a temperature of 26 °C, relative humidity of 22%, wind speed of 22 km/h and dry decks. Four images were taken at a height of 10 m, with a 50% overlap between images and a spatial resolution of 2.5 cm at that height. The total inspection time was 20 minutes. The images were enhanced with ImageJ, and the team joined the overlapping images to obtain a 640 × 780 mosaic thermogram of 499,200 pixels using a Gaussian smoothing filter [43]. Notably, the authors developed Matlab code to extract the pixels from the images and then applied a threshold classification to sort and choose the appropriate photos. The total percentage of delaminated area on the bridge deck was determined by calculating the percentage of pixels in the higher-temperature areas. The results were compared to those obtained with traditional techniques: inspecting the deck with the traditional hammer-sounding technique identified 17% of the deck as delaminated, while the UAS and IRT study yielded 15.4%, a difference of only 1.6% between the two methods. Similarly, the total delaminated zones detected in a second bridge deck by the hammer method and by IRT were 32% and 29.3% respectively, a difference of 2.7%. These results demonstrate the feasibility of coupling IRT with UAS for evaluating the condition of concrete bridge decks. Table 1 lists and compares the relevant aspects and parameters of each study.
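The pixel-percentage calculation described above (performed in Matlab by the authors of [23]) can be illustrated with a short Python sketch: threshold a thermogram, flag the hotter pixels, and report the delaminated fraction. The synthetic temperature array and the threshold rule are assumptions for illustration.

```python
import numpy as np

# Synthetic thermogram: temperatures in deg C for a 640 x 780 mosaic
rng = np.random.default_rng(1)
thermogram = rng.normal(26.0, 0.3, (640, 780))
thermogram[200:280, 300:420] += 2.0  # injected "delamination" hot spot

# Threshold classification: flag pixels well above the deck's mean temperature
threshold = thermogram.mean() + 3.0 * thermogram.std()
hot = thermogram > threshold

print(f"delaminated area: {100.0 * hot.mean():.1f}% of {hot.size:,} pixels")
```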
3.3 Trajectory and mission planning for UAS
Flight planning is one of the most important factors for an inspection’s success and for the quality of the images obtained with UAS. Flight trajectories can be generated with advanced techniques such as image-based recognition [28,44]; Simultaneous Localization and Mapping (SLAM) [45], which maps an unknown environment in real time while simultaneously locating the vehicle within it; and LiDAR Odometry and Mapping in real time (LOAM) [46], which estimates position through LIDAR. Commercial planning software, such as Mission Planner [33], DJI Ground Station Pro [47], Pix4Dcapture [48] or DroneDeploy [49], is available to perform the mission. Mission planning software provides graphic interfaces that allow the user to define the trajectory and tasks of the UAS. Drones may follow three types of trajectories: i) point-to-point control, consisting of going from point A to point B regardless of the trajectory between the points; ii) trajectory tracking, when the drone is required to follow a certain trajectory; and iii) obstacle avoidance, when the drone is required to avoid obstacles along a certain trajectory. By using a path-planning method, the multirotor can avoid obstacles, track targets and move from one point to another with precision and operational safety [11]. Once a dynamic and mathematical model of the drone has been obtained, optimal flight paths can be defined by means of high- or medium-level programming languages designed according to the user’s needs [50]. There are currently several techniques and models for planning optimal trajectories with minimum energy consumption, smooth transitions between states and minimization criteria, establishing initial conditions for the position, speed and acceleration of the path to follow [51]; these parameters are required to avoid obstacles. A minimal example of such a smooth point-to-point profile is sketched below.
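The following Python sketch builds a quintic (fifth-order) polynomial trajectory for one axis, a classic way to impose initial and final position, speed and acceleration with a smooth transition between states. The boundary conditions and duration are illustrative assumptions.

```python
import numpy as np

def quintic_coeffs(p0, v0, a0, pf, vf, af, T):
    """Solve for the six coefficients of p(t) = c0 + c1*t + ... + c5*t^5
    that satisfy position/velocity/acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,      0,       0,        0],        # p(0)
        [0, 1, 0,      0,       0,        0],        # v(0)
        [0, 0, 2,      0,       0,        0],        # a(0)
        [1, T, T**2,   T**3,    T**4,     T**5],     # p(T)
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],   # v(T)
        [0, 0, 2,      6*T,     12*T**2,  20*T**3],  # a(T)
    ], dtype=float)
    b = np.array([p0, v0, a0, pf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

# One axis: move 5 m in 4 s, starting and ending at rest
c = quintic_coeffs(p0=0, v0=0, a0=0, pf=5, vf=0, af=0, T=4.0)
t = np.linspace(0.0, 4.0, 9)
position = sum(ci * t**i for i, ci in enumerate(c))
print(np.round(position, 3))  # smooth ramp from 0 m to 5 m
```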
3.4 Dynamic Modeling and Control Algorithms for UAS
The dynamic modeling of quadcopter drones is performed with two sets of differential equations, one describing translational movements and the other rotational movements. These equations may or may not be linear; in either case, they can be linearized [52,53] if the quadcopter operates around a specific point at low speeds [11]. Control strategy selection depends on the linearity or non-linearity of the modeled system, taking into account that linear strategies are less complex and easier to implement but very sensitive to disturbances, whereas non-linear strategies are more complicated to implement but less sensitive to disturbances. Two types of linear controllers for quadcopters are common: the Proportional-Integral-Derivative (PID) controller and the Linear Quadratic Regulator (LQR) [54-57], designed via state feedback and linear matrix inequalities (LMIs) [58,59]. There are also several non-linear approaches that model quadcopters with good precision but increase the complexity of the analysis, such as control based on neural networks, adaptive control [60], fault-tolerant control, robust control, backstepping control, H-infinity control [61], model predictive control and control based on disturbance observers [62-65].
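As a minimal illustration of the simplest linear strategy mentioned above, the following Python sketch implements a discrete PID loop for one attitude axis; the gains, sample time and measurement are illustrative assumptions, not values tuned for any particular airframe.

```python
class PID:
    """Discrete PID controller for a single axis."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Stabilize roll to level flight at a 100 Hz control rate (hypothetical gains)
roll_pid = PID(kp=4.5, ki=0.8, kd=0.2, dt=0.01)
roll_command = roll_pid.update(setpoint=0.0, measurement=0.05)  # radians
print(f"corrective roll command: {roll_command:.3f}")
```

In a flight controller, an output like this would feed the motor mixer sketched in the Introduction, closing the loop between sensed attitude and the PWM commands sent to the ESCs.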
4. Conclusions, limitations, and future challenges
Among the limitations of using UAS for bridge and structure inspection in general is the loss of connection to the Global Navigation Satellite System (GNSS) while operating vehicles under the structures or close to them. This condition forces manual operation because it makes it difficult or impossible to plan trajectories and/or autonomous missions. As a challenge to the industry, it is necessary to increase the power and efficiency of satellite navigation systems and to develop simple, portable alternative methods for dealing with the loss of satellite signal. Lighting conditions are another important factor limiting operations, since the dark conditions usually found under bridges reduce the detectability of flaws and affect the approach distances between the drone and the structure, making it necessary to use artificial lighting external to the UAS. A major future challenge for drones is the optimal integration of powerful autonomous lighting systems, since installing a powerful lighting system on a UAS is currently inefficient due to the extra current consumption of the lights, which in turn reduces flight time. Another significant issue is increasing the payload capacity of multirotor UAS; the simultaneous use of sensors and cameras is currently limited, which highlights the need to research materials that reduce weight and improve aerodynamic efficiency. A critical subject is flight time, which ranges only between 20 and 35 minutes, limiting the vehicles to short periods of operation and requiring several batteries and access to charging points in the field; this scenario increases costs and inspection times. Therefore, developing components with high energy efficiency and alternative power sources, or improving the LiPo batteries that currently power UAS, is relevant. The difficulty of operating in confined spaces or very close to certain bridge components is another limitation, given the structural fragility of the propellers and drone arms. This risk has been mitigated by using protective baskets [66], which, despite being functional, affect aerodynamic efficiency due to the extra drag and weight they add to the system, as well as reducing maneuverability and controllability. Finally, the operational limitation produced by wind must be overcome, since high-quality images cannot be acquired at wind speeds higher than 7 m/s. Based on the results of the studies covered by this review, it is reasonable to conclude that UAS are an efficient tool for complementing and reducing the workload of traditional inspections performed by humans, increasing operational and occupational safety. However, it is pertinent to clarify that, although a significant amount of research is being performed on the subject, there is still a lack of technological development in UAS and onboard equipment for executing fully autonomous missions to inspect bridges and structures in general.