Experimental Analysis of a Visual-Recognition Control for an Autonomous Underwater Vehicle in a Towing Tank

Abstract: In this study, underwater recognition technology and a fuzzy control system were adopted to adjust the attitude and revolution speed of a self-developed autonomous underwater vehicle (AUV). To validate the functionality of visual-recognition control, an experiment was conducted in the towing tank at the Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University. An underwater lighting box was towed by a towing carriage at low speed. By adding real-time contour approximation and a circle-fitting algorithm to the image-processing procedure, the relationship between the AUV and the underwater lighting box was calculated. Both rudder plane angles and propeller revolution speeds were determined after the size and location of the lighting box were measured in the image. Finally, AUV performance with visual-recognition control was verified by keeping the target object in the image center during passage. Overall, a technique for recognition and object tracking based on an optical image-processing module is presented to verify and implement the UVRTS in the self-developed AUV.


Introduction
In the latter half of the 19th century, Robert Whitehead, a British engineer, developed the first self-propelled torpedo, called the Whitehead torpedo [1]. Autonomous underwater vehicles (AUVs) are often developed for commercial and military purposes. The pioneer of AUVs for scientific research is the well-known EPAULARD, which was developed by the French Research Institute for Exploitation of the Sea (Institut français de recherche pour l'exploitation de la mer, IFREMER) [2]. Over the last three decades, AUV development has been characterized by high safety, high controllability, and low cost. In particular, AUVs are suitable for exploration, inspection, or monitoring in long-term, routine, reproducible, or high-risk missions, which include hydrological surveys, marine geomorphology mapping, seabed resource surveys and environmental assessments, pollutant monitoring, and deep-sea infrastructure and pipeline inspection [3][4][5][6][7].
Nguyen et al. [8] developed a real-time vision-based method for AUV guidance and control. Santos-Victor and Sentieiro [4] demonstrated that a vision-based system is suitable for short-distance measurements. In their study, three-dimensional (3D) reconstructed images were obtained from a camera to extract cluster depth information, and the time and distance computed by a Kalman filter were used to control the AUV. Balasuriya et al. [9] used a single camera and an acoustic sensor to calculate the relative position of an AUV and an underwater cable in a 3D environment for inspecting and maintaining artificial underwater structures. The camera provided two-dimensional planar information, and the acoustic sensor provided the third dimension. The 3D spatial information of the cable was used as a control parameter for AUV operation. To perform underwater positioning and pipeline inspection, Foresti [10] reconstructed 3D environmental information.
The aft part of the vehicle mainly manages the AUV dynamics: four sets of rudder servos that can be operated independently control roll, pitch, and yaw, and a four-blade propeller is driven by a direct current (DC) brushless motor. The configuration of Bigeye Barracuda was designed in accordance with that of the Remote Environmental Monitoring Units developed by the Massachusetts Institute of Technology (MIT) [21]. The internal units of Bigeye Barracuda are illustrated in Figure 1, and its basic specifications are listed in Table 1. The control system uses the embedded Windows operating system as the core controller and is supported by navigation, communication, sensing, battery, and motion units. A block diagram of the embedded system is illustrated in Figure 2.

Hardware
To develop underwater recognition and navigation technology, the UVRTS was integrated with imported visual sensors for environmental recognition. The image-processing module of Bigeye Barracuda has a ring-shaped light-emitting diode (LED) to provide underwater illumination in the fore part of the AUV, as illustrated in Figure 1. The image-processing module uses two vision devices: a wide-angle lens camera and a dual-lens camera. The wide-angle lens camera (9P006) has a resolution of 1980 (H) × 1080 (V) and was installed in front of the fore part to perform wide-area surveying. The dual-lens camera (OV9712) combines two complementary metal-oxide-semiconductor (CMOS) image sensors for bottom surveying, and the resolution of each lens is 1280 (H) × 720 (V).
The control room, as illustrated in Figure 1, is in the parallel midbody and is equipped with attitude sensing, communication, power management, motion control, and real-time integrated management systems. This design is advantageous in the initial stage because it enables the rapid installation and configuration of various components, sensors, and wiring as well as rapid weight adjustment. The control room uses a fanless microcomputer as its core controller (an Intel Atom E3845 processor running the embedded Windows operating system) connected over 2.4-GHz Wi-Fi; it runs the software, GPS, and INS, measures temperature and humidity, and performs power management, rudder control, and propulsion control. Power is supplied by high-performance lithium-ion batteries, with multiple DC voltage converters producing control voltages of 24, 12, 7, and 3.3 V. The battery unit is located below the center line of the control room to enhance AUV stability.
The aft part consists of a DC brushless motor and four servos that provide the main dynamics of the postdriver system, as depicted in Figure 1. The DC brushless motor uses a 24-V, 7-A DC power supply to provide 150 W of propulsion, and the control driver provides forward and reverse motion. The four-blade propeller is right-handed and has a diameter of 94 mm and a pitch ratio (pitch/diameter) of 0.8. Roll, pitch, and yaw motions are controlled by four sets of independently controllable brushless servos. Each servo provides a range of −30° to +30° and a torque of 30 kgf·cm. The AUV achieves six-degree-of-freedom maneuvering by directly driving the cruciform rudder.

Software
The software used for the integrated control system was developed with the user interface (UI) on the basis of the MS Windows operating system and by using Visual Studio C#. The UI of the administration system is illustrated in Figure 3a. The UI includes basic settings, communication settings, and the postdriver system, as depicted in Figure 3b, which includes image processing, task scheduling, and emergency mission stops.

Visual-Based System
Real-time motion control in this study was based on a vision-based system for object recognition and marking. The target object was fabricated from four white LED modules placed in a sealed hexahedron-shaped glass bottle, which remained illuminated underwater throughout the experiment.

Visual-Recognition Procedure
Because of the application of real-time dynamic recognition, image processing in this study incorporated two basic methods to improve recognition speed: color analysis and morphological analysis. The calculation procedure is illustrated in Figure 4. The first step in the visual-recognition procedure is to obtain a continuous image from the image module in the fore part, as depicted in Figure 5a. The image color space is converted from red-green-blue (RGB) to hue-saturation-intensity (HSI), and the brightness dimension of the image is calculated, as displayed in Figure 5b. Histogram equalization and image binarization are then performed [22], as depicted in Figure 5c. Subsequently, basic morphological erosion and dilation methods are applied [23], as presented in Figure 5d. The distinct features and contours of the target are obtained after optimization, as displayed in Figure 5e. At this point, the AUV has completed the calculations for real-time dynamic recognition of image features.
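This pipeline can be sketched with standard OpenCV operations. The following minimal example assumes a BGR frame from the forward camera and uses illustrative threshold and kernel values (not the parameters tuned for Bigeye Barracuda); the HSV value channel is used as a stand-in for the HSI intensity dimension:

```python
import cv2
import numpy as np

def extract_target_contours(frame_bgr):
    """Sketch of the recognition chain: intensity extraction, histogram
    equalization, binarization, erosion/dilation, and contour extraction."""
    # HSV is used as a stand-in for HSI; the V channel approximates brightness.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    intensity = hsv[:, :, 2]

    # Histogram equalization followed by binarization (threshold is illustrative).
    equalized = cv2.equalizeHist(intensity)
    _, binary = cv2.threshold(equalized, 200, 255, cv2.THRESH_BINARY)

    # Morphological erosion then dilation to suppress noise and close small gaps.
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel, iterations=1), kernel, iterations=1)

    # Contours of the remaining bright blobs are the candidate target features.
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return cleaned, contours
```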

Visual Recognition and Object Tracking
AUVs equipped with cameras are described as underwater vehicles that can visualize objects; AUVs that can also extract image features are referred to as having visuality functions. After the visual correspondence between the images and the AUV is calculated, Bigeye Barracuda can perform visual recognition.
To develop immediate visual-control functions, a local calculus is used in the image to effectively reduce the computational burden. The Huygens principle posits that every particle of a medium situated on a wavefront acts as a new wave source from which a new wave originates, centered on the light source [24]. According to this principle, the image of a point source diffuses in an approximately circular contour. Therefore, a least-squares circle-fitting algorithm [25] is applied to the binary image obtained from image feature extraction. Assume that $\mathbf{u} = (u_1, \ldots, u_n)^{T}$ is a vector of $n$ unknowns and that $f(\mathbf{u})$ is a nonlinear system of $m$ equations. If $m > n$, the least-squares solution is obtained by minimizing the sum of squared residuals:
$$\min_{\mathbf{u}} \sum_{i=1}^{m} f_i(\mathbf{u})^{2}. \qquad (1)$$
As shown in Figure 5f, by calculating the center of the fitted circle of the target object (light-source point) on the AUV visual coordinate plane and the area of the approximate circle, the relative relationship and distance between the target and the AUV can be estimated [26]. The visual coordinate and area can then be defined as object-tracking control parameters (i.e., the orientations and distance), as shown in Figure 6.
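The circle fit itself can be computed without iteration by using the algebraic (Kåsa-style) formulation, in which the over-determined system is solved with linear least squares. The sketch below is one possible implementation under that assumption, not necessarily the exact formulation of [25], and takes the contour points from the binarized image above:

```python
import numpy as np

def fit_circle_least_squares(points):
    """Algebraic least-squares circle fit.

    points: (m, 2) array of contour pixel coordinates, m > 3.
    Returns (xc, yc, r): centre and radius of the fitted circle.
    Solves the over-determined linear system
        2*x*xc + 2*y*yc + c = x**2 + y**2,  with r**2 = c + xc**2 + yc**2.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc ** 2 + yc ** 2)
    return xc, yc, r
```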
Figure 6 depicts the target as a circle. The coordinate value and circle-area value O(x, y, size) of the target on the visual coordinate plane are calculated to define the relative horizontal and vertical relationship between the AUV and the target as well as their relative distance. Figure 6a presents the benchmark visual image, in which the relative distance of the target matches the requirements and the forward speed and heading angle are set to zero. Compared with the image in Figure 6a, that of Figure 6b can be interpreted as an operation strategy of turning left and down because the target is in the lower-left corner of the AUV's view. The visual image displayed in Figure 6c can be interpreted as an operation strategy of backward motion because the size of the target is larger than that in Figure 6a.
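The O(x, y, size) values are, in effect, error signals relative to the image center and a reference apparent size. A minimal sketch of that conversion is shown below; the frame dimensions and reference area are illustrative assumptions rather than the values used in the experiments:

```python
import numpy as np

def tracking_errors(xc, yc, r, frame_w=1280, frame_h=720, ref_area=4000.0):
    """Convert the fitted circle into object-tracking control parameters.

    Returns (ex, ey, esize):
      ex    > 0 when the target is right of the image centre (yaw correction),
      ey    > 0 when the target is below the image centre (pitch correction),
      esize > 0 when the target appears larger than the reference area,
              i.e. the AUV is closer than desired (speed correction).
    The reference area and frame size are illustrative placeholders.
    """
    area = np.pi * r ** 2
    ex = (xc - frame_w / 2.0) / (frame_w / 2.0)   # normalised to [-1, 1]
    ey = (yc - frame_h / 2.0) / (frame_h / 2.0)
    esize = (area - ref_area) / ref_area
    return ex, ey, esize
```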

Fuzzy Control System
In this study, controls for rotational motion (i.e., roll, pitch, and yaw) were based on the fuzzy control system, as illustrated in Figure 7. The design process for the input, fuzzy inference, and fuzzy rule base is detailed as follows:

Definitions of input and output variables
Because of the cruciform rudder design, rudder control can be divided into horizontal and vertical rudder planes. The horizontal and vertical rudder planes manage the pitch and yaw motions, respectively, whereas the revolution speed of the propeller controls the thruster force.
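One way to realize this decomposition in software is to drive the two horizontal fins with the pitch command and the two vertical fins with the yaw command, clamped to the ±30° servo range stated earlier. The fin names and sign conventions below are illustrative assumptions:

```python
def mix_cruciform_rudders(pitch_cmd_deg, yaw_cmd_deg, limit_deg=30.0):
    """Map pitch and yaw corrections onto the four independently driven fins
    of a cruciform stern. The two horizontal fins follow the pitch command
    and the two vertical fins follow the yaw command; the pairing and sign
    conventions are illustrative, not the vehicle's actual calibration."""
    clamp = lambda a: max(-limit_deg, min(limit_deg, a))
    return {
        "upper_fin": clamp(yaw_cmd_deg),        # vertical plane (yaw)
        "lower_fin": clamp(yaw_cmd_deg),
        "port_fin": clamp(pitch_cmd_deg),       # horizontal plane (pitch)
        "starboard_fin": clamp(pitch_cmd_deg),
    }
```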

Fuzzification strategy determination
The present study defined an input variable for the target's position in the Y-direction of the visual plane and an output variable for the rudder angle. When the size of the target object on the visual plane is defined as an input variable, the propeller revolution speed is regarded as the output variable, as presented in Figure 10. Figure 10a presents the input variable of the target object in the visual coordinate plane with nine defined fuzzy values: SSmall, VSmall, Small, LSmall, Middle, LBig, Big, VBig, and SBig. Figure 10b presents the propeller speed as an output variable ranging between 0 and 2000 rpm with nine fuzzy values: SFast, VFast, Fast, LFast, Keep, LSlow, Slow, VSlow, and STOP.
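Both the input and output fuzzy sets can be represented by overlapping membership functions. The sketch below defines nine evenly spaced triangular sets over the stated 0-2000 rpm output range; the spacing and shapes are illustrative placeholders for the actual membership functions of Figure 10b:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Nine evenly spaced triangular sets over 0-2000 rpm (illustrative placement).
# Labels follow the paper, ordered here from STOP (0 rpm) to SFast (2000 rpm).
LABELS = ["STOP", "VSlow", "Slow", "LSlow", "Keep", "LFast", "Fast", "VFast", "SFast"]
PEAKS = np.linspace(0.0, 2000.0, len(LABELS))
WIDTH = PEAKS[1] - PEAKS[0]

def fuzzify_rpm(rpm):
    """Degree of membership of an rpm value in each of the nine fuzzy sets."""
    return {label: triangular(rpm, p - WIDTH, p, p + WIDTH)
            for label, p in zip(LABELS, PEAKS)}
```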

Design of the fuzzy control rule base
The fuzzy control rules in this study were developed on the basis of the AUV experiments. Tables 2-4 present the rules for the vertical rudder plane, horizontal rudder plane, and thruster revolution speed, respectively.

Fuzzy inference
The decision calculus was based on the fuzzy rule base; through this process, called fuzzy inference, the system simulates human reasoning. This study used t-conorm and t-norm methods [27]. The t-conorm is a union operation over partial premises joined by OR in the IF part, whereas the t-norm is an intersection operation over partial premises joined by AND.
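With min as the t-norm and max as the t-conorm, the firing strength of each rule follows directly from the membership degrees of its partial premises. The rules in the sketch below are placeholders for illustration only, not the rules of Tables 2-4:

```python
def t_norm(*degrees):
    """Intersection (AND) of partial premises: minimum operator."""
    return min(degrees)

def t_conorm(*degrees):
    """Union (OR) of partial premises: maximum operator."""
    return max(degrees)

# Illustrative rule: IF target is Left AND target is Low THEN rudder is RightUp.
# mu_left / mu_low would come from fuzzifying the visual coordinates (Vx, Vy).
def rule_firing_strength_and(mu_left, mu_low):
    return t_norm(mu_left, mu_low)

# Illustrative rule with an OR premise:
# IF target is VBig OR target is SBig THEN propeller is Slow.
def rule_firing_strength_or(mu_vbig, mu_sbig):
    return t_conorm(mu_vbig, mu_sbig)
```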

Defuzzification selection
Defuzzification converts the fuzzy result obtained from the inference step into a crisp value. It is based on the weighted average formula (Equation (2)), where $\mu_i(y)$ is the membership function of the output set, $\alpha_i$ the weight of the ith rule, and $N$ the total number of rules. Equation (3) represents the center of area, Equation (4) the center of sums, and Equation (5) the mean of maximum defuzzification, all of which are derived from the weighted average formula.
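In their standard forms, consistent with the definitions above, the weighted average (Equation (2)), center of area (Equation (3)), center of sums (Equation (4)), and mean of maximum (Equation (5)) can be written as:

```latex
% Weighted average: \alpha_i is the firing strength (weight) of rule i,
% \bar{y}_i a representative output value of that rule, N the number of rules.
y^{*} = \frac{\sum_{i=1}^{N} \alpha_i \, \bar{y}_i}{\sum_{i=1}^{N} \alpha_i}
\qquad (2)

% Center of area, with \mu(y) the aggregated output membership function:
y^{*} = \frac{\int y \, \mu(y) \, dy}{\int \mu(y) \, dy}
\qquad (3)

% Center of sums, summing the individual rule output memberships \mu_i(y):
y^{*} = \frac{\int y \sum_{i=1}^{N} \mu_i(y) \, dy}{\int \sum_{i=1}^{N} \mu_i(y) \, dy}
\qquad (4)

% Mean of maximum, averaged over the set Y^{*} on which \mu(y) attains its maximum:
y^{*} = \frac{\int_{Y^{*}} y \, dy}{\int_{Y^{*}} dy}
\qquad (5)
```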

Fundamental Tests
A series of fundamental tests on Bigeye Barracuda was conducted in the towing tank at the Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University (Figure 11) before the visual-recognition and object-tracking tests. The first fundamental test was a static test performed in the stability water tank, which included attitude adjustment, buoyancy balance, and water-tightness inspection items. The second step was a dynamic test in the towing tank, which included remote-control, floating, and submerging tests. These experimental procedures are described in the flowchart in Figure 12. After the fundamental tests were completed, testing for visual recognition and object tracking was conducted.

Visual-Recognition and Object-Tracking Tests
Visual-recognition and object-tracking experiments for a moving LED (Figure 13) were conducted in the towing tank. Images were acquired using the dual-lens camera in the fore part of Bigeye Barracuda and were then transmitted to the controller for immediate calculation, object recognition, and marking. The coordinate and size data for the moving LED were displayed in the visual coordinate system (Figure 14). Furthermore, the relative relationship and distance between the target object and the AUV in the water were estimated, as presented in Figure 15.

Figure 15. Schematics of the visual, body-fixed, and Earth-fixed coordinate systems.

Discussion
The relationship between the AUV and the target object was defined by identifying the contour to mark the center point and area of the fitted circle in the visual coordinate plane. The coordinates of the target object in the Earth-fixed coordinate system were taken to be (Ox, Oy, Oz), whereas the image coordinates of the target in the visual coordinate system were (Vx, Vy); Vsize was the size of the approximate circle in the visual coordinates. The fuzzy rule base was used for fuzzy control by calculating the orientation and distance between the AUV and the target object, that is, (Vx, Vy, Vsize). Because of the cruciform rudder design, the control could easily be decomposed into Vx to affect the yaw angle and Vy to affect the pitch angle, whereas Vsize represents the relative distance and serves as the basis for controlling the propeller revolution speed. Figure 16a,b present time histories of the object-tracking test for Vx against the vertical rudder plane angle and for Vy against the horizontal rudder plane angle, respectively. Both rudder plane angles respond immediately to variations in the visual coordinates. In addition, Figure 17a-c depict the trajectories of the AUV, the visual coordinates, and the rudder angles in the water, respectively. The AUV completed the object-tracking task in the towing tank by using the UVRTS.

Conclusions
Visual-recognition control is beneficial because spatial information can be computed immediately, without predefining the coordinate system or training on input conditions. This enables AUVs to receive new commands or dynamic marker-recognition applications during tasks. This study presented a technique for image recognition and object tracking based on an optical image-processing module to verify and implement the UVRTS in our self-developed AUV. The body coordinates were mapped from the image coordinates to define the correspondence of the distance and the orientation between the target and the AUV. Features of the target object were identified using contour approximation and circle-fitting algorithms. Moreover, the attitude and propeller revolution speed of the AUV could be adjusted automatically through the established fuzzy rule base. Simple control and image algorithms were used for AUV dynamic control and dynamic object recognition. Real-time recognition and control were implemented, enabling the UVRTS to track a target moving at 0.5 m/s. Finally, the autonomous dynamic control of the rudders and propeller was verified by testing the AUV's maneuverability in a towing tank.

Conflicts of Interest:
The authors declare no conflicts of interest.