by Carlos Lopez-Franco1,2,*,
Javier Gomez-Avila1, Alma Y. Alanis1, Nancy Arana-Daniel1 and Carlos Villaseñor1
1Centro Universitario de Ciencias Exactas e Ingenierías, Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Guadalajara C.P. 44430, Jalisco, Mexico
2Avenida Revolución 1500 Modulo “R”, Colonia Universitaria, Guadalajara C.P. 44430, Jalisco, Mexico
*Author to whom correspondence should be addressed.
Sensors 2017, 17(8), 1865; https://doi.org/10.3390/s17081865
Received: 1 July 2017 / Revised: 4 August 2017 / Accepted: 10 August 2017 / Published: 12 August 2017
(This article belongs to the Special Issue Models, Systems and Applications for Sensors in Cyber Physical Systems)
Abstract
In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, two major drawbacks arise when working with UAVs: high nonlinearities and an unknown position in 3D space, since the vehicle carries no on-board sensor that measures its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot and uses a velocity vector as the reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the controlled system, sensors, hardware and software platforms, and well-defined interfaces to allow real-time implementation, as well as the design of different processing stages with their respective communication architectures. These issues, among others, make real-time implementation a difficult task. To show the effectiveness of the sensor integration and of the control algorithm on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the AscTec Firefly on-board computer, including both simulation and experimental results.
Keywords:
unmanned aerial vehicle; hexarotor; visual servoing
1. Introduction
The use of Unmanned Aerial Vehicles (UAVs) has increased over the last few decades. UAVs have shown satisfactory flight and navigation capabilities, which are very important in applications like surveillance, mapping, search and rescue, etc. The ability to move freely in 3D space represents a great advantage over ground vehicles, especially when the robot is supposed to travel long distances or move in dangerous environments, as in search and rescue tasks. Commonly, UAVs have four rotors; however, having more than four gives them a higher lifting capacity. The hexarotor has some advantages over the highly popular quadrotor, such as increased load capacity, higher speed and greater safety, because the two extra rotors allow the UAV to land even if it loses one of its motors. However, the hexarotor is a highly nonlinear and underactuated system: it has fewer control inputs than degrees of freedom, and its Lagrangian dynamics contain feedforward nonlinearities; in other words, there are some acceleration directions that can only be produced by a combination of the actuators.
In contrast with ground vehicles, it is not possible to use sensors like encoders to estimate the position of a UAV. A good alternative is to use visual information as a reference, given the large amount of information a camera provides in contrast with its low power consumption and low weight. Since it is not possible to know the position of a hexarotor with common on-board sensors such as Inertial Measurement Units (IMUs), some works use off-board sensor systems [1,2,3,4,5]; however, this kind of control limits the application to indoor navigation and adds noise and delays because of the communication between the robot and the ground station.
For this reason, visual control of UAVs has been widely studied. Although stereo vision is extensively used in mapping applications [6,7], when used in UAV navigation, as in [8,9], it requires 3D reconstruction or optical flow, which are computationally expensive algorithms. In this approach, monocular vision is used, and the feature position error in the image plane is related to the robot velocity vector that reduces this error [10,11,12,13,14,15]. Consequently, we can set the position of the robot based on the camera information to control its navigation, and not only in indoor environments [16,17]. Classical Image-Based Visual Servo (IBVS) control stabilizes attitude and position separately [18], which is not possible for underactuated systems. In [18], an image-based approach is used for an underactuated system, but it approximates the depth distance to the features.
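As an illustration of this mapping from feature error to commanded velocity, the sketch below implements the classical IBVS law built on the well-known point-feature interaction matrix. The function names, the constant gain and the assumption of known feature depths are illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one point feature at
    normalized image coordinates (x, y) with estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Stack the per-feature interaction matrices and return the
    6-DOF camera velocity (v, omega) that reduces the feature error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    # Classical IBVS law: v = -gain * pinv(L) @ e
    return -gain * np.linalg.pinv(L) @ e
```

With the features already at their targets, the commanded velocity is zero; a nonzero error produces a velocity that drives the features back toward the desired image positions.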
In [19], a PID controller is implemented on a hexarotor and comparisons between quaternions and Euler angles are made. In [20], the authors propose a visual servoing algorithm combined with a proportional derivative (PD) controller. However, PID approaches are not effective on highly nonlinear systems with model uncertainties such as the hexarotor [21,22]; accordingly, another approach is required. In this paper, we propose the use of a neural-network-based PID. The advantage of using neural networks to control nonlinear systems is that the controller inherits the adaptability and learning capabilities of the neural network [23], making the system able to adapt to actuator faults such as loss of effectiveness, as described in [24], and overcoming disadvantages of the traditional PID [25] such as system uncertainties, communication time delays, parametric uncertainties, external disturbances, actuator saturation and unmodeled system dynamics, among others. If to all of these issues we add the complexity of integrating servo control algorithms with vision sensors and a neural PID in a real-time implementation, well-designed coordination between all of the elements is required, with different processing stages and their respective communication architecture (software and hardware).
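One common way to give a PID controller such adaptive properties is a single-neuron PID, in which three synaptic weights play the roles of the P, I and D gains and are adapted online by a supervised Hebbian rule. The sketch below shows this generic scheme; it is only illustrative and is not the exact network, architecture or training law used in this paper.

```python
class SingleNeuronPID:
    """Single-neuron adaptive PID in incremental form: the three
    weights act as the P, I, D gains and are adapted online by a
    supervised Hebbian rule.  Generic sketch, not the paper's
    exact neural PID."""

    def __init__(self, K=0.5, etas=(0.1, 0.1, 0.1)):
        self.K = K                  # overall neuron gain
        self.etas = etas            # learning rates for P, I, D weights
        self.w = [0.3, 0.3, 0.3]    # initial synaptic weights
        self.e1 = self.e2 = 0.0     # past errors e(k-1), e(k-2)
        self.u = 0.0                # last control output

    def step(self, e):
        # PID-like neuron inputs (incremental form)
        x = [e - self.e1, e, e - 2.0 * self.e1 + self.e2]
        # Supervised Hebbian weight update
        for i in range(3):
            self.w[i] += self.etas[i] * e * self.u * x[i]
        s = sum(abs(wi) for wi in self.w)
        wn = [wi / s for wi in self.w]      # normalized weights
        self.u += self.K * sum(wi * xi for wi, xi in zip(wn, x))
        self.e2, self.e1 = self.e1, e
        return self.u
```

Because the weights are normalized at every step, the incremental output stays bounded while the relative P, I and D contributions adapt to the tracking error.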
The rest of the paper is structured as follows: Section 2 describes the robot and its dynamics. In Section 3, the visual servo control approach is introduced. Section 4 presents the relationship between the error signals from the visual algorithm and the control signals of the hexarotor. In Section 5, the design of the PID controller and the weight adjustment are shown. Section 6 and Section 7 present the simulation and experimental results of the proposed approach and its comparison with the conventional PID controller. Finally, the conclusions are given in Section 8.
2. Hexarotor Dynamic Modeling
The hexarotor consists of six arms connected symmetrically to a central hub. At the end of each arm, a propeller driven by a brushless direct current (DC) motor is attached. Each propeller produces an upward thrust and, since the propellers are located away from the center of gravity, differential thrust is used to rotate the hexarotor. In addition, the rotation of the propellers produces a torque in the direction opposite to the rotation of the motors; therefore, there must be two groups of rotors spinning in opposite directions so that this reaction torque cancels out.
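This thrust-and-torque allocation can be summarized in a mixer matrix. The sketch below assumes a regular layout with rotors every 60° and uses illustrative values for the arm length and the thrust-to-drag coefficient (they are not the Firefly's parameters); with equal rotor thrusts, the three torques cancel, as described above.

```python
import numpy as np

def hexarotor_mixer(l=0.22, c=0.016):
    """Allocation matrix mapping the six rotor thrusts f_1..f_6 to
    total thrust T and body torques (tau_x, tau_y, tau_z).  Rotors
    sit at 60-degree intervals at arm length l; c is the thrust-to-
    drag-torque coefficient, and spin directions alternate so the
    reaction torques cancel at hover.  l and c are illustrative."""
    angles = np.deg2rad(60.0 * np.arange(6))
    spin = np.array([1, -1, 1, -1, 1, -1])   # alternating spin directions
    M = np.vstack([
        np.ones(6),               # total thrust T
        l * np.sin(angles),       # roll torque  tau_x
        -l * np.cos(angles),      # pitch torque tau_y
        spin * c,                 # yaw reaction torque tau_z
    ])
    return M
```

Applying the mixer to six equal thrusts yields a pure vertical thrust with zero roll, pitch and yaw torque, which is the hover condition.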
The pose of a hexarotor is given by its position ζ=[x,y,z]T and its orientation η=[ϕ,θ,ψ]T in the three Euler angles roll, pitch and yaw, respectively. For the sake of simplicity, sin(⋅) and cos(⋅) will be abbreviated as s⋅ and c⋅. The transformation from the world frame to the body frame (Figure 1) is given by
$$
\begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix}
=
\begin{bmatrix}
c\theta c\psi & c\theta s\psi & -s\theta \\
s\phi s\theta c\psi - c\phi s\psi & s\phi s\theta s\psi + c\phi c\psi & s\phi c\theta \\
c\phi s\theta c\psi + s\phi s\psi & c\phi s\theta s\psi - s\phi c\psi & c\phi c\theta
\end{bmatrix}
\begin{bmatrix} x_W \\ y_W \\ z_W \end{bmatrix}.
\tag{1}
$$
Figure 1. Structure of hexarotor and coordinate frames.
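For reference, the world-to-body rotation of Equation (1) can be evaluated numerically, e.g. to verify that it is a proper rotation matrix (orthonormal, determinant one); the function name is ours.

```python
import numpy as np

def world_to_body(phi, theta, psi):
    """Rotation matrix of Equation (1): transforms a vector expressed
    in the world frame into the body frame, using ZYX Euler angles
    roll (phi), pitch (theta) and yaw (psi)."""
    s, c = np.sin, np.cos
    return np.array([
        [c(theta)*c(psi),                       c(theta)*s(psi),                      -s(theta)],
        [s(phi)*s(theta)*c(psi) - c(phi)*s(psi), s(phi)*s(theta)*s(psi) + c(phi)*c(psi), s(phi)*c(theta)],
        [c(phi)*s(theta)*c(psi) + s(phi)*s(psi), c(phi)*s(theta)*s(psi) - s(phi)*c(psi), c(phi)*c(theta)],
    ])
```

With all angles zero the matrix reduces to the identity, and for any angles it satisfies R Rᵀ = I with det(R) = 1.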