The use of hybrid intelligent systems (HISs) is necessary to bring the behavior of intelligent autonomous vehicles (IAVs) close to human behavior in recognition, learning, adaptation, generalization, decision making, and action. First, the necessity of HISs and some navigation approaches based on fuzzy ArtMap neural networks (FAMNNs) are discussed. Indeed, such approaches can provide IAVs with more autonomy, intelligence, and real-time processing capabilities. Second, an FAMNN-based navigation approach is suggested. After supervised fast stable learning, namely, the simplified fuzzy ArtMap (SFAM), this approach provides vehicles with the capability to recognize both target-location and obstacle-avoidance situations using FAMNN1 and FAMNN2, respectively. Afterwards, the decision making and action consist of two association stages, carried out by reinforcement trial-and-error learning, and their coordination using NN3. NN3 then decides among the five (05) actions to move towards 30°, 60°, 90°, 120°, and 150°. Third, simulation results display the ability of the FAMNN-based approach to provide IAVs with intelligent behaviors allowing them to navigate intelligently in partially structured environments. Finally, the suggested approach and its expected robustness if implemented on a real vehicle are discussed.
1. Introduction
The recent developments in autonomy requirements, intelligent components, multirobot systems, computational tools, and massively parallel computers have made intelligent autonomous vehicles (IAVs) widely used in many terrestrial, underwater, and space applications [1–6]. In fact, IAV designers seek to create dynamic systems able to navigate and achieve intelligent behaviors, as humans do, in real dynamic environments where conditions are laborious.
To reach their targets while avoiding possibly encountered obstacles in dynamic environments, IAVs must in particular be able to achieve target-localization, obstacle-avoidance, decision-making, and action behaviors. Moreover, the current IAV requirements with regard to these behaviors are real-time operation, autonomy, and intelligence. Thus, to acquire these behaviors while meeting these requirements, IAVs must be endowed with recognition, learning, adaptation, generalization, decision making, and action with real-time processing capabilities. To achieve this goal, classical approaches have been replaced by current ones based on new computational tools, which are far more effective in the design and development of intelligent dynamic systems than the predicate-logic-based methods of traditional artificial intelligence. These tools derive from a collection of methodologies known as soft computing, which can deal with uncertain, imprecise, and inexact data. These technologies have been experiencing extremely rapid growth in space, underwater, and terrestrial applications, where they have been shown to be very effective in solving real-world problems [6–9]. In fact, the essence of soft computing is an accommodation with the imprecision of the real world. Thus, the guiding principle of soft computing is to exploit the tolerance for imprecision, uncertainty, and partial truth in order to achieve tractability, robustness, low solution cost, and better rapport with reality. These capabilities are required for IAVs to adapt to dynamic environments and then to accomplish a wide variety of intelligent behaviors under environmental constraints, particularly the target-localization, obstacle-avoidance, decision-making, and action behaviors.
Thus, several navigation approaches for IAV have been developed using soft computing to achieve intelligent behaviors. Particularly, the fuzzy logic (FL), neural networks (NNs), and adaptive resonance theory (ART) have been used separately or in different combinations as hybrid intelligent systems (HISs) [1, 10–22].
This paper deals with the planning and intelligent control of IAVs in partially structured environments. The aim of this work is to suggest an HIS-based navigation approach able to provide these vehicles with more autonomy, intelligence, and real-time processing capabilities. First, the necessity of HISs for IAVs and some navigation approaches based on fuzzy ArtMap neural networks (FAMNNs) are discussed. Second, an FAMNN-based navigation approach is suggested. This approach was developed in [20] for only three (03) possible vehicle movements, while in the suggested approach this number is increased to five (05) possible movements. Third, simulation results of IAV navigation based on the FAMNN approach are presented and discussed. Finally, the suggested approach and its expected robustness if implemented on a real vehicle are discussed.
2. HIS- and FAMNN-Based Navigation
Recent research on IAVs has pointed out a promising direction for future research in mobile robotics where real time, autonomy, and intelligence have received considerably more attention than, for instance, optimality and completeness. Many navigation approaches have dropped the assumption that perfect environment knowledge is available. They have also dropped explicit knowledge representation for an implicit one based on the acquisition of intelligent behaviors that enable the vehicle to interact effectively with its environment [2]. Consequently, IAVs are faced with less predictable and more complex environments; they have to orient themselves, explore their environments autonomously, recover from failures, and perform whole families of tasks in real time. Moreover, if vehicles lack initial knowledge about themselves and their environments, learning and adaptation become inevitable to replace missing or incorrect environment knowledge by experimentation, observation, and generalization. Thus, in order to reach a goal, the learning and adaptation of vehicles rely on interaction with their environment to extract information [3].
Thus, most of the navigation approaches currently developed are based on the acquisition, by learning and adaptation, of the different behaviors necessary for an intelligent navigation (i.e., navigation with intelligent behaviors), such as target localization, target tracking, obstacle avoidance, and object recognition. One of the more recent trends in intelligent control research for IAVs leading to intelligent behaviors is the use of different combinations of soft computing technologies in HISs [7–9, 23].
Werbos [7] asserted that the relation between NNs and FL is basically complementary rather than equivalent or competitive. In addition, HISs have recently been recognized to improve the learning, adaptation, and generalization capabilities related to variations in environments, where information is qualitative, inaccurate, uncertain, or incomplete [23]. Thus, many attempts have been made to combine FL and NNs in order to achieve better performance in the learning, adaptation, generalization, decision-making, and action capabilities. Such a fusion into an integrated system has the advantages of both NNs (e.g., learning and optimization abilities) and FL (e.g., adaptation abilities and capability to cope with uncertainty). Two main combinations result from this fusion: the fuzzy neural networks (FNNs) [14, 22, 24, 25] and the FAMNNs [12, 17, 19, 20, 26–29]. In classification problems, FAMNNs have the advantage over FNNs of fast and stable learning, while an FNN trained with gradient back-propagation learning is slower and presents the well-known convergence problem of getting stuck in local minima.
Several FAMNN-based navigation approaches have been developed. The navigation approach developed in [12] uses FAMNNs to perform a perceptual space classification for the obstacle-avoidance behavior. FAMNNs have also been used in a motion planning controller for path following to recognize camera images [17] and to learn a qualitative positioning of an indoor mobile robot equipped with ultrasonic sensors [19]. In these approaches, FAMNNs have been used for their generalization capability, robustness, and fast and stable learning. The FAMNN architecture achieves a synthesis of FL and ART-NN by exploiting a close formal similarity between the computations of fuzzy subsethood and ART category choice, resonance, and learning. This architecture performs a min-max learning rule that conjointly minimizes predictive error and maximizes code compression or generalization. This is achieved by a match tracking process that increases the ART vigilance parameter ρ by the minimum amount needed to correct a predictive error.
In addition, ultrasonic sensors, infrared sensors, and camera images are widely used for the IAV obstacle-avoidance behavior, but their signals are often noisy, giving incorrect data. FAMNN approaches, with their inherent adaptivity and high fault and noise tolerance, handle this problem, making these approaches robust.
Thus, the use of an HIS combining NNs, FL, and ART in FAMNNs is necessary to bring IAV behavior close to human behavior in recognition, learning, adaptation, generalization, decision making, and action.
3. FAMNN-Based Navigation Approach
To navigate in partially structured environments, IAVs must reach their targets without collisions with possibly encountered obstacles; that is, they must be able to achieve target-localization and obstacle-avoidance behaviors. In this approach, these two behaviors are acquired by supervised fast stable learning, the simplified fuzzy ArtMap (SFAM), using FAMNN pattern classifiers. Target localization is based on the FAMNN1 classifier, which must recognize, after learning, six (06) target-location situations from data obtained by computing the vehicle-target distance and orientation using a temperature field strategy. Obstacle avoidance is based on the FAMNN2 classifier, which must recognize, after learning, thirty (30) obstacle-avoidance situations from ultrasonic sensor data giving vehicle-obstacle distances. Afterwards, the decision making and action consist of two association stages, carried out by reinforcement trial-and-error learning, and their coordination using NN3, allowing the appropriate action to be decided.
3.1. Vehicles and Sensors
The vehicle movements are possible in five (05) directions; that is, five (05) possible actions Ai (i = 1,…,5) are defined as actions to move towards 30°, 60°, 90°, 120°, and 150°, respectively, as shown in Figure 1. They are expressed by the action vector A = [A1,…,Ai,…,A5]. To detect possibly encountered obstacles, five (05) ultrasonic sensors (US) are necessary to get vehicle-obstacle distances covering the area from 15° to 165°: US1 from 15° to 45°, US2 from 45° to 75°, US3 from 75° to 105°, US4 from 105° to 135°, and US5 from 135° to 165°, as shown in Figure 1.
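The sensor geometry above can be made concrete with a small toy function mapping an obstacle bearing to the ultrasonic sensor covering it. This is only an illustrative sketch; the function name and the choice of assigning a shared sector boundary (e.g., 45°) to the higher-numbered sensor are assumptions, not part of the paper.

```python
def sensor_for_bearing(bearing_deg):
    """Return the index (1..5) of the sensor covering bearing_deg, or None.

    Sectors follow the text: US1 covers 15-45 deg, US2 45-75 deg, ...,
    US5 135-165 deg. Boundary handling is an assumption of this sketch.
    """
    if not 15 <= bearing_deg <= 165:
        return None  # outside the covered area in front of the vehicle
    # Each sensor covers a 30-degree sector starting at 15 degrees.
    return min(5, int((bearing_deg - 15) // 30) + 1)
```

For example, a reading at 90° (straight ahead) falls to US3, matching the sector list in the text.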
To localize and reach targets, the temperature field strategy defined in [21, 30] is used, leading to model the vehicle environment in six (06) areas corresponding to all target locations, called target-location situations, as shown in Figure 2. These situations are defined with six (06) classes T1,…,Tj1,…,T6, where j1 = 1,…,6.
Target-location situations T=[T1,…,Tj1,…,T6].
3.2.2. Obstacle-Avoidance Situations
Currently, most obstacle-avoidance approaches, in mobile robotics, are inspired from observations of human navigation behavior. Indeed, human navigators do not need to calculate the exact coordinates of their positions while navigating in environments (roads, hallways, etc.). The road-following or the hallway-following behavior exhibited by humans is a reactive behavior that is learned through experience. Given a goal, human navigators can focus attention on particular stimuli in their visual input and extract meaningful information very quickly. Extra information may be extracted from the scene during reactive behavior; this information (e.g., approaching an intersection) will usually be stored away and may be retrieved subsequently for higher level reasoning.
In partially structured environments, these observations have led to obstacle-avoidance approaches based on learning and adaptation. Such environments could be factories, passenger stations, harbors, and airports with static and dynamic obstacles. In fact, humans perceive the spatial situations in such environments as topological situations: rooms, corridors, right turns, left turns, junctions, and so forth. Consequently, trying to capture the human obstacle-avoidance behavior in such environments, several approaches based on a recognition of topological situations have been developed [10, 21, 30–33].
Thus, IAVs should be able to recognize spatial obstacle-avoidance situations of partially structured environments and maneuver through these situations on the basis of their own judgement, enabling themselves to navigate from one point of space to a destination without collision with static obstacles. Such obstacle-avoidance behavior is acquired using soft-computing-based pattern classifiers under supervised learning and adaptation paradigms, which allow topological situations to be recognized from sensor data giving vehicle-obstacle distances.
(a) Description of Possibly Encountered Obstacles
Partially structured environments are dynamic, with static, intelligent dynamic, and nonintelligent dynamic obstacles. In reality, static obstacles of different shapes (e.g., Obs1,…,Obs4 in Figure 3(a), where Veh: vehicle, Obs: obstacle, and Tar: target) represent walls, pillars, machines, desks, tables, chairs, and so forth. The intelligent dynamic obstacles (e.g., Veh1 with regard to Veh2 and conversely in Figure 3(a)) represent in reality IAVs controlled by the same suggested FAMNN-based navigation approach, where each one considers the others as obstacles. The nonintelligent dynamic obstacles, oscillating horizontally (e.g., Obs5 in Figure 3(a)) or vertically (e.g., Obs6 in Figure 3(a)) between two fixed points, represent in reality preprogrammed, teleguided, or guided vehicles.
Partially structured environments: (a) Possibly encountered obstacles, and (b) obstacle-avoidance situations O=[O1,…,Oj2,…,O30] where directions shown, in each situation, correspond to those where obstacles exist.
(b) Possibly Encountered Obstacles Structured in Topological Situations
The possible vehicle movements lead us to structure possibly encountered obstacles in thirty (30) topological situations called obstacle-avoidance situations as shown in Figure 3(b), where the directions shown correspond to those where obstacles exist. These situations are defined with thirty (30) classes O1,…,Oj2,…,O30, where (j2=1,…,30).
3.3. FAMNN-Based Navigation System
During the navigation, each vehicle must build an implicit internal map (i.e., target, obstacles, and free spaces), allowing the recognition of both target-location and obstacle-avoidance situations. Then, it decides the appropriate action from two association stages and their coordination [20, 21, 30]. To achieve this, the FAMNN-based navigation system presented below is used, where the only known data are the initial and final (i.e., target) positions of the vehicle.
3.3.1. System Structure
The system structure of the suggested approach is built of three phases, as shown in Figure 4. During Phase 1, the vehicle learns to recognize target-location situations Tj1 using the FAMNN1 classifier, while during Phase 2 it learns to recognize obstacle-avoidance situations Oj2 using the FAMNN2 classifier. Phase 3 decides the appropriate action Ai from two association stages and their coordination using NN3.
FAMNN-based navigation system synopsis.
3.3.2. FAMNN Classifiers
These are networks which decide whether one or several output nodes are required to represent a particular category. Indeed, these networks grow to represent the problem as they see fit instead of being told by the network designer to function within the confines of some static architecture. In this paper, SFAM learning, which is supervised fast stable learning, is used as detailed in [34]. It is specialized for pattern classification; it can learn every single training pattern in only a handful of training iterations, starts with no connection weights but grows in size to suit the problem, and contains only one user-selectable parameter.
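The SFAM mechanics described above (category choice, vigilance test, match tracking, fast learning, and node sprouting) can be sketched as a minimal classifier, assuming the standard formulation of [34] with complement coding as the preprocessing step. The class and method names are illustrative; this is a sketch, not the authors' implementation.

```python
class SFAM:
    def __init__(self, rho_baseline=0.4, alpha=1e-7, epsilon=1e-4):
        self.rho_base = rho_baseline  # baseline vigilance: the one user-selectable parameter
        self.alpha = alpha            # choice parameter, kept very small
        self.eps = epsilon            # match-tracking increment
        self.weights = []             # one weight vector per sprouted output node
        self.labels = []              # class label attached to each output node

    @staticmethod
    def _complement(x):
        # Complement coding keeps |I| constant and helps prevent category proliferation.
        return list(x) + [1.0 - v for v in x]

    def train_one(self, x, label):
        I = self._complement(x)
        norm_I = float(len(x))        # |I| = d after complement coding
        rho = self.rho_base           # vigilance is reset to baseline for each pattern
        disabled = set()
        while True:
            best_j, best_T, best_match = None, -1.0, 0.0
            for j, w in enumerate(self.weights):
                if j in disabled:
                    continue
                inter = sum(min(a, b) for a, b in zip(I, w))  # fuzzy AND |I ^ w|
                T = inter / (self.alpha + sum(w))             # category choice
                if inter / norm_I >= rho and T > best_T:      # vigilance test
                    best_j, best_T, best_match = j, T, inter / norm_I
            if best_j is None:
                # No committed node passes vigilance: sprout a new output node.
                self.weights.append(list(I))
                self.labels.append(label)
                return
            if self.labels[best_j] == label:
                # Fast stable learning: w <- I fuzzy-AND w, in a single shot.
                self.weights[best_j] = [min(a, b)
                                        for a, b in zip(I, self.weights[best_j])]
                return
            # Predictive error: match tracking raises rho just past the bad match.
            rho = best_match + self.eps
            disabled.add(best_j)

    def predict(self, x):
        I = self._complement(x)
        scores = [sum(min(a, b) for a, b in zip(I, w)) / (self.alpha + sum(w))
                  for w in self.weights]
        return self.labels[scores.index(max(scores))]
```

Training every pattern takes a single pass here, which is the "handful of training iterations" property exploited by FAMNN1 and FAMNN2.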
Phase 1 (Target Localization).
It is based on the FAMNN1 classifier, which must recognize, after learning, each target-location situation Tj1. FAMNN1 is trained (see Section 4.1) from data obtained by computing the vehicle-target distance and orientation using a temperature field strategy [21, 30]. In each step, this temperature field is defined in the vehicle environment, and the vehicle task is therefore to localize its target corresponding to the unique maximum temperature of this field, that is, the situation Tj1 where the target is localized. Temperatures in the neighborhood of the vehicle are defined with a temperature field vector XT = [t30, t60, t90, t120, t150], where t30, t60, t90, t120, and t150 are the temperatures in the directions 30°, 60°, 90°, 120°, and 150°, respectively. These temperatures are computed using sine and cosine functions as detailed in [21]. These components, normalized within the range 0 to 1, constitute the input vector XT of FAMNN1 shown in Figure 5.
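The exact sine and cosine formulation of the temperature field is detailed in [21] and is not reproduced here. As a purely hypothetical stand-in, the toy below produces temperatures already in [0, 1] that peak in the direction of the target; the function name, the angle convention (90° is straight ahead in the vehicle frame), and the cosine profile are all assumptions of this sketch.

```python
import math

def temperature_field(vehicle_heading_deg, target_bearing_deg):
    """Toy temperatures t_theta for theta in {30, 60, 90, 120, 150} (vehicle frame).

    Hypothetical profile: maximum temperature (1.0) in the target direction,
    decreasing with angular deviation; all values lie in [0, 1].
    """
    temps = []
    for theta in (30, 60, 90, 120, 150):
        # World-frame direction of this movement option (90 deg = straight ahead).
        world = vehicle_heading_deg + (theta - 90)
        diff = math.radians(world - target_bearing_deg)
        temps.append((1.0 + math.cos(diff)) / 2.0)
    return temps
```

With the target straight ahead, the hottest component is t90, so FAMNN1 would see an input vector peaking in the middle direction.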
Architecture of FAMNN1 classifier (target-location situations).
After learning, for each input vector XT, FAMNN1 provides the vehicle with the capability to decide its target localization, recognizing the target-location situation Tj1 expressed by the most highly activated output Tj1.
Phase 2 (Obstacle Avoidance).
It is based on the FAMNN2 classifier, which must recognize, after learning, each obstacle-avoidance situation Oj2. FAMNN2 is trained (see Section 4.1) from ultrasonic sensor data obtained from the environment giving vehicle-obstacle distances. These distances are defined, in each step, in the vehicle neighborhood with a distance vector XO = [d30, d60, d90, d120, d150], where d30, d60, d90, d120, and d150 are the distances in the directions 30°, 60°, 90°, 120°, and 150°, respectively. These components, normalized within the range 0 to 1, constitute the input vector XO of FAMNN2 shown in Figure 6.
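The normalization of the five distance readings into XO can be sketched minimally as below, assuming a hypothetical maximum sensor range D_MAX; the paper does not state the actual scaling constants, so both the constant and the clipping behavior are assumptions.

```python
# Assumed maximum measurable vehicle-obstacle distance (hypothetical units).
D_MAX = 3.0

def make_xo(raw_distances):
    """Clip the five raw readings d30..d150 and scale them into [0, 1]."""
    return [min(max(d, 0.0), D_MAX) / D_MAX for d in raw_distances]
```

Readings beyond the range saturate at 1.0 (free direction), and a reading of 0 (imminent collision) maps to 0.0.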
Architecture of FAMNN2 classifier (obstacle-avoidance situations).
After learning, for each input vector XO, FAMNN2 provides the vehicle with the capability to decide its obstacle avoidance, recognizing the obstacle-avoidance situation Oj2 expressed by the most highly activated output Oj2.
Note that for both FAMNN1 and FAMNN2, category proliferation is prevented by the normalization of the input vectors at the preprocessing stage and by the choice of the baseline of the vigilance parameter ρ.
Phase 3 (Decision Making and Action).
In this phase, two association stages, between each behavior and the favorable actions, and their coordination are carried out by a multilayer feedforward network NN3. NN3, which decides the appropriate action among the five (05) possible actions, is built of two layers, as shown in Figure 7. The five (05) outputs of the output layer are obtained by (1), where Ni is a random variable distributed over [0, β] and β is a constant:
Ai = g(∑j1 Tj1 Uij1 + ∑j2 Oj2 Vij2) + Ni,
with
g(x) = x if x > 0, and g(x) = 0 otherwise.
(a) Association Stages
Both situations Tj1 and Oj2 are associated separately, in two independent stages, with the favorable actions by reinforcement trial-and-error learning. The association between a situation and an action is usually carried out with the use of a signal provided by an outside process (e.g., a supervisor), giving the desired response. To achieve the correct association, the desired response is acquired through reinforcement trial-and-error learning. Learning, in this case, is guided only by a feedback process, that is, by a signal P provided by the supervisor. This signal causes a reinforcement of the association between a given situation and a favorable action if the latter leads to a favorable consequence for the vehicle; if not, the signal P provokes a dissociation. For this learning, the updating of the weights Uij1 and Vij2 in the two association stages is achieved by (3), given for weights Mij [21, 30], with τ a time constant and α a constant (α > 0):
Mij(t) = -α e^(-(AiCj/τ)·t) + (α - P).
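The time course of (3) can be made concrete with a small helper: the weight starts at -P at t = 0 and, for AiCj/τ > 0, settles at (α - P), so P = 0 reinforces the association towards +α, while P > α drives it to a negative value, i.e., a dissociation. The function name and the sample constants below are illustrative only.

```python
import math

def m_ij(t, A_i, C_j, P, alpha=5.0, tau=1.0):
    """Weight trajectory of eq. (3): M_ij(t) = -alpha*exp(-(A_i*C_j/tau)*t) + (alpha - P).

    alpha and tau defaults are sample values for illustration.
    """
    return -alpha * math.exp(-(A_i * C_j / tau) * t) + (alpha - P)
```

With P = 0 the association is reinforced towards +α; with P > α the long-run weight is negative, matching the reinforcement/dissociation behavior described in the text.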
Target-Localization Association: Target-location situations are associated with favorable actions in an obstacle-free environment (i.e., O = 0) (see Section 4.1). Favorable actions are defined, for each situation Tj1, by the human expert (the supervisor providing P1), who has translated this fact with the vector Z = [Z1, Z2, Z3, Z4, Z5], where each component Zi is determined with regard to each possible action Ai. If Zi = 1, then Ai is a favorable action, while if Zi = 0, then Ai is an unfavorable action. For each situation Tj1, only favorable actions are represented in Figure 8.
Obstacle-Avoidance Association: Obstacle-avoidance situations are associated with favorable actions without considering the temperature field (i.e., T = 0) (see Section 4.1). Favorable actions are defined, for each situation Oj2, by data sensors from the environment (the supervisor providing P2). In each situation Oj2, the favorable actions are those corresponding to directions where no obstacle is detected (no collision), while the unfavorable actions are those corresponding to directions where an obstacle is detected (collision). For instance, in situation O23 shown in Figure 3(b), only A1 and A3 are considered favorable actions, while A2, A4, and A5 are considered unfavorable actions.
(b) Coordination
This coordination must provide the vehicle with the capability to fulfill, at the same time, the two intelligent behaviors (target localization and obstacle avoidance), giving the appropriate action. To ensure the coordination of the two association stages (see Section 4.1), the actions Ai are computed by (1).
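The action computation of (1) can be sketched as follows. Treating Ni as an additive exploration term drawn uniformly over [0, β] is an interpretation of the text ("Ni is a random distribution variable over [0, β]"), and the function names are assumptions of this sketch.

```python
import random

def g(x):
    # Half-wave rectifier from the text: g(x) = x if x > 0, else 0.
    return x if x > 0 else 0.0

def decide_action(T, U, O, V, beta=1.0):
    """Compute Ai = g(sum_j1 Tj1*Uij1 + sum_j2 Oj2*Vij2) + Ni for the
    five actions and return the index of the most activated one (0-based).

    T: 6 target-location activations; U: 5x6 association weights.
    O: 30 obstacle-avoidance activations; V: 5x30 association weights.
    """
    A = []
    for i in range(5):
        net = sum(t * u for t, u in zip(T, U[i])) + \
              sum(o * v for o, v in zip(O, V[i]))
        A.append(g(net) + random.uniform(0.0, beta))  # Ni drawn over [0, beta]
    return max(range(5), key=lambda i: A[i])
```

With β = 0 the choice is deterministic, which is convenient for checking that a reinforced weight wins over dissociated ones.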
After learning of the two association stages and their coordination, NN3 provides the vehicle with the capability to decide the appropriate action, expressed by the most highly activated output Ai.
Architecture of NN3 (decision making and action).
Representation of favorable actions: for each Tj1 situation, where only components Zi=1 of Z=[Z1,Z2,Z3,Z4,Z5] corresponding to favorable actions are represented.
4. Simulation Results
In this section, at first, the simulated learning (training) environments and training processes of FAMNN1, FAMNN2, and NN3 are described. Second, the simulated FAMNN-based navigation approach is described and simulation results are presented. Thus, the vehicles, ultrasonic sensors, and partially structured environments are simulated.
4.1. Training of FAMNN1, FAMNN2, and NN3
4.1.1. Training of FAMNN1
This training is achieved in the learning (training) environment shown in Figure 9(a). The vehicle moves along the paths (1,…,10) in an obstacle-free environment, where the target is positioned at the environment center. This allows the vehicle to be in different positions and orientations, and consequently, the target will be in different locations with regard to the vehicle. Each particular position and orientation then corresponds to one training example for a particular target-location situation Tj1. Thus, training examples are defined by randomly selecting twenty-four (24) positions and orientations (patterns). After only one (01) epoch, FAMNN1 sprouted the n1 = 6 output nodes shown in Figure 5, arriving at the desired result with learning rate η1 = 1.0, α1 = 0.0000001, baseline vigilance ρ1 = 0.4, and ε1 = 0.0001. Note that during training, FAMNNs learn every training example presented, either by incorporating it into an existing output node or by creating a new output node for it.
Learning (training) environments: (a) Target-location situations Tj1—the vehicle moves along the paths (1,…,10) represented by arrows where the target is located in the environment center, and (b) obstacle-avoidance situations Oj2—the vehicle is simulated in a given position and orientation, where the simulated configuration of obstacles corresponds to one training example for an obstacle-avoidance situation e.g., the obstacle-avoidance situation O23.
4.1.2. Training of FAMNN2
The vehicle is simulated in a given position and orientation in the learning (training) environment, where a configuration of obstacles is simulated corresponding to one training example of a particular obstacle-avoidance situation Oj2 (e.g., the situation O23 shown in Figure 9(b)). Thus, training examples are defined by randomly selecting one hundred fifty (150) positions (patterns). After only one (01) epoch, FAMNN2 sprouted the n2 = 30 output nodes shown in Figure 6, arriving at the desired result with learning rate η2 = 1.0, α2 = 0.0000001, baseline vigilance ρ2 = 0.25, and ε2 = 0.0001.
4.1.3. Training of NN3
This training is achieved with the training of two association stages and their coordination; see [21] for more details.
(a) Target-Localization Association
In this stage, the updating of the weights is achieved by (3), where Mij = Uij1, Cj = Tj1, and j1 = 1,…,6, with P defined in (4). The training to obtain Uij1 is achieved in an obstacle-free environment (i.e., O = 0). Thus, the training set consists of six (06) examples using FAMNN1 outputs as NN3 inputs (see Figure 7):
P = P1 if Zi = 0, and P = 0 if Zi = 1, with P1 > α.
(b) Obstacle-Avoidance Association
The updating of the weights is achieved by (3), where Mij = Vij2, Cj = Oj2, and j2 = 1,…,30, with P defined in (5). The training to obtain Vij2 is achieved without considering the temperature field (i.e., T = 0). Thus, the training set consists of thirty (30) examples using FAMNN2 outputs as NN3 inputs (see Figure 7):
P = P2 if collision, and P = 0 if no collision, with P2 > α.
Thus, Uij1 and Vij2 are adjusted to obtain the reinforced actions among the favorable actions shown in Figures 10(a) and 10(b), respectively. Solid circles correspond to positive weights, which represent favorable actions, indicating reinforced association; weight values are proportional to the circle areas, and the most reinforced action is the one having the greatest positive weight. Hollow circles correspond to negative weights, which represent dissociated actions.
Association matrices: (a) Matrix of target-localization association; solid circles correspond to positive weights, which represent favorable actions, indicating reinforced association with different reinforcement degrees, where values are proportional to the circle areas and the most reinforced action is the one having the greatest positive weight; hollow circles correspond to negative weights, which represent actions leading to a dissociation. (b) Matrix of obstacle-avoidance association; solid circles represent reinforced actions (with different reinforcement degrees), and hollow circles represent dissociated actions.
(c) Coordination
The detection of the maximum temperature must be interpreted as the vehicle goal, while the actions generated by the presence of obstacles must be interpreted as the vehicle reflex. Then, actions generated by obstacle avoidance must have precedence over those generated by target localization; that is, the constants P1 and P2 must be defined such that P2 > P1, while β and α must be coupled such that 0 < β < α. Thus, the values used for the different constants are β = 1, α = 5, P1 = 7, and P2 = 9.
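A quick worked check of these constant choices: the long-run weight of (3) tends to (α - P), so with β = 1, α = 5, P1 = 7, and P2 = 9, reinforced associations (P = 0) settle at +5, target dissociations (P = P1) at -2, and obstacle dissociations (P = P2) at -4, giving the obstacle-avoidance reflex precedence; the noise bound β stays below α as required. The helper name below is illustrative.

```python
# Constants as given in the text.
ALPHA, BETA, P1, P2 = 5.0, 1.0, 7.0, 9.0

def asymptotic_weight(P, alpha=ALPHA):
    # Limit of M_ij(t) as t -> infinity, assuming A_i*C_j/tau > 0.
    return alpha - P

reinforced = asymptotic_weight(0.0)       # favorable action, P = 0
target_dissoc = asymptotic_weight(P1)     # unfavorable action, target stage
obstacle_dissoc = asymptotic_weight(P2)   # colliding action, obstacle stage
```

The stronger negative weight for obstacle dissociations is exactly what makes obstacle avoidance override target attraction in the coordination.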
4.2. FAMNN-Based Navigation Approach
To reflect the vehicle behaviors acquired by learning and to demonstrate the learning, adaptation, and generalization capabilities of the suggested FAMNN-based navigation approach, the vehicle navigation is simulated in different static and dynamic partially structured environments.
Each simulated vehicle has only two known data: its initial and final (i.e., target) positions. From these data, it must reach its target while avoiding possibly encountered obstacles using the suggested FAMNN-based navigation approach. In this simulation, the vehicle controls only its heading; consequently, when obstacles are detected at the same time in all five (05) of its movement directions, it must stop. Also, each nonintelligent dynamic obstacle is assumed to have a velocity less than or equal to that of the vehicle.
4.2.1. Static Obstacles
Tested in an environment containing static obstacles, as illustrated in Figure 11 (where Veh: vehicle and Tar: target), the vehicle succeeds in avoiding the static obstacles and reaching its target.
Case of static obstacles.
4.2.2. Intelligent Dynamic Obstacles
In the case illustrated in Figure 12, the four vehicles Veh1, Veh2, Veh3, and Veh4 try to reach their respective targets, while each one avoids the others.
Case of intelligent dynamic obstacles.
4.2.3. Nonintelligent Dynamic Obstacles
In the case of two nonintelligent dynamic obstacles, oscillating vertically and horizontally between two fixed points, illustrated in Figure 13, the vehicle avoids them and reaches its target successfully.
Case of nonintelligent dynamic obstacles.
4.2.4. Complex Environments
In the case illustrated in Figure 14, the three vehicles reach their targets without collisions with static and dynamic obstacles.
Case of a complex environment.
5. Discussion and Conclusion
In this paper, the intelligent behaviors of target localization, obstacle avoidance, decision making, and action, acquired by learning and adaptation and necessary for the navigation of IAVs in partially structured environments, have been suggested. Indeed, the HIS, namely, FAMNN1 and FAMNN2 under supervised fast stable SFAM learning, has been developed to recognize the target-location situations and obstacle-avoidance situations, respectively, while NN3, under reinforcement trial-and-error learning, has been developed for the decision making and action. The simulation results illustrate not only the learning, adaptation, and generalization capabilities of both the FAMNN1 and FAMNN2 classifiers but also the decision-making and action capability of NN3. Nevertheless, a number of issues need further investigation with a view to an implementation on a real vehicle. First, the vehicle must be endowed with one or several actions to move backwards and with a smooth trajectory generation system controlling its velocity. It must also be endowed with specific sensors to detect dynamic obstacles and with specific processing of the data they provide.
The suggested approach in this paper presents two main advantages. The first is related to the obstacle-avoidance behavior which is deduced from the observation of the human one resulting in the principle to perceive partially structured environments as topological situations. The second is related to the performances of the FAMNN approach such as fastness and stability of learning, adaptation and generalization capabilities, fault and noise tolerance, and robustness.
Sensor signals are often noisy, or the sensors are defective, giving incorrect data. This problem is efficiently handled by FAMNNs, whose inherent adaptivity and high fault and noise tolerance make them robust. Indeed, malfunctioning of one of the sensors or one of the neurons does not strongly impair the target-localization and obstacle-avoidance behaviors. This is possible because the knowledge stored in an FAMNN is distributed over many neurons and interconnections, not just a single unit or a few units. Consequently, concepts or mappings stored in an FAMNN have some degree of redundancy built in through this distribution of knowledge.
In another respect, the incremental fuzzy ArtMap learning has proven to be fast and stable, surpassing the performance of other techniques such as gradient back-propagation. In fact, a neural navigation approach has been suggested in [21]. In this neural approach, NN classifiers under gradient back-propagation learning are developed to recognize the same target-location situations and obstacle-avoidance situations presented in Figure 2 and Figure 3(b), respectively. In comparison with this neural approach, the suggested FAMNN approach presents several advantages.
One Selectable Parameter
For an FAMNN, the only parameter to tune is the baseline of the vigilance ρ, while for an NN, several parameters have to be tuned, such as the learning rate η, the number of nodes in the hidden layer, the number of hidden layers, the weight initialization, and the momentum factor if used.
Fast Learning
Both FAMNN1 and FAMNN2 reach the desired result under supervised SFAM learning in only one (01) epoch, whereas in [21] NN1 and NN2 reach the desired result under supervised gradient back-propagation learning in forty-three (43) and fifty (50) epochs, respectively.
Stable Learning
SFAM learning is stable [34, 35], whereas gradient back-propagation suffers from the well-known convergence problem of getting stuck in local minima.
Number of Weights
For FAMNN1 and FAMNN2, the number of weights is (5*2)*6 = 60 and (5*2)*30 = 300, respectively, while for NN1 and NN2 developed in [21], the number of weights is (5*5) + (5*6) = 55 and (5*15) + (15*30) = 525, respectively. From these results, the NNs take a slight advantage (55 weights versus 60) over the FAMNNs for a small number of classes, while the FAMNNs take a great advantage (300 versus 525) over the NNs for a large number of classes.
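The two counting rules behind these figures can be written out explicitly. The sketch below assumes, as in the text, 5 inputs doubled by complement coding with one weight vector per output category for the FAMNN, and two fully connected layers with biases ignored for the NN; the function names are illustrative.

```python
def famnn_weights(n_inputs, n_categories):
    # Complement coding doubles each input; one weight vector per category.
    return (2 * n_inputs) * n_categories

def mlp_weights(n_inputs, n_hidden, n_outputs):
    # Fully connected input-to-hidden and hidden-to-output layers, biases ignored.
    return n_inputs * n_hidden + n_hidden * n_outputs

print(famnn_weights(5, 6))     # FAMNN1: (5*2)*6  = 60
print(famnn_weights(5, 30))    # FAMNN2: (5*2)*30 = 300
print(mlp_weights(5, 5, 6))    # NN1 of [21]: 5*5 + 5*6   = 55
print(mlp_weights(5, 15, 30))  # NN2 of [21]: 5*15 + 15*30 = 525
```

The crossover arises because FAMNN weight count grows linearly in the number of categories, whereas the MLP's hidden layer must also grow as the number of classes grows.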
FPGA Implementation
An interesting alternative is to implement FAMNN1, FAMNN2, and NN3 on Xilinx's FPGA. In that case, the FPGA architectures of FAMNN1 and FAMNN2 would be simpler and would use less hardware than those of NN1 and NN2 developed in [21].
Once implemented on FPGA, the suggested FAMNN-based navigation approach would provide IAV with more autonomy, intelligence, and real-time processing capabilities, making them more robust and reliable. It would thus bring their target-localization, obstacle-avoidance, decision-making, and action behaviors closer to those of humans in recognition, learning, adaptation, generalization, decision making, and action.
Elsewhere, the developed simulation is deliberately simple, aiming to estimate and validate the quality, first, of the target-localization and obstacle-avoidance behaviors learned by FAMNN1 and FAMNN2 and, second, of the suggested decision-making and action behavior, that is, the target-localization and obstacle-avoidance association stages acquired through reinforcement trial-and-error learning and learned by NN3. Of course, the final goal is to implement the suggested approach on a real autonomous vehicle, which could carry various other sensors or face more complicated environments and consequently would probably require refining the number of possible actions, target-location situations, or obstacle-avoidance situations. In such a case, the number of inputs of each FAMNN would change accordingly, implying a new learning of the different target-location or obstacle-avoidance situations and a new learning of their associations for decision making and action.
Concerning the repeatability of the experimental results, it is guaranteed by the learning and generalization capabilities of FAMNN1, FAMNN2, and NN3. In addition, in this simulation, learning stability is guaranteed by SFAM learning for FAMNN1 and FAMNN2 and by reinforcement trial-and-error learning for NN3, both of which are known to be stable (compared, for instance, with gradient back-propagation).
Note, finally, that the suggested approach demonstrates its ability in partially structured environments, with successful obstacle avoidance only when facing dynamic obstacles (vehicles or nonintelligent dynamic obstacles, as shown in Figures 12 and 13, resp.) moving at the same velocity as, or more slowly than, the current vehicle. Thus, for navigation in dynamic environments with unknown or different velocities, vehicles need to be endowed with specific moving-obstacle sensors, and a new dynamic-obstacle classifier is needed.
An interesting direction for future research is to extend the navigation solutions to a set of options rather than a single one; more movement directions would give the vehicle more flexible movement capability. Moreover, as the final decision is in fact a compromise between the results of the target-localization behavior and those of the obstacle-avoidance behavior, it would be interesting to develop this decision part using an optimization method or strategy.
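One simple way to cast this compromise as an explicit optimization can be sketched as follows. This is not the NN3 coordination of the paper but a hypothetical scalarized alternative: the per-action scores, their [0, 1] range, and the weight `w_target` are all assumptions.

```python
ACTIONS = [30, 60, 90, 120, 150]  # the five steering directions in degrees

def decide(target_scores, clearance_scores, w_target=0.5):
    """Choose the action maximizing a weighted compromise of both behaviors.

    `target_scores` and `clearance_scores` are assumed per-action outputs in
    [0, 1] from the target-localization and obstacle-avoidance stages;
    `w_target` trades target attraction against obstacle clearance.
    """
    combined = [w_target * t + (1.0 - w_target) * c
                for t, c in zip(target_scores, clearance_scores)]
    return ACTIONS[combined.index(max(combined))]
```

For example, with the target favoring 30° but that direction blocked, `decide([0.9, 0.6, 0.3, 0.1, 0.0], [0.0, 0.2, 0.9, 0.8, 0.7])` returns 90: the safe central direction wins the compromise. Replacing the fixed weight by a learned or situation-dependent one is exactly where an optimization strategy could be plugged in.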
Another interesting direction for future research is lifelong vehicle learning, which opens the opportunity to transfer learned knowledge. This knowledge could be enhanced by introducing comprehensive knowledge bases and fuzzy associative memories, making IAV more robust and reliable.
References
[1] A. A. Baloch and A. M. Waxman, "Visual learning, adaptive expectations, and behavioral conditioning of the mobile robot MAVIN," Neural Networks, vol. 4, no. 3, pp. 271–302, 1991.
[2] S. Cherian and W. Troxell, "Intelligent behavior in machines emerging from a collection of interactive control structures," Computational Intelligence, vol. 11, no. 4, pp. 565–592, 1995.
[3] S. Thrun and T. M. Mitchell, "Lifelong robot learning," Robotics and Autonomous Systems, vol. 15, no. 1-2, pp. 25–46, 1995.
[4] A. Chohra and A. Farah, "Autonomy, behaviour, and evolution of intelligent vehicles," in Proceedings of the International IMACS IEEE-SMC Multiconference on Computational Engineering in Systems Applications, Lille, France, 1996, pp. 36–41.
[5] T. Fukuda, "Intelligent robotic system," in Proceedings of the International IMACS IEEE-SMC Multiconference on Computational Engineering in Systems Applications, Lille, France, 1996, pp. 1–10.
[6] O. Azouaoui and A. Chohra, "Evolution, behavior, and intelligence of Autonomous Robotic Systems (ARS)," in Proceedings of the 3rd International IFAC Symposium on Intelligent Autonomous Vehicles (IAV '98), Madrid, Spain, 1998, pp. 139–145.
[7] P. J. Werbos, "Neurocontrol and fuzzy logic: connections and designs," International Journal of Approximate Reasoning, vol. 6, no. 2, pp. 185–219, 1992.
[8] D. W. Patterson, Artificial Neural Networks: Theory and Applications, Prentice-Hall, Singapore, 1996.
[9] C. L. Giles, R. Sun, and J. M. Zurada, "Guest editorial: neural networks and hybrid intelligent models: foundations, theory, and applications," IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 721–723, 1998.
[10] M. Meng and A. C. Kak, "Mobile robot navigation using neural networks and nonmetrical environment models," IEEE Control Systems Magazine, vol. 13, no. 5, pp. 30–39, 1993.
[11] A. Chohra and C. Benmehrez, "Planning and intelligent control of autonomous mobile robots in partially structured environments," in Proceedings of the International Symposium on Signal Processing, Robotics and Neural Networks, Lille, France, 1994.
[12] A. Dubrawski and J. L. Crowley, "Self-supervised neural system for reactive navigation," in Proceedings of the IEEE International Conference on Robotics and Automation, San Diego, Calif, USA, May 1994, pp. 2076–2081.
[13] A. Chohra, A. Farah, and C. Benmehrez, "Neural navigation approach of an autonomous mobile robot in a partially structured environment," in Proceedings of the International IFAC Conference on Intelligent Autonomous Vehicles, Helsinki, Finland, 1995, pp. 238–243.
[14] I. Hiraga, T. Furuhashi, Y. Uchikawa, and S. Nakayama, "Acquisition of operator's rules for collision avoidance using fuzzy neural networks," IEEE Transactions on Fuzzy Systems, vol. 3, no. 3, pp. 280–287, 1995.
[15] A. Chohra, A. Farah, and C. Benmehrez, "Neuro-fuzzy navigation approach for autonomous mobile robots in partially structured environments," in Proceedings of the International Conference on Application of Fuzzy Systems and Soft Computing, Siegen, Germany, 1996, pp. 304–313.
[16] A. Chohra and A. Farah, "Hybrid navigation approach combining neural networks and fuzzy logic for autonomous mobile robots," in Proceedings of the 3rd International Conference on Motion and Vibration Control, Chiba, Japan, 1996.
[17] P. Szynkarczyk and A. Masiowski, "The fuzzy ARTMAP neural network as a controller for the mobile robot," in Proceedings of the International Symposium on Methods and Models in Automation and Robotics, Miedzyzdroje, Poland, 1996, pp. 1201–1206.
[18] A. Chohra, A. Farah, and M. Belloucif, "Neuro-fuzzy expert system E_S_CO_V for the obstacle avoidance of Intelligent Autonomous Vehicles (IAV)," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, France, 1997, vol. 3, pp. 1706–1713.
[19] A. Dubrawski, "Tuning neural networks with stochastic optimization," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, France, 1997, vol. 2, pp. 614–621.
[20] A. Chohra, "Fuzzy ArtMap neural networks (FAMNN) based navigation for intelligent autonomous vehicles (IAV) in partially structured environments," in Proceedings of the 3rd International IFAC Symposium on Intelligent Autonomous Vehicles, Madrid, Spain, 1998, pp. 304–309.
[21] A. Chohra, C. Benmehrez, and A. Farah, "Neural navigation approach for Intelligent Autonomous Vehicles (IAV) in partially structured environments," Applied Intelligence, vol. 8, no. 3, pp. 219–233, 1998.
[22] A. Chohra, A. Farah, and M. Belloucif, "Neuro-fuzzy expert system E-S-CO-V for the obstacle avoidance of intelligent autonomous vehicles (IAV)," vol. 12, no. 6, pp. 629–649, 1999.
[23] L. R. Medsker, Hybrid Intelligent Systems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995.
[24] H. Ishibuchi, K. Morioka, and I. B. Turksen, "Learning by fuzzified neural networks," International Journal of Approximate Reasoning, vol. 13, no. 4, pp. 327–358, 1995.
[25] M. Meneganti, F. S. Saviello, and R. Tagliaferri, "Fuzzy neural networks for classification and detection of anomalies," IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 848–861, 1998.
[26] G. A. Carpenter, S. Grossberg, and D. B. Rosen, "Fuzzy ART: fast stable learning and categorization of analog patterns by an adaptive resonance system," Neural Networks, vol. 4, no. 6, pp. 759–771, 1991.
[27] G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen, "Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps," IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 698–713, 1992.
[28] C. T. Lin, C. J. Lin, and C. S. G. Lee, "Fuzzy adaptive learning control network with on-line neural learning," Fuzzy Sets and Systems, vol. 71, no. 1, pp. 25–45, 1995.
[29] H. M. Lee and C. S. Lai, "Supervised extended ART: a fast neural network classifier trained by combining supervised and unsupervised learning," Applied Intelligence, vol. 6, no. 2, pp. 117–128, 1996.
[30] E. Sorouchyari, "Mobile robot navigation: a neural network approach," in Proceedings of the Colloque Neuromimétique, Ecole Polytechnique de Lausanne, Lausanne, Switzerland, 1989, pp. 159–175.
[31] M. Maeda, Y. Maeda, and S. Murakami, "Fuzzy drive control of an autonomous mobile robot," Fuzzy Sets and Systems, vol. 39, no. 2, pp. 195–204, 1991.
[32] Y. S. Kim, I. H. Hwang, J. G. Lee, and H. Chung, "Spatial learning of an autonomous mobile robot using model-based approach," in Proceedings of the 2nd International IFAC Conference on Intelligent Autonomous Vehicles, Helsinki, Finland, 1995, pp. 250–255.
[33] O. Aycard, F. Charpillet, and D. Fohr, "Place learning and recognition using hidden Markov models," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, France, 1997, pp. 1741–1746.
[34] T. Kasuba, "Simplified fuzzy ARTMAP," AI Expert, vol. 8, no. 11, pp. 18–25, 1993.
[35] S. Grossberg, "The link between brain learning, attention, and consciousness," Consciousness and Cognition, vol. 8, no. 1, pp. 1–44, 1999.