Design and Steering Control of a Center-Articulated Mobile Robot Module

This paper discusses the design and steering control of an autonomous modular mobile robot. The module is designed with a center-articulated steering joint to minimize the number of actuators used in the chain. We propose a feedback control law which allows steering between configurations in the plane and show its application as a parking control to dock modules together. The control law is designed by Lyapunov techniques and relies on the equations of the robot in polar coordinates. A set of experiments has been carried out to show the performance of the proposed approach. The design is intended to endow individual wheeled modules with the capability to merge into a single snake-like robot, taking advantage of the benefits of modular robotics.


Motivation
The capability of moving through a wide variety of remote areas has made mobile robots an interesting topic of research. But since there are many ways to move, selecting a locomotion method is a challenging aspect of mobile robot design.
Inspired by their biological counterparts, mobile robots can walk, slide, and swim; in addition, conventional mobile robots travel using powered wheels. Generally, wheels have high efficiency; they are simple and well suited to flat terrain. However, the performance of wheeled mobile robots degrades severely in unstructured environments. For instance, wheeled mobile robots are ill suited to moving over rough terrain, sand, or water.
Chain-based robots are increasingly considered an alternative to wheeled robots in robotic applications. While both legged and wheeled robots are unable to effectively enter narrow spaces or climb over obstacles, snake robots with many degrees of freedom can cross a narrow gap, climb over a rock, move over rough terrain or marshland, and even swim. However, snake-like locomotion is neither efficient nor appropriate where traditional wheeled systems can be used.
This contrast motivated us to study the transition between wheeled and modular robotics and to present autonomous mobile robot modules capable of self-assembling into a chain-like robot. Figure 1 shows the main idea of the work. Each robot module is equipped with a docking connector (connection mechanism) on the front plate and a universal joint in the middle. Note that the modules are designed with an articulated central joint, rather than a traditional steering axle. This design means that no additional actuators are necessary to create a snake-like serial chain once the modules are docked.

Introduction
A robotic system can be defined as a collection of members employed to perform particular tasks. For many applications, a fixed structure is sufficient to complete the tasks. However, in unstructured environments and unexpected situations, it is almost impossible for a fixed-architecture robot to meet all task requirements.
The work presented in this paper enables mobile robots to tackle more sophisticated tasks and enables modular robots to change the number of their modules to complete a specific task. We investigate autonomous docking between separate modules, which covers (i) the design and construction of a suitable connection mechanism; (ii) the investigation of a parking control algorithm to drive the robot modules to a defined position and orientation; (iii) the implementation of the system using a localization system.
We have already presented the design details of the connection mechanism in [1]. The proposed mechanism is suitable for our application since it is lightweight, compact, and powerful enough to secure a reliable connection; it tolerates significant alignment errors and is considerably power efficient. Here, therefore, we focus on implementation and experiments using a suitable control algorithm and localization system.
This research includes a study of the kinematics of articulated-steering robots, using the common model for center-articulated mobile robots [2] with some modifications. After defining the model, the next step is to develop a stable control law to steer the robot modules from any initial position and orientation to the goal configuration.
The feedback control of center-articulated mobile robots has rarely been addressed in the literature [3]. In articulated steering, the heading of the robot is changed by folding the hinged chassis units. Apostolopoulos [4] presented a practical analytical framework for the synthesis and optimization of wheeled robotic locomotion configurations.
Choi and Sreenivasan [5] investigated the kinematics of a multimodule vehicle using a numerical approach. The number of actuators in this design can vary from nine in a fully active system to a minimum of three.
Load-haul-dump (LHD) vehicles, which transport ore in underground mines, are articulated-steering vehicles, and their steering kinematics resembles that of center-articulated mobile robots. Corke and Ridley [2] developed a general kinematic model of the vehicle that shows how the heading angle evolves with time as a function of the steering angle and velocity.
A path-tracking criterion for LHD trucks is proposed in [6]. Marshall et al. [7] have also investigated the localization and steering of an LHD vehicle in a mining network.
In another work, Ridley and Corke [8] derived a linear, state-space mathematical model of the vehicle, purely from geometric considerations of the vehicle and its desired path.
Visual navigation is becoming an increasingly attractive method for robot navigation [9], mainly because of the rich perceptual input provided by vision. Montesano et al. [10] presented a method to relatively localize pairs of robots by fusing bearing measurements and the motion of the vehicles.
Dorigo et al. and Mondada et al. [11,12] presented the Swarm-bot platform. The basic component of the system, called an s-bot, is equipped with eight RGB LEDs distributed around the module and a video graphics array (VGA) omnidirectional camera. The camera can detect s-bots that have activated their LEDs in different colors.
A docking control strategy for recharging security robots has been suggested by Luo et al. [13], based on detecting an artificial landmark. In this configuration, a camera is mounted on top of the robot, and the video signal is captured by an image frame grabber installed in the main controller.
The works presented in [14,15] have also reported experiments where a robot with on-board vision docks with another robot.
This paper is organized as follows. Section 2 surveys the related work. Section 3 discusses the parking problem. Section 4 presents the experimental results. Finally, Section 5 concludes the paper, and Section 6 points out some future work.

Steering Control
This section addresses the closed-loop steering of the active-joint center-articulated mobile robot. As illustrated in Figure 1, each robot module has a universal joint in the middle, so once the modules are connected, each one adds 2 DOF to the chain. We therefore focus on the steering kinematics of such robots, which in this paper are called "center-articulated." (In this work, we focus on planar motion only; the robot is designed to have out-of-plane capability, but this is left for future work.)
To avoid confusion between this type of mobile robot and tractor-trailer vehicles [23], we emphasize the term "active-joint." The modules must move and dock to one another; we call this docking maneuver "parking control." We first propose a kinematic model of an active-joint center-articulated mobile robot, and then a control law is derived to stabilize the configuration of the vehicle to a small neighborhood of the goal. The control law is designed by Lyapunov techniques and relies on the equations of the robot in polar coordinates. As discussed in [1], the designed connection mechanism tolerates significant misalignment; therefore, steering the robot module to a small neighborhood of the goal is enough to achieve successful docking.

Kinematic Model.
A center-articulated mobile robot consists of two bodies joined by an active joint.The vehicle is steered only by changing the body angle, since both axles are fixed.
Consider an active-joint center-articulated mobile robot positioned at a nonzero distance from a target frame (Figure 2). The robot's motion is governed by the combined action of the linear velocity v and the angular velocity ω.
The kinematic equations of the robot, which involve the robot's Cartesian position and the heading angle of the front body (x, y, ψ), can be written as

ẋ = v cos ψ, (1)
ẏ = v sin ψ, (2)
ψ̇ = (v sin ϕ + l2 ω) / (l2 + l1 cos ϕ), (3)
ϕ̇ = ω, (4)

where l1 and l2 are the lengths of the front and the rear parts of the robot, and ϕ is the body angle. Equations (1), (2), and (4) are similar to those of a simple differential-drive mobile robot. Equation (3) can be derived as follows.
The relationship between the front and the rear halves of the robot is given by

x' = x − l1 cos ψ − l2 cos ψ',  y' = y − l1 sin ψ − l2 sin ψ', (5)

where (x', y', ψ') denote the position and orientation of the rear part of the robot with respect to the target frame (Figure 2). Taking the time derivative of (5) gives

ẋ' = ẋ + l1 ψ̇ sin ψ + l2 ψ̇' sin ψ',  ẏ' = ẏ − l1 ψ̇ cos ψ − l2 ψ̇' cos ψ'. (6)

We also know that ψ' = ψ − ϕ. Therefore, considering (4), we can write

ψ̇' = ψ̇ − ω. (7)

Substituting (1), (2), and (7) in (6) gives

ẋ' = v cos ψ + l1 ψ̇ sin ψ + l2 (ψ̇ − ω) sin ψ',  ẏ' = v sin ψ − l1 ψ̇ cos ψ − l2 (ψ̇ − ω) cos ψ'. (8)

It is also assumed that there can be no motion parallel to the robot's axles. This rolling-without-slipping constraint for the rear part implies that

ẋ' sin ψ' − ẏ' cos ψ' = 0. (9)

This equation can be derived simply by projecting ẋ' and ẏ' onto the wheels' axle (Figure 3).
Finally, solving (8) and (9) for ψ̇ verifies (3). The kinematic equations can also be written in polar coordinates. From Figure 4 we can write (10)-(12), where e is the error distance, θ1 is the orientation of the error vector with respect to the target frame, and θ2 is the angle between the distance vector e and the linear velocity vector. The time derivative of (10) can be written as (13); combining (11) and (12) with (13) yields (14), and substituting (1) and (2) into (14) gives (15) and hence (16). Similarly, taking the time derivative of (11) and (12) and substituting (1) and (2) into the results yields (17)-(19). Considering that θ2 = θ1 − ψ, from (19) and (3) we obtain (20). Therefore, the kinematic equations of a center-articulated mobile robot in polar coordinates can be summarized as

ė = −v cos θ2,
θ̇1 = (v/e) sin θ2,
θ̇2 = (v/e) sin θ2 − (v sin ϕ + l2 ω)/(l2 + l1 cos ϕ),
ϕ̇ = ω. (21)

It is interesting to note that using polar coordinates yields a set of state variables which closely resemble the ones regularly used in our car-driving experience [24]. In the next section, it will be shown that (21) is suitable for designing an appropriate control law for parking maneuvers.
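The polar-coordinate model can be exercised numerically. The sketch below uses a reconstruction of the model that is consistent with the remarks made later in the text (the singularity at e = 0, the singularity at l2 + l1 cos ϕ = 0, and the special case v = λ1 e when ϕ = θ2 = 0): ė = −v cos θ2, θ̇1 = (v/e) sin θ2, θ̇2 = θ̇1 − ψ̇. The exact sign conventions are assumptions, not the paper's equation set verbatim.

```python
import math

def polar_step(e, th1, th2, phi, v, w, l1=0.5, l2=0.5, dt=0.01):
    """One forward-Euler step of the (reconstructed) polar kinematics.

    e   : distance to the origin of the target frame
    th1 : orientation of the error vector in the target frame
    th2 : angle between the error vector and the velocity vector
    phi : body (articulation) angle; w is its commanded rate
    """
    # Heading rate of the front body; singular when l2 + l1*cos(phi) == 0,
    # which cannot occur when l2 > l1 (see the singularity discussion).
    psi_dot = (v * math.sin(phi) + l2 * w) / (l2 + l1 * math.cos(phi))
    e_dot = -v * math.cos(th2)
    th1_dot = (v / e) * math.sin(th2)        # singular at e == 0
    th2_dot = th1_dot - psi_dot              # from th2 = th1 - psi
    return (e + e_dot * dt, th1 + th1_dot * dt,
            th2 + th2_dot * dt, phi + w * dt)

# Driving straight at the target (th2 = phi = 0) shrinks e while leaving
# th1 unchanged, matching the special case discussed in the text.
e, th1, th2, phi = 6.0, 0.1, 0.0, 0.0
for _ in range(1000):
    e, th1, th2, phi = polar_step(e, th1, th2, phi, v=0.8 * e, w=0.0)
```

With v proportional to e, the distance decays exponentially while θ1 stays fixed, which is exactly why the degenerate ϕ = θ2 = 0 start needs special handling.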

Controller Design.
Lyapunov stability theory is a common tool for designing control systems (see, e.g., Bullo and Lewis [25] for a general introduction). Here we consider a simple quadratic form as a candidate Lyapunov function.
Let the robot be initially positioned at a nonzero distance from the target frame.The objective of the parking control system is to move the robot so that it is accurately aligned with the target frame.
Consider the positive definite form (22). The time derivative of V can be expressed as (23). Substituting (21) in (23) gives (24). It can be seen that choosing v and ω as in (25) and (26) makes V̇ ≤ 0, which implies stability of the system states. Convergence (asymptotic stability) depends on the choice of the λi, as discussed next.
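The explicit forms of V and of the control law (25)-(26) are not reproduced in this excerpt, so the sketch below substitutes an assumed Aicardi-style law: v = λ1 e cos θ2 (which reduces to v = λ1 e when θ2 = 0, matching the special case discussed later), and a steering rate ω chosen so that θ2 decays while ϕ is weakly pulled toward zero through λ4. It regulates e and θ2 only; regulating θ1 as well requires the paper's full law. All gains and the form of ω are illustrative assumptions.

```python
import math

L1, L2 = 0.5, 0.5                     # body lengths (l2 >= l1 avoids the
LAM1, LAM3, LAM4 = 0.8, 1.0, 0.02     # steering singularity); example gains

def control(e, th1, th2, phi):
    """Assumed Lyapunov-style parking law (not the paper's exact (25)-(26))."""
    v = LAM1 * e * math.cos(th2)               # reduces to LAM1*e at th2 = 0
    th1_dot = (v / e) * math.sin(th2)
    D = L2 + L1 * math.cos(phi)                # nonzero while |phi| < pi
    # Pick w so that th2_dot is about -LAM3*th2, plus a weak pull of phi to 0.
    w = ((th1_dot + LAM3 * th2) * D - v * math.sin(phi)) / L2 - LAM4 * phi
    return v, w

def step(state, dt=0.01):
    """Forward-Euler step of the polar kinematics under the assumed law."""
    e, th1, th2, phi = state
    v, w = control(e, th1, th2, phi)
    psi_dot = (v * math.sin(phi) + L2 * w) / (L2 + L1 * math.cos(phi))
    th1_dot = (v / e) * math.sin(th2)
    return (e - v * math.cos(th2) * dt, th1 + th1_dot * dt,
            th2 + (th1_dot - psi_dot) * dt, phi + w * dt)

state = (2.0, 0.3, 0.4, 0.0)          # e, theta1, theta2, phi
for _ in range(3000):                 # 30 s of simulated time
    state = step(state)
# e and th2 are driven into a small neighborhood of zero; phi stays
# well away from the fold-over singularity at |phi| = pi.
```

Note that e² decays monotonically under this choice of v (ė = −λ1 e cos²θ2 ≤ 0), which mirrors the V̇ ≤ 0 argument in the text.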

Stability Analysis.
The proposed candidate Lyapunov function V is lower bounded. Furthermore, V̇ is negative semidefinite and uniformly continuous in time (its time derivative is bounded). Therefore, according to Barbalat's lemma [26], V̇ → 0 as t → ∞. The time derivative of V can be expressed as (27), in which both Λ1 and Λ2 appear squared; therefore, as V̇ → 0, Λ1 → 0 and Λ2 → 0. If λ4 is selected to be very small, Λ2 takes on the form (28), so Λ2 → 0 implies that θ2 → 0. As θ2 → 0, Λ1 also takes on the simpler form (29). Consequently, Λ1 → 0 gives (30). As θ2 → 0, we get e → 0. Finally, in the limit where both e and θ2 go to zero, θ2/e is bounded, and (25) gives (31). Therefore, from (19) we obtain (32). As λ2 > 0 and (θ2/e)² is positive, (32) shows that θ1 is stable and eventually approaches zero, though it may do so slowly.
Therefore, V → 0 results in (e, θ1, θ2) → (0, 0, 0). Since this system is driftless, Brockett's condition [25] predicts that no smooth control will stabilize the full state. However, in this case it is not necessary to stabilize the entire state, because ϕ is only the internal body angle, which can always be corrected by lifting one end, thus eliminating the non-slip constraint on one axle. As a result, we only steer the triple (e, θ1, θ2) to a small neighborhood of (0, 0, 0), indicating that the robot is in position to dock.
In practice, there is a trade-off in selecting the parameter λ4. Setting λ4 = 0 stabilizes (e, θ1, θ2) while rendering ϕ uncontrolled; ϕ can then take on physically unrealizable values, for example, causing the robot to fold in on itself. By contrast, choosing λ4 large can result in a very slow approach to the origin.
It should also be mentioned that the proposed model for center-articulated mobile robots has a singularity at e = 0, since according to (21), θ̇1 and θ̇2 are not defined at e = 0. The condition e = 0 cannot occur in finite time, since the approach to zero is asymptotic.
One may also observe another singularity: if l2 + l1 cos ϕ = 0, then θ̇2 is not defined. If the robot is designed such that l2 > l1, this singularity never occurs. If l2 = l1, the singularity corresponds to ϕ = ±π, but this case cannot occur, since it means that the robot is fully folded back on itself.
Finally, we note that there is a special case where the controller is not able to stabilize the configuration of the robot: when both ϕ and θ2 are initially zero. As can be observed from (25) and (26), in this situation ω = 0 and v = λ1 e; in fact, there is no control over θ1. The controller should recognize this special case and take proper action, for instance, changing the initial body angle to a nonzero value.
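The special-case guard can be sketched as a small wrapper around the steering law. The threshold `eps`, the kick rate, and the function name are illustrative choices, not values from the paper:

```python
def parking_command(e, th1, th2, phi, lam1=0.8, eps=1e-3, kick_rate=0.2):
    """Guard for the degenerate start discussed above: with phi = th2 = 0
    the law yields w = 0 and v = lam1*e, so th1 stays uncontrolled. We
    therefore first bend the body joint to a nonzero angle. `eps` and
    `kick_rate` are illustrative values, not from the paper.

    Returns the command pair (v, w).
    """
    if abs(phi) < eps and abs(th2) < eps and abs(th1) > eps:
        return 0.0, kick_rate        # stop and bend the joint first
    return lam1 * e, 0.0             # placeholder for the full law (25)-(26)
```

Once ϕ is nonzero, the full steering law regains authority over θ1 and the normal parking maneuver proceeds.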

Experiments
In the previous section, we introduced a control law to steer a center-articulated mobile robot to achieve successful parking (docking). In our experiments, we use a beacon-based localization system [27] to determine the pose of the robot relative to the target position; however, this is only an implementation detail, and in principle any localization scheme could be used. We include details of our beacon-based localization approach only for completeness of the description of the experimental setup.
The simulation results and the effect of measurement noise are also presented in our previous works [27,28].
In this section, we first discuss the design details of the robot and the experimental setup. We then present experimental results on the robot system to verify the effectiveness of the proposed approach.

Robot Design.
In order to provide a platform to perform our experiments, we designed and constructed an articulated-steering mobile robot. The robot module consists of a dual-actuated universal joint with servomotors as the joint actuators (Figure 5).
The robot is driven by a twin-motor gearbox, and its actuators are controlled by a control board. The robot is also equipped with an omnidirectional camera which measures the view angles of the beacons by means of a real-time color-detection algorithm implemented on a PC. The PC calculates the control signals (the motor speed and the servo angle), which are transmitted to the control board through serial communication.
In this design, two servomotors are located on the yokes, which turn the axles of the middle piece. Rotating the horizontal axle moves the joint up and down (pitch), and rotating the vertical axle moves the joint from side to side (yaw).

Visual System.
We set up a vision-based localization system using an omnidirectional camera that provides a description of the environment around the robot. The system is composed of a camera pointed upwards at a spherical mirror, mounted on the top of the robot (Figure 6). We do not assume knowledge of any camera/mirror calibration parameters (mirror size or focal length). Three colored objects (red, green, and blue), located on top of the target rover, serve as beacons to determine the pose of the target. The colored beacons are detected by the camera, and the images are transferred to an off-board computer for real-time processing.

Step 1: The image is rotated and flipped horizontally to be aligned with the actual position of the robot.
Step 2: The image is cropped to include only the beacon area.
Step 3: The raw image is labeled, which allows the image currently being processed to be referred to at a later time.
Step 4: RGB filtering is performed to detect the first color.
Step 5: A closing operation is performed to connect nearby detected objects.
Step 6: Blob-size filtering is performed to remove image noise.
Step 7: The center of gravity of the detected beacon is marked as the position of the beacon in the image plane.
Step 8: Steps 4-7 are repeated for the second and third colors, using the labeled raw image.

Algorithm 1: The real-time image-processing algorithm.

Image Processing.
Once the images are transferred to the PC, machine-vision software processes the incoming image data and detects the positions of the beacons in the image plane.
Step 1: The positions of the via points (for point Vpi, (XVpi, YVpi)) are predefined in the algorithm (Figure 7(b)). The detection flag (D-Flag) and the point flag (P-Flag) are two status flags that determine the status of the robot: P-Flag indicates the next via point that should be reached, and D-Flag indicates whether the via points are being followed.
Step 3: The calculation of the feedback parameters is changed accordingly.
Step 4: The control algorithm (Algorithm 2) is then followed to reach the specified via point.
Step 6: Steps 2-5 are repeated once a new image frame arrives.

Algorithm 3: The algorithm to follow the via points.

In this process, we first perform color filtering (RGB filtering), followed by a closing operation. In RGB filtering, all pixels that are not of the selected color are suppressed. Closing has the effect of filling small and thin holes in objects by connecting nearby objects in the image plane [29].
Then, blob-size filtering is performed to remove objects below a certain size. As a result, each beacon is located as a single blob in the image. The center of gravity of each blob determines the position of each beacon in the image plane.
The image processor outputs the beacons' positions on the image plane, from which the bearing of each beacon can be computed. The steps of the image-processing algorithm are summarized in Algorithm 1.
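Steps 4-7 can be sketched in pure Python on a tiny synthetic frame. A real system would use an image-processing library; the thresholds, image size, and 3x3 structuring element (which ignores out-of-bounds neighbors) are illustrative:

```python
def rgb_filter(img, lo, hi):
    """Binary mask of pixels whose (r, g, b) lies within [lo, hi]."""
    return [[all(l <= c <= h for c, l, h in zip(px, lo, hi))
             for px in row] for row in img]

def dilate(mask):
    h, w = len(mask), len(mask[0])
    return [[any(mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def erode(mask):
    h, w = len(mask), len(mask[0])
    return [[all(mask[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def blobs(mask, min_size=3):
    """Connected components (4-neighbour flood fill), dropping small blobs."""
    h, w = len(mask), len(mask[0])
    seen, out = [[False] * w for _ in range(h)], []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, blob = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(blob) >= min_size:
                    out.append(blob)
    return out

def centroid(blob):
    """Center of gravity of a blob, as (x, y) in the image plane."""
    ys, xs = zip(*blob)
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Tiny synthetic frame: a 3x3 "red beacon" plus one isolated noise pixel.
W = (250, 250, 250)                  # white background
R = (230, 40, 30)                    # beacon red
img = [[W] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(2, 5):
        img[y][x] = R
img[7][7] = R                        # salt noise

mask = erode(dilate(rgb_filter(img, (200, 0, 0), (255, 80, 80))))  # closing
beacons = [centroid(b) for b in blobs(mask)]   # -> [(3.0, 3.0)]
```

The blob-size filter removes the isolated noise pixel, so only the beacon's center of gravity survives.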

Control Algorithm.
In this section, we briefly describe the control algorithm that steers the robot to the neighborhood of the target. The control algorithm is implemented on the PC, which receives the measurements of the beacons' positions and sends the control signals to the control board. The algorithm starts with relative bearing measurement, using the data provided by the camera. If a measured angle falls outside the interval [−π, π], ±2π is added to the computed angle. It is noted that some parameters, such as the beacons' positions in the target frame, the robot's lengths, and the controller gains, are predefined in the algorithm.
Once the angles are determined, the feedback parameters (e, θ1, θ2) are calculated based on the equations described in [27]. The whole process is performed once a new image frame arrives from the camera. The control algorithm is summarized in Algorithm 2.
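The angle normalization mentioned above (adding ±2π until a measured bearing lies in [−π, π]) can be written as a small helper:

```python
import math

def wrap_to_pi(angle):
    """Add or subtract 2*pi until the angle lies in [-pi, pi]."""
    while angle > math.pi:
        angle -= 2.0 * math.pi
    while angle < -math.pi:
        angle += 2.0 * math.pi
    return angle
```

Normalizing every measured bearing this way keeps the feedback parameters continuous as the beacons cross the image wrap-around.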
The simulation results reveal that, in some cases, the generated path passes through the beacons' region. Since this implies that two robots would physically occupy the same space, the control algorithm must recognize this situation and resolve it. The solution, described below, is based on locating a set of via points in the robot's workspace.
As Figure 7(a) indicates, the robot's workspace can be divided into four parts, considering the signs of the α and β angles. The region where both α and β are positive is considered the safe region: if the robot's initial position is located in this region, the robot reaches the goal with no need to pass through the beacons.
Therefore, if the control algorithm detects that the robot is not in the safe region, it first steers the robot through a set of predefined via points. Figure 7(b) shows a set of assumed via points in the workspace. According to the figure, the via points are reached in an order such that the robot finally ends up in the safe region. The via points are reached using the same control algorithm summarized in Algorithm 2; for each point, the target frame is redefined at that specific via point. In this approach, the feedback still comes from the beacons located on the target frame, but using a simple translation, the robot is steered to a coordinate system originated at the specified via point. Once the error distance to that via point is small enough, the next via point is followed, until the robot is located in the safe region.
This process is handled by setting and resetting the status flags. The process of following the via points is summarized in Algorithm 3.
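The flag logic of Algorithm 3 can be sketched as follows. The via-point coordinates, the distance threshold, and the function names are hypothetical, chosen only to illustrate the sequencing:

```python
import math

# Hypothetical via-point layout and threshold, mirroring Algorithm 3's flags:
# P-Flag indexes the next via point, D-Flag records whether via points are
# currently being followed.
VIA_POINTS = [(-2.0, 1.5), (-2.0, -1.5), (1.5, -1.5)]
REACHED = 0.1                        # "error distance small enough"

def in_safe_region(alpha, beta):
    """The safe region is where both alpha and beta are positive."""
    return alpha > 0.0 and beta > 0.0

def next_target(robot_xy, alpha, beta, p_flag):
    """Return (target_xy, p_flag, d_flag) for the current control step."""
    if in_safe_region(alpha, beta) or p_flag >= len(VIA_POINTS):
        return (0.0, 0.0), p_flag, False          # head for the target frame
    tx, ty = VIA_POINTS[p_flag]
    if math.hypot(robot_xy[0] - tx, robot_xy[1] - ty) < REACHED:
        p_flag += 1                               # via point reached: advance
        if p_flag >= len(VIA_POINTS):
            return (0.0, 0.0), p_flag, False
        tx, ty = VIA_POINTS[p_flag]
    return (tx, ty), p_flag, True
```

Each returned target is treated as a translated target frame for Algorithm 2, so the same parking law steers the robot through the sequence.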

Robot Construction.
Based on the described design approach, a prototype of the robot has been implemented, including the docking connector and the visual system. Figure 8 presents a general view of the built robot. The prototype weighs less than 1.0 kg, and the total length of the robot is 24 cm.
The universal joint's actuators are two Ultra Torque HSR-5995TG servomotors from HiTec, which provide a maximum standing torque of 3.8 Nm. The robot is driven by a twin-motor gearbox consisting of two small DC motors, both connected to a single gearbox (ratio 344 : 1). The robot's actuators are controlled by the main control board, equipped with an HCS12 Motorola microcontroller and a MicroCore-11 motor drive. The control board communicates with the PC through RS232 serial communication.

Experimental Setup.
To test the proposed approach, an experimental setup has been designed in which the performance of the built articulated-steering robot is examined. The setup is illustrated in Figures 9 and 10. The experiments were conducted in a small field made of white styrofoam blocks.
Figure 11(a) illustrates an image frame received from the camera (the image is cropped). The control algorithm is implemented in Microsoft Visual Studio 9.0; the code is written in C++ and executed on a 3.0 GHz Intel dual-core processor with 1 GB of RAM. The software calculates the control signals (the motor speed and the servo angle), which are transmitted to the control board through RS232 serial communication.
A power supply provides 12 V DC at 1 A to the control board, whose HCS12 Motorola microcontroller is programmed to generate appropriate PWM signals for the actuators.
We perform the experiments while a fixed digital camera records the behavior of the robot. Image processing is then performed off-line using the MATLAB Image Processing Toolbox to yield the robot's true position.

Experimental Results.
A set of experiments was carried out to show the performance of the proposed algorithms; the results are shown graphically in the following figures. In all experiments, the distance between the centers of the beacons is taken as the unit of distance, "B". So the beacons' distances are a = b = 1 B, and as the beacons are located on an equilateral triangle, δ = π/3 (see the Localization section in [27]). The lengths of the robot are l1 = l2 = 0.5 B.
The controller gains are chosen as λ1 = 0.8, λ2 = 1.05, λ3 = 0.95, and λ4 = 0.02, obtained by trial and error. It is noted that choosing positive gains is sufficient for a stable control law (see Section 3.2), but fine adjustment to improve the performance is done considering the actuators' limits.
Figure 12(a) shows snapshots of an experiment where the robot is initially positioned at (e, θ1, θ2, ϕ) = (6, 0, 0, 0), with distances in units of B. Figure 12(b) illustrates the traveled path of the robot for this experiment.
Figure 13 shows the changes in α and β angles (the inputs to the controller), sent from the image processing algorithm, as well as the control signals v and ω (the outputs of the controller), applied to the robot.
Figures 14(a), 14(b), 15, 16, 17, 18, and 19 show the results of some other experiments, with the robot located at different positions and orientations relative to the target frame (imagine that the red and the green beacons are located on the X-axis of the target frame). For each experiment, the snapshots of the experiment and the actual traveled path are presented. The figures also show the inputs and outputs of the controller. In these experiments, the robot starts at (7, 5π/8, 0, 0), (7.5, π/8, π/4, 0), and (8, 5π/8, −π, 0), respectively.
As mentioned in Section 4.4, in case the generated path passes through the beacons' region, the control system steers the robot through a set of predefined via points. The via points are reached in an order such that the robot finally ends up in the safe region (Figure 7(b)).
Finally, Figure 20 shows the snapshots of an experiment where the initial position of the robot is not in the safe region. As can be seen, the robot has been steered appropriately to park at the target.
Because our docking joint tolerates wide initial positioning errors (±40 degrees of yaw and 11 mm of offset), once the robot falls below a threshold speed, it suffices to drive forward in a straight line to accomplish docking. In these experiments, the robot docked successfully in every trial.
We also performed a set of experiments with the actual docking joints [1] to visualize the connection maneuver. As Figure 21 shows, the connector pieces are fixed on the front of the robot and at the origin of the target frame. This experiment was repeated ten times, and in each trial the robot successfully completed the connection maneuver.

Conclusion
This research work introduced the idea of a new robotic system including a team of wheeled mobile robots which are able to autonomously assemble and form a chain-like robot. The goal is to improve the performance of mobile robots by autonomously changing their locomotion method. We proposed the design of a center-articulated module to form the building block of a robot team that can form itself into a serial chain where required by terrain constraints.
Next, we proposed a kinematic model of an active-joint center-articulated mobile robot in polar coordinates, and a control law was derived, using Lyapunov techniques, to stabilize the configuration of the vehicle to a small neighborhood of the goal. The results reveal that choosing a suitable state model allows a simple Lyapunov function to achieve parking control, even when the feedback is noisy.
We finally designed and constructed a center-articulated mobile robot equipped with a beacon-based localization system to verify our approach. The experimental results show the effectiveness of the proposed approach.

Future Work
An important extension to this work would be to investigate self-reconfiguration using a set of autonomous wheeled mobile robots and the proposed approach. The next step could be maneuvering the chain to accomplish tasks such as passing through a narrow space or climbing steps. A fully functional modular team of these units will also require the intelligence to examine the surrounding terrain and determine when docking is necessary. There is also a large number of open research topics in repairable systems; the problem of designing a robot team with repair capabilities is complex and not easily solved [30]. The proposed configuration can be extended to answer some of the questions in this area.
Investigating the dynamic docking problem can also be considered as future work. In dynamic docking, the robot modules find and connect to one another while they are all moving. Dynamic docking offers faster docking times, since the modules do not need to stop and perform a parking maneuver. As the target is in motion, a tracking problem should be investigated rather than the regulation problem of parking control.

Figure 1 :
Figure 1: The main idea of the work: (a) each module is equipped with a docking connector and a universal joint; (b) the modules can dock and couple together using their connection mechanisms.

Figure 2 :
Figure 2: Diagram of a center-articulated mobile robot with respect to the target frame in Cartesian coordinates.

Figure 4 :
Figure 4: Diagram of a center-articulated mobile robot with respect to the target frame in polar coordinates.

Figure 5 :
Figure 5: The design of the articulated-steering mobile robot.

Figure 6 :
Figure 6: The design of the vision-based localization system, composed of a camera pointed upwards to a spherical mirror.

Figure 7 :
Figure 7: (a) The robot's workspace can be divided into four parts, considering the signs of α and β. The region where both α and β are positive is called the safe region; from there the robot reaches the goal with no need to pass through the beacons. (b) A set of assumed via points in the workspace; the via points are reached in an order such that the robot finally ends up in the safe region.

Figure 8 :
Figure 8: A general view of the built robot, including the docking connector and the visual system.

Figure 9 :
Figure 9: Three cylindrical-shaped color objects are used as the beacons, and the female connector is fixed on the origin of the target frame.

Figure 11 :
Figure 11: The process of detecting the beacons: (a) an image frame received from the camera, (b) red beacon detection, (c) green beacon detection, (d) blue beacon detection.

Figure 12 :
Figure 12: The robot is initially positioned at (6, 0, 0, 0): (a) the snapshots of the experiment, (b) the traveled path of the robot.

Figure 13 :
Figure 13: The changes in α, β, and the control signals for the first experiment.

Figure 15 :
Figure 15: The changes in α, β, and the control signals for the second experiment.
Figures 11(b)-11(d) show the results of the image-processing algorithm. The size of the cropped image is 420 × 400 pixels.

Figure 17 :
Figure 17: The changes in α, β, and the control signals for the third experiment.

Figure 14 :
Figure 14: The second experiment: (a) the snapshots of the experiment, (b) the traveled path of the robot.

Figure 19 :
Figure 19: The changes in α, β, and the control signals for the fourth experiment.

Figure 20 :
Figure 20: The snapshots of an experiment where the initial position of the robot is not in the safe region.

Figure 21 :
Figure 21: A set of experiments, performed to visualize the connection maneuver.