Design, Implementation, and Performance Evaluation of a Web-Based Multiple Robot Control System

Heterogeneous multiple robots are currently being used in smart homes and industries for different purposes. The authors have developed a Web interface to control and interact with multiple robots with autonomous robot registration. The autonomous robot registration engine (RRE) was developed to register all robots with their relevant ROS topics. The ROS topic identification algorithm was developed to identify the relevant ROS topics for publication and subscription. The Gazebo simulator spawns all robots to interact with a user. The initial experiments were conducted with simple instructions and then extended to manage multiple instructions using a state transition diagram. The number of robots was increased to evaluate the system's performance by measuring the robots' start and stop response times. The authors have also conducted experiments on semantic interpretation of user instructions. Mathematical equations for the delay in response time have been derived by considering each experiment's inputs and system characteristics. Big O notation is used to analyze the running time complexity of the algorithms developed. The experiment results indicated that the autonomous robot registration was successful and that communication performance through the Web decreased gradually with the number of robots registered.


Introduction
Autonomous robot registration and control is one of the complex tasks in robotic application development. ROS was developed to improve interoperability and reduce the complexities of programming heterogeneous multiple robots. ROS is a kind of middleware used by developers in robotic applications to reuse most of the existing software developed by different researchers. There are different nodes, topics, and message formats for different robots in ROS. An algorithm was developed to find the related topics to control different robots in ROS. Therefore, in our system, the main component is the robot registration engine (RRE), which is developed to register multiple heterogeneous robots by getting all related rostopics. The Web interface was developed to connect robots and users using the ROS bridge server. The ROS bridge server works as an interface between the ROS environment and the Web interface. We have developed different Web interfaces to interact with the user and to support the different types of experiments in our research, described as Web interfaces I to V.
Web interfaces I to IV were developed to work with instructions such as moving the robot to a specific location and working with multiple instructions sequentially. Web interface V was developed to work with instructions with semantics. We have used the Gazebo simulator for our experiments. The robot actions and the initial positions changed with time. Therefore, we created a schedule for each robot to complete movement or navigation in the experiment with Web interface V. Then, we identified the relevant ROS topic in the corresponding nodes to subscribe to and to publish the corresponding command values from the user command. The command publishing engine (CPE) is responsible for publishing the ROS command for each action defined in the given user-level instruction.
Different architectures were used to design the heterogeneous multiple robot system, including centralized, distributed, and hybrid mode [1]. Our solution is based on the centralized server architecture as shown in Figure 1.
We have conducted experiments with Web interfaces I to V with different inputs. The state transition system works with multiple instructions when the user issues several commands sequentially. We have derived the mathematical equations for each experiment for the delay time in response to the inputs and system characteristics. The algorithms' running times are expressed using Big O notation, representing the time complexity.
The following sections are organized as follows. Section 2 presents a literature survey with background readings and related research works. The methodology with algorithms and the main components of the design are presented in Section 3. The experiments and evaluation of the research project with results are described in Section 4. Finally, Section 5 describes the conclusion with future works.

Background Studies
There are many research works currently related to heterogeneous multiple robot control and communication.
Therefore, we have categorized all background reading as multiple robot controls, Web interfaces for robot control, and robot programming and control interfaces with user instructions.

Multiple Robot Controls.
Some research groups have implemented heterogeneous multiple robot control with the help of a human. Seohyun et al. have developed a layered architecture to manage and control multiple robots with human intervention. They have designed the interface to separate the autonomous and manual parts, and the proposed architecture separates the manual and mechanical parts, enhancing multiple robot control with human intervention [2]. Alberri et al. have developed an architecture to connect multi-robot heterogeneous systems with a hierarchical system that is mainly based on ROS. A layered architecture was used in this development: the lower layers were implemented in the C and C++ languages, while complex computations were performed by the upper layer and an intermediate level.
They have used three different devices (an autonomous quadcopter, an autonomous mobile robot, and an autonomous vehicle) to test the system [3].
A system was developed where personal computers work as servers and robots work as nodes, using a hybrid architecture based on ROS for multiple robot systems.
The server processed all complex computation and visualization, and each node in the robots processed the real-time tasks [1].
There were many research projects with multiple robots, but our work is unique because of its autonomous robot registration with a Web interface, its performance evaluation, and its use of heterogeneous robots.

Web Interface for Robot Control. Costa et al. have introduced a Web-based interface for multiple robot communication using ROS. Two services were implemented, named monitor and control. In addition, they have implemented operations to move robots forward, to the right, to the left, and backward. The main contribution was to enable laypeople to manage heterogeneous robots with the help of ROS [4].
Penmetcha et al. have implemented a system to manage ROS-based and non-ROS robots with cloud technologies. The robotic applications were executed with machine learning algorithms based on JavaScript libraries. The CPU utilization and latency were measured, and an average latency of 35 milliseconds was achieved. In addition, the innovative cloud was developed using Amazon Web Services [5].
Singhal et al. have developed a fleet management system with autonomous mobile robots using a single-master, cloud-based configuration. In addition, autonomous navigation was used with a global planner. The authors have identified the critical limitations and issues with cloud robotics [6].
Beetz et al. have developed a service named openEASE to work with the available research based on cloud technology. openEASE is a Web-based knowledge service that robotics researchers can access remotely. The researchers can access semantically annotated data from real-world scenarios [7].
Casañ et al. have implemented a tool with a Web browser interface for online robot programming. It provides an interface with a text box for scripting. MATLAB remote programming environments were used to implement the system [8].
Even though there are many projects with Web interfaces for robot control, our work is different since we have implemented the interface to register and control heterogeneous robots and work with multiple instructions sequentially.
Rajapaksha et al. have implemented a system, which takes user-level instruction with uncertain words for a drone and converts it to machine-understandable executable format using the ontology [9,10].
Rajapaksha et al. have developed a system to control and communicate with robots using user instructions containing uncertain terms. They used an ontology to represent the robot's knowledge of uncertain terms. The developed system is able to understand commands such as "go fast" and "go very fast."
They have developed a user-friendly environment to interact with the robots [11,12].
Rajapaksha et al. have developed a GUI-based system to program and control the robots with Web interface [13].
Rajapaksha et al. have implemented a heterogeneous multiple robot control system by registering robots autonomously with high-level user instructions [14,15].
Buscarino et al. have proposed a methodology to control a group of robots without central coordination. They have shown that system performance in the presence of noise can be improved by including long-range connections between the robots.
They have modeled the network as a dynamic network [16].

Robot Programming and Control Interface with User Instructions. Tiddi et al. have developed a system to help nonexpert users in robotics develop robotic applications with the help of an ontology in the ROS environment. The main focus was to reduce the time needed to program a robot for a specific task using the ontology representation. The nonexpert user needs to configure the system to have the robot complete different tasks [17].
Tiddi et al. have developed an interface that allows nonexperts to use a robot as a development platform. The system provides high-level commands with the help of a fundamental ontology. These ontologies map the high-level capabilities onto the robot's low-level capabilities (e.g., communication and synchronization). They have used ROS as the middleware [18].
Pomarlan and Bateman have implemented a system that translates the "semantic specification" of a natural language instruction into a program that a simulated robot can execute. For example, the system can interpret a sentence into a program that allows the robot to understand the sentence. The main task was to cover a set of basic action concepts from an ontology [19]. Amaratunga et al. have developed an interface that enables novice programmers to program easily; these ideas can be used for developing robot programming interfaces [20]. Muthugala et al. have reviewed service robot communication in which robots work with uncertain information in natural language instructions. They have implemented a system to identify the issues in working with qualitative information in user instructions in current research work. They have indicated that the quantitative value of information with uncertain terms can depend on the environment, previous experience, and the current context [21].
Sutherland and MacDonald have created a domain-specific language, named RoboLang, to work with text.
That language works with existing programming tools. In addition, the program code can be executed on other robot platforms with minor modifications [22].
Figure 1: High-level system diagram.

Journal of Robotics
Jayawardena et al. have implemented a system to produce software for a given robotic programming scenario within a minimum amount of time. Less coding is needed to create software for the given scenario. The software can be modified, and all changes are made quickly without errors. The behavior execution engine (BEE) was used to integrate the subsystems [28].
Datta et al. have developed an environment for developing programs for robots with interactive behaviors. Moreover, it is a visual programming tool. Subject matter experts (SMEs) can be involved in service robot application development, which makes post-deployment changes to the software easy [29].
Kim et al. have developed a system to understand qualitative information in commands for service robots using an ontology. They have used lexicon semantic pattern matching to get the most relevant keywords from the user instruction. They developed an interpretation system as a prototype, and it was tested with many commands. Standard vocabulary and semantics that intelligent agents can use were defined in the ontology [30].
Scibilia et al. have reviewed motor control theory and sensory feedback applications performed in parallel. Optimal control models were developed to represent the humans' ability to behave optimally after a certain level of training. e advantage of the structural model and Hosman's descriptive model is discussed in this review [31].
Bucolo et al. have worked on a complex and imperfect electromechanical structure that can be used as a paradigm for imperfect systems. They have indicated that the electrical and mechanical interactions generate complex patterns because they prevent the system from reaching the correct conditions [32]. Our solution may likewise not be perfect in terms of performance characteristics.
Rashid et al. have developed an algorithm named cluster matching to get the orientation and localization of the robots. Each robot could estimate the relative orientation of neighbor robots that are within its transmission range. It is able to get the absolute positions and orientations of the team robots without knowing the ID of the other robots [33].
Ali et al. have developed a multi-robot navigation model for dynamic environments named shortest distance. A collision-free trajectory is computed using the current orientations and positions of the other robots. This algorithm is based on the concept of reciprocal orientation, which guarantees smooth trajectories and collision-free paths [34].
According to the above background studies, some research is similar to our system, but our system includes an automated robot registration engine that is not available in any other system. Furthermore, our semantic analysis is based on algorithms optimized compared with the existing techniques used by other researchers.

Methodology
The authors have implemented a Web interface to interact with the robots and users. The Web interfaces were developed to support the different types of experiments in our research.
Web interfaces I to IV were developed to work with simple instructions such as moving the robot forward, moving the robot in a circle, and getting the robot's current position. Web interface V was developed to work with instructions with semantics. We have used the Gazebo simulator for our experiments. The standard ROS JavaScript Library provided by the ROS Web Tools (http://robotwebtools.org/) was used to connect ROS with the Web interface. In the last experiment, the user can issue an instruction like "Move to the Room 3" to all robots placed at different positions. Figure 2 represents the system architecture of our system.

Robot Registration Engine.
The algorithm that we have developed to register multiple heterogeneous robots without human intervention is represented in Figure 3. We initially created a node called "regRobot" to execute the remaining lines of the algorithm. IP addresses were extracted from the given IP address list, named "ipList." The IP address is used to connect all heterogeneous service robots in the Gazebo environment. Next, ROS commands were executed by the previously created ROS node, using the execl() system call, to collect the software specification. Finally, an ontology named "Registration Ontology" is created to represent the available ROS details.
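The registration loop above can be sketched in Python. The helper names and the stubbed command runner below are assumptions for illustration; the actual engine invokes ROS introspection commands such as `rostopic list` via the execl() system call from the "regRobot" node.

```python
# Sketch of the robot registration loop (hypothetical helper names).

def collect_topics(ip, run_cmd):
    """Run a ROS introspection command against the robot at `ip` and
    return its advertised topics. `run_cmd` stands in for the execl()
    call made by the "regRobot" node."""
    return run_cmd(ip, "rostopic list").splitlines()

def register_robots(ip_list, run_cmd):
    """Build the registry the Registration Ontology is generated from:
    a map of robot IP -> list of ROS topics."""
    return {ip: collect_topics(ip, run_cmd) for ip in ip_list}

# Example with a stubbed command runner (no live ROS master required):
def fake_run(ip, cmd):
    return "/cmd_vel\n/odom\n/scan"

registry = register_robots(["192.168.1.10", "192.168.1.11"], fake_run)
```

In the real system the per-robot topic lists would then be written into the "Registration Ontology" instead of a plain dictionary.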

Command Interpreter. When a user issues a high-level user instruction on the Web interface provided by the system, the instruction is analyzed by the command interpreter to separate the action, subject, object, and constraint, as shown in Figure 4. First, the instruction is sent for synonym and semantic processing. Then, the relevant ROS nodes and the ROS topics for subscription and publication are found with the algorithm shown in Figure 3. The system handles multiple instructions issued by the user one by one, using a state transition diagram with the description of the states as shown in Figure 5. The robot state is saved in a ROS topic so that it can be retrieved from time to time. When the robot is ready, it will accept the user's instruction and complete the assigned work accordingly.
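A minimal sketch of the command interpreter's split into action and object, assuming a simplified token grammar (the real interpreter also extracts the subject and constraint and resolves synonyms and semantics first); the action list and parsing rule are illustrative assumptions:

```python
# Hypothetical, simplified command interpreter: find the action verb and
# treat the phrase after "to" as the object (destination).
ACTIONS = {"move", "navigate", "identify", "go"}

def interpret(instruction):
    words = instruction.lower().rstrip(".").split()
    action = next((w for w in words if w in ACTIONS), None)
    obj = None
    if "to" in words:
        # Drop a leading article from the destination phrase.
        obj = " ".join(words[words.index("to") + 1:]).replace("the ", "")
    return {"action": action, "object": obj}

interpret("Move to the Room 3")  # {'action': 'move', 'object': 'room 3'}
```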
When a user issues multiple instructions to the robot through the Web interface, the related flowchart with the state transitions is shown in Figure 6. Initially, a robot must register with the robot registration engine and update its state as ready in the ROS topic. Then, the robot can work according to the instruction given by the user. While the first instruction is being processed, the user can issue another instruction, and the robot must then be interrupted to handle the second instruction. Based on the priority of the instruction, the robot must decide whether to continue the current work or start the second instruction. The work state has the highest priority, the motion state has the second highest priority, the dialog state has the third priority, and the ready state has the lowest priority. Each robot exits the system if no instructions are received within the defined timeout. The most critical component of our experiments is the movement of the robots using different instructions through different interfaces. Once a robot is registered with the RRE, it uses the ROS topic identification algorithm to identify the corresponding ROS topic for the movement. In experiment 01, the authors used teleoperation to move robots forward and in a circle in an open Gazebo environment. In experiments 02, 03, and 04, the authors used the Web-based interface to move multiple robots forward and in a circle in an open Gazebo environment. Finally, in experiment 05, the robot was moved to a specific location using the algorithm given in Figure 7. The notations used in the flowchart are described in Table 1, and the instruction types are described in Table 2; based on the command interpreter outputs, the system accepts only commands and commands with a condition. There can be commands with different verbs having the same meaning, called synonyms. Robots may not be able to understand synonyms unless appropriately programmed.
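The interrupt decision described above can be sketched directly from the stated priority order (work > motion > dialog > ready). The numeric ranks are illustrative; the real system stores and retrieves the state through a ROS topic.

```python
# State priorities from the text: work > motion > dialog > ready.
PRIORITY = {"work": 3, "motion": 2, "dialog": 1, "ready": 0}

def should_interrupt(current_state, new_instruction_state):
    """Interrupt the current activity only if the incoming instruction's
    state outranks the state the robot is currently in."""
    return PRIORITY[new_instruction_state] > PRIORITY[current_state]

should_interrupt("motion", "work")   # True: work outranks motion
should_interrupt("work", "dialog")   # False: keep the current work
```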
Therefore, we implemented an ontology using the Web Ontology Language property "sameAs" to find the synonyms in the given instruction. We have used the "owl:sameAs" statement to state that two uniform resource identifiers refer to the same "identity." For example, the synonyms for the instruction "move" are "shift, go, proceed, walk, and advance."
Figure 6: Flowchart for multiple instruction handling.

Figure 7 algorithm steps: Start. Input: move robot to (x_0, y_0). Create a ROS node and subscribe to the odometry ROS topic. Get the current position (x_g, y_g) and orientation (θ) of the robot. Convert the orientation (θ) from quaternion to Euler form.

Users can update ontology manually. Synonym identification is used in the ROS topic identification algorithm for publishing commands. Different heterogeneous service robots can use different ROS topics; therefore, we need to find the correct ROS topic to publish the commands.
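The owl:sameAs synonym lookup can be sketched as a plain dictionary flattened from the ontology (the real system queries the ontology itself); the synonym set for "move" is taken from the example in the text:

```python
# Synonym sets flattened from the ontology's owl:sameAs statements.
SAME_AS = {"move": {"shift", "go", "proceed", "walk", "advance"}}

def canonical_action(verb):
    """Map any synonym back to the canonical action verb, or None if the
    verb is unknown (which would trigger user intervention)."""
    for canon, syns in SAME_AS.items():
        if verb == canon or verb in syns:
            return canon
    return None

canonical_action("proceed")  # "move"
```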

Semantic Analysis. Extracting the semantic meaning of the command is one of the main tasks in interpreting user-level instructions. If a robot can detect a semantic error in a given user-level instruction, that better reflects the robot's intelligence. For example, when a user issues a user-level instruction with the verb "go," we can guarantee that the next part should be a location or destination. The semantic analysis algorithm is described in Figure 8.
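The check described above, that certain verbs must be followed by a location, can be sketched as follows. The verb table and location list are illustrative assumptions standing in for the semantic-analysis algorithm of Figure 8 and the ontology's restricted value list:

```python
# Verbs that semantically require a location as their object (assumed set).
EXPECTS_LOCATION = {"go", "move", "navigate"}
# Allowed destinations, standing in for the ontology's restriction list.
KNOWN_LOCATIONS = {"room 1", "room 2", "room 3", "kitchen"}

def semantically_valid(action, obj):
    """Reject instructions whose verb demands a location but whose object
    is not a known location; such commands get user intervention."""
    if action in EXPECTS_LOCATION:
        return obj is not None and obj.lower() in KNOWN_LOCATIONS
    return True

semantically_valid("go", "Room 3")   # True
semantically_valid("go", "banana")   # False -> ask the user
```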
The ontology has a property that restricts the positions to which robots are allowed to move. "owl:allValuesFrom" is the property that defines the class of all possible values of the property given by "owl:onProperty." If the object is not in the restricted value list, the command is considered invalid and user intervention is requested.

Ontology. An ontology is a model used to represent a concept and the relationships among all related concepts; for example, in an ontology of robots, we can represent all concepts in the robot domain and the relationships among them [35][36][37]. Finding concepts in the ontology is the step that takes the most time because the running time complexity of the searching algorithm is O(n), where n is the number of classes in the given ontology. The part of the ontology that we have created is shown in Figure 9.
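The O(n) bound stated above corresponds to a linear scan over the ontology's classes; the class names below are illustrative:

```python
# Linear search over ontology classes: visits each class once -> O(n),
# where n is the number of classes, matching the complexity stated above.
def find_concept(classes, name):
    for c in classes:
        if c == name:
            return c
    return None

classes = ["Robot", "MobileRobot", "Drone", "Location", "Action"]
find_concept(classes, "Drone")  # "Drone"
```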

Command Publishing Engine. According to the user-level instruction issued, the command interpreter identifies the action (move, navigate, identify), subject, constraint, and object defined in the user instruction. The command publishing engine needs to identify the corresponding ROS topics for the action, to publish and subscribe for the initiation of the action. For example, if we want to move the robot to a specific location, we can publish the command on ROS topics such as cmd_vel, cmd_vel_mux, or cmd_vel_mux/input/navi. These ROS topics vary from robot to robot in heterogeneous environments. The possible ROS topics for the movement and the ROS topic for the initial pose are shown in Figure 10.
When a user enters the instruction to all heterogeneous service robots, we need to initiate the action for each robot.
This task is completed by the command publishing engine (CPE), which publishes the action on the corresponding ROS topic. Initially, the CPE locates the current position of each robot using an optimized algorithm. The Get Robot Position algorithm for each robot is defined in Figure 11. The algorithm uses the IP address and the updated ontology to get the initial position and orientation.
We have created a node in ROS called "initPos." It is responsible for running the remaining lines of the defined algorithm. In addition, this node can find the relevant ROS topics related to the initial position and orientation of the robot.
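The quaternion-to-Euler conversion used when reading the robot's orientation from odometry can be written out as a standalone formula (this is the standard yaw extraction, independent of any live ROS node):

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Yaw (rotation about the z-axis) of a unit quaternion, in radians."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# A quarter-turn about z: quaternion (0, 0, sin(pi/4), cos(pi/4)) -> pi/2.
yaw = quaternion_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```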
Each robot may have a different ROS topic to subscribe to and publish for different operations. Therefore, we need to identify these topics before executing any commands on each robot. The ROS topic identification algorithm is described in Figure 12. Initially, the system uses the given IP address list and port list to connect to all robots. The ROS topics in the ontology, which the RRE generated previously, are used to create a shared file named rtList. Then, the Get_ROSTopic() algorithm is called to get the corresponding ROS topics for each action. This algorithm was used to find the ROS topics for each action defined in the user instruction. For example, if the action is to move the robot from one location to another, then we need to find the corresponding ROS topic from the identified list "cmd," "vel," "cmd_vel," "velocity," "speed," "travel," and "run." If the identified ROS topic list does not match the ROS topics received from the RRE, Get_Uncertain_ROSTopic() is called to find the ROS topics using synonyms of the action based on the ontology. If a topic can be found, it is used for subscribing to or publishing the action; otherwise, user input is needed to resolve the problem.
Figure 9: Fragment of the ontology.
Values for the ROS topics were set for each robot. Odometry and sensor information were used as the main inputs for the ROS navigation stack, which then generated the corresponding velocity for the mobile base. According to the ROS specification, the mobile base is controlled by the x velocity, y velocity, and theta velocity, and a 2D planar laser is mounted on the mobile base. The navigation is most successful on square-shaped robots. The map server was used to store the created map file.
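The topic matching step, including the synonym fallback (Get_Uncertain_ROSTopic), can be sketched as below. The keyword list follows the example in the text; the synonym table and function shapes are illustrative assumptions:

```python
# Keyword candidates for the "move" action, from the example above.
MOVE_KEYWORDS = ["cmd_vel", "cmd", "vel", "velocity", "speed", "travel", "run"]
# Synonyms from the ontology (owl:sameAs), used as a fallback.
SYNONYMS = {"move": ["shift", "go", "proceed", "walk", "advance"]}

def get_ros_topic(action, robot_topics, keywords=MOVE_KEYWORDS):
    # Direct match: does any keyword occur in a topic advertised by the robot?
    for kw in keywords:
        for t in robot_topics:
            if kw in t:
                return t
    # Fallback: try the action's synonyms from the ontology.
    for syn in SYNONYMS.get(action, []):
        for t in robot_topics:
            if syn in t:
                return t
    return None  # unresolved -> get user input

get_ros_topic("move", ["/odom", "/cmd_vel_mux/input/navi"])
# -> "/cmd_vel_mux/input/navi"
```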
All heterogeneous service robots used the map stored in the map server to navigate around obstacles from one location to another. An amcl (Adaptive Monte Carlo Localization) file and a move_base file for each robot were maintained as launch files to localize and move the robot in the given environment. For example, the scan, odometry, initialpose, and particlecloud ROS topics were used in the amcl launch file of each robot for localization, and the cmd_vel, goal, odom, local_plan, global_plan, and footprint ROS topics were used for remapping the move_base node for each robot.

Thread Management. Since we need to control and coordinate multiple robots simultaneously, threads can be used to complete the task efficiently. A thread is a lightweight process inside a process; therefore, concurrency can be implemented quickly using threads.
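The thread-per-robot dispatch suggested above can be sketched as follows; `publish` stands in for the actual ROS publish call, which is an assumption for illustration:

```python
import threading

def send_to_all(robots, command, publish):
    """Publish `command` to every robot from its own thread so all robots
    are commanded concurrently, then wait for all dispatches to finish."""
    threads = [threading.Thread(target=publish, args=(r, command))
               for r in robots]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Example with a stubbed publisher instead of a live ROS topic:
started = []
lock = threading.Lock()
def fake_publish(robot, command):
    with lock:
        started.append((robot, command))

send_to_all(["robot 1", "robot 2"], "start", fake_publish)
```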

Experiment and Results
We have conducted the experiments with Web interfaces I to V for simple instructions and measured the robot's start and stop response times through the Web interface. The initial experiment was conducted without the Web interface. The notation used for our experiments is shown in Table 1.

Experiment 01: Single Robot Interaction with Simple Instruction without Using the Web Interface. Initially, the authors completed the experiment with a single robot, without using the Web interface, in the Gazebo simulator with TurtleBot3. The authors issued instructions to move the robot forward and in a circle using the terminal interface with the rostopic pub command. We evaluated the average response time of the robot for the start and stop instructions. We conducted the experiments with different linear and angular speeds of the robot for the start and stop instructions. The experiment results are displayed in Table 3. The interaction with TurtleBot3 through the terminal, without a Web interface, is shown in Figure 13. The response delay for the start and stop of the robot is represented by equations (1) and (2). Figure 14 represents the average start and stop response time of the robot for each instruction. The average start response time gradually decreases as the linear and angular speed increases, while the average stop time increases as the linear and angular speed increases.

Experiment 02: Single Robot Interaction with Simple Instruction with Web Interface without Autonomous Robot Registration. The authors developed the Web interface to interact with the robot using the ROS bridge server. The authors issued instructions to move the robot forward and in a circle using the buttons provided in the Web interface. We evaluated the average response time of the robot for the start and stop instructions. We conducted the experiments with different linear and angular speeds of the robot for the start and stop instructions. The experiment results are displayed in Table 4. The interaction with TurtleBot3 through the Web interface is shown in Figure 15. The response delay for the start and stop of the robot is represented by equations (3) and (4), where R_{s,d}^{start} and R_{s,d}^{stop} represent the single-robot delay at start and stop, respectively, τ_{d,web} represents the delay in communication through the Web interface, τ_{d,ROS} represents the delay in communicating with ROS topics, and c_1 and c_2 are constants. Figure 16 represents the average start and stop response time of the robot for each instruction. The average start response time gradually decreases as the linear and angular speed increases, while the average stop time increases as the linear and angular speed increases. According to the analysis, the authors identified that Web communication is slightly faster than communication through the terminal.
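A hedged reading of the delay components named above, as a small numeric model: the additive form is an assumption consistent with the components listed (Web-interface delay, ROS-topic delay, and a constant), not the paper's exact equations.

```python
# Assumed additive delay model for the single-robot start/stop response.
def start_delay(tau_web, tau_ros, c1):
    """R_start ~ Web-interface delay + ROS-topic delay + constant c1."""
    return tau_web + tau_ros + c1

def stop_delay(tau_web, tau_ros, c2):
    """R_stop ~ Web-interface delay + ROS-topic delay + constant c2."""
    return tau_web + tau_ros + c2

# Illustrative values in seconds (not measured figures from the paper):
start_delay(0.010, 0.005, 0.002)  # 0.017
```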

Experiment 03: Single Robot Interaction with Simple Instruction with a Web Interface with Autonomous Robot Registration. The robot registration engine was developed to collect all robot details, including all ROS topics necessary for subscription and publication. The ROS topic identification algorithm was developed to select the relevant ROS topics for each action defined in the user instruction. We evaluated the average response time of the robot for the start and stop instructions. We conducted the experiments with different linear and angular speeds of the robot for the start and stop instructions. The experiment results are displayed in Table 5. The interaction with TurtleBot3 through the Web interface is shown in Figure 17. The response delay for the start and stop of the robot is represented by equations (5) and (6), where R_{s,d}^{start} and R_{s,d}^{stop} represent the single-robot delay at start and stop, respectively, τ_{d,web} represents the delay in communication through the Web interface, τ_{d,ROS} represents the delay in communicating with ROS topics, τ_{d,RT} represents the delay in ROS topic identification, and c_1 and c_2 are constants. Figure 18 represents the average start and stop response time of the robot for each instruction. The average start response time gradually decreases as the linear and angular speed increases, while the average stop time increases as the linear and angular speed increases. According to the analysis, the authors identified that communication with autonomous robot registration is slightly slower than Web communication without autonomous registration.

Experiment 04: Homogeneous Multiple Robot Interaction with Simple Instruction with a Web Interface with Autonomous Robot Registration.
The authors have developed a launch file to create multiple robots in the same Gazebo environment. Initially, two TurtleBot robots were spawned in the empty Gazebo world at two different locations. Simple move instructions were issued to both robots simultaneously, and the average response time for the start and stop instructions was evaluated. Separate namespaces were used to identify each ROS topic for each robot. The first robot was named robot 1, and the second one was named robot 2. The interaction with the two TurtleBots through the Web interface is shown in Figure 19. The response delay for the start and stop of the robots is represented by equations (7) and (8). Secondly, the authors spawned another four robots in the same Gazebo environment for the experiment. Separate namespaces were given to each robot to avoid conflicts over the same ROS topic. Simple move instructions were issued to all robots simultaneously, and the average response time for the start and stop instructions was evaluated.
The experiment results are displayed in Table 6. The interaction with the four TurtleBots through the Web interface is shown in Figure 20. Figure 21 represents the average start and stop response times for a single robot, two robots, and four robots for each instruction, where the linear speed is changed but the angular speed is kept constant to avoid collisions among the robots. Both the average start response time and the average stop time gradually increase with the number of robots.

Experiment 05: Move the Robots to a Specific Location with a Web Interface with Autonomous Robot Registration. The authors completed the experiment of moving the robots (a single robot, two robots, and four robots) to a given target location by an instruction issued through the Web interface. The robots were placed at different positions so that they move the same distance on average. The map in Figure 22 represents the initial positions and target locations of the two and four robots.
The authors conducted the experiments with a single robot, two robots, and four robots with a single instruction to move the robot to a specific location given by (x, y) coordinates. The average time taken by the robots to reach the specific location was measured and is presented in Table 7. The average move time increases with the number of robots and the distance, as shown in Figure 23. The delay for moving a single robot and multiple robots is represented by equations (9) and (10), where R_{s,d}^{move} and R_{m,d}^{move} represent the single and multiple robots' delay in moving to a specific location, respectively, τ_{d,web} represents the delay in communication through the Web interface, τ_{d,ROS} represents the delay in communicating with ROS topics, τ_{d,RT} represents the delay in ROS topic identification, τ_{d,pos} represents the delay in getting the current position and orientation of the robot, and c_1, c_2, α, and β are constants.
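A hedged LaTeX reading of the delay components named above; the additive form and the placement of the constants α and β are assumptions, not the paper's exact equations (9) and (10):

```latex
% Assumed structure of the move-delay model, single robot then multiple:
R^{move}_{s,d} = \tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos} + c_1
R^{move}_{m,d} = \alpha\left(\tau_{d,web} + \tau_{d,ROS} + \tau_{d,RT} + \tau_{d,pos}\right) + \beta
```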

Experiment 06: Robot Interaction with Multiple Instructions with a Web Interface with Autonomous Robot
Registration. We completed an experiment in which multiple instructions were issued by the user sequentially, managed by the state transition diagram. The sample interaction between the user instructions issued through the Web interface and the robot is shown in Figure 24. This diagram represents only three user instructions issued to control the robot. The experiment was conducted with three instructions that move the robot to three different locations, represented as (x_0, y_0), (x_1, y_1), and (x_2, y_2). The initial robot positions for two robots and four robots are shown in the map in Figure 25. The robots were initially placed with respect to the target locations so that each robot moves the same distance on average; the blue circles represent the initial robot positions, and the green squares represent the target locations given by the user instructions. The delay caused by multiple instructions issued by the user was expressed in mathematical notation. We use δ_{ij} as the state transition time from state i to state j, ∀(i, j) ∈ {1, 2, 3, 4, 5, 6}, S_δ as the time taken to save the state in a ROS topic, R_δ as the time taken to retrieve the state from a ROS topic, and ϵ_n as the transition delay for n instructions, where n ∈ {1, 2, 3, ..., l}. The total state transition delay time ϵ^s_n for a single instruction (n = 1) is shown in equation (11), and the total state transition delay time ϵ^m_n for multiple instructions (n = 1, 2, 3, ..., l) is shown in equation (12). The delays for moving a single robot and multiple robots to a specific location with multiple instructions issued sequentially are obtained by adding these state transition delays to the move delays derived above.
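The state transition delay terms above can be sketched as follows. Since equations (11) and (12) are not reproduced here, the per-instruction form δ + S_δ + R_δ and its summation over l instructions are assumptions about the model, not the paper's exact equations.

```python
def transition_delay(delta, s_delta, r_delta):
    # Assumed delay of one state transition: the transition time delta_ij
    # plus saving (S_delta) and retrieving (R_delta) the state via a
    # ROS topic.
    return delta + s_delta + r_delta

def total_transition_delay(deltas, s_delta, r_delta):
    # Assumed form of equation (12): sum the per-instruction transition
    # delays over the l sequentially issued instructions.
    return sum(transition_delay(d, s_delta, r_delta) for d in deltas)

total_transition_delay([1.0, 2.0, 3.0], 0.5, 0.5)
# -> 9.0
```

With a single instruction (a one-element list) this reduces to the single-instruction case of equation (11).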
The experiment was conducted with multiple instructions for a single robot, two robots, and four robots. In each instruction, all robots were given target locations requiring them to travel the same distance on average, so that the completion times are comparable. The average completion times are tabulated in Table 8, and the relationship between the average completion time and the number of instructions is shown in Figure 26.

Experiment 07: Heterogeneous Multiple Robot Interaction with Semantic Instruction with a Web
Interface with Autonomous Robot Registration. We evaluated our system in the Gazebo environment using three robots: TurtleBot, Husky, and TiaGo. The Python HTTP server (python -m http.server) was executed to serve the Web pages, implemented with JavaScript, for the Web interface. We used the rosbridge server as an interface between ROS and non-ROS clients. The user entered instructions on the Web interface provided by the system to interact with the multiple robots. The instruction types used to test our system are shown in Table 9. Type I was a general instruction with no synonym or semantic issue. A synonym was added to instruction type II, where a synonym analysis algorithm processed it. The semantics of the instruction are not clear in instruction type III. Instruction type IV has both synonym and semantic issues. Synonym and semantic handling were not programmed for instruction type V, where the user has to resolve the synonym and semantic issues. The system was tested with many instructions of types I to V, and our algorithms identified the synonym and semantic issues accurately. Furthermore, we completed a time complexity analysis of our algorithms to measure the system's performance using Big O notation. The time complexities of all algorithms are shown in Table 10. Time complexity is calculated from the number of loops used by each algorithm, where n is the input size. The graph of the time complexity for all algorithms is shown in Figure 27. According to the time complexity analysis, the robot registration algorithm and the ROS topic identification algorithm have poor performance because their time complexity is O(n^4).
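The type II synonym handling can be illustrated with a minimal normalization pass. The synonym table below is hypothetical, as the paper's actual ontology and algorithm internals are not shown here; only the idea of mapping synonyms to canonical command words is taken from the text.

```python
# Hypothetical synonym table mapping user vocabulary to canonical commands.
SYNONYMS = {
    "proceed": "move",
    "go": "move",
    "halt": "stop",
}

def normalize_instruction(instruction):
    """Replace known synonyms with canonical command words.

    Sketch of the type II handling: each word is looked up in the
    synonym table and replaced by its canonical form if present.
    """
    return " ".join(SYNONYMS.get(word, word)
                    for word in instruction.lower().split())

normalize_instruction("Proceed to sea and clean")
# -> "move to sea and clean"
```

A per-word dictionary lookup like this is linear in the instruction length; the paper's O(n^2) synonym analysis presumably does more work, such as comparing word pairs against the ontology.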
Time complexity analysis with Big O notation for each type of instruction is shown in Table 11. The command interpreter uses the Synonym Analysis Algorithm() and the Semantic Analysis Algorithm(), where the Synonym Analysis Algorithm() takes O(n^2) and the Semantic Analysis Algorithm() takes O(n^3) running time based on asymptotic notation. Therefore, instruction type III performs worse than instruction type II. Instruction type V is the worst because user interaction is needed to resolve the synonym and semantic issues in the instruction, since synonym and semantic handling is not programmed.
In addition to the time complexity analysis for instruction types I to V discussed above, we conducted two types of experiments in the Gazebo environment with the TurtleBot, Husky, and TiaGo robots. In the first experiment type, we moved all heterogeneous robots to a given goal in an open world in Gazebo; in the second type, we navigated all heterogeneous robots to a given goal with obstacles in Gazebo. All three robots (TurtleBot, Husky, and TiaGo) in an open world in Gazebo are shown in Figure 28. Experiments were conducted with the system on the multiple robots for both movement and navigation using 20 type IV instructions. Users can update the goal and the task assigned to each robot for the different schedules in Table 12. We added a self-rotation for each robot to simulate task completion based on the scheduled task. We found some errors in the robot registration algorithm and the ROS Topic Identification Algorithm() during movement and navigation; navigation required more ROS topic settings than movement in an open world.
The results of the experiment for the three robots, TurtleBot, Husky, and TiaGo, are presented in the tables, where we tested each goal 20 times in 4 different time slots: 8.00-10.00 am, 10.00-12.00 noon, 12.00-2.00 pm, and 2.00-4.00 pm. We received different ontology searching errors, robot registration errors, ROS topic identification errors, and command publishing errors in each time slot, and we gradually minimized these errors with the experience gained from each timed experiment. The success rate is measured over the 20 tests: it is the number of successful tests without errors out of 20 for each robot in each type of experiment. The results of experiment type 01 (without navigation) are shown in Table 13. According to the analysis, the TurtleBot has a higher success rate than the other robots, as shown in Figure 29.
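The success rate metric just defined is simply the fraction of error-free runs. A one-line sketch (the function name is illustrative):

```python
def success_rate(successes, trials=20):
    # Fraction of trials that completed without ontology searching,
    # robot registration, ROS topic identification, or command
    # publishing errors; trials defaults to the paper's 20 tests.
    if trials <= 0:
        raise ValueError("trials must be positive")
    return successes / trials

success_rate(18)
# -> 0.9
```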
The results of experiment type 02 (with navigation) are shown in Table 14. The success rates show a trend similar to experiment type 01, as shown in Figure 30. The running time of the robot registration algorithm and the ROS topic identification algorithm is O(n^4), where n is the number of actions defined in the user instruction. These two algorithms had the highest time complexity among the algorithms developed in our system.
In general, the delay in response time for the start decreased when the linear and angular speeds were increased, whereas the delay in response time for the stop increased when the linear and angular speeds were increased. When the robot is controlled without the Web interface, delay occurs because of system call execution through the operating system and communication with ROS functions. When a robot is controlled through the Web without auto-registration, delay occurs in communication through the Web and in communication with ROS through the ROS bridge server. When auto-registration is added to the system, the delay taken by the ROS topic identification algorithm must also be added. The delay time clearly increases as the number of robots increases. When a robot is sent to a specific location, the time taken to get its current position and orientation must be added to the delay time. When a robot is controlled by multiple instructions, a state transition system is used, so the time taken by the state transition system to save and retrieve the state must be added to the delay time for more accurate results. According to the analysis, the authors identified that Web communication is slightly faster than communication through the terminal.

Conclusion and Future Works
This research study developed a system that issues instructions through a Web interface and controls multiple robots. Initially, all robots must register with the robot registration engine. The autonomous robot registration and autonomous ROS topic identification algorithms were implemented successfully, although the delay time increases with the introduction of these algorithms. We derived mathematical equations for each delay time, which vary based on the inputs and system characteristics. The experiment results indicated that the autonomous robot registration was successful and that the communication performance through the Web decreased gradually with the number of robots registered. The running time of the robot registration algorithm and the ROS topic identification algorithm is O(n^4). We have not implemented access control for the multiple robots in the same environment; we will implement access control and synchronization for all robots in future work.

Data Availability
There are no data involved in this research.

Conflicts of Interest
The authors declare that they have no conflicts of interest.