Delivery robots face storage and computational stress when performing real-time tasks that exceed the limits of their on-board computing power. With cloud computing, robots can offload intensive tasks to the cloud and acquire massive data resources. With its distributed cluster architecture, the cloud platform can take on offloaded computation and improve the computing power of the control center, which can be considered the external “brain” of the robot. Although the cloud expands the capabilities of the robot, cloud service deployment remains complex because most current cloud robot applications are based on monolithic architectures. Some scholars have proposed developing robot applications through the microservice development paradigm, but there is currently no unified microservice-based robot cloud platform. This paper proposes a microservice-based delivery robot cloud platform that provides dedicated services for the autonomous driving of delivery robots. The microservice architecture is adopted to split the monolithic robot application into multiple services, which are then automatically orchestrated and deployed on the cloud platform using components such as Kubernetes, Docker, and Jenkins. This enables containerized CI/CD (continuous integration, continuous delivery, and continuous deployment) for cloud platform services, and the whole process is visualized, repeatable, and traceable. The platform is prebuilt with development tools, so robot application developers can develop in the cloud without any back-end customization, achieving rapid deployment and launch of robot cloud services. Through the cloud migration of traditional robot applications and the development of new apps, the platform's service capabilities are continuously improved. This paper verifies the feasibility of the platform architecture through a delivery scene experiment.
In 2020, the COVID-19 pandemic exposed the weak points in the global supply chain for goods. With the rapid development of autonomous driving technology, the application scenarios of wheeled robots are increasingly extensive. Companies such as Amazon, together with logistics and supply chain management organizations, have been experimenting with wheeled robots to deliver their own packages and have developed unmanned delivery robots. As application scenarios become more complex, the computing power requirements of delivery robots also increase.
Cloud robot technology is an emerging technology in which robots draw on cloud computing. The storage and computing power of a traditional robot are limited to the robot body, while intelligence requires greater knowledge storage, retrieval, and reasoning capabilities. In response to this problem, Professor James Kuffner of Carnegie Mellon University first proposed the concept of cloud robots in 2010 [
In 2010, Singapore’s ASORO Lab studied DAvinCi [
The microservice-based delivery robot cloud platform proposed in this paper divides robot applications into microservices, which reduces the coupling degree of the platform software and the difficulty of code maintenance. Owing to the automated deployment components, the deployment time of platform services is clearly better than that of monolithic and SOA-based robot cloud platforms, which greatly reduces the cost of operation and maintenance.
Robot computation offloading depends on the cloud platform. Cloud platforms build on hardware virtualization, which abstracts and transforms the physical hardware resources of many computers so that they can be divided and recombined into one or more computing environments. By breaking the rigid boundaries between physical machines, virtualization lets users consume hardware resources in a better form than the original configuration; it is the means by which hardware resources are pooled. Mainstream cloud virtualization technologies and cloud platform management systems include ESXi in VMware software, Microsoft Hyper-V, the open-source Xen, and OpenStack.
A modern robot control system is usually designed as a component-based distributed system: each unit abstracts certain hardware parts or functions and exposes a standard interface to the rest of the running system. Building and delivering complex ROS applications and services, however, can be a difficult task for nonprofessionals. Which software architecture to use for rapidly deploying ROS packages is therefore a key consideration for the platform.
The microservice architecture is a cloud-native architecture whose goal is to split an application into a series of small microservices, each of which can be deployed independently, potentially on different platforms and technology stacks. Originally, the web was primarily considered a means of presenting information to a wide audience, but SOA turned it into a computing architecture and brought a paradigm shift in the methodology for designing and implementing distributed systems. In this architecture, the system is broken down into integrated services. The microservice architecture is a specialized approach to SOA implementation [
The DAvinCi cloud computing platform is an important attempt to combine the advantages of robotics with cloud computing. It shows that executing map-building algorithms for large areas in the cloud significantly shortens execution time, which greatly improves robot performance. However, its architectural design is aimed mainly at the specific scenario of simultaneous localization and mapping, so it is not very versatile. The RoboEarth architecture does not consider how to deploy services in the cloud, and improving the efficiency of resource usage is the system's main difficulty. The overall architecture of the service robot cloud platform uses a service-oriented architecture (SOA) to build robot cloud services, focusing on the scheduling and management of cloud platform services. But SOA's integration mechanism and centralized governance create a bottleneck when the system needs to scale, so the microservice architecture appears ready to replace SOA as the dominant industrial architecture. The microservice-based cloud robotics system for intelligent space [
As shown in Figure
Delivery robot cloud platform based on microservice.
The physical layer is the hardware hierarchy that provides infrastructure resources, including servers, storage, and network resources. It can be based on a public cloud IaaS layer, or a private cloud can be built to provide users with IaaS services; tools such as Kubernetes (k8s) and Docker are deployed on it to solve the problems of virtualization and automatic management of IT resources.
The physical layer in Figure
After the platform is virtualized, Docker is installed to isolate resources and k8s is installed to orchestrate and deploy the containers, which realizes automated operation and maintenance management of the robot cloud platform.
The communication layer preinstalls the Ubuntu system and development tools and integrates ROS to realize the interaction between the robot and the cloud platform. ROS is not a traditional operating system such as Windows, Linux, or macOS; rather, it provides a cross-platform modular software communication mechanism and software development framework. The topic-based communication mechanism implemented by ROS provides communication support for robots and decouples the logic between different applications (nodes), so it is widely used in the actual development of robot software services. When the ROS version on the cloud platform is consistent with that on the robot, remote communication can be achieved through ROS_MASTER.
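The decoupling provided by topic-based communication can be sketched in a few lines. The following is a minimal in-process illustration, not the actual rospy API: the names `TopicBus`, `subscribe`, and `publish` are assumptions introduced only to show how publisher and subscriber nodes interact solely through a named topic, never referencing each other directly.

```python
# Minimal in-process sketch of ROS-style topic publish/subscribe.
# Illustrative only -- NOT the rospy API.
from collections import defaultdict

class TopicBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic name -> callbacks

    def subscribe(self, topic, callback):
        # A node registers interest in a topic without knowing who publishes it.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A publisher sends a message without knowing who receives it.
        for callback in self._subscribers[topic]:
            callback(message)

# Two "nodes" exchange lidar data through a topic name only.
bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)       # e.g. a mapping node
bus.publish("/scan", {"ranges": [1.2, 0.8]})  # e.g. the lidar driver node
print(received)  # [{'ranges': [1.2, 0.8]}]
```

Because neither side holds a reference to the other, either node can be replaced or moved (for example, to the cloud) without changing the other's code.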
The micro application layer consists of a number of functionally independent, well-defined service units that communicate via a lightweight RESTful protocol. They are dynamically recomposed in real time and are an important resource constituting user layer services. The following components are required to implement governance of the microservices.
Zuul is a microservice API gateway that provides dynamic routing, monitoring, resiliency, and security for accessing and invoking microservices. It is the front-facing entry point of the overall network system.
Consul is used to implement the registration and discovery of microservices. Service providers typically provide services in clusters and notify service callers of the service address so that they can discover the target service.
Ribbon is used to achieve client-side load balancing: it locates middle-tier services (originally those running in the AWS domain) to provide load balancing and middle-tier service failover.
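The client-side balancing rule can be sketched as a simple round-robin chooser with failover. This is an assumption-laden toy, not Ribbon's actual implementation: `RoundRobinBalancer`, `choose`, and `mark_down` are illustrative names for the idea that the caller itself rotates through discovered instances and skips ones known to be down.

```python
# Sketch of Ribbon-style client-side round-robin load balancing with
# failover (illustrative names, not the Ribbon API).
class RoundRobinBalancer:
    def __init__(self, instances):
        self._instances = instances
        self._down = set()
        self._i = 0  # rotating cursor

    def mark_down(self, instance):
        # Failover: remember instances that stopped responding.
        self._down.add(instance)

    def choose(self):
        # Try each instance at most once, starting from the cursor.
        for _ in range(len(self._instances)):
            instance = self._instances[self._i % len(self._instances)]
            self._i += 1
            if instance not in self._down:
                return instance
        raise RuntimeError("no available instance")

lb = RoundRobinBalancer(["10.0.0.11:8080", "10.0.0.12:8080"])
print(lb.choose())            # 10.0.0.11:8080
lb.mark_down("10.0.0.11:8080")
print(lb.choose())            # 10.0.0.12:8080
```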
The message bus is used to implement communication between microservices. It integrates an event-processing mechanism with message middleware to send and receive messages and is mainly composed of senders, receivers, and events. Different events can be defined for different business requirements: the sender emits an event, and the receiver accepts and handles the corresponding event.
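The sender/receiver/event model above can be sketched as follows. In a real deployment the events would travel through message middleware (e.g. RabbitMQ or Kafka); here an in-memory handler table stands in for the broker, and the event name `config.refresh` is purely illustrative.

```python
# Toy in-memory message bus illustrating the sender/receiver/event model.
class MessageBus:
    def __init__(self):
        self._handlers = {}  # event type -> list of receiver handlers

    def on(self, event_type, handler):
        # A receiver subscribes to one event type.
        self._handlers.setdefault(event_type, []).append(handler)

    def send(self, event_type, payload):
        # The sender emits an event; every receiver registered for
        # that event type handles it.
        for handler in self._handlers.get(event_type, []):
            handler(payload)

bus = MessageBus()
log = []
bus.on("config.refresh", lambda p: log.append(("refreshed", p)))
bus.send("config.refresh", {"service": "detect_lane"})
print(log)  # [('refreshed', {'service': 'detect_lane'})]
```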
The distributed config center is used to manage configuration profiles uniformly, making deployment and maintenance easier. Profiles can be centralized in a GitHub repository, with a dedicated configuration server created to manage them all. When a configuration change is required during maintenance, the change is simply made locally and pushed to the remote repository.
The business layer encapsulates resource abstraction and virtualization into services deployed on the cloud platform to provide software services to users, which is the top level of the cloud. The cloud platform provides common services such as offline calculation, data storage, and map drawing for wheeled robots [
The cloud platform adopts the microservice architecture to design the encapsulation of services, and Docker is used to build the platform's microservices. The CI/CD (continuous integration, continuous delivery, and continuous deployment) of cloud platform services is implemented by a series of components such as k8s, Jenkins, Harbor, and Pipeline [
Kubernetes (k8s) is a portable, service-oriented container orchestration tool for the automatic deployment, scaling, and management of containerized applications, making the deployment, operation, and maintenance of applications more convenient.
Jenkins is an open-source, extensible, web-based platform for continuous integration, delivery, and deployment (software/code compilation, packaging, building, testing, and deployment). After code is pushed to a repository (such as GitHub, Gitee, or GitLab), Jenkins can automatically deploy the latest code from the repository via its web interface, instead of requiring the manual steps of packaging, uploading to the server, and deploying, which is very convenient.
Harbor is an enterprise-class private registry server with capabilities such as role-based access control (RBAC), LDAP integration, an administrative interface, self-registration, and image replication. Harbor serves as the Docker image repository, providing a layered transport mechanism to optimize network transfer and a web interface to improve the user experience; it supports horizontal cluster expansion and has a good security mechanism.
Pipeline is an official Jenkins plug-in for implementing and integrating continuous delivery within Jenkins. A pipeline defines the steps that make up a CI/CD process: rather than completing the process manually, the user defines it once and Jenkins executes it automatically.
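The stage-by-stage structure a pipeline defines can be sketched as below. The stage names (checkout, build, push, deploy) and the `run_pipeline` helper are hypothetical; a real Jenkinsfile expresses the same sequence in Groovy, and the commented commands are only examples of what each stage might do on this platform.

```python
# Sketch of sequential CI/CD stages in the style of a Jenkins Pipeline.
# Stage names and steps are hypothetical examples.
def run_pipeline(stages):
    """Run named stages in order; stop and report on the first failure."""
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "SUCCESS"))
        except Exception as exc:
            results.append((name, f"FAILURE: {exc}"))
            break  # later stages are skipped, as in Jenkins
    return results

stages = [
    ("checkout", lambda: None),  # e.g. pull code from the repository
    ("build",    lambda: None),  # e.g. docker build the service image
    ("push",     lambda: None),  # e.g. push the image to Harbor
    ("deploy",   lambda: None),  # e.g. apply the deployment on k8s
]
print(run_pipeline(stages))
```

Aborting on the first failed stage is the key behavior: a broken build never reaches the deploy stage, which is what makes the process repeatable and traceable.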
As shown in Figure
CI/CD of code.
Experimental setup: deploy the microservices in the cloud and deliver a prop to the designated parking spot. The robot calls the cloud services and completes a series of operations in real time. The algorithm is split and deployed on the platform as described hereinafter.
The operation flow of autonomous parking is shown in Figure. First, the parking environment is sensed by cameras and other sensors, and the parking sign and parking space are detected. Next, according to the information uploaded by the sensors, the effective parking space information and the relative position of the vehicle are obtained, thereby determining the initial parking position. Finally, the control unit models the real-time environment based on the sensor information, generates a motion path, and controls the wheeled robot to move automatically into the parking space without collision.
The operation flow of autonomous parking.
The key technologies involved in autonomous parking include lane detection, sign identification, and motion control. This paper uses the open-source module “Autorace.” Its algorithms are converted into microservices and split into the detect_lane, detect_parking, detect_sign, control_lane, control_parking, and control_decider modules. Each module is a subsystem and a microservice, and together they constitute the autonomous parking cloud service. Each microservice is a separate Java process running in a separate virtual machine (container). A single microservice can be developed independently, can use a different development language, and is easier to deploy onto the proposed cloud platform than a traditional monolithic application.
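To make the split concrete, the following sketch shows one plausible way the control_decider microservice could route between lane-following and parking control. The module names come from the split above, but the routing logic, function signatures, and state fields (`sign`, `lane_center`, `parking_spot`) are hypothetical simplifications, not the Autorace implementation.

```python
# Hypothetical sketch of control_decider routing between two control
# microservices; field names and logic are illustrative only.
def control_lane(state):
    # Stand-in for the control_lane microservice.
    return {"action": "follow_lane", "center": state["lane_center"]}

def control_parking(state):
    # Stand-in for the control_parking microservice.
    return {"action": "park", "spot": state["parking_spot"]}

def control_decider(state):
    # Switch to parking control once the parking sign has been detected.
    if state.get("sign") == "parking":
        return control_parking(state)
    return control_lane(state)

print(control_decider({"sign": None, "lane_center": 0.1, "parking_spot": None}))
print(control_decider({"sign": "parking", "lane_center": 0.0, "parking_spot": 3}))
```

In the deployed platform each of these functions would be a separate containerized service reached over REST, with the decider calling the others through the gateway and registry described earlier.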
The delivery robot used in the experiment is shown in Figure
Turtlebot3.
The information flow between the robot and the cloud platform is shown in Figure. Task 1: log in to the background interface of the cloud platform, remotely start the robot by terminal command, establish the connection between the robot and the cloud platform, and collect the robot's running data in real time; the ROS system uploads the environmental information collected by the robot's lidar and camera to the cloud, as shown in Figure. Task 2: pass identity authentication through the Zuul gateway, access the microservices deployed on the platform, and support traffic scheduling. Task 3: locate the microservice information registered in the Consul component, and ensure high availability of services through TCP/IP calls between microservices. Task 4: compute in the cloud, and then feed the calculation results back to the robot for execution.
The flow of information between robots and cloud platforms.
As shown in Figure
Maps are stored in the cloud.
Delivery of the prop to a designated parking spot.
The experiment of the delivery scene proves that a series of processes such as automatic deployment and invocation of the platform service can be realized. As shown in Table
Comparison of three schemes in detail.
Scheme | Deployment time | Coupling degree | Scalability | Code maintenance difficulty
---|---|---|---|---
Scheme 1 | One day | High | Bad | Hard
Scheme 2 | Eight hours | Low | Good | General
Scheme 3 | Fifty minutes | Very low | Very good | Easy
To verify the service invocation performance of the proposed microservice-based delivery robot cloud platform, we performed service invocation experiments on the monolithic ROS-based cloud robot (CR) system, the SOA-based robot cloud platform, and the proposed platform. The experiments verify that the solution proposed in this paper saves more time when calling services.
As can be seen from Figure
Comparison of the time consumption of the three schemes.
A microservice-based delivery robot cloud platform is proposed in this paper, and the delivery scene experiment shows that the proposed platform architecture is feasible. Compared with other platforms, this platform enables rapid development and deployment of applications, reducing development time from months to weeks. The platform is currently only a prototype system, and we will continue in-depth research to improve it further. In the future, we hope to implement visual composition and orchestration of microservices on the platform, support graphical drag-and-drop development, and make cloud service development easier.
No data were used to support this study.
The authors declare that they have no conflicts of interest.