Wireless telemetry systems for remote monitoring and control of industrial
processes are an increasingly relevant topic in the field of networked control.
Wireless closed-loop control systems have stricter delay and link reliability
requirements compared to conventional sensor networks for open-loop monitoring
and call for the development of advanced network architectures. By following
the guidelines introduced by recent standardization, this paper focuses on the
most recent technological advances to enable wireless networked control for
tight closed-loop applications with cycle times below 100 ms. The cooperative
network paradigm is indicated as the key technology to enable cable replacing
even in critical control applications. A cooperative communication system
enables wireless devices placed at geographically separated locations to act
as a virtual ensemble of antennas, creating a distributed multiple-antenna
system. A proprietary link-layer protocol based on the IEEE 802.15.4
physical layer has been developed and tested in an indoor environment
characterized by non-line-of-sight (NLOS) propagation and dense obstacles. The
measurements obtained from the testbed evaluate experimentally the benefits
(and the limitations) of cable replacing in critical process control.
1. Introduction
The increasing demand for oil and gas supplies frequently requires the design of very large production and processing plants in remote locations with harsh environmental conditions and challenging logistics. The adoption of cabling to fully interconnect machines and to monitor/control a large number of processes is becoming unfeasible due to the high fluctuations of installed industrial wiring costs [1].
In networked control systems, the controller and the plant are connected via a digital communication channel of limited bandwidth [2]. A cable-based networked control architecture is considered in Figure 1: the sensors monitor any plant activity and periodically forward the digital measurements (yk) to a remote controller. Based on these observations, the remote controller computes a sequence of control messages (uk) according to a given policy and sends them to the actuators over the feedback channel. Upon retrieval of the controller messages, the actuators apply appropriate control signals to adjust the plant state. For several process control applications like semiconductor manufacturing, tooling machines, production of nanomaterials, and so forth, the determinism of data transfer is a key issue, and the cycle time (i.e., round trip time) is a critical parameter to guarantee process stability [2].
Figure 1: Cable-based versus wireless closed-loop control systems.
The adoption of wireless technology in critical industrial applications is still rather limited: it is generally acknowledged that, to allow a wider adoption of wireless networks in an industrial context, some substantial technology innovation is required, based either on new physical layer solutions or on different approaches at the upper protocol layers [3]. Industrial networks typically require low-jitter sampling periods for monitoring, high-integrity delivery of critical messages, automatic reconfiguration, and redundancy in case of communication failures. Representative application cases where wireless technology is adopted can be found in [4–6]. Available commercial wireless systems for industrial control and monitoring predominantly use the ISM band at 2.4 GHz, and their limitations currently prevent the adoption of wireless links in emergency actions and tight process control loops. Today, commercial battery-operated systems are based on the IEEE 802.15.4 standard and transmit data at a typical rate of 250 kbit/s (scaling up to 2 Mbit/s by disabling the direct-sequence spread spectrum functions), with a maximum of 10 dBm output RF power to meet the RF regulations for hazardous environments. The IEEE 802.15.4 physical layer also constitutes the basis for the WirelessHART [7] and ISA100.11a [8] industry standard protocols.
This paper focuses on the most promising technologies to support next-generation wireless control systems designed for tight closed-loop applications. The wireless communication system used to transmit observations and control messages must guarantee a minimum quality of service in order for the system to be controllable. Recent works in this domain have highlighted the relation between the unreliability of the transmission channel and the stability of the controlled system, showing that a strict relation exists between the transmission channel characteristics and the unstable poles of the open-loop system [9–11]. In these works, the impact of the noisy transmission channel is mostly considered for the feedback loop under the assumption of a simple additive white Gaussian noise (AWGN) channel model. Because they do not consider more realistic scenarios where fading is the main communication impairment, those approaches are prone to failure in practical contexts.
In this paper, we evaluate experimentally the impact of fading channels on the controllability of the closed-loop wireless system. In particular, it is envisaged here that the incorporation of the cooperative network paradigm [12] into future wireless system standardization will allow cable replacing in tight closed-loop control applications with cycle time below 100 ms [2]. Cooperative communication systems emulate the transmission and the reception of data on a (virtual) antenna array, thus, creating a virtual and distributed multiple antenna array network [13]. To highlight the potential of such systems, a proprietary link-layer protocol tailored for closed-loop process control applications has been developed on top of an existing IEEE 802.15.4 compliant PHY/MAC layer radio stack. Real-time process control has been tested in an indoor environment with non-line-of-sight (NLOS) propagation and dense obstacles. Results from the testbed measurements confirm that cooperative communication is a promising enabling technology for the next generation critical wireless control systems as it provides clear performance advantages compared to classical network architectures in terms of link reliability and closed-loop stability performance. Analysis of experimental data reveals that the configuration and planning of the wireless control system should account for the stability properties of the plant process. This imposes a substantial redefinition of conventional wireless network deployment and design methods.
2. Wireless Closed-Loop Control Networking
In what follows, we consider a control network with clock-driven sensing. The focus of the analysis is thus on a scenario where the wireless network is constrained to periodically monitor and control the process state being subject to unpredictable disturbance. The output sensor in active state is periodically sampling a continuous signal y(t)∈ℝm with period Ts (reporting rate) to obtain the time vector series yk=y(tk), tk=kTs. Discrete signals yk∈ℝm provide an observation of the plant state vector xk∈ℝq. The plant model for process observations is described in discrete-time state-space form:
(1)
yk = C xk + nk,
xk = A xk-1 + B uk + D ek + wk,
uk = G(xk-1, xk-2, … ∣ x~k), ∀k.
At time tk, the plant state vector xk is a function of the previous state xk-1, the feedback control variable uk∈ℝv, and the external random input process ek∈ℝv acting as an external nonstationary disturbance. The feedback control uk is generated by the controller on every new received process observation. The control message follows a generic control law function G(·) that depends on all the previous vector states xk-1, xk-2, … estimated from the corresponding noisy observations yk-1, yk-2, …. The purpose of the controller is to stabilize the system by balancing the external input disturbance and minimizing the deviation of the plant states xk from the desired stable set points, indicated here by x~k. Given that the focus is on wireless control performance assessment, it is assumed that any observation yk reliably transmitted over the wireless link provides a full state measurement of xk, affected by a scaling factor modeled as a full-rank matrix C. The instrument AWG noise nk∈ℝm with nk~𝒩(0,σn2I) includes quantization and other unwanted effects. The noise term wk∈ℝq accounts for the state disturbance and is modeled as independent AWG noise with wk~𝒩(μw,σw2I) so that x0=μw.
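As a concrete illustration, the scalar closed loop implied by model (1) can be sketched in a few lines; all numerical values (A, B, C, D, the feedback gain K, and the noise levels) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar instance of the state-space model (1); numbers are illustrative.
A, B, C, D = 1.1, 1.0, 1.0, 1.0   # A > 1: open-loop unstable pole
K = -0.6                           # hypothetical feedback gain
sigma_n, sigma_w = 0.01, 0.01      # observation and state noise levels

x, u = 0.0, 0.0
traj = []
for k in range(200):
    e = 0.1 * rng.standard_normal()               # external disturbance e_k
    w = sigma_w * rng.standard_normal()           # state noise w_k
    x = A * x + B * u + D * e + w                 # state update in (1)
    y = C * x + sigma_n * rng.standard_normal()   # noisy observation y_k
    x_hat = y / C                                 # full-state estimate (C full rank)
    u = K * x_hat                                 # control applied at the next step
    traj.append(x)

print(max(abs(v) for v in traj))  # stays bounded despite |A| > 1
```

The one-step delay between sampling y_k and applying the resulting control mirrors the round-trip timing discussed next.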
The round-trip latency TRT is a critical parameter for process control, and it is defined as the time between the sampling (and transmission) of the observation yk and the successful decoding of the feedback control message uk+1 by the remote actuator. A networked control system that satisfies the stabilizable properties needs two additional conditions to guarantee closed-loop stability: (i) the observation yk and the feedback control uk must be successfully decoded by the respective parties; (ii) the tolerable round-trip latency is such that TRT≤Ts.
In this paper, the main focus is on cable replacing for control systems requiring TRT<100 ms. This is a reasonable choice to address most industrial control applications [2]. The case for highly critical control (e.g., motion control) that requires cycle times TRT<10 ms is not considered here as still too challenging for implementation over current low-power wireless technology.
2.1. System Model
The development of robust network designs requires accurate modeling of radio propagation to account for the random fluctuations of the received signals due to fading impairments [14]. To simplify the reasoning, we assume the output sensor and the actuator to be colocated and refer to them as the input/output sensor (I/O sensor). Extension to a more general model is straightforward. Both the controller and the I/O sensor are deployed at fixed locations over the plant and equipped with a radio device characterized by a single omnidirectional antenna transceiver and a limited battery energy supply mainly used for the transmission, reception, and processing of data. Transmission of measurements yk (over uplink) and feedback control uk (over downlink) is subject to a half-duplex constraint, so that it occurs in different time slots and satisfies the round-trip delay constraint TRT. Let dI,C be the distance between the I/O sensor I and the controller C; the probability of successful closed-loop control Pc is modeled by the outage probability
(2) Pc = Pr[min{γI,C, γC,I} ≥ β],
where γI,C is the Received Signal Strength (RSS) measured by the controller C over the uplink, while γC,I is the RSS observed by the actuator I over downlink. β models the sensitivity of the receiver and depends critically on hardware implementation and on modulation of signals. The RSS γa,b for a wireless link (a,b) depends on deterministic components (transmitter/receiver distance, height from ground, and obstruction size/position) and on random components due to multipath-fading impairments [15]. An effective statistical description of fading channel terms can be obtained by Weibull distribution [16].
Assuming statistical independence between uplink and downlink RSS fluctuations, successful control probability Pc can be rewritten as the product of the success probabilities over uplink and downlink
(3) Pc = Pr[γI,C ≥ β] × Pr[γC,I ≥ β], where the first factor refers to the uplink (Sensor → Controller) and the second to the downlink (Controller → Actuator).
This assumption is also confirmed by measurements over the 2.4 GHz spectrum (see Section 5).
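A quick Monte Carlo check makes the factorization in (3) tangible. The sketch below draws Weibull-faded received powers for the two link directions (all link-budget numbers are illustrative assumptions, not measurements from the paper) and compares the joint success probability of (2) with the product form of (3).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Illustrative link budget (assumed values, not from the paper)
BETA_DBM = -87.0        # receiver sensitivity beta
MEAN_RSS_DBM = -80.0    # mean received power on each link
SHAPE = 2.0             # Weibull shape (2.0 corresponds to Rayleigh fading)
N = 200_000

def draw_rss_mw(n):
    """Weibull-faded received power (mW) with the configured mean."""
    mean_mw = 10 ** (MEAN_RSS_DBM / 10)
    scale = mean_mw / math.gamma(1 + 1 / SHAPE)
    return scale * rng.weibull(SHAPE, n)

beta_mw = 10 ** (BETA_DBM / 10)
g_up, g_down = draw_rss_mw(N), draw_rss_mw(N)

# (2): both loop directions must clear the sensitivity threshold
p_joint = np.mean(np.minimum(g_up, g_down) >= beta_mw)
# (3): with independent up/downlink fading, the product form applies
p_product = np.mean(g_up >= beta_mw) * np.mean(g_down >= beta_mw)

print(p_joint, p_product)   # the two estimates agree closely
```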
2.2. Closed-Loop Control Performance Metrics
The probability of successful control Pc (or equivalently the packet loss rate) is a good indicator of networked control quality, as stability is primarily ruled by the packet loss rate [17]. Given that it is important to understand how much loss the control system can tolerate before observing instability [18], an additional measure characterizing the stability properties of the process is the stability interval Tstability. The stability interval measures how infrequent the feedback information can be while still guaranteeing that the system remains stable, even when subject to packet drops [19]. A large packet loss rate (i.e., a small Pc) causes frequent interruptions of the closed-loop control; if the duration of these interruptions exceeds Tstability, as observed during deep fades, the process states might experience large deviations from the desired stable set points, or the process might become unstable for even longer communication interruptions.
A convenient metric used to evaluate process stability is
(4) Pstability = Pr[∥xk - x~k∥ ≤ δ]
that measures the probability that the deviation of the process states xk from the stable set points x~k, caused by random packet losses, lies below an accuracy parameter δ>0. Exceeding δ indicates a critical condition for the hardware instrumentation that might cause costly losses for the plant operator. The stability probability Pstability is computed over K consecutive loops where the process can be reasonably assumed to be ergodic.
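The empirical estimate of (4) is straightforward: simulate (or log) K closed-loop cycles and count the fraction of cycles whose deviation stays within δ. The sketch below uses a hypothetical scalar plant with random packet drops; the plant parameters, loss probability, and δ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

delta = 0.5       # accuracy parameter delta (illustrative)
K_loops = 10_000  # number of closed-loop cycles observed

# Hypothetical scalar plant with random packet losses: when the loop
# packet is lost (probability 1 - Pc) no control is applied that cycle.
A, BK, Pc = 1.05, -0.55, 0.95
x = 0.0
dev = np.empty(K_loops)
for k in range(K_loops):
    delivered = rng.random() < Pc
    x = (A + (BK if delivered else 0.0)) * x + 0.05 * rng.standard_normal()
    dev[k] = abs(x)            # deviation from the set point x~ = 0

P_stability = np.mean(dev <= delta)   # empirical estimate of (4)
print(P_stability)
```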
3. Cooperative Communication for Critical Networked Control
The emerging area of cooperative communications suggests that it is worthwhile to explore the potential of advanced network architectures where the classic constraints valid for wired communications are relaxed [12]. The cooperative link abstraction consists of separate radios encoding and transmitting messages in coordination. Both information-theoretic (see, e.g., [20, 21]) and experimental analyses [22, 23] showed that, under specific conditions on the propagation environment, a cooperative system can achieve performance similar to colocated multiantenna systems. A cooperative network architecture has the potential to be less sensitive to isolated wireless link failures than noncooperative architectures, as it creates a virtual distributed antenna network consisting of multiple paths over which the same information is spread to maximize path redundancy (spatial or cooperative diversity).
3.1. Multihop Cooperative Link Modeling
To introduce the problem of multihop-cooperative link performance modeling, let the wireless control network in Figure 2 be represented by a set of randomly distributed nodes within a specific area. A sequence of messages is continuously transmitted by a source node S to a destination node D over an optimal “connection oriented” unicast route path ℛ (primary route) that involves M intermediate nodes relaying data to destination D. Ordering of nodes is labelled as ℛ={S,1,2,…,M,D} where the source node S and the destination node D take the role of I/O sensor and centralized controller for uplink, while their roles are reversed over downlink.
Figure 2: Data propagation (uplink) over the primary route with the multihop cooperative architecture and diversity d=3 (a). Switched combining example at node no. 4 (b), using d=3 replicas of observation yk; signal strengths are taken from measurements at 2.4 GHz. At switch no. 1 link (1,4) is replaced by link (2,4), while after switch no. 2 link (3,4) is chosen.
The propagation of the messages is based on time division access and is illustrated in Figure 2(a). The multihop-cooperative architecture improves the reliability of multihop message passing along the primary route by implementing a chain of consecutive cooperative transmissions [13]. At time slot t=1 (for convenience, time slots are numbered as the nodes), the process observation originates from the I/O sensor source S and is relayed at time t=2 from node 2, and so on. Similarly, after the destination is reached, the same message propagation is initiated by the centralized controller acting as the source node for backward propagation of the feedback control message. In general, for each transmitting node k∈ℛ∖{D}, there are up to d subsequent nodes in the route that are overhearing. Therefore, the kth receiver obtains up to d copies of the same message during d subsequent time slots; these copies experience statistically independent fluctuations of the received signal strength and can be incrementally combined to exploit a cooperative diversity order of d. The cooperative set of nodes 𝒯k,d transmitting towards the terminal k as part of the cooperative link (𝒯k,d,k) is defined as 𝒯k,d=𝒯k,dU with 𝒯k,dU={k-d,…,k-1}⊂ℛ for uplink and as 𝒯k,d=𝒯k,dD with 𝒯k,dD={k+d,…,k+1}⊂ℛ for downlink. For practical system design, a useful bound to the probability of successful control is
(5) Pc = Pc(d) ≈ ∏k∈ℛ∖{S≡I} Pr[γ𝒯k,dU,k ≥ β] (uplink: Pr[γI,C ≥ β]) × ∏k∈ℛ∖{S≡C} Pr[γ𝒯k,dD,k ≥ β] (downlink: Pr[γC,I ≥ β]),
being the product of successful probabilities over all the cooperative links (𝒯k,dU,k),(𝒯k,dD,k) with k∈ℛ∖{S} defined for uplink and downlink, respectively. The approximation holds for large enough Signal-to-Noise Ratio (SNR) [21]. Unlike conventional multihop message passing, each kth receiver combines the RSSs measured over the d links involved in collaborative transmission. The term γ𝒯k,d,k measures the quality of the virtual cooperative link (𝒯k,d,k) and depends on the selected combining scheme as illustrated in the following section.
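The cooperative sets 𝒯k,d defined above are easy to construct programmatically. The helper below is an illustrative sketch (not from the paper): it clips the d predecessors (uplink) or d successors (downlink) of a node to the primary route.

```python
def coop_set(route, k, d, uplink=True):
    """Nodes transmitting toward terminal k on the cooperative link (T_{k,d}, k).

    `route` is the ordered primary route [S, 1, ..., M, D]; the uplink set
    is the (up to) d predecessors of k, the downlink set its d successors,
    both clipped to the route. Illustrative helper, not from the paper.
    """
    i = route.index(k)
    if uplink:
        return route[max(0, i - d):i]          # {k-d, ..., k-1}
    return route[i + 1:i + 1 + d]              # {k+1, ..., k+d}

route = ["S", 1, 2, 3, "D"]   # M = 3 intermediate relays
print(coop_set(route, 3, d=3, uplink=True))    # ['S', 1, 2]
print(coop_set(route, 1, d=2, uplink=False))   # [2, 3]
```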
3.2. Selection and Switched Combining
The selection combining technique can be employed to exploit the redundancy made available by the cooperative network architecture. The selection combining scheme allows each receiver to decode only the message copy originated from the link that experienced the highest instantaneous RSS. From (5) the d combining weights wh with h∈𝒯k,d over the d links are such that
(6) γ𝒯k,d,k = ∑h∈𝒯k,d wh γh,k = maxh∈𝒯k,d γh,k,
where wh=1 if and only if h=argmaxh∈𝒯k,dγh,k and zero otherwise. The probability of successful control (5) can be bounded as
(7) Pc = Pc(d) > ∏k∈ℛ∖{S} Pr[maxh∈𝒯k,dU γh,k ≥ β] × Pr[maxh∈𝒯k,dD γh,k ≥ β],
where Pr[maxh∈𝒯k,d γh,k ≥ β] = 1 - ∏h∈𝒯k,d Pr[γh,k < β].
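The diversity gain in the last identity is worth seeing numerically: with independent branch fading, a selection-combined link fails only if every branch fails. A minimal sketch, assuming independent per-branch success probabilities:

```python
def coop_link_success(p_link_success):
    """Success probability of a selection-combined cooperative link with
    independent branches: Pr[max_h gamma_h >= beta]
                          = 1 - prod_h Pr[gamma_h < beta]."""
    outage = 1.0
    for p in p_link_success:
        outage *= (1.0 - p)    # per-branch outage probability
    return 1.0 - outage

# Three branches, each 90% reliable: diversity pushes success to 99.9%
print(coop_link_success([0.9, 0.9, 0.9]))
```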
An alternative to selection combining is the switched combining scheme, which allows the device to switch to another link only if the previously chosen connection (e.g., with node h) undergoes a deep fade such that γh,k<β. The implementation of the switched combining scheme is illustrated in the example of Figure 2(b) for cooperative diversity order d=3. Although selection combining outperforms switched combining in terms of average performance (the same outage probability is observed at high SNR), switched combining requires only a single RF chain to serve all the cooperative links and is practical enough for implementation on low-power devices.
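The switching rule can be sketched in a few lines. This is a simplified variant (an assumption, not the paper's exact policy) that stays on the current link while its RSS clears the threshold and otherwise jumps to the strongest alternative branch:

```python
BETA_DBM = -87.0   # switching threshold (receiver sensitivity beta)

def switched_combining(rss_per_branch, current):
    """Choose which branch to decode under switched combining: keep the
    current link unless its RSS drops below the threshold, then switch.
    Simplified sketch: on a deep fade we jump to the strongest branch,
    whereas a strict switch-and-stay scheme would probe branches in turn."""
    if rss_per_branch[current] >= BETA_DBM:
        return current                                   # no deep fade: stay
    return max(rss_per_branch, key=rss_per_branch.get)   # switch

# Example in the spirit of Figure 2(b): link (1,4) fades, so node 4
# switches away from branch 1 (RSS values in dBm are illustrative).
rss = {1: -92.0, 2: -80.0, 3: -85.0}
print(switched_combining(rss, current=1))   # branch 2 is selected
```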
4. Virtual Multiple Antenna MAC Protocol Design
Most existing works on cooperative communication focus on various aspects of the physical layer, while the advantages of the proposed schemes are often demonstrated using an information-theoretic approach. Many results are therefore based on an asymptotically large data frame length assumption and usually ignore the upper-layer overhead required to set up, synchronize, and coordinate a cooperative system [24]. MAC protocol design for cooperative communications has recently been a hot research topic [25]. Cooperative MAC protocols can be classified into proactive and reactive schemes [24]: proactive schemes always provide one (or more) prearranged and optimal partner(s) serving as relay node for the source node [26]; reactive schemes prescribe that cooperative transmission is initiated only when a negative acknowledgement (NACK) message is received (see, e.g., [27]). Extensive work has been reported in the literature on MAC designs based on modifications of the distributed coordination function (DCF) of the IEEE 802.11 standard. Several protocol designs have been proposed for single- and multiple-relay networks employing both fixed [26–28] and dynamic [27–29] relaying assignments (relay selection). In these papers, decode-and-forward (DF), amplify-and-forward (AF), and coded cooperation strategies have been investigated. Some attempts in the literature have been made towards the definition of MAC specifications to enable cooperative communication over IEEE 802.15.4 networks (see, e.g., [30]), although the topic is still considered an open issue.
The proposed cooperative MAC protocol depicted in Figure 3 is defined on top of the IEEE 802.15.4-2011 PHY layer and it is based on a proactive scheme. The network architecture consists of three components, detailed as follows.
The Centralized Controller manages a low-power radio interface for two-way communication with the remote I/O sensor and acts as a translator over the wired network. The centralized controller transmits set-points x~k and computes control commands uk to guarantee the global stability of the plant.
The virtual controllers are the additional infrastructure used to emulate the virtual antenna array system. The virtual controllers take the dual role of cooperatively receiving the plant observations from the I/O sensor and replacing the centralized controller when its direct link with the actuator experiences any degradation. The virtual controllers act as leaf nodes for propagating the decisions made by the central controller and have no permission to generate new set points. In case of consecutive packet drops, they can replace the centralized controller to secure local stability and data loss compensation.
The I/O sensor is the low-power input/output field instrument that interacts with the plant behavior generating process observations yk and applying control commands uk.
Figure 3: Virtual multiple antenna array system architecture. Message passing over the multihop cooperative transmission chain (a) for uplink (left side) and downlink (right side); the token message passing is superimposed (dashed arrows). The virtual antenna arrays shown at (a) provide, in this example, a cooperative diversity of d=3. Framing structure for the timed-token MAC (b): the example refers to the case of M=2 virtual controllers; token holding times and guard times are illustrated in the table at (b). The channels used for FH are f1=2.425 GHz and f2=2.455 GHz, respectively.
The message-passing scheme and the framing structure depicted in Figure 3 refer to a system deploying M=2 virtual controllers with maximum allowed cycle time TRT. An analogous framing structure can be defined for an arbitrary number M of virtual controllers. A time division duplex system is employed to separate uplink and downlink. Transmissions are organized into consecutive superframes consisting of 2(M+1) time-slots of length T separated by guard times of length ΔT to compensate for residual clock misalignments. Each superframe contains one closed-loop session (or cycle time) of TRT sec. A closed-loop session starts with the transmission of the available measurement yk and stops when the feedback control uk is received and applied to the plant. The transmission of the noisy process sample yk is delayed by the I/O sensor until the assigned time slot is obtained. The measurement is then propagated by the M virtual controllers towards the centralized controller according to the multihop cooperative network architecture described in Section 3. When the measurement is received by the centralized controller, the new sample is used as input to generate the new control message uk. Similarly, as for the process samples, the control message is then propagated over the downlink using the assigned time slot.
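One plausible way to lay out this superframe is sketched below, using the testbed values from Section 5 (T = 5 ms, ΔT = 3 ms, M = 2); the assumption that each of the 2(M+1) slots is followed by one guard time is an illustrative simplification of the framing in Figure 3(b).

```python
def superframe_timing(M, T_ms, guard_ms):
    """Slot schedule for one closed-loop session with M virtual
    controllers: 2(M+1) slots of length T, each followed by a guard
    time (one illustrative layout, not the paper's exact framing)."""
    n_slots = 2 * (M + 1)
    starts = [i * (T_ms + guard_ms) for i in range(n_slots)]
    cycle_ms = n_slots * (T_ms + guard_ms)
    return starts, cycle_ms

starts, cycle = superframe_timing(M=2, T_ms=5.0, guard_ms=3.0)
print(starts)   # slot start times within the superframe (ms)
print(cycle)    # 48 ms under this layout, within the 50 ms budget
```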
The proposed cooperative network protocol is based on a frequency-hopped, timed-token message passing scheme (see Section 4.1). Devices implement an ad hoc cooperative link control policy to handle the switched combining of the signal replicas and network synchronization (see Section 4.2). Data loss compensation is applied to address the residual impairments observed over the cooperative wireless channel (see Section 4.3).
4.1. Medium Access Control Sublayer
The medium access control sublayer implements a channel frequency hopping (FH) over consecutive superframes. FH is commonly adopted in industrial networks as it allows the system to be less susceptible to interference, providing some additional protection against eavesdroppers. Within each superframe, the medium access control uses a timed-token message passing protocol on top of the multihop cooperative network architecture described in Section 3. The timed-token protocol has been also proposed to enforce real-time on wired/wireless Profibus and industrial Ethernet networks, overriding the native collision-based multiple access [31]. A token message is multiplexed with information data to form a frame (token frame) and visits all the devices on every closed-loop session to synchronize the cooperative transmissions. The token holding time is bounded to the duration T of one time slot to satisfy the round trip delay requirement TRT.
During MAC configuration, the network is organized into a logical primary ring connecting the I/O sensor to the centralized controller and vice versa. The primary ring is a two-way routing path connecting the controller with I/O sensor through the M virtual controllers. The configuration of the primary routing path is optimized as it is based on radio planning. In complex environments like refinery or power plants, the use of the 3D model during the design phase is also crucial to maximize radio-planning accuracy in order to limit any rework to a percentage which is in line with a regular installation of a wired system [15].
The amount of cooperative diversity d is decided based on the behavior of the process in open loop (further details are given in Section 6): the selected cooperative diversity limits the number of virtual MIMO links that can be combined. Once the cooperative diversity order is chosen, the centralized controller assigns to each device one time slot (TX time slot) for transmission and up to d time slots (RX time slots) for receiving redundancy over the virtual multiple-antenna links (in uplink and downlink).
4.2. Cooperative Link Control and Synchronization
The cooperative architecture imposes a redefinition of conventional logical link control designs. An additional level of abstraction compared to multihop networks should be defined to efficiently manage and control the “cooperative link” as the set of physical links involved in collaborative transmission. The proposed cooperative link control implements a switched combining scheme configured to estimate the RSS during the assigned RX time slots and switch to the best link if the measured RSS goes below the threshold β. The purpose of switched combining is to enforce the real-time constraint by avoiding the use of error control methods based on explicit acknowledgements [4].
The periodic token-passing procedure also plays a crucial role in guaranteeing device synchronization [32]. When a device overhears a new token message, it computes the misalignment between the expected and the actual time of arrival of the token packet. This information is then used to predict the next token visit time Ttoken-visit (and thus the beginning of the assigned time slot). Every new timing update for Ttoken-visit must account for the particular path over which the token frame is received: given that the token frame is received by device k and transmitted by device h∈𝒯k,d, the next time to token visit is computed as
(8) Ttoken-visit = ΔT + (h - k - 1) × (T + ΔT) + τk,h,
where τk,h is the random misalignment (in number of O-QPSK symbols, each of duration 16 μs) measured by node k between the expected and the actual time of arrival of the token frame from node h.
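The timing update of Eq. (8) can be sketched as follows. Since the slot index of the transmitter may precede or follow the receiver's depending on the token direction, the sketch uses |h - k| so that one helper covers both directions; this sign handling is an assumption about Eq. (8)'s convention, and the symbol duration is the 16 μs O-QPSK value quoted above.

```python
SYMBOL_US = 16  # O-QPSK symbol duration in microseconds

def time_to_token_visit(k, h, T_ms, guard_ms, tau_symbols):
    """Predicted time (ms) until device k's next slot after overhearing
    the token from device h, in the spirit of Eq. (8); |h - k| is used
    so the same sketch covers token travel in either ring direction."""
    tau_ms = tau_symbols * SYMBOL_US / 1000.0
    return guard_ms + (abs(h - k) - 1) * (T_ms + guard_ms) + tau_ms

# Token heard from the adjacent node: only one guard time (plus the
# measured misalignment) remains before our own slot begins.
print(time_to_token_visit(k=4, h=3, T_ms=5.0, guard_ms=3.0, tau_symbols=0))
```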
4.3. Data Loss Compensation
Any residual data loss over the cooperative links might result in missing plant measurements at the centralized controller. The virtual controllers and the centralized controller are thus designed to predict the missing sample yk based on the p previous samples yk-1 = [yk-1, …, yk-p]T according to the linear predictor
(9) y^k∣k-1 = apT yk-1.
For a stationary process, the minimum mean square error (MMSE) predictor is obtained by letting
(10) ap = Cp-1 r,
where Cp = E[yk-1 yk-1H] and r = E[yk yk-1H] are the covariance and cross-correlation of the stationary process observations, respectively.
Gradient-based linear prediction is a common choice in predictive model-based control [2] as model parameters for linear regression are estimated from data samples yk without a priori information about the statistical behavior of the process. Prediction is obtained by
(11) ap = Np × z,
where Np = P × (PTP)-1, P = [tk, 1], tk = [(p-1)Ts, (p-2)Ts, …, 0]T, and z = [pTs, 1]T.
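The weights of (11) amount to fitting a straight line to the last p samples and extrapolating it one step ahead. The sketch below builds ap exactly as defined above and checks it on a perfect linear ramp (the sample values are illustrative):

```python
import numpy as np

def gradient_predictor_weights(p, Ts):
    """Gradient-based prediction weights a_p = P (P^T P)^{-1} z from (11),
    with P = [t_k, 1], t_k = [(p-1)Ts, ..., 0]^T and z = [p Ts, 1]^T.
    The weights apply to [y_{k-1}, ..., y_{k-p}] (most recent first)."""
    t = Ts * np.arange(p - 1, -1, -1, dtype=float)   # (p-1)Ts, ..., 0
    P = np.column_stack([t, np.ones(p)])
    z = np.array([p * Ts, 1.0])
    return P @ np.linalg.inv(P.T @ P) @ z

# Sanity check on a noiseless linear ramp: the predictor extrapolates it.
a = gradient_predictor_weights(p=4, Ts=0.06)
ramp = np.array([3.0, 2.0, 1.0, 0.0])   # y_{k-1} ... y_{k-4}, slope 1/step
print(a @ ramp)                          # extrapolates the ramp to ~4.0
```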
5. Wireless Critical Process Control System Implementation: Case Study
In the proposed experimental setup, the virtual multiple antenna protocol specifics are implemented on battery-powered motes based on the low-power CC2420, a single-chip 2.4 GHz IEEE 802.15.4 compliant transceiver [33], with radio transmit power set to PT = 1 mW. The IEEE 802.15.4 PHY layer allows the use of 16 channels for FH, where each channel occupies an effective bandwidth of 2 MHz with a center frequency separation of 5 MHz.
The RSS Indicator (RSSI) is used to assess the link quality for switched combining with β = -87 dBm [16]. The RSSI provides an estimate of the signal power by energy detection over 8 consecutive offset quadrature phase-shift keying (O-QPSK) symbols, corresponding to a duration of 128 μs. The RSSI is quantized using 8 bit/sample and stored in the CC2420 RSSI_VAL register.
As depicted in Figure 3, the duration of one closed-loop session equals the superframe length of TRT = 50 ms; a guard time of ΔT = 3 ms between consecutive superframes is adopted. Frequency hopping is performed over consecutive closed-loop sessions: the hopping pattern periodically switches between the IEEE 802.15.4 channels with center frequencies 2.425 GHz and 2.455 GHz, corresponding to channel numbers 15 and 21, respectively. Frequency hopping requires on/off radio switching and introduces a latency of ~2 ms. The selected channels are marginally influenced by cross-tier interference originating from WiFi or Bluetooth modules [34].
The superframe is divided into slots of fixed length T = 5 ms. The IEEE 802.15.4 slotted CSMA-CA access implemented by the devices is modified so that the back-off function is disabled. An energy scan to detect cross-tier interference (by clear channel assessment, CCA) is performed at the beginning of the assigned slot; if the channel is sensed as free, the transmission of the token frame is performed with the acknowledgement option disabled. The token frame structure is based on the IEEE 802.15.4 beacon frame type and contains the control message uk (for downlink) or the actual/predicted process sample yk (for uplink). Additional information is embedded in each frame to identify (i) the closed-loop session; (ii) the current set point x~k; (iii) the device type and position within the primary ring; (iv) the channel offset for frequency hopping; (v) the selected diversity order d.
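The channel numbers above map to center frequencies via the IEEE 802.15.4 2.4 GHz PHY rule, where channel k (11–26) sits at 2405 + 5(k - 11) MHz. The helper below checks the two hopping channels used by the testbed:

```python
def ch24_center_mhz(channel):
    """Center frequency (MHz) of an IEEE 802.15.4 2.4 GHz O-QPSK channel,
    per the standard: 2405 + 5 * (k - 11) for channels 11-26."""
    if not 11 <= channel <= 26:
        raise ValueError("2.4 GHz O-QPSK PHY uses channels 11-26")
    return 2405 + 5 * (channel - 11)

# The two hopping channels used by the testbed:
print(ch24_center_mhz(15))   # 2425 MHz -> 2.425 GHz
print(ch24_center_mhz(21))   # 2455 MHz -> 2.455 GHz
```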
5.1. System Implementation
In what follows, the application-specific system implementation is detailed by looking at each network component separately (see also Figure 4).
Functional block diagram for centralized controller, virtual controller, and I/O sensor.
(i) Centralized Controller. The centralized controller is equipped with a low-power 8-bit AVR microcontroller implementing a linear state-feedback controller such that uk = K x^k-1, where x^k = C-1 yk ∀k, while the feedback gain matrix K is designed to achieve the desired closed-loop pole locations (see, e.g., [35]). When a new measurement is received, either from the virtual controllers or from the I/O sensor, a notifying indication event is generated by the MAC layer to inform the controller that a new control message is required. The control message is then forwarded by the centralized controller over the assigned time slot. The centralized controller generates new set points x~k and acts as a translator over the wired network by communicating with a device serving as gateway node. Even if the chosen proportional control policy is fairly simple compared to conventional industrial process control systems [2], it is useful to highlight the potential benefits of the cooperative architecture.
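The controller update described above reduces to two matrix operations per cycle. A minimal sketch, with illustrative C and K matrices (the paper's actual gains are not given):

```python
import numpy as np

def control_step(y_k, C, K):
    """One centralized-controller update: recover the full state from the
    observation (C is full rank, so x_hat = C^{-1} y_k) and apply linear
    state feedback u_k = K x_hat. Matrices below are illustrative."""
    x_hat = np.linalg.solve(C, y_k)   # x_hat = C^{-1} y_k
    return K @ x_hat

C = np.array([[2.0, 0.0], [0.0, 4.0]])      # full-rank scaling (assumed)
K = np.array([[-0.5, 0.0], [0.0, -0.25]])   # hypothetical feedback gains
u = control_step(np.array([2.0, 4.0]), C, K)
print(u)   # feedback drives the estimated state toward the set point
```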
(ii) Virtual Controller. On every new closed-loop session, the virtual controller is designed to receive and combine up to d copies of the signal encoding the sensor measurement over the uplink and up to d copies of the signal carrying the control message over the downlink. The RSSI is used as the metric to select the message copy to decode by switched combining (Section 4.2). In case of missing process observations, the gradient-based data-loss compensation function (see Section 4.3) is applied: the predicted sample y^k∣k-1 is forwarded to the centralized controller over the assigned multihop cooperative links, just as a true observation would be. In case of missing control messages, the virtual controller takes over from the centralized controller to preserve stability around the set point (received before losing communication with the centralized controller). It therefore implements a linear state-feedback control policy using the same feedback gain matrix K as the centralized controller.
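The two recovery mechanisms can be sketched as follows. The RSSI-based selection mirrors the switched combining described above; the linear extrapolation used for the lost-sample prediction is our assumption standing in for the gradient-based rule of Section 4.3:

```python
def switched_combine(copies):
    """copies: list of (rssi_dbm, message) tuples, one per received copy.
    Decode only the strongest copy; return None if no copy was received."""
    if not copies:
        return None
    return max(copies, key=lambda c: c[0])[1]

def predict_missing(y_prev, y_prev2):
    """Gradient-based guess for a lost sample ŷ_{k|k-1}: extend the last
    observed slope (simple linear extrapolation, assumed here)."""
    return y_prev + (y_prev - y_prev2)
```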
(iii) I/O Sensor. The AVR microcontroller is used to emulate the transducer and actuator functions of the field instrument by generating simulated process observations from the discrete-time state-space plant model in (1). The observations yk = xk + nk provide a noisy representation of the process states. The sampling time of the process is set to Ts=60 ms: sampling is driven by a timer derived from the system clock, sourced by an external oscillator. Each observation is encoded before radio transmission using 16 bits per sample. The I/O sensor uses a finite-length buffer that stores the available sample before transmission over the assigned time slot: samples that do not belong to the current step are discarded. Any new control message received either from the virtual controllers or from the centralized controller during a closed-loop session generates a notifying indication that activates the actuator functions. A plant state adjustment is then simulated according to model (1): any state adjustment influences the upcoming process sample without introducing significant delay.
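The sensor emulation above can be sketched as one step of a generic discrete-time state-space plant with noisy observations plus the stale-sample buffer policy. The matrices and noise level are illustrative placeholders, not the paper's model (1); quantization to 16 bits is omitted.

```python
import random

def plant_step(A, B, x, u):
    """Advance x_{k+1} = A x_k + B u_k by one sampling period Ts."""
    return [sum(a * xi for a, xi in zip(row, x)) +
            sum(b * ui for b, ui in zip(brow, u))
            for row, brow in zip(A, B)]

def observe(x, noise_std, rng=random):
    """y_k = x_k + n_k: noisy measurement of the process states."""
    return [xi + rng.gauss(0.0, noise_std) for xi in x]

def freshest_sample(buffer, current_step):
    """Finite buffer policy: keep only the sample belonging to the current
    step; older (or future) samples are effectively discarded."""
    return next((s for step, s in buffer if step == current_step), None)
```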
5.2. Experimental Activity
An example of a single-hop (a) and of a virtual multiple-antenna-based (b) closed-loop control session is depicted in Figure 5: the purpose is to assess process stability by visual inspection of the plant variables with respect to the accuracy threshold δ. In this example, the noisy observations yk of one process state are visualized over a time window of 150 s. The external input ek in (1) randomly switches between two set points, every 30 s on average, to emulate a nonstationary disturbance. The stable set-points x~k are depicted as solid red lines and depend on the external input disturbance. In this example, the single-hop network architecture is not sufficient to guarantee stability, while the virtual double-antenna option provides a clear advantage.
Example of wireless process control by single hop (a) and cooperative networking (b). Process observations are taken from simulated plant model (Model A, see Figure 6).
For the experiments, the considered indoor environment consisted of two rooms separated by a 10 cm thick wall. Up to 7 people were moving inside each room, causing random fluctuations of the radio signals. For all devices, the antenna height from ground was 1 m; the harsh radio environment contained metallic objects (e.g., coaxial cabling, monitors/PCs, air conditioning ducts, etc.) responsible for additional attenuation. This is a worst-case scenario compared with typical industry-standard installation designs, which recommend a 2 m height from ground [36]. The centralized controller sends control messages to the I/O sensor placed in the adjacent room at a distance of 16 m (see the topology superimposed on the floor plan in Figure 6). This specific setting is designed to assess the impact of NLOS propagation on the performance of closed-loop control.
Floor plan map of the environment for experimental activity over 2.4 GHz. Network topologies for all settings are also superimposed.
For the proposed architecture, we considered the deployment of a single virtual controller (M=1 with diversity d=2) and of a pair of virtual controllers (M=2 with diversity d=3). The performance of the single-hop and multihop architectures is also evaluated. The single-hop scheme implements a standard ARQ policy where retransmissions are subject to timing constraints and are thus confined within the time-division framing structure of Figure 4 with TRT=50 ms. The multihop scheme requires the installation of a wireless repeater implementing decode-and-forward relaying [21].
Closed-loop control stability is evaluated over two state-space discrete-time plant models, referred to as model A and model B. The locations of the unstable open-loop and desired closed-loop poles are 0.85±0.625j and 0.85±0.5j, respectively, for model A, while for model B they are 1.1±0.837j and 0.95±0.01j. To analyze the impact of wireless propagation on closed-loop control performance, the stability interval Tstability (defined in Section 2.2) is evaluated. The stability interval defines the tolerable duration of a wireless link interruption (e.g., for N consecutive packet drops) above which the deviations from the stable set-points x~k become too large compared to the accuracy δ in (4). Analysis of the first configuration (model A) shows that up to N=8 consecutive packet losses, corresponding to Tstability=480 ms, are still tolerable in practice to stabilize the system dynamics. By contrast, analysis of the more challenging plant model B shows that any link interruption above Tstability=180 ms, corresponding to N=3 consecutive packet losses, makes the system dynamics highly unstable.
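The stability interval relates to the number of tolerable consecutive packet drops through the process sampling time Ts = 60 ms, i.e., Tstability = N × Ts, which matches the N=8 → 480 ms (model A) and N=3 → 180 ms (model B) figures above:

```python
TS_MS = 60  # process sampling time Ts in milliseconds

def stability_interval_ms(n_drops: int, ts_ms: int = TS_MS) -> int:
    """T_stability = N * Ts: link-interruption duration for N lost samples."""
    return n_drops * ts_ms

def max_tolerable_drops(t_stability_ms: int, ts_ms: int = TS_MS) -> int:
    """Largest N whose interruption still fits within T_stability."""
    return t_stability_ms // ts_ms
```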
Performance of closed-loop control is depicted in Figure 7 for plant model A and in Figure 8 for model B. For each setting, continuous real-time control was tested over a period of 5 days on average. In both figures, each point maps to the average open-loop probability 1-Pc, with Pc defined in (2), and the process stability Pstability (4) observed over a time window of 20 minutes, corresponding to K=20000 process samples. The tolerable deviation from the stable set-points depends on the feedback gain and is chosen here as δ = ς × maxk∥x~k∥ with ς = 1/2, so that the maximum deviation of the measured state xk from the stable set-point x~k lies below 50% of the maximum range maxk∥x~k∥.
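The stability statistic can be sketched as the fraction of samples in the window whose deviation from the set-point stays within δ = ς × maxk∥x~k∥, with ς = 1/2 as above. Euclidean norms are assumed here:

```python
import math

def norm(v):
    """Euclidean norm of a state vector (assumed metric)."""
    return math.sqrt(sum(x * x for x in v))

def stability_fraction(states, set_points, varsigma=0.5):
    """P_stability over a window of paired samples x_k and set-points x~_k:
    fraction of samples with ||x_k - x~_k|| <= delta, delta = ς·max_k||x~_k||."""
    delta = varsigma * max(norm(sp) for sp in set_points)
    ok = sum(1 for x, sp in zip(states, set_points)
             if norm([xi - si for xi, si in zip(x, sp)]) <= delta)
    return ok / len(states)
```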
Closed-loop control performance for Model A with stability interval Tstability=480 ms (corresponding to 8 consecutive packet losses). Each point maps to an open-loop probability 1-Pc and stability Pstability computed over 20 minutes of real-time control. Network topology “setting no.1” is depicted in (a). The case for optimal deployment of the virtual controller is shown in “setting no.2” at (b). Observed average IAE over K=20000 consecutive control loops is also superimposed for each case.
Performance analysis of closed-loop control for plant Model B with Tstability=180 ms (corresponding to 3 consecutive packet losses).
In Figure 7, we compare the single-hop, the multihop, and the cooperative settings configured with a single virtual controller (with diversity d=2). In Figure 7(a), the virtual controller is deployed in the same room as the centralized controller (setting no. 1). This case is typical in industrial settings where the I/O sensors are deployed in hazardous areas requiring IP66/67 certification, while the installation of additional infrastructure in the same area might not be allowed. In Figure 7(b), the virtual controller is instead deployed in the same room as the I/O sensor (setting no. 2), as this is the best choice for network planning to minimize the packet loss probability over the two-hop route. For both settings, the tolerable open-loop probability for 99% stability Pstability should lie below 10-2 (Pc>0.99). Only the cooperative architecture can guarantee such a high level of reliability. The multihop architecture is highly sensitive to the relay deployment, as accurate network planning (where allowed) provides significant performance improvements, as observed in Figure 7(b). For all the considered settings, the observed average integral absolute error (IAE) [2] over K consecutive control cycles, IAE = ∑k=0K-1 ∥xk - x~k∥ Ts, is also superimposed and confirms the benefits of the proposed architecture.
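The IAE metric above is a straightforward accumulation over the window; a minimal sketch, again assuming Euclidean norms and Ts = 60 ms:

```python
import math

TS = 0.060  # sampling time Ts in seconds

def iae(states, set_points, ts=TS):
    """IAE = sum over k of ||x_k - x~_k|| * Ts for K consecutive cycles."""
    total = 0.0
    for x, sp in zip(states, set_points):
        total += math.sqrt(sum((xi - si) ** 2 for xi, si in zip(x, sp))) * ts
    return total
```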
The more challenging plant model B is analyzed in Figure 8, where the performance of the single-hop scheme is compared with the cooperative system configured for M=1 and M=2 virtual controllers with cooperative diversity d=2 and d=3, respectively. The small stability interval Tstability tolerated by the more critical plant model B suggests the use of three virtual antennas, that is, M=2 virtual controllers with cooperative diversity d=3. This is the only viable solution to guarantee an average open-loop probability below 10-3 (Pc>0.999) for the desired 99% stability level.
6. Virtual Multiple Antenna System Design
As described in the previous section, the stability interval Tstability characterizes the behavior of the process in open loop. In addition, it defines a tolerable level of the success probability Pc of closed-loop control above which the system can be considered stable for all practical purposes: the lower the interval Tstability, the larger the required success probability Pc. The choice of the cooperative diversity d and of the number of virtual controllers M for cooperative network planning should therefore account for these key design parameters. The purpose of this section is to highlight the factors that most influence the protocol configuration, with special focus on the choice of the cooperative diversity d (Section 6.1) and its impact on energy consumption (Section 6.2).
6.1. Cooperative Diversity Design
The proposed approach to the design of the cooperative diversity is to fix a required stability probability (here 99%) and to numerically choose the cooperative diversity to meet this stability constraint, thus limiting the number of consecutive packet drops accordingly. The required diversity therefore depends on the stability interval Tstability of the considered plant model.
To allow for general insights, a simulation tool has been developed to assess the stability of the control system for varying plant models characterized by different values of the tolerable stability interval Tstability. In Figure 9, the control stability Pstability is analyzed for varying open-loop probabilities (1-Pc) and plant models. Plant processes are indicated by different markers and experience different stability intervals Tstability, ranging from mildly unstable (Tstability = 1 s) to highly unstable (Tstability = 120 ms) behaviors. For each setting, the cooperative diversity d is chosen to guarantee the desired open-loop probability for 99% stability (dashed line). The required cooperative diversities d and open-loop probabilities are also reported in the table at the bottom as a function of Tstability.
Stability Pstability versus open-loop probability 1-Pc. Different markers refer to plant models with stability intervals Tstability ranging from 120 ms to 1 s. Models A and B are included as special cases. The performance of the proposed virtual multiple-antenna system for different diversity configurations corresponds to the open-loop probabilities observed in the experiments (see Figures 7 and 8). (Bottom table) Cooperative diversity design for 99% stability: 4 plant models are considered with the corresponding stability intervals Tstability, desired open-loop probabilities, and cooperative diversities.
The analysis clearly shows that the use of single-hop and multihop architectures is reasonable for supervised control with Tstability ≥ 1 s, where up to N=16 consecutive packet drops are still tolerable to maintain stability. The cooperative scheme designed for diversity d=2 is a reasonable option for process control with Tstability = 480 ms. Finally, the system configured for diversity d=3 is confirmed as a promising option to support critical control with Tstability ≤ 180 ms, for example, where no more than N=3 consecutive packet drops are allowed.
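A Monte Carlo sketch of this design rule follows. It assumes each of the d received copies is lost independently with the same per-link probability p, so that a control cycle fails with probability roughly p**d, and the loop is counted as unstable whenever more than N consecutive cycles fail. These independence and loss-model assumptions are ours, simpler than the paper's simulation tool:

```python
import random

def stability_probability(p_link, d, n_max, cycles, seed=0):
    """Fraction of cycles whose run of consecutive losses stays within n_max."""
    rng = random.Random(seed)
    consecutive, stable = 0, 0
    for _ in range(cycles):
        lost = all(rng.random() < p_link for _ in range(d))  # all d copies lost
        consecutive = consecutive + 1 if lost else 0
        if consecutive <= n_max:
            stable += 1
    return stable / cycles

def min_diversity(p_link, n_max, target=0.99, cycles=20000):
    """Smallest diversity d (up to 5) meeting the target stability level."""
    for d in range(1, 6):
        if stability_probability(p_link, d, n_max, cycles) >= target:
            return d
    return None
```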
6.2. Design Considerations for Battery-Powered Devices
In this section, the average power absorbed by the virtual controller device on every control cycle is computed. Notice that the virtual controllers experience the longest activity cycle, as they employ selection combining over both uplink and downlink. The purpose is to highlight relevant considerations for network lifetime prediction. To allow for general insights, the power consumption is modeled as a function of the required cooperative diversity d. The power absorption measurements are taken from the IEEE 802.15.4 compliant transceivers used during the experimental activity and are specified for 2.7 V/3.3 V operation.
For a given slot duration T (token holding time) and closed-loop session TRT, the virtual controller is designed to keep the radio transceiver active for receiving and combining up to 2d messages (sensor observations and control messages, respectively). Two additional slots are used for relaying messages over the uplink and downlink. The average power consumption per control cycle Ploop is thus proportional to the selected diversity order d:
(12) Ploop = 2d·Prx·(T+ΔT)/TRT + 2·Ptx·T/TRT + (1 - 2T(d+1)/TRT)·Psleep,
where Ptx = 62.7 mW is the average power absorbed during transmission at 3.3 V, while Prx = 56.1 mW is the power absorbed in receiving mode. The power draw in sleep mode is 82.5 μW: during sleep mode the internal oscillator and RAM must remain active (memory hold). The power absorbed by a virtual controller configured for diversity d=2 is Ploop = 33.2 mW, while for diversity order d=3 it is roughly 40% larger, Ploop = 45.9 mW. These results highlight the inherent trade-off between maximizing control reliability (which requires high spatial redundancy and a long duty cycle) and network lifetime (which requires a long sleep cycle). Given that the average power draw can reasonably be assumed constant until battery depletion, the expected battery life can be predicted as Tlife = Cbatt/Ploop, as a function of the available battery capacity Cbatt.
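The power model in (12) can be evaluated numerically using the measured draws above (Ptx = 62.7 mW, Prx = 56.1 mW, Psleep = 82.5 μW) with T = 5 ms and TRT = 50 ms. The guard time ΔT is an assumption here (set to zero); with that simplification the sketch yields values close to, but not exactly matching, the reported 33.2 mW and 45.9 mW, since the measured figures include timing details not modeled below.

```python
P_TX, P_RX, P_SLEEP = 62.7, 56.1, 0.0825  # milliwatts
T, T_RT = 5.0, 50.0                        # milliseconds

def p_loop(d, dt=0.0):
    """Average per-cycle power (mW) per equation (12), for diversity d."""
    rx = 2 * d * P_RX * (T + dt) / T_RT            # receive/combine 2d copies
    tx = 2 * P_TX * T / T_RT                       # relay uplink + downlink
    sleep = (1 - 2 * T * (d + 1) / T_RT) * P_SLEEP # remaining time asleep
    return rx + tx + sleep

def battery_life_hours(c_batt_mwh, d):
    """T_life = C_batt / P_loop, for a battery capacity in mWh."""
    return c_batt_mwh / p_loop(d)
```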
7. Concluding Remarks
In this paper a cooperative network architecture has been proposed to emulate transmission and reception of data on a distributed network for tight closed-loop process control applications. A proprietary cooperative link-layer protocol has been developed on top of an existing IEEE 802.15.4 compliant PHY/MAC layer architecture to implement a virtual multiple-antenna array system. A multihop chain of consecutive cooperative transmission sessions guarantees robust two-way communication between the controller and the I/O sensor with a cycle time of 50 ms. The cooperative network protocol configuration imposes a substantial redefinition of conventional radio-planning methods. The required level of cooperative diversity for high-quality control depends on the instability properties that characterize the process in open loop. Despite the clear benefits of the proposed scheme, the experimental results highlighted a number of limitations that could be the target of future research. (i) Compared to multihop architectures, the exploitation of cooperative diversity demands far more energy to enable the combining stage: the use of optimized batteries and/or harvesting from alternative power sources represents a promising solution to improve device lifetime; another option is to adopt event-driven control strategies to limit channel use. (ii) A massive deployment of virtual controllers for the simultaneous control of multiple processes might cause spectrum overcrowding and self-interference: this suggests the adoption of advanced network-coding schemes to improve spectral efficiency. (iii) Typical highly critical processes (e.g., motion control) cannot tolerate any interruption of feedback control, as this might result in costly losses for the plant operator: cable replacement in highly critical loops is therefore not feasible with current low-power radio technology.
Despite these limitations, the experimental results clearly suggest that the proposed architecture is a key enabler for cable replacement in networked control systems.
Acknowledgments
This work has been partially supported by Saipem S.p.A., a subsidiary of Eni, and has been performed in the framework of the European research project DIWINE (Dense Cooperative Wireless Cloud Network) under FP7 ICT Objective 1.1, The Network of the Future. The author would like to thank Dr. Sergio Guardiano, Professor Umberto Spagnolini, and Professor Vittorio Rampa for their fruitful discussions and helpful comments.
[1] Zuehlke D., "Smart factory—towards a factory-of-things," 34(1), pp. 129-138, 2010, doi:10.1016/j.arcontrol.2010.02.008.
[2] Antsaklis P., Baillieul J., "Special issue on technology of networked control systems," 95(1), pp. 5-8, 2007, doi:10.1109/JPROC.2006.887291.
[3] European Commission, Brussels, Belgium, 2012.
[4] Willig A., "Recent and emerging topics in wireless industrial communications: a selection," 4(2), pp. 102-124, 2008, doi:10.1109/TII.2008.923194.
[5] Savazzi S., Spagnolini U., Goratti L., Molteni D., Latva-aho M., Nicoli M., "Ultra-wide band sensor networks in oil and gas explorations," 51(4), pp. 142-153, 2013.
[6] De Pellegrini F., Miorandi D., Vitturi S., Zanella A., "On the use of wireless networks at low level of factory automation systems," 2(2), pp. 129-143, 2006, doi:10.1109/TII.2006.872960.
[7] Song J., Han S., Mok A. K., Chen D., Lucas M., Nixon M., Pratt W., "WirelessHART: applying wireless technology in real-time industrial process control," in Proceedings of the 14th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS '08), April 2008, pp. 377-386, doi:10.1109/RTAS.2008.15.
[8] Standard ISA100.11a-2009, "Wireless systems for industrial automation: process control and related applications," ISA, July 2009.
[9] Sahai A., Mitter S., "The necessity and sufficiency of anytime capacity for stabilization of a linear system over a noisy communication link—part I: scalar systems," 52(8), pp. 3369-3395, 2006, doi:10.1109/TIT.2006.878169.
[10] Braslavsky J. H., Middleton R. H., Freudenberg J. S., "Feedback stabilization over signal-to-noise ratio constrained channels," 52(8), pp. 1391-1403, 2007, doi:10.1109/TAC.2007.902739.
[11] Ishido Y., Takaba K., Quevedo D. E., "Stability analysis of networked control systems subject to packet-dropouts and finite-level quantization," 60(5), pp. 325-332, 2011, doi:10.1016/j.sysconle.2011.02.008.
[12] Scaglione A., Goeckel D. L., Laneman J. N., "Cooperative communications in mobile ad hoc networks," 23(5), pp. 18-29, 2006, doi:10.1109/MSP.2006.1708409.
[13] Savazzi S., Spagnolini U., "Energy aware power allocation strategies for multihop-cooperative transmission schemes," 25(2), pp. 318-327, 2007, doi:10.1109/JSAC.2007.070208.
[14] Gungor V. C., Hancke G. P., "Industrial wireless sensor networks: challenges, design principles, and technical approaches," 56(10), pp. 4258-4265, 2009, doi:10.1109/TIE.2009.2015754.
[15] Savazzi S., Guardiano S., Spagnolini U., "Wireless sensor network modeling and deployment challenges in oil and gas refinery plants," 2013, doi:10.1155/2013/383168.
[16] Bardella A., Bui N., Zanella A., Zorzi M., "An experimental study on IEEE 802.15.4 multichannel transmission to improve RSSI-based service performance," in Proceedings of the 4th International Workshop on Real-World Wireless Sensor Networks (REALWSN '10), Colombo, Sri Lanka, 2010, pp. 154-161, doi:10.1007/978-3-642-17520-6_15.
[17] Schenato L., Sinopoli B., Franceschetti M., Poolla K., Sastry S. S., "Foundations of control and estimation over lossy networks," 95(1), pp. 163-187, 2007, doi:10.1109/JPROC.2006.887306.
[18] Imer O. C., Yüksel S., Başar T., "Optimal control of LTI systems over unreliable communication links," 42(9), pp. 1429-1439, 2006, doi:10.1016/j.automatica.2006.03.011.
[19] Baillieul J., Antsaklis P. J., "Control and communication challenges in networked real-time systems," 95(1), pp. 9-28, 2007, doi:10.1109/JPROC.2006.887290.
[20] Cui S., Goldsmith A. J., Bahai A., "Energy-efficiency of MIMO and cooperative MIMO techniques in sensor networks," 22(6), pp. 1089-1098, 2004, doi:10.1109/JSAC.2004.830916.
[21] Savazzi S., Spagnolini U., "Cooperative fading regions for decode and forward relaying," 54(11), pp. 4908-4924, 2008, doi:10.1109/TIT.2008.929911.
[22] Wang C. X., Hong X., Ge X.-H., Cheng X., Zhang G., Thompson J. S., "Cooperative MIMO channel models: a survey," 48(2), pp. 80-87, 2010, doi:10.1109/MCOM.2010.5402668.
[23] Castiglione P., Savazzi S., Nicoli M., Zemen T., "Partner selection in indoor-to-outdoor cooperative networks: an experimental study," 31(8), pp. 1-13, 2013, doi:10.1109/JSAC.2013.130818.
[24] Shan H., Zhuang W., Wang Z., "Distributed cooperative MAC for multihop wireless networks," 47(2), pp. 126-133, 2009, doi:10.1109/MCOM.2009.4785390.
[25] Shan H., Cheng H. T., Zhuang W., "Cross-layer cooperative MAC protocol in distributed wireless networks," 10(8), pp. 2603-2615, 2011, doi:10.1109/TWC.2011.060811.101196.
[26] Liu P., Tao Z., Narayanan S., Korakis T., Panwar S. S., "CoopMAC: a cooperative MAC for wireless LANs," 25(2), pp. 340-353, 2007, doi:10.1109/JSAC.2007.070210.
[27] Zhao B., Valenti M. C., "Practical relay networks: a generalization of hybrid-ARQ," 23(1), pp. 7-18, 2005, doi:10.1109/JSAC.2004.837352.
[28] Zhu H., Cao G., "rDCF: a relay-enabled medium access control protocol for wireless ad hoc networks," 5(9), pp. 1201-1214, 2006, doi:10.1109/TMC.2006.137.
[29] Shi C., Zhao H., Wang S., Wei J., Zheng L., "CAC-MAC: a cross-layer adaptive cooperative MAC for wireless ad-hoc networks," 2012, doi:10.1155/2012/785403.
[30] Nguyen V., Brunelli D., "Cooperative transmission range doubling with IEEE 802.15.4," in Proceedings of IEEE International Conference on Communications, June 2012, pp. 126-130.
[31] Sousa P. B., Ferreira L. L., "Hybrid wired/wireless profibus architectures: performance study based on simulation models," 2010, doi:10.1155/2010/845792.
[32] Ganeriwal S., Tsigkogiannis I., Shim H., Tsiatsis V., Srivastava M. B., Ganesan D., "Estimating clock uncertainty for efficient duty-cycling in sensor networks," 17(3), pp. 843-856, 2009, doi:10.1109/TNET.2008.2001953.
[33] Datasheet CC2420, "2.4 GHz IEEE 802.15.4/ZigBee-ready RF Transceiver," March 2007.
[34] Angrisani L., Bertocco M., Fortin D., Sona A., "Experimental study of coexistence issues between IEEE 802.11b and IEEE 802.15.4 wireless networks," 57(8), pp. 1514-1523, 2008, doi:10.1109/TIM.2008.925346.
[35] Lunze J., Lehmann D., "A state-feedback approach to event-based control," 46(1), pp. 211-215, 2010, doi:10.1016/j.automatica.2009.10.035.
[36] WirelessHART, "IEC 62591, System Engineering Guide," Revision 2, October 2010.