Multihop Data Delivery Virtualization for Green Decentralized IoT

1 The Youth College of Political Science, Inner Mongolia Normal University, 81 Zhawudalu Street, Saihan District, Hohhot 010022, China
2 The Graduate School of Informatics and Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan
3 The VTT Technical Research Centre of Finland, P.O. Box 1100, 90571 Oulu, Finland
4 The Information Technology Center, Nagoya University, Chikusa-ku, Nagoya, Japan
5 The Information Systems Architecture Research Division, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan


Introduction
In recent years, Internet of Things (IoT) technologies have attracted great interest [1,2]. As the number of IoT devices grows, it becomes difficult to provide all communications over infrastructure-based wireless networks (including cellular networks and wireless access point-based communications) because of the cost of deploying and maintaining the infrastructure. Decentralized communication technologies can provide a low-cost and efficient solution for the emerging wireless IoT thanks to their flexibility and expandability. However, the limited wireless resources pose new challenges for the design of an efficient end-to-end multihop transmission scheme.
Transmission control protocol (TCP) [3,4] is a widely used transport layer protocol which provides reliable end-to-end data delivery. In order to avoid network congestion, the TCP sender node adapts its congestion window size according to the reception status of data segments, which are acknowledged by the destination node. The rationale behind this is that TCP assumes the main reason for packet loss is network congestion. However, this is not true in wireless networks, especially for multihop communications, where weak signal strength and packet collisions incurred by random backoff are primarily responsible for packet losses.
In a multihop wireless communication, the end-to-end packet loss probability increases with the number of hops, which results in a small congestion window size at the TCP sender (low throughput) and hence a large end-to-end delay. Thus, there is an urgent need for an efficient protocol to handle multihop data transmissions in an IoT environment.
This paper focuses on the efficient utilization of wireless resources in multihop wireless communications. For sensor data collection, multihop communications are usually required, while real-time data exchange between the source node and the destination node is not. For example, a sensor node could upload large files, such as video data, to a cloud server for surveillance purposes. For this kind of application, the main objective is to successfully transfer a large amount of data with a small amount of wireless resources. Such applications typically require high throughput but do not require a real-time acknowledgment (response) from the receiver side (the destination node). In this paper, we propose a multihop data delivery virtualization approach which utilizes hop-by-hop acknowledgment instead of end-to-end feedback. The proposed protocol achieves higher throughput and a shorter transmission time, which is more energy efficient. The main contributions of this paper are as follows.
(i) We propose a protocol which efficiently solves the performance degradation problem of multihop TCP transmissions in a wireless IoT environment by virtualizing a multihop data transmission into multiple one-hop reliable transmissions. The protocol is easy to implement as it works on top of the TCP/IP protocol stack and does not require any modification to existing protocols. (ii) We conduct experiments with real devices and computer simulations to evaluate the protocol. The results show that the proposed protocol achieves a significant throughput improvement and delivery time reduction compared to the conventional approach.
The remainder of the paper is organized as follows. Section 2 gives a brief survey of related work. We give a detailed description of the proposed protocol in Section 3 and perform a theoretical analysis in Section 4. Experimental results and simulation results are presented in Sections 5 and 6, respectively. Finally, we draw our conclusions and present future work in Section 7.

Related Work
Several studies have discussed transport layer solutions for efficient data transmissions in multihop wireless networks. Some of them try to improve the congestion control of TCP, while others design a new congestion control algorithm from scratch. Existing work also includes approaches specifically designed for green IoT networks.
2.1. TCP-Based Approaches. Talau et al. [5] have proposed an early congestion control approach which adjusts the size of the congestion window according to the length of the router queue. This approach is efficient in a congested scenario where the queue is long. However, the performance problem in a multihop, lossy, and less congested scenario is not discussed. Zhang et al. [6] have proposed a bandwidth delay product (BDP) estimation method and designed a congestion control approach based on it. The BDP estimation solves the bandwidth overestimation problem by taking into account the contention delay at the MAC layer. In a multihop lossy network, however, the network bandwidth is underestimated due to the high probability of segment losses. Schubert and Bambos [7] have proposed an approach which freezes the congestion window size when deep fading happens in order to maintain a high sending rate at the sender side. Multihop transmissions are not addressed adequately in [7] as congestion control is only conducted at the source node. Jude and Kuppuswami [8] have proposed a scheme in which the initial congestion window size is set based on the advertised window size. This is not suitable for a multihop TCP flow where the intermediate nodes do not advertise a window size. Lee et al. [9] have proposed an approach which changes the slow start threshold in case of momentary link instability. However, it is difficult to know the reason for a segment loss, especially in multihop communications. None of these protocols [5][6][7][8][9] adequately addresses the TCP performance drop in a multihop wireless IoT environment.

2.2. Congestion Control Algorithms Specifically Designed for IoT. Li et al. [10] have proposed a congestion window adaptation scheme which employs a Q-learning algorithm based on network conditions. Since the observation of the network condition is limited to delay variance, [10] cannot solve the problem of a lossy wireless environment. Liaqat et al. [11] have proposed a social-similarity-aware congestion avoidance protocol which performs data transfer over TCP by taking advantage of similarity-matching social properties. Since it is difficult to obtain the social relationships between nodes, [11] cannot provide a general solution. Govindan and Azad [12] have analyzed content delivery probability and delay in the message queue telemetry transport (MQTT) protocol for wireless sensor networks. However, [12] does not discuss how to improve the performance.
The study in [13] proposes an advanced congestion control algorithm for the constrained application protocol (CoAP). The core concept of [13] is an accurate estimation of the round trip time. Al-Kashoash et al. [14] have proposed GTCCF, a game theory-based congestion control framework for IEEE 802.15.4 networks. GTCCF takes into account node priorities and application priorities. These works [13,14] do not discuss the small congestion window size problem in multihop lossy networks.

2.3. Green IoT Solutions. Arshad et al. [15] have provided a survey of green IoT technologies including energy efficient data centers, energy efficient transmission of sensor data, and energy efficient policies. Li et al. [16] have proposed an adaptive network coding scheme to improve the transmission efficiency in IoT networks. Because it depends on software-defined wireless networking technology, [16] is difficult to implement in conventional networks. Liu and Ansari [17] have provided an architecture which utilizes an overlay spectrum sharing approach to facilitate device-to-device communications for IoT in heterogeneous cellular networks. Wali et al. [18] have proposed a Time-Spatial Randomization (TS-R) technique to mitigate the radio access network overload problem in a Long-Term Evolution-Advanced (LTE-A) cell. These studies [15][16][17][18] do not discuss green IoT technologies in decentralized environments.

Problem Definition and Protocol Concept.
TCP is an end-to-end acknowledgment-based data delivery approach. In TCP, the sender node adapts the sending rate using a congestion control algorithm according to the acknowledgment packets successfully received (see Figure 1). The congestion control algorithm consists of two phases, namely, the slow start phase and the congestion avoidance phase. In the slow start phase, the congestion window size is initialized to 1 maximum segment size (MSS) and increases by 1 MSS upon the reception of each acknowledgment (ACK) packet. In the congestion avoidance phase, upon the detection of a packet loss, the congestion window size is reduced.
The throughput of a TCP connection is affected by the end-to-end packet loss probability. The end-to-end packet loss probability for an n-hop transmission can be calculated as

P_e = 1 - (1 - p_hop)^n, (1)

where p_hop is the packet loss probability of each hop. In order to show the effect of the hop count on TCP performance, we analyze the TCP congestion window size, which basically determines the throughput of a TCP connection. In the TCP slow start phase, the congestion window size is increased by 1 MSS upon the reception of an ACK, so the window doubles every RTT as long as no segment is lost. When the end-to-end loss probability is P_e, the average congestion window in the slow start phase for the first 5 round trip times (RTTs) can be approximated as

W(n_rtt) = 2^(n_rtt - 1) * (1 - P_e)^(2^(n_rtt) - 1), (2)

where n_rtt is the number (index) of the RTT; the window reaches 2^(n_rtt - 1) only if all 2^(n_rtt) - 1 segments sent so far have been delivered. Figures 2 and 3 show the end-to-end segment loss probability and the corresponding congestion window for different numbers of hops. We can observe that the congestion window size of TCP drops drastically as the hop count increases, resulting in low throughput in a multihop lossy environment. This illustrates the problem of the end-to-end retransmission approach: the hop count drastically increases the probability of packet loss. In order to solve this problem, we propose an efficient multihop data forwarding approach, namely, multihop data delivery virtualization, which virtualizes a multihop TCP communication into multiple one-hop reliable communications. With this virtualization, the proposed protocol enables hop-by-hop acknowledgment instead of end-to-end feedback and therefore utilizes the wireless resources better by avoiding underestimation of the bandwidth.
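These two quantities are easy to compute; the following sketch (ours, not the authors' code) assumes the simplified window model in which slow-start growth continues only while every segment in the burst is delivered:

```python
def end_to_end_loss(p_hop: float, n_hops: int) -> float:
    """Eq.-(1)-style loss: a packet survives only if every one of the n hops succeeds."""
    return 1.0 - (1.0 - p_hop) ** n_hops

def slow_start_window(p_e: float, n_rtt: int) -> float:
    """Approximate average slow-start window (in MSS) after n_rtt RTTs:
    the window reaches 2^(n_rtt-1) only if all 2^n_rtt - 1 segments arrived."""
    return 2.0 ** (n_rtt - 1) * (1.0 - p_e) ** (2 ** n_rtt - 1)
```

For instance, with a 2% per-hop loss, the expected window after 5 RTTs shrinks noticeably as the path grows from 2 to 6 hops, matching the trend in Figures 2 and 3.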

Multihop Data Delivery Virtualization.
The proposed multihop data delivery virtualization approach is designed to solve the problem of TCP in a multihop lossy scenario where the segment loss probability increases with the number of hops. The proposed protocol works between the transport layer and the application layer (see Table 1). Therefore, any transport layer protocol (such as TCP or UDP) can be used. Here, we explain the implementation of the proposed protocol using TCP, as it is the most commonly used transport layer protocol in the Internet. As shown in Figure 4, in the proposed approach, the data segments are acknowledged hop by hop. The proposed approach works on top of the transport layer and uses multiple one-hop TCP connections to establish a multihop end-to-end connection. Since each one-hop communication is conducted over TCP, the congestion window size at each TCP sender node is not affected by the number of hops between the source node and the destination node. This facilitates an efficient use of wireless resources. Fairness among multiple flows is also ensured by each TCP sender node, as the proposed protocol does not require any modification to TCP. As we can observe from Figures 1 and 4, the proposed protocol does not increase the overhead in terms of ACK packets compared to the conventional multihop end-to-end TCP approach. Instead, the proposed protocol reduces the data transmission overhead when a packet loss occurs by conducting retransmissions from an intermediate node, whereas the ordinary approach retransmits from the source node.
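The retransmission-overhead argument can be made concrete with a back-of-the-envelope model (ours, not the paper's): with per-hop loss probability p, an end-to-end retransmission repeats all n hops, while hop-by-hop acknowledgment retransmits only over the failed hop.

```python
def expected_tx_end_to_end(p: float, n_hops: int) -> float:
    # One end-to-end attempt costs n_hops transmissions and succeeds with
    # probability (1-p)^n_hops, so the expected cost is n / (1-p)^n
    # (geometric number of attempts).
    return n_hops / (1.0 - p) ** n_hops

def expected_tx_hop_by_hop(p: float, n_hops: int) -> float:
    # Each hop is acknowledged independently: 1/(1-p) transmissions per hop.
    return n_hops / (1.0 - p)
```

For p = 0.1 and 5 hops, the end-to-end model needs about 8.5 transmissions per delivered packet versus about 5.6 hop by hop, which is the intuition behind the protocol's resource savings.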

Protocol Implementation Details.
In the proposed protocol, a multihop data transmission session is maintained by a control message called "SESS INFO" which contains the fields of session ID, source IP, and destination IP (see Table 2). The session ID field is generated based on the flow identifier. As shown in Figure 5, the source node of each data flow transmits this information to the next hop node based on the routing information (the route to the destination can be acquired by any routing protocol, such as [19,20]). After the reception of a "SESS INFO" message, each node maintains the session information locally. The corresponding information fields are shown in Table 3, where "Loc Pointer" denotes the pointer to the corresponding session data stored locally (a local buffer is used to store the unacknowledged data). The next hop node then sends the same information to the downstream node toward the destination node. In this way, the session information is exchanged among all the forwarder nodes. As shown in Figure 5, node 0 sends data to node 1, then node 1 forwards the data to its next hop, node 2, and so on. Each data flow is a one-hop TCP transmission, and therefore a multihop data flow is virtualized into multiple one-hop data flows. As a result, the sending rate (congestion window size for TCP) at each sender node is improved, which means better utilization of wireless resources.
Algorithms 1 and 2 show the actions at each source node and forwarder node, respectively. Upon the reception of the first packet of a certain flow, the source node sends the corresponding session information, specifically the "SESS INFO" message, to the next hop node. The "SESS INFO" message is forwarded further until it reaches the destination node. Each forwarder node maintains the session information about each data flow locally and acknowledges the data packets received based on the corresponding session information.
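The session-setup logic above can be sketched roughly as follows. The SessInfo fields mirror Table 2 and the routing table is assumed to be supplied by the routing protocol; all class and attribute names here are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class SessInfo:            # fields follow Table 2
    session_id: int        # generated from the flow identifier
    src_ip: str
    dst_ip: str

class ForwarderNode:
    def __init__(self, my_ip: str, next_hop: dict):
        self.my_ip = my_ip
        self.next_hop = next_hop   # dst_ip -> next-hop IP, from the routing protocol
        self.sessions = {}         # session_id -> (SessInfo, local buffer), cf. Table 3

    def on_sess_info(self, info: SessInfo):
        """Store the session locally and return the next hop to forward
        the SESS INFO message to, or None if this node is the destination."""
        self.sessions[info.session_id] = (info, [])
        if self.my_ip == info.dst_ip:
            return None
        return self.next_hop[info.dst_ip]
```

Propagating the message hop by hop in this way reproduces the chain in Figure 5: each forwarder learns the session before any data for it arrives.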

Cooperation with Routing Agent.
Due to topology changes, route breaks can occur. The handling of route errors is the responsibility of the routing agent. In addition, the proposed protocol reports link failure events to the routing agent in order to eliminate the effect of route failures on the network performance. As shown in Algorithm 3, when a link failure happens, the event is reported to the routing agent so that the end-to-end route can be reoptimized. The node that detected the link failure transmits the data in its local buffer by establishing a new route to the destination.

Theoretical Analysis
Here we analyze the performance of the proposed protocol for delay tolerant applications such as sensor data collection.
In the proposed protocol, since the traffic source node does not require the acknowledgment of a data packet from the destination node, the multihop transmission can be separated into multiple one-hop transmissions, similar to the delay tolerant approach in [21]. Therefore, the proposed protocol can reduce the number of concurrent transmission nodes, resulting in more efficient wireless resource utilization.

Algorithm 3: Actions when a link failure happens.
Report the link failure event to the routing agent.
if (the local buffer is not empty) then
    Contact the routing agent to create a new route to the destination.
    Send the data in the buffer to the destination using the route.
end if
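The link-failure handling of Algorithm 3 can be sketched as follows; the Node and RoutingAgent classes are hypothetical stand-ins for the routing agent interface, not the authors' API.

```python
class RoutingAgent:
    def __init__(self):
        self.failures = []                 # link failures reported so far

    def report_link_failure(self, node_name):
        self.failures.append(node_name)

    def new_route(self, node_name, dst):
        # Hypothetical: return a repaired route to the destination.
        return [node_name, dst]

class Node:
    def __init__(self, name):
        self.name = name
        self.buffer = []                   # locally buffered unacknowledged data
        self.sent = []

    def send(self, pkt, route):
        self.sent.append((pkt, tuple(route)))

def on_link_failure(node, agent, dst):
    agent.report_link_failure(node.name)        # report the event to the routing agent
    if node.buffer:                             # if the local buffer is not empty
        route = agent.new_route(node.name, dst) # create a new route to the destination
        for pkt in node.buffer:
            node.send(pkt, route)               # flush buffered data over the new route
        node.buffer.clear()
```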

Probability of Collision.
In the IEEE 802.11p standard, the backoff time is a random number drawn from a uniform distribution over the interval [0, CW], where CW is the current contention window. If two sender nodes are located closer than the sensing range and choose the same backoff value, there will be collisions at some receiver nodes. Since a transmission is successful only when all sender nodes choose different backoff values, the probability of collision for n sender nodes can be calculated as

P_col(n) = 1 - prod_{i=1}^{n-1} (1 - i / (CW + 1)), (3)

where n is the number of sender nodes. In a multihop communication scenario, if N_f is the number of traffic flows, the collision probability of the conventional end-to-end approach is P_conv = P_col(2 N_f), while in the proposed protocol it is P_prop = P_col(N_f). As shown in Figure 6, the proposed protocol attains a lower collision probability by reducing the number of concurrent sender nodes.
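The collision probability can be computed directly; this sketch (ours) follows the birthday-problem form above, with the text's 2·N_f versus N_f concurrent senders for the two approaches:

```python
def p_collision(n_senders: int, cw: int = 15) -> float:
    """Probability that at least two of n senders draw the same backoff
    slot from the CW + 1 uniform slots in [0, CW]."""
    slots = cw + 1
    p_distinct = 1.0
    for i in range(n_senders):
        p_distinct *= (slots - i) / slots   # the i-th sender must avoid i taken slots
    return 1.0 - p_distinct

def p_conventional(n_flows: int, cw: int = 15) -> float:
    return p_collision(2 * n_flows, cw)     # end-to-end: 2 * N_f concurrent senders

def p_proposed(n_flows: int, cw: int = 15) -> float:
    return p_collision(n_flows, cw)         # hop-by-hop: N_f concurrent senders
```

With two senders and CW = 15 this gives 1/16, and the proposed variant is strictly lower for any N_f ≥ 1, reproducing the ordering in Figure 6.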

Probability of End-to-End Packet Loss.
A packet can be lost due to either collisions or weak signal strength. The packet loss probability for a 1-hop transmission can be calculated as

p_1 = 1 - (1 - P_col)(1 - p_s(d)), (4)

where p_s(d) is the packet loss probability due to weak signal strength at internode distance d. For an n-hop transmission, the end-to-end loss probability is

P_e = 1 - (1 - p_1)^n, (5)

where n is the number of hops between the two communicating nodes. Based on (5), we show the TCP segment loss probability for various numbers of hops in Figure 7. Since the proposed approach reduces the probability of collisions and avoids multihop end-to-end retransmissions, the probability of TCP segment loss can be reduced significantly.
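Combining the two loss sources is straightforward; in this sketch (ours) p_sig stands in for the signal-strength term p_s(d), treated here as an assumed constant rather than a measured function of distance:

```python
def p_hop(p_col: float, p_sig: float) -> float:
    # One-hop loss: the packet is lost if it collides OR the signal is too weak.
    return 1.0 - (1.0 - p_col) * (1.0 - p_sig)

def p_end_to_end(p_col: float, p_sig: float, n_hops: int) -> float:
    # Losses compound over n independent hops.
    return 1.0 - (1.0 - p_hop(p_col, p_sig)) ** n_hops
```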

Experimental Results
We conducted real-world experiments using the devices shown in Figure 8. Each node is equipped with an IEEE 802.11g radio interface. In order to show the effect of the proposed protocol more clearly, we used a chain topology where all the devices are deployed on a straight line. The number of hops from the source node to the destination node was 2. The proposed protocol was compared with conventional TCP, which uses end-to-end acknowledgment to set the congestion window size.
Figure 9 shows the TCP throughput for different internode distances. We can observe that the proposed protocol achieves up to 50% throughput improvement compared to the conventional end-to-end acknowledgment approach of TCP. This is because the proposed protocol utilizes the wireless resources better by using hop-by-hop acknowledgment, which results in a higher sending rate (larger congestion window size) at each node.

Simulation Results
In order to examine the performance of the proposed protocol more thoroughly, we conducted computer simulations using QualNet 6.1 [22]. The specific simulation parameters are shown in Table 4; the other parameters were the default settings of QualNet 6.1. The proposed protocol was compared with the conventional end-to-end retransmission approach (multihop TCP). In the following simulation results, the error bars indicate 95% confidence intervals.

Effect of Various Internode Distances.
The number of hops from the source node to the destination node was 2. Figure 10 shows the throughput for various internode distances. The proposed protocol achieves larger throughput than the conventional end-to-end retransmission approach in all scenarios. By using the multihop data delivery virtualization approach, the proposed protocol shows up to 50% throughput improvement over TCP, which can be expected to provide significantly higher quality of service to end users. Figure 11 shows the required time for sending 10 MB of data. The proposed protocol efficiently reduces the required active time (including transmission time, contention time, and all other overheads), which makes it more energy and resource efficient.

Effect of Various Numbers of Hops.
We evaluated the effect of the hop count at different internode distances. Figures 12, 13, and 14 show the throughput for various numbers of hops at short, medium, and long internode distances, respectively.
The simulation results show that the throughput improvement of the proposed protocol depends on both the internode distance and the number of hops. The advantage of the proposed protocol becomes more notable as the number of hops increases. As the number of hops increases, the end-to-end segment loss probability increases, resulting in a smaller congestion window size. This is the main reason for the throughput degradation of the conventional approach.
The performance gain of the proposed protocol also depends on the internode distance. When the number of hops is small (2 in our case), the lower the packet loss probability of a link, the larger the gain. However, when the number of hops is large (10 in our case), the proposed protocol shows the highest gain at the short internode distance. This is because a larger internode distance incurs a higher delay at each hop due to retransmissions, which affects the overall throughput. Figures 15, 16, and 17 show the required time for sending 10 MB of data at short, medium, and long internode distances, respectively. We can observe that the proposed approach shortens the time by up to 60%. This improvement is significant, especially for battery-powered wireless sensor networks, as our protocol enables more efficient duty cycling.

Conclusions
We proposed a multihop data delivery protocol for the decentralized IoT environment. The proposed protocol uses multiple one-hop data flows to virtualize a multihop end-to-end data transmission. By using hop-by-hop acknowledgment, the proposed protocol provides higher throughput than the conventional end-to-end acknowledgment approach used by TCP. We conducted a theoretical analysis, real-world experiments, and computer simulations to evaluate the proposed protocol and showed its advantage over the conventional approach. The evaluation results show that the proposed protocol achieves up to 50% throughput improvement over the end-to-end approach in 3-hop scenarios. By shortening the required transmission time at each node, the proposed protocol provides a more energy and cost efficient solution for data transmissions in a multihop decentralized IoT environment.

Figure 1: Data and acknowledgment in TCP (the numbers on the arrows show the transmission order of packets, and the dotted circle shows the transmission range of node 0).

Figure 2: End-to-end segment loss probability for various numbers of hops.

Figure 3: TCP congestion window size for various numbers of hops.

Figure 4: Multihop data delivery virtualization (numbers on the arrows show the transmission order of packets).

Figure 6: Collision probability for different numbers of traffic flows.
Figure 7: TCP segment loss probability for various numbers of hops.

Figure 8: Devices used for the experiment.
Figure 9: TCP throughput for different internode distances.

Table 3: Session information maintained locally.