ProNDN: MCDM-Based Interest Forwarding and Cooperative Data Caching for Named Data Networking



Introduction
Over the last decade, the number of devices connected to the Internet has been rapidly increasing due to the proliferation of emerging technologies such as the Internet of Things, artificial intelligence, and blockchain [1]. According to the Cisco Annual Internet Report [2], there will be 29.3 billion networked devices by 2023. Even though global network performance will improve significantly, e.g., fixed broadband speed and mobile network connection speed are expected to reach 110.4 Mbps and 43.9 Mbps in 2023, respectively, the fast growth of connected devices still puts high pressure on the underlying Internet infrastructure, which was developed in the 1970s. Today's Internet has exceeded all expectations for facilitating conversations between communication endpoints but shows signs of aging when it meets next-generation content-oriented services and applications [3]. Thus, in order to keep pace with a changing world, a future Internet architecture, Named Data Networking (NDN) [4], has been regarded as the most promising candidate to drive further growth and success of the future Internet.
In NDN, all communications are performed by using Interest and Data packets, both of which carry data (or content) names rather than host (or physical location) addresses.
The data names are hierarchically structured like URLs; e.g., the first segment of the authors' paper may have the name /marshall.edu/cs/congpu/papers/ndn2020.pdf/segment1, where "/" delineates name components in text representations. This hierarchical structure allows applications to represent the context and relationships of data elements and facilitates traffic demultiplexing. As shown in Figure 1, to retrieve data, a data consumer (e.g., PC_i) first sends out an Interest packet piggybacked with the name of the desired data (e.g., X). When a router receives the Interest packet, it forwards the Interest packet toward the data producer(s) based on the information in the forwarding table. Along the forwarding path, any router (e.g., R_1, R_3, or R_5) or data producer (e.g., DS_j) that has the requested data can reply with a Data packet piggybacked with the requested data. Then, the Data packet is forwarded along the reverse path of the Interest packet back to the data consumer (e.g., PC_i). In addition, when a router (e.g., R_1, R_3, or R_5) receives the Data packet, it caches the piggybacked data in the caching table in order to satisfy future Interests that request the same data.
Designing and evaluating stateful forwarding and in-network caching has been a major challenge within the overall NDN research area [5]. Since NDN was proposed in 2010, many research efforts have focused on this challenge and a rich literature has developed. The work in [6] stands out as one of the notable landmarks: it sketches a basic picture of NDN's forwarding daemon and describes an initial design of stateful forwarding and in-network caching. However, the conventional stateful forwarding approach and its later variants [7, 8] fail to consider multiple network metrics when assessing the status of outgoing interface alternatives, which leaves the forwarding strategy neither adaptive nor sensitive to network condition changes. In addition, the default in-network caching strategy simply stores each received Data packet regardless of various caching constraints and criteria. As a result, the routers in the vicinity of data producers typically incur excessive caching overhead due to frequent data retrieval requests from remote data consumers. Consequently, the challenge of improving stateful forwarding as well as in-network caching has attracted the attention of the NDN research community.
In this paper, we propose ProNDN, a novel stateful forwarding and in-network caching strategy for NDN networks. ProNDN consists of multicriteria decision-making (MCDM) based Interest forwarding and cooperative data caching.
The MCDM-based Interest forwarding exploits MCDM theory to select the outgoing interface used to retrieve the desired data, so the forwarding strategy is adaptive and responsive to diverse network conditions. Moreover, the cooperative data caching complements the default in-network caching strategy to overcome the challenge of excessive caching overhead and to efficiently support data access. Our major contributions are briefly summarized as follows: (1) We propose ProNDN with extensibility and flexibility in mind, so that additional network metrics can easily be included. (2) The cooperative data caching approach is seamlessly integrated with the default in-network caching strategy and can thus be regarded as an additional cache policy in the NDN forwarding daemon. (3) We revisit prior forwarding and caching strategies, ConNDN [6, 7] and liteNDN [8], and modify them to work in our framework for performance comparison.
We develop a customized discrete event-driven simulation framework using OMNeT++ [9] and evaluate performance through extensive simulation experiments in terms of Interest satisfaction ratio, Interest satisfaction latency, hop count, cache hit ratio, and Content Store utilization ratio. The simulation results show that ProNDN can improve the Interest satisfaction ratio, reduce the Interest satisfaction latency, and lower the hop count and Content Store utilization ratio, indicating a viable stateful forwarding and in-network caching strategy for NDN networks.
The rest of the paper is organized as follows. Prior forwarding and caching strategies are presented and analyzed in Section 2. In Section 3, the basic operations of NDN's stateful forwarding and in-network caching and the architecture of ProNDN are presented. The MCDM-based Interest forwarding and cooperative data caching are presented in Sections 4 and 5, respectively. Section 6 focuses on simulation results and their analyses. Finally, concluding remarks and future research directions are provided in Section 7.

Related Work
In this section, we present and analyze a variety of up-to-date stateful forwarding and in-network caching strategies in NDN.

Stateful Forwarding Strategy.
The authors in [8] propose a cooperative forwarding strategy for NDN networks, where routers share information such as data names and interfaces to optimize their packet forwarding decisions and estimate the probability that each downstream path can swiftly retrieve the requested data. However, each router needs to collect information about the names of data being exchanged from neighboring routers, which generates a large number of control messages and in turn increases communication overhead. In [10], a forwarding strategy is proposed to balance the tradeoff between network overhead and performance satisfaction in Internet of Things environments: each node overhears Data packets and learns a cost value through reinforcement, which then guides its forwarding decisions. The authors in [11] propose a forwarding strategy named IFS-RL based on reinforcement learning.
IFS-RL trains a neural network model that chooses appropriate interfaces for forwarding Interests based on observations of the performance of past decisions collected by a routing node. IFS-RL achieves its goals of improving throughput and reducing packet drop rate but falls short on load balancing.
In [12], a forwarding strategy is proposed for persistent Interests in NDN, where forwarding decisions are based on a combination of information from the forwarding information base and probing results. Clients issue probing Interests in order to rate paths through the network, and all probe-receiving routers can use them both to evaluate the performance of already known paths and to explore new, possibly better paths. Nevertheless, the probing Interest packets significantly increase network traffic and cause other issues such as congestion and packet loss. The authors in [13] exploit the partially observable Markov decision process (POMDP) to design an NDN request forwarding mechanism based on the key concept of events. Since the exact optimal solution of POMDP problems is in general extremely computationally demanding, a simulation-based optimization algorithm is also proposed to find an approximate optimal solution. In [14], a deep reinforcement learning-based forwarding strategy is proposed for NDN, where details such as data content, interface status, and network states are first collected during the forwarding process. The collected information is then used as input for deep reinforcement learning, whose trained result guides the forwarding of Interest packets. The authors in [5] provide a list of requirements for the NDN forwarding plane and compare all available schemes proposed for it based on the data structures utilized. In addition, this survey discusses issues, challenges, and directions for future research.

In-Network Caching Strategy.
In [15], the authors propose a sum-up Bloom-filter-based request node collaboration caching (BRCC) approach for NDN networks, where different forms of caching are deployed for different types of data content. In addition, BRCC uses the sum-up Bloom filter to enhance the data content matching rate and decrease searching time. Simulations indicate that BRCC can improve cache hit ratio and caching efficiency. However, BRCC makes caching decisions based on the subscriber's request frequency only, which offers poor extensibility when additional caching criteria are considered. Araújo et al. [16] introduce a shared caching in named data networking mobile (SCaN-Mob) strategy which aims to alleviate the side effects of producer mobility by adopting opportunistic content caching in mobile NDNs. In SCaN-Mob, upon receiving a content request, the mobile producer carries out a round-robin selection of a device in the vicinity to store a copy of the searched content. SCaN-Mob can reach greater content diversity, thereby increasing the likelihood of satisfying Interest requests during the producer's unavailability periods. However, wireless devices usually have limited memory storage, and the opportunistic caching strategy may select a device that does not have enough storage to cache the data content.
The authors in [17] present a dynamic popularity-based caching permission strategy (DPCP) for NDN networks. DPCP uses the Interest and Data packets to carry content popularity, so routers along the path can obtain the content popularity information and apply a dynamic popularity threshold to make cache permission decisions. In addition, DPCP deploys a cache control flag to avoid caching the same redundant copies in adjacent routers, which reduces the amount of redundant data in the network. However, it is very challenging to set an accurate popularity threshold for making caching decisions. In [18], a probability-based caching strategy with consistent hashing is proposed for NDN networks, where the caching decision is made based on a probability calculated by jointly considering the content's popularity, the node's betweenness, and the distance to consumers. In [19], the authors propose a two-layer hierarchical cluster-based caching solution to improve in-network caching efficiency. The network is grouped into several clusters, and a cluster head is nominated for each cluster to make caching decisions. However, the cluster head has to collect and distribute information on node importance based on betweenness centrality, content popularity, and a probability matrix within its cluster, which introduces a significant amount of communication overhead. Saxena et al. [20] broadly categorize caching schemes into cache placement and cache replacement.
Cache placement decides whether or not to cache the data content in the network, while cache replacement evicts data from the cache when new data arrive.
Our Approach.
NDN's stateful forwarding strategy must effectively evaluate multiple outgoing interface alternatives and objectively choose the best interface(s) to forward Interest packets. In summary, most prior forwarding strategies are implemented as either adaptive forwarding or context-aware forwarding, where various optimization or machine learning techniques are used to strike a balance between several performance metrics and facilitate interface adaptation. However, little attention has been paid to a multicriteria decision-making (MCDM) based stateful forwarding strategy for NDN networks, where each outgoing interface alternative is evaluated in terms of multiple network metrics and the optimal outgoing interface is chosen to forward the Interest packet based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). By taking multiple network metrics into account, the overall stateful forwarding framework is adaptive and responsive to diverse network conditions. Another desirable feature is that the MCDM-based Interest forwarding is designed with extensibility and flexibility in mind; thus, additional network metrics can easily be included for potential extension. In-network caching is fundamentally important to support the basic concepts of the NDN communication paradigm and brings several benefits, such as dissociating data from their producers, relieving communication overhead at the data producer side, and reducing network load and data dissemination latency. Nonetheless, little work has been done on cooperative data caching that can overcome the challenge of excessive caching overhead and efficiently support data access in NDN networks. In addition, the cooperative data caching approach is regarded as an additional cache policy in the NDN forwarding daemon. Therefore, the proposed caching approach can be seamlessly integrated with the default in-network caching strategy to efficiently support data access in NDN networks.

Preliminaries and System Overview
In this section, we first present and analyze NDN's stateful forwarding and in-network caching, and then we introduce the overall architecture of the proposed ProNDN.

Stateful Forwarding.
As shown in Figure 2, a data consumer can retrieve data by issuing an Interest packet piggybacked with the name of the desired data to the network. When a router receives the Interest packet, it first checks whether its Content Store already caches the desired data. Here, the router's Content Store is a temporary cache of Data packets it has received. If the desired data exist in the Content Store, the router replies with a Data packet piggybacked with the desired data back to the consumer along the reverse path of the Interest packet. Otherwise, the router checks the name of the desired data against each entry in the Pending Interest Table, which in NDN records the Interest packets that have been forwarded but not yet satisfied. The forwarding strategy makes the Interest packet forwarding decision based on the information stored in the Forwarding Information Base, where each entry records a name prefix and a list of outgoing interfaces together with their associated forwarding preferences.
The forwarding preference reflects the forwarding policy as well as the cost of the forwarding path, which is typically calculated using certain network metrics. For example, the BestRoute [7] adopts a coloring scheme to represent the working status of each outgoing interface, based on which the forwarding strategy selects the best outgoing interface to forward an Interest packet. For each name prefix, all outgoing interfaces are ranked based on the Interest rate limit, and the highest ranked Green outgoing interface is always selected to forward an Interest packet. If there is no Green outgoing interface, the highest ranked Yellow outgoing interface is adopted. Red outgoing interfaces are never used because they cannot bring data back. The forwarding strategy in the BestRoute can improve link utilization. However, the BestRoute fails to detect and respond to network condition changes in a timely manner because it considers only one network metric (i.e., the Interest rate limit) when making forwarding decisions. For instance, if the Interest packet forwarding rate reaches the rate limit, the outgoing interface will sooner or later experience traffic congestion, in which case the second-ranked Green outgoing interface would be the better option for Interest packet forwarding.
Thus, the highest ranked Green outgoing interface may not always be the best option under varying network conditions. In summary, the forwarding strategy plays an important role in the NDN forwarding plane. In order to improve network performance and respond to network condition changes accurately and astutely, the forwarding strategy should take multiple network metrics into account when making Interest packet forwarding decisions.
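The BestRoute selection logic described above can be sketched as follows. This is a minimal model, not the actual NFD implementation; the interface names and rank values are hypothetical, and we assume lower rank numbers mean higher ranking.

```python
from dataclasses import dataclass

GREEN, YELLOW, RED = "green", "yellow", "red"

@dataclass
class Interface:
    name: str
    color: str   # working status under the coloring scheme
    rank: int    # ranking derived from the Interest rate limit (lower = better)

def best_route_select(interfaces):
    """Pick the highest-ranked Green interface; fall back to Yellow; never use Red."""
    for color in (GREEN, YELLOW):
        candidates = [i for i in interfaces if i.color == color]
        if candidates:
            return min(candidates, key=lambda i: i.rank)
    return None  # no usable interface: the Interest cannot be forwarded

faces = [Interface("eth0", RED, 1), Interface("eth1", YELLOW, 2), Interface("eth2", GREEN, 3)]
print(best_route_select(faces).name)  # eth2: the only Green interface wins over Yellow
```

Note how the rule ignores everything except color and rank; this single-metric view is exactly what makes BestRoute slow to react to congestion on the top-ranked Green interface.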

In-Network Caching.
When the data producer, or a router that caches the desired data in its Content Store, receives the Interest packet, it replies with a Data packet piggybacked with the desired data back to the data consumer. When a router receives the Data packet from an upstream router or the data producer, it first searches for the piggybacked data name in the Pending Interest Table. If an entry with a matching data name is found, the router forwards the Data packet to all stored incoming interfaces, caches a copy of the piggybacked data in the Content Store, and removes all entries with the matching data name from the Pending Interest Table. Otherwise, the router drops the Data packet because the data are unsolicited and may pose security risks to the forwarder. However, there are also cases when unsolicited Data packets need to be stored in the Content Store. In order to purge stale entries in the Pending Interest Table, a lifetime is assigned to each entry; when the lifetime expires, the entry is removed. The default in-network caching relies on storing each received Data packet regardless of various caching constraints and criteria. In a large-scale network with a significant amount of data retrieval traffic, the routers located in the vicinity of data producers receive an excessive number of Interest packets from remote data consumers, which causes enormous caching overhead. As a result, the cache space of these routers can be exhausted in vain. To tackle this issue, liteNDN [8] implements a decision-making mechanism to proactively decide upon caching the received Data packets, where the routers located close to a certain data producer can completely avoid caching Data packets from this data producer. liteNDN can thereby reduce caching overhead.
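The default Data handling described above (PIT lookup, forwarding to all waiting faces, caching, and dropping unsolicited packets) can be sketched as a minimal model. The Router class, its fields, and the face names are illustrative assumptions, not the NFD API.

```python
class Router:
    def __init__(self):
        self.pit = []            # pending entries: {"name": ..., "in_faces": [...]}
        self.content_store = {}  # data name -> cached payload
        self.sent = []           # (face, name) log standing in for real transmission

    def send(self, face, name, payload):
        self.sent.append((face, name))

def on_data(router, name, payload):
    """Default Data handling: match the PIT, forward to all waiting faces,
    cache a copy, and purge matching entries; unsolicited Data is dropped."""
    entries = [e for e in router.pit if e["name"] == name]
    if not entries:
        return "dropped"                  # no PIT entry: the Data is unsolicited
    for entry in entries:
        for face in entry["in_faces"]:
            router.send(face, name, payload)
    router.content_store[name] = payload  # default caching stores every Data packet
    router.pit = [e for e in router.pit if e["name"] != name]
    return "forwarded"

r = Router()
r.pit.append({"name": "/x/d1", "in_faces": ["f1", "f2"]})
print(on_data(r, "/x/d1", b"payload"))  # forwarded
```

The unconditional `content_store[name] = payload` line is precisely the behavior that the cooperative caching of Section 5 replaces.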

MCDM-Based Interest Forwarding
The basic idea of the MCDM-based Interest forwarding is to employ the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to dynamically evaluate outgoing interface alternatives based on multiple network metrics and objectively select an optimal outgoing interface to forward the Interest packet.
TOPSIS is a multicriteria decision-making model that identifies the best alternative as the one nearest to the positive-ideal solution and farthest from the negative-ideal solution [21]. When a router receives an Interest packet to forward, it evaluates all outgoing interface alternatives based on up-to-date network metrics information and calculates the forwarding index of each outgoing interface. Based on the forwarding index, the router ranks all outgoing interface alternatives and selects the highest ranked one to forward the Interest packet. The detailed design of the MCDM-based Interest forwarding is provided as follows.
First, the router establishes a decision matrix with the up-to-date network metrics information for the ranking of outgoing interface alternatives. The structure of the decision matrix can be expressed as

\[ M = \begin{bmatrix} x_{11} & \cdots & x_{1j} & \cdots & x_{1n} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{ij} & \cdots & x_{in} \\ \vdots & & \vdots & & \vdots \\ x_{m1} & \cdots & x_{mj} & \cdots & x_{mn} \end{bmatrix}, \tag{1} \]

where AI_i represents the i-th outgoing interface alternative, i = 1, 2, ..., m; PM_j denotes the j-th network metric, j = 1, 2, ..., n; and x_ij is the crisp value of the j-th network metric for the i-th outgoing interface alternative. Second, the router generates the normalized decision matrix M_norm = (x*_ij) according to

\[ x^{*}_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}}. \tag{2} \]

Since the scales of measurement of the network metrics are not uniform, it is important to normalize the decision matrix to make the crisp values comparable to each other. Third, the router calculates the weighted normalized decision matrix by multiplying the normalized decision matrix by the relative weights of the network metrics. The weighted normalized decision matrix M_wgt = (x⊕_ij) is calculated as

\[ x^{\oplus}_{ij} = w_{j} \cdot x^{*}_{ij}, \tag{3} \]

where w_j represents the relative weight of the j-th network metric. The rationale behind w_j is to adjust the influence of the j-th network metric according to subjective preference; the relative weights can be determined by applying the analytic network process (ANP) [21]. Fourth, the router calculates the separation measurements using n-dimensional Euclidean distance.
The separation between the i-th outgoing interface alternative and the positive-ideal solution, denoted Sol⁺_i, is given as

\[ Sol^{+}_{i} = \sqrt{\sum_{j=1}^{n} \left( x^{\oplus}_{ij} - x^{\oplus+}_{j} \right)^{2}}, \tag{4} \]

where x⊕⁺_j denotes the j-th component of the positive-ideal solution. Similarly, the separation between the i-th outgoing interface alternative and the negative-ideal solution, denoted Sol⁻_i, is

\[ Sol^{-}_{i} = \sqrt{\sum_{j=1}^{n} \left( x^{\oplus}_{ij} - x^{\oplus-}_{j} \right)^{2}}. \tag{5} \]

Based on the separation measurements, the router calculates the relative closeness of the i-th outgoing interface alternative to the ideal solution as

\[ I_{i} = \frac{Sol^{-}_{i}}{Sol^{+}_{i} + Sol^{-}_{i}}, \tag{6} \]

where I_i is the forwarding index of the i-th outgoing interface alternative. I_i lies between 0 and 1, and a larger forwarding index value means better overall performance of the i-th outgoing interface alternative. Finally, the router ranks the forwarding indexes of all outgoing interface alternatives, and the highest ranked outgoing interface is selected as the optimal one to forward the Interest packet. For example, suppose that a router has four outgoing interface alternatives (AI_1 to AI_4) from which to choose to forward the Interest packet. We consider interface utilization ratio, round-trip time (RTT), and NACK ratio as real-time network metrics to calculate the forwarding index of each outgoing interface alternative.
The decision matrix containing the crisp values of the network metrics is shown in Table 2. According to equations (2) and (3), the normalized decision matrix and the weighted normalized decision matrix are calculated and presented in Tables 3 and 4, respectively. Here, the relative weights of interface utilization ratio, RTT, and NACK ratio are set to 0.3, 0.4, and 0.3, respectively. After that, the separation measurements between each outgoing interface alternative and the positive- and negative-ideal solutions are calculated using the data in Table 4, and the results are shown in Table 5.
In the final ranking stage, the forwarding index of each outgoing interface alternative is calculated using equation (6). The calculated forwarding indexes are ranked and listed in Table 6. According to the forwarding index, the ranking order of the four outgoing interface alternatives is AI_2, AI_3, AI…
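The TOPSIS procedure of equations (1)–(6) can be reproduced with a short script. Since Tables 2–6 are not included in this text, the crisp values below are hypothetical; the weights 0.3/0.4/0.3 follow the text, and all three metrics are treated as cost criteria (lower is better).

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS (equations (1)-(6)).
    matrix[i][j]: crisp value of metric j for alternative i.
    benefit[j]: True if larger values of metric j are better (cost metric otherwise)."""
    m, n = len(matrix), len(matrix[0])
    # eq. (2): vector normalization per column
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    # eq. (3): weighted normalized matrix
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal_pos = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    ideal_neg = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    index = []
    for row in v:
        s_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal_pos)))  # eq. (4)
        s_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, ideal_neg)))  # eq. (5)
        index.append(s_neg / (s_pos + s_neg))                                 # eq. (6)
    return index

# Hypothetical crisp values (Table 2 is not reproduced in this text):
# columns = [utilization ratio, RTT (ms), NACK ratio]; rows = AI_1 .. AI_4.
matrix = [[0.6, 40.0, 0.10],
          [0.3, 20.0, 0.02],
          [0.4, 25.0, 0.05],
          [0.8, 60.0, 0.20]]
idx = topsis(matrix, weights=[0.3, 0.4, 0.3], benefit=[False, False, False])
best = max(range(4), key=lambda i: idx[i])
print(f"best interface: AI{best + 1}")  # best interface: AI2
```

With these sample values AI_2 dominates on every metric, so it coincides with the positive-ideal solution and receives the maximal forwarding index of 1, matching the top rank that AI_2 obtains in the paper's example.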

Cooperative Data Caching
In NDN, since each Data packet carries only a data name, it is independent of who requested it or from where it is retrieved [4]. In order to quickly satisfy future Interest packets requesting the same data, a router can choose to cache a copy of a received Data packet in its Content Store. NDN's default in-network caching [6] is designed to store each received Data packet regardless of various caching constraints and criteria. The default caching strategy is incredibly simple and easy to implement. However, the potential issue is that the routers located in the vicinity of data producers receive an excessive number of Interest packets from remote data consumers, which causes enormous caching overhead. In a large-scale network with a significant amount of data retrieval traffic, the Content Store of these routers can be exhausted in vain, which in turn causes more frequent cache replacement operations. Moreover, the routers in the vicinity of data producers may face a worst-case caching overhead of Θ(n), where n is the total number of Interest packets requesting different data from all data consumers. In light of this, we propose a cooperative data caching strategy to overcome the challenge of excessive caching overhead and efficiently support data access in NDN networks. The cooperative data caching strategy consists of two schemes, CacheData and CacheFace, which complement the default in-network caching strategy. In the following, we present CacheData and CacheFace in more detail.
In CacheData, the router caches the received Data packet if more than Δ_In different incoming interfaces request the piggybacked data. Here, Δ_In is a system parameter, Δ_In ∈ {1, 2, ..., n}. The rationale behind CacheData is that if the data are popular, i.e., many Interest packets from different incoming interfaces request the data, the router should cache the received Data packet. The basic idea of CacheData can be explained using Figure 4. Suppose that the data consumers PC_1 and PC_2 are interested in data d_i and send out Interest packets to retrieve d_i through the router R_3. Here, we assume that Δ_In = 1. R_3 receives Interest packets from two different incoming interfaces, connected with PC_1 and PC_2, respectively, and thus it should cache a copy of d_i when receiving the Data packet according to CacheData. Since all Interest packets received by the router R_1 come from R_2, which in turn come from R_3, R_1 and R_2 do not cache the received Data packet. As another example, consider that the data consumers PC_1 and PC_4 send out Interest packets to retrieve the data d_i. With the CacheData scheme, the router R_2 should cache the Data packet, whereas R_1, R_3, and R_4 need not do so. In summary, CacheData is designed to cache Data packets conservatively. In some rare situations, for instance, when most data consumers are interested in certain data at the same time, CacheData might decrease the cache hit ratio because the data are not cached at every intermediate router. However, we do not assume that certain data are requested by all data consumers concurrently in NDN networks.
Thus, CacheData can be adopted to address the challenge of excessive caching overhead as well as to efficiently support data access in NDN networks.
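The CacheData rule reduces to counting distinct incoming interfaces; a minimal sketch, with the face names of Figure 4 used purely as illustrative labels:

```python
DELTA_IN = 1  # system parameter Δ_In from the text

def cache_data_decision(incoming_faces, delta_in=DELTA_IN):
    """CacheData: cache the Data packet only if Interests for it arrived on
    more than delta_in distinct incoming interfaces (i.e., the data is popular)."""
    return len(set(incoming_faces)) > delta_in

# R3 saw Interests for d_i on the faces toward PC1 and PC2 -> cache the Data packet.
print(cache_data_decision(["face-PC1", "face-PC2"]))  # True
# R1 saw Interests for d_i only on the face toward R2 -> do not cache.
print(cache_data_decision(["face-R2", "face-R2"]))    # False
```

Using a set makes repeated Interests on the same face count once, which is what keeps routers like R_1 and R_2 in the example from caching.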
In CacheFace, the router caches the outgoing interface toward the border router that has the data, and uses it to redirect future Interest packets, if its distance (i.e., hop count) to the border router is Δ_hop hop(s) shorter than its distance to the data producer. Here, Δ_hop is the number of hops that a cached interface can save and is given as

\[ \Delta_{hop} = H(\text{producer}) - H(\text{border}), \tag{7} \]

where H(producer) is the number of hops to the data producer and H(border) is the number of hops to the border router. In this paper, we define a border router as a router that is directly connected with data consumer(s); in Figure 4, R_3 and R_4 are border routers. The rationale behind CacheFace is that the data retrieval latency can be reduced if the data can be obtained over a shorter distance, as illustrated by the data consumers in Figure 4. To properly make the CacheFace decision, it is necessary to embed hop count information in the header of each NDN packet. Thanks to the adoption of the type-length-value (TLV) format, an encoding scheme used for optional information elements in NDN, the hop count can easily be added as a new field and type in NDN packets [22]. When the data consumer issues an Interest packet, it initializes the hop count value to zero. When a router receives the Interest packet, it increases the hop count by one and forwards the Interest packet. Thus, each router along the forwarding path knows its hop count distance to the data consumer and border router when receiving the Interest packet. The same idea is applied to the Data packet. In CacheFace, the router only needs to cache the outgoing interface toward the border router when it is closer to the border router than to the data producer. For instance, when the Data packet piggybacked with d_i is forwarded back to both PC_1 and PC_2 along the path, R_1 does not need to cache the outgoing interface toward the border router R_3 because R_1 is closer to the data producer than to R_3.
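A minimal sketch of carrying the hop count as a TLV field and incrementing it at each hop. The 0x80 type code is a hypothetical choice, and this toy encoder uses only the short one-byte type/length form; real NDN packets follow the layout of the NDN packet specification [22].

```python
def tlv_encode(t, value):
    """Minimal TLV encoding (1-byte type and 1-byte length; enough for small fields)."""
    assert t < 253 and len(value) < 253  # stay within the short TLV form
    return bytes([t, len(value)]) + value

HOP_COUNT_TYPE = 0x80  # hypothetical type code for the new hop-count field

def encode_hop_count(hops):
    """Consumer side: embed the initial hop count (zero) in the packet."""
    return tlv_encode(HOP_COUNT_TYPE, hops.to_bytes(1, "big"))

def bump_hop_count(field):
    """Router side: increment the hop count by one before forwarding."""
    t, value = field[0], field[2:]
    return tlv_encode(t, (value[0] + 1).to_bytes(1, "big"))

pkt_field = encode_hop_count(0)        # consumer initializes the hop count to zero
pkt_field = bump_hop_count(pkt_field)  # first-hop router increments it
print(pkt_field[2])  # 1
```

The same field travels in the Data packet on the reverse path, which is how each router learns H(producer) and H(border) for the CacheFace comparison.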
In the cooperative data caching strategy, when a router receives a Data packet, it decides whether to apply CacheData, CacheFace, or default in-network caching based on the following rules: First, CacheData is adopted if the router has received the Interest packets from more than Δ_In different incoming interfaces.
Second, if CacheData is not applicable, CacheFace is applied if the router's distance to the border router is Δ_hop hop(s) shorter than its distance to the data producer.
Third, if neither CacheData nor CacheFace is applicable, the default in-network caching is adopted.
Major operations of the cooperative data caching are summarized in Algorithm 2.
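Algorithm 2 is not reproduced in this text, but the three rules above suggest a decision function along these lines (a sketch, with Δ_In and Δ_hop set to the values used in the evaluation, and the hop counts in the example chosen arbitrarily):

```python
def caching_decision(n_in_faces, h_producer, h_border, delta_in=1, delta_hop=1):
    """Decide how to handle a received Data packet.
    Rule 1 (CacheData): more than delta_in distinct incoming faces requested it.
    Rule 2 (CacheFace): the router is at least delta_hop hop(s) closer to the
    border router than to the data producer, H(producer) - H(border) >= delta_hop.
    Rule 3: otherwise fall back to NDN's default in-network caching."""
    if n_in_faces > delta_in:
        return "CacheData"
    if h_producer - h_border >= delta_hop:
        return "CacheFace"
    return "default"

# A router that saw Interests on two faces caches the data itself:
print(caching_decision(n_in_faces=2, h_producer=3, h_border=1))  # CacheData
# One face only, but the border router is closer than the producer:
print(caching_decision(n_in_faces=1, h_producer=3, h_border=1))  # CacheFace
```

The ordering matters: popularity (CacheData) is checked before proximity (CacheFace), so a popular item is stored locally even when a nearby border router could also serve it.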

Performance Evaluation
6.1. Simulation Testbed and Benchmarks.
We conduct extensive simulation experiments using OMNeT++ [9] to evaluate the performance of ProNDN. 100 nodes are randomly distributed in the network area, where 5 nodes each are selected to serve as data consumers and data producers. To generate random network topologies, we employ the network topology generator BRITE [23], a parametrized topology generator that can be used to study the relevance of possible causes for power laws and other metrics observed in Internet topologies. Table 7 specifies the configured network connectivities: low, medium, and high connectivity, where the second column identifies an integer number of links per router. Figure 5 illustrates three sample random network topologies with different network connectivities. In addition, the Interest packet rate of each data consumer and the packet size are set to 5 pkt/sec and 512 Bytes, respectively. In the existing literature, various packet sizes (i.e., 256, 512, and 1024 Bytes) have been adopted to evaluate the effect of packet size in NDN [24]; thus, a medium packet size of 512 Bytes is adopted as representative in this paper. In [25, 26], the Interest packet rate and the packet size are set to 200 pkt/sec and 1040 Bytes, respectively, for a total traffic rate of 208,000 Bytes per second; in our simulation, the corresponding traffic rate is 2,560 Bytes per second. Thus, we believe that the values of the customized system parameters, such as the Interest packet rate and the packet size, are within a reasonable range. The total simulation time is set to 500 seconds, and Δ_In = 1 and Δ_hop = 1 are adopted. In order to obtain steady performance results, each simulation scenario is repeated 10 times with different randomly generated seeds. In this paper, we measure the performance in terms of Interest satisfaction ratio, Interest satisfaction latency, hop count, cache hit ratio, and Content Store utilization ratio while varying key simulation parameters, including network connectivity and the number of link failures.
Interest satisfaction ratio: the Interest satisfaction ratio is defined as the ratio between the total number of retrieved Data packets and the total number of issued Interest packets.
Interest satisfaction latency: the Interest satisfaction latency is the average elapsed time from when the data consumers issue the Interest packets to when they receive the corresponding Data packets.
Hop count: the hop count is calculated as the total number of links Data packets traversed to satisfy issued Interest packets divided by the total number of Data packets.Cache hit ratio: the cache hit ratio is the ratio of the total number of satisfied Interest packets by Content Store to the total number of received Interest packets.Content Store utilization ratio: the Content Store utilization ratio is calculated as the total number of cached Data packets divided by the size of Content Store.Here, In ⟶ is the set of incoming interfaces that the Interest packets have been received from; CS j : e Content Store at R j ; FIB j : e Forwarding Information Base at R j ; H j (producer): e number of hops between the data producer and R j ; H j (border): the number of hops between the boarder router and R j ; When R j receives a Data packet pkt[d i , Da ta, hop]: We revisit prior forwarding and caching strategies, Con NDN [6,7] and liteNDN [8], and modify them to work in the framework for performance comparison.e basic idea of these two benchmark schemes is briefly discussed as follows: Con NDN : the Con NDN classifies outgoing interfaces based on a coloring scheme.Outgoing interfaces are classified as Green, Yellow, and Red, which indicates that the outgoing interfaces can bring data, may or may not bring data, and cannot bring data, respectively.Interest packets are forwarded to the highest-ranked Green outgoing interface.If no Green outgoing interface is available, the highest-ranked Yellow outgoing interface is chosen to forward Interest packets.In addition, the caching strategy of Con NDN relies on storing each received Data packet.liteNDN: the liteNDN comprises two main components, including cooperative forwarding and heuristicbased caching.e former component leverages shared data names and outgoing interfaces among routers to estimate the most probable paths toward cached versions of the requested data.e latter component implements a decision-making mechanism to 
proactively decide upon caching the received Data packets, where the routers located close to a certain data producer can completely avoid caching Data packets from this data producer.
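As a quick illustration, the evaluation metrics defined above can be computed from raw simulation counters. The following sketch uses hypothetical counter and function names; it is not taken from the paper's simulator.

```python
# Illustrative helpers (names are assumptions) for the four evaluation metrics.

def interest_satisfaction_latency(latencies):
    """Average time from Interest issuance to Data reception."""
    return sum(latencies) / len(latencies)

def hop_count(total_links_traversed, num_data_packets):
    """Average number of links Data packets traverse per satisfied Interest."""
    return total_links_traversed / num_data_packets

def cache_hit_ratio(interests_satisfied_by_cs, interests_received):
    """Share of received Interests answered directly from the Content Store."""
    return interests_satisfied_by_cs / interests_received

def cs_utilization_ratio(num_cached_data, cs_size):
    """Fraction of the Content Store occupied by cached Data packets."""
    return num_cached_data / cs_size
```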

Simulation Results and Analysis
First, we measure the Interest satisfaction ratio against the network connectivity, the number of link failures, and the simulation time in Figure 6. As shown in Figure 6(a), the Interest satisfaction ratio of ConNDN, liteNDN, and ProNDN increases as the network connectivity increases, where the number of link failures is set to zero. With a higher network connectivity, the number of neighbor routers to which a new router connects increases, and thus the network becomes denser. In a denser network, each router has more outgoing interface alternatives from which to select when forwarding Interest packets, and the Interest traffic is distributed among multiple paths without causing congestion. As a result, data producers receive more Interest packets and reply with more Data packets, which causes the Interest satisfaction ratio to increase.
The ProNDN outperforms ConNDN and liteNDN. Since the ProNDN evaluates outgoing interface alternatives based on multiple network metrics and objectively selects an optimal outgoing interface to forward Interest packets, more Interest packets can be delivered to data producers. Correspondingly, more Data packets are replied back to data consumers along the reverse paths of the Interest packets; thus, a higher Interest satisfaction ratio is observed. The liteNDN shows a higher Interest satisfaction ratio than ConNDN.
This is because each router reroutes Interest packets to a closer router that has the requested Data packets, based on the shared data names and interface information; more Data packets can thus be received through shorter routes, and a higher Interest satisfaction ratio is achieved than with ConNDN. (Here, the network connectivity ℘ denotes the number of neighbor routers to which a new router connects when it joins the network [23].) In Figure 6(b), the Interest satisfaction ratio of all three schemes decreases as the number of link failures increases. This is because Interest or Data packets are lost in transmission when a link failure happens, so fewer Interest or Data packets can be received. As a result, a lower Interest satisfaction ratio is obtained. However, the ProNDN still provides the highest Interest satisfaction ratio because it can detect changes in network conditions quickly and select more reliable outgoing interfaces to forward Interest packets. Thus, more Interest packets are received by data producers; in turn, more Data packets are replied back to data consumers, and a higher Interest satisfaction ratio is achieved. Figure 6(c) shows the changes in Interest satisfaction ratio as the simulation time elapses.
Second, we measure the hop count by varying the network connectivity and the number of link failures in Figure 7. In Figure 7(a), the hop count slightly decreases as the network connectivity increases. As each router has more neighbors, it is more likely to find a shorter path over which to forward Interest packets to data producers. Since Data packets are replied along the reverse paths of Interest packets, the number of links Data packets traverse to satisfy issued Interest packets decreases, and a decreasing hop count is observed. The ProNDN obtains the lowest hop count because it evaluates the RTT when selecting the outgoing interface, so a shorter path with a lower RTT can be identified to forward Interest packets. Thus, the lowest hop count is achieved by ProNDN compared to ConNDN and liteNDN. In liteNDN, since a router can reroute Interest packets to a closer router that has the Data packet, Interest packets can be satisfied through shorter routes, which causes the hop count to decrease as well. As shown in Figure 7(b), the hop count of ProNDN, ConNDN, and liteNDN increases linearly as the number of link failures increases. Since there are more link failures in the network, Interest packets might be forwarded over longer paths, which results in an increasing hop count. The ConNDN shows the highest hop count because it only considers the link limit rate when selecting the outgoing interface.
The liteNDN achieves a smaller hop count than ConNDN. This is because the liteNDN relies on sharing data names and interface knowledge among neighbor routers to select shorter routes to forward Interest packets. Third, we measure the Interest satisfaction latency by varying the network connectivity and the number of link failures in Figure 8. As shown in Figure 8(a), the overall Interest satisfaction latency decreases as the network connectivity increases. When the network becomes more connected, data consumers can more easily find a shorter route to forward Interest packets and retrieve the desired data from data producers. Since the Data packets also traverse back to the data consumers along the shorter route, a shorter Interest satisfaction latency can be achieved. Most importantly, the ProNDN still outperforms ConNDN and liteNDN because it considers multiple network metrics to select the best outgoing interface. In addition, the ProNDN adopts CacheData and CacheFace to help retrieve the desired data through shorter routes, where the Interest packets can be satisfied by an intermediate router or rerouted to a closer border router that has the data. In Figure 8(b), the Interest satisfaction latency increases as the number of link failures increases. When there are more link failures in the network, the Interest packets might need to be forwarded through reliable but longer routes. Thus, a higher Interest satisfaction latency is obtained. However, the ProNDN still shows a lower Interest satisfaction latency compared to that of ConNDN and liteNDN.
Fourth, we measure the cache hit ratio and Content Store utilization ratio by varying the network connectivity in Figure 9. It is shown in Figure 9(a) that the cache hit ratio of ConNDN is much higher than that of ProNDN and liteNDN. This is because the ConNDN adopts the conventional caching strategy, where each router stores every received Data packet. As a result, Data packets are cached at every intermediate router, which unsurprisingly improves the cache hit ratio. The ProNDN shows a higher cache hit ratio than liteNDN. Since the ProNDN can satisfy Interest packets through either CacheData or CacheFace, a higher cache hit ratio can be obtained. In liteNDN, a router can reroute Interest packets to the routers that have the desired data only if it has the corresponding data name and interface information; otherwise, the router has to forward the Interest packets to data producers to retrieve the desired data. Thus, the liteNDN delivers the lowest cache hit ratio. In Figure 9(b), it is not surprising to see that the routers in the vicinity of data producers have a 100% Content Store utilization ratio in ConNDN. The reason is that the ConNDN adopts the default in-network caching strategy, which stores each received Data packet disregarding any caching constraints and criteria. Since both ProNDN and liteNDN adopt caching strategies that prevent the routers in the vicinity of data producers from caching every received Data packet, a much lower Content Store utilization ratio is achieved compared to ConNDN. The ProNDN shows a higher Content Store utilization ratio than that of liteNDN because the ProNDN caches the popular Data packets.

Journal of Computer Networks and Communications
Fifth, we measure the Interest satisfaction ratio by changing the number of nodes in Figure 10. Overall, as the number of nodes in the network increases from 100 to 200, the Interest satisfaction ratio of all three schemes increases. The rationale is that more neighbor nodes can be selected to forward Interest packets in the network. As a result, data producers can receive more Interest packets and reply with more Data packets, which causes the Interest satisfaction ratio to increase. When the number of nodes increases from 160 to 200, only a slight increase in the Interest satisfaction ratio is observed. However, the ProNDN still outperforms the other two schemes.

Conclusion and Future Work
In this paper, we proposed the ProNDN, a novel stateful forwarding and in-network caching strategy for NDN networks. The ProNDN consists of multicriteria decision-making (MCDM) based Interest forwarding and cooperative data caching. In the MCDM-based Interest forwarding, each outgoing interface alternative is first evaluated based on multiple network metrics to obtain a forwarding index, which is an indicator of its overall performance. Then, all outgoing interface alternatives are ranked in terms of the forwarding index, and the highest-ranked one is chosen to forward the Interest packet. In addition, the cooperative data caching consists of two schemes: CacheData, which caches the data, and CacheFace, which caches the outgoing interface. For performance evaluation, we considered the interface utilization ratio, round-trip time (RTT), and NACK ratio as real-time network metrics. We also developed a customized discrete event-driven simulation framework using OMNeT++ and evaluated the performance of ProNDN through extensive simulation experiments. The simulation results show that the ProNDN can improve the Interest satisfaction ratio and Interest satisfaction latency as well as reduce the hop count and Content Store utilization ratio, indicating a viable stateful forwarding and in-network caching strategy for NDN networks.
As future work, we plan to investigate the analytic network process (ANP) to analyze the interrelationships between decision levels and multiple network metrics and to dynamically calculate the relative weights of the network metrics. In addition, we plan to extend the proposed MCDM-based Interest forwarding with Interest traffic load balancing. For example, the router can stochastically select an outgoing interface to forward the Interest packet by generating a random number and comparing it with the forwarding index of each outgoing interface. If the forwarding index of an outgoing interface is larger than the randomly generated number, that outgoing interface is chosen to forward the Interest packet. In this way, each outgoing interface has a chance to forward the Interest packet, which achieves the goal of traffic load balancing. Moreover, in the context of Internet of Everything or 5G [27], as the number of connected devices significantly increases, the amount of network metrics information used to make forwarding decisions will increase extensively as well. As a result, a longer computational latency for making forwarding decisions could be observed at each intermediate router, which becomes a nontrivial problem. Thus, we plan to investigate algorithm optimization of the forwarding strategy to balance the tradeoff between algorithm efficiency and computational latency.
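A minimal sketch of the stochastic interface selection described above, assuming forwarding indices normalized to [0, 1]; the function and variable names are hypothetical, not the paper's implementation.

```python
import random

def stochastic_select(forwarding_index):
    """forwarding_index: dict mapping interface id -> forwarding index in [0, 1]."""
    r = random.random()
    # Interfaces whose forwarding index beats the random draw are eligible,
    # so interfaces with higher indices forward more often but none is starved.
    eligible = [iface for iface, fi in forwarding_index.items() if fi > r]
    if eligible:
        # Among eligible interfaces, keep the highest-ranked one.
        return max(eligible, key=lambda i: forwarding_index[i])
    # Fall back to the best interface if the draw beats every index.
    return max(forwarding_index, key=forwarding_index.get)
```

For example, with indices {"AI1": 0.4, "AI2": 0.9}, AI2 is chosen most of the time, yet AI1 still occasionally forwards Interests, which realizes the load-balancing goal.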

Figure 5: Randomly generated sample network topologies with different network connectivities defined in Table 7. (a) Low connectivity. (b) Medium connectivity. (c) High connectivity.

Figure 6: The performance of Interest satisfaction ratio against network connectivity, number of link failures, and simulation time.

Figure 7: The performance of hop count against network connectivity and number of link failures.

Figure 8: The performance of Interest satisfaction latency against network connectivity and number of link failures.

Figure 9: The performance of cache hit ratio and Content Store utilization ratio against network connectivity.

Figure 10: The performance of Interest satisfaction ratio against the number of nodes.
metrics and objectively select an optimal outgoing interface to forward the Interest packet. In addition, the cooperative data caching consists of two schemes: CacheData, which caches the data, and CacheFace, which caches the outgoing interface. (2) We design the ProNDN, which comprises MCDM-based Interest forwarding and cooperative data caching. The MCDM-based Interest forwarding employs the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to dynamically evaluate outgoing interface alternatives based on multiple network metrics. The Pending Interest Table stores the forwarded Interest packets that have not yet been satisfied by Data packets. Each Pending Interest Table entry contains four components: the data name, the nonce, the incoming interface from which the Interest packet has been received, and the outgoing interface to which the Interest packet has been forwarded. If the Pending Interest Table contains an entry with the same data name and nonce as the Interest packet, the router immediately drops the Interest packet, because an Interest packet that has been forwarded before has looped back. If there is an entry with a matching data name but a different nonce, the router just adds a new entry with the data name, nonce, and incoming interface without forwarding the Interest packet, since this Interest packet is considered a subsequent Interest. If the data name and nonce do not match any entry in the Pending Interest Table, the router forwards the Interest packet to an outgoing interface according to the forwarding strategy and adds a new entry in the Pending Interest Table.
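The three-way PIT lookup above can be sketched in a few lines. The data structures and names below are assumptions for illustration, not the paper's code.

```python
# Minimal sketch of Pending Interest Table (PIT) processing for an Interest.
def process_interest(pit, name, nonce, in_face, forward_fn):
    """pit maps name -> {'nonces', 'in_faces', 'out_face'}; forward_fn
    implements the forwarding strategy and returns the chosen face."""
    entry = pit.get(name)
    if entry is not None and nonce in entry["nonces"]:
        return "dropped-loop"  # same name and nonce: a looped Interest
    if entry is not None:
        # Same name, different nonce: a subsequent Interest; aggregate it
        # without forwarding.
        entry["nonces"].add(nonce)
        entry["in_faces"].add(in_face)
        return "aggregated"
    # No matching entry: forward per the forwarding strategy and record it.
    out_face = forward_fn(name)
    pit[name] = {"nonces": {nonce}, "in_faces": {in_face}, "out_face": out_face}
    return "forwarded"
```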
Interest and Data packet processing in NDN. However, it does not consider the popularity of data in the cache decision-making process and completely discards the default in-network caching strategy. In a word, caching is a common technique to improve the performance of data access; thus, it should be treated carefully to efficiently support data access in NDN networks. 3.3. ProNDN Architecture. As shown in Figure 3, the ProNDN comprises two main components, namely, MCDM-based Interest forwarding and cooperative data caching. In the MCDM-based Interest forwarding strategy, when a router receives an Interest packet to forward, it evaluates all outgoing interface alternatives based on a combination of multiple network metrics and selects an optimal outgoing interface to forward the Interest packet. To be specific, the router first establishes a decision matrix with the up-to-date network metrics information and calculates the weighted normalized decision matrix by multiplying the normalized decision matrix by the relative weights of the network metrics. Then, the router calculates the forwarding index of each outgoing interface alternative and chooses the outgoing interface with the highest-ranked forwarding index to forward the Interest packet. In the cooperative data caching strategy, when a router receives a Data packet, it decides whether to apply CacheData, CacheFace, or default in-network caching based on predefined rules. In short, if the piggybacked data are popular, the router caches the received Data packet by adopting CacheData. Otherwise, the router applies CacheFace by caching the outgoing interface toward the border router that has the data, if its distance to the border router is shorter than its distance to the data producer. If neither CacheData nor CacheFace is applicable, the router adopts the default in-network caching. More details about the proposed MCDM-based Interest forwarding and cooperative data caching strategies are presented as follows. Table 1 lists all notations used in this paper.

Table 1: Notations.

The forwarding index ranking indicates that AI 2 is the best outgoing interface candidate to choose to forward the Interest packet, while AI 4 ranks the lowest. Here, AI 2 has the lowest interface utilization ratio, 25%, the shortest RTT, 45 ms, and the smallest NACK ratio, 10%. Major operations of the MCDM-based Interest forwarding are summarized in Algorithm 1.
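The TOPSIS steps the paper walks through (decision matrix, normalization, weighting, separation distances, ranking) can be sketched as follows. Only AI 2's values (25% utilization, 45 ms RTT, 10% NACK ratio) come from the text; the other rows and the equal weights are illustrative assumptions. All three metrics are cost criteria, so lower is better.

```python
import math

def topsis_rank(matrix, weights):
    """matrix: {alternative: [criterion values]}; returns alternatives
    sorted best-first by the TOPSIS closeness coefficient."""
    alts = list(matrix)
    ncrit = len(weights)
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(matrix[a][j] ** 2 for a in alts)) for j in range(ncrit)]
    v = {a: [weights[j] * matrix[a][j] / norms[j] for j in range(ncrit)] for a in alts}
    # For cost criteria, the ideal solution takes the column minimum and
    # the negative-ideal solution the column maximum.
    ideal = [min(v[a][j] for a in alts) for j in range(ncrit)]
    worst = [max(v[a][j] for a in alts) for j in range(ncrit)]
    def closeness(a):
        s_plus = math.dist(v[a], ideal)   # separation from the ideal solution
        s_minus = math.dist(v[a], worst)  # separation from the negative-ideal
        return s_minus / (s_plus + s_minus)
    return sorted(alts, key=closeness, reverse=True)

# Columns: interface utilization (%), RTT (ms), NACK ratio (%).
interfaces = {"AI1": [60, 80, 30], "AI2": [25, 45, 10],
              "AI3": [40, 60, 20], "AI4": [70, 95, 40]}
ranking = topsis_rank(interfaces, [1 / 3, 1 / 3, 1 / 3])
```

Since AI 2 dominates every criterion in this made-up matrix, it necessarily ranks first.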

Table 2: Decision matrix for interface alternatives.

Table 3: Normalized decision matrix for interface alternatives.

Table 4: Weighted normalized decision matrix for interface alternatives.
Suppose the data consumers PC 1 and PC 2 have requested the data d i through the border router R 3. Here, we assume that Δhop = 1. When the router R 2 receives and forwards the Data packet piggybacked with d i to R 3, R 2 learns that R 3 has a copy of d i. Later, if the data consumer PC 4 requests d i through the router R 4, which in turn goes through R 2, R 2 knows that the data producer is two hops away, whereas the border router R 3 that has d i is only one hop away. Therefore, R 2 forwards the Interest packet to R 3 instead of R 1 to retrieve d i.

Table 5: Separation distances for interface alternatives.

Table 6: Forwarding index ranking for interface alternatives.

Notation: In⟶, Δhop, pkt[name, type, hop], R i, PC i, and d i: defined before; PIT j: the Pending Interest Table at R j; PIT j[In⟶, d i]: the set of incoming interfaces associated with data d i in the Pending Interest Table at R j.

When R j receives a Data packet pkt[d i, Data, hop]:
    if d i is popular then
        Cache d i in CS j                 /* Apply CacheData */
    else if (H j(producer) − H j(border)) > Δhop then
        Cache PIT j[d i].In⟶ in FIB j     /* Apply CacheFace */
    else
        Cache d i in CS j                 /* Apply default in-network caching */
    end
    Remove PIT j[In⟶, d i] from PIT j
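The caching rule can be condensed into a few lines of code. The function and parameter names below are assumptions for illustration; the condition mirrors the pseudocode: CacheData for popular data, CacheFace when the border router is more than Δhop hops closer than the producer, and default in-network caching otherwise.

```python
# Illustrative, self-contained sketch of the cooperative data caching decision.
def caching_decision(popular, hops_to_producer, hops_to_border, delta_hop):
    if popular:
        return "CacheData"   # store the popular Data packet in the Content Store
    if hops_to_producer - hops_to_border > delta_hop:
        return "CacheFace"   # store only the face toward the closer border router
    return "default"         # fall back to vanilla NDN in-network caching
```

For instance, with the producer 4 hops away, the border router 1 hop away, and Δhop = 1, an unpopular Data packet triggers CacheFace rather than being cached locally.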