Efficient Coded-Block Delivery and Caching in Information-Centric Networking

Information-centric networking (ICN) provides request aggregation and caching strategies that can improve network performance by reducing content server loads and network traffic. Incorporating network coding into ICN can offer several benefits, but a consumer may receive the same coded block from multiple content routers, since the coded block may be cached by any of the content routers on its forwarding path. In this paper, we introduce a request-specific coded-block scheme that avoids linear dependency among delivered blocks while still exploiting in-network caching. Additionally, a non-cooperative coded caching and replacement strategy is designed to guarantee that the cached blocks can be reused. Our experimental results show that the proposed scheme outperforms conventional CCN and two network coding-based ICN schemes.


Introduction
Trends in recent years have shown that Internet users care more about what the content is rather than where the content is. Information-centric networking (ICN) [1] is a novel design for a future networking architecture that has been proposed as a promising alternative to the current Internet. In ICN, IP addresses are replaced by content names and content routers (CRs) are equipped with storage capabilities to cache the content passing through each router. Content is requested by Interest packets that are sent by the consumer. With in-network caching [2,3], the content can be cached by multiple CRs, and any content router (CR) that contains the content that is being requested by the Interest can respond with a data packet, where both the Interest and the data are identified by the content name. Content-centric networking (CCN) has been shown to be a promising ICN architecture [4].
Recently, several studies have shown that network coding can also offer benefits to ICN [10][11][12][13][14][15][16][17][18][19][20][21], as network coding can be employed in ICN to effectively utilize multiple paths and reduce the complexity of the cache coordination. However, due to the ICN caching strategy, the same coded block may be cached by multiple CRs on its forwarding path and provided to the same consumer at a later time in response to their multicast requests [22].
In this case, the consumer will not be able to recover the content from the received coded blocks. Several solutions have been proposed to guarantee that all the coded blocks provided to the consumer are linearly independent of each other. In some centralized schemes [11,15], central routers are used to ensure that content caching and routing strategies provide independent blocks. In some distributed schemes [20,21], information on the coded blocks already received by the consumer must be carried by the Interest to retrieve linearly independent blocks. The CR can then decide whether to respond to the Interest according to the information carried by the Interest. Therefore, several round trips will be required to obtain sufficient linearly independent coded blocks. In our previous work [23], the CRs only cached the original received blocks to guarantee that all the coded blocks provided to consumers were linearly independent; any coded blocks that were generated and transferred could not be reused and were thus wasted.
To increase the caching efficiency and avoid the computation and communication costs induced by centralized schemes, we propose a request-specific coded-block (RSCB) scheme that reduces the transmission volume and download delay and ensures that only a single round trip is required for the consumer to retrieve sufficient linearly independent blocks. A non-cooperative coded caching and replacement strategy is then proposed to guarantee that any two coded blocks cached in the network are linearly independent. It is assumed that chunk-based routing and traffic control schemes are in place. The contributions of this paper are as follows: (i) We propose a content delivery strategy that retrieves blocks from multiple CRs simultaneously. Each CR on the forwarding path aggregates Interests received from multiple consumers for chunks of the same content, to eliminate duplicates.
Interests received by a CR will be separated again and forwarded in different directions. A mechanism is proposed for the aggregation and separation of Interests for the chunks of content, to guarantee that the minimum number of coded blocks will be requested and that these blocks will be linearly independent. (ii) An on-path non-cooperative coded caching mechanism is designed to guarantee that the cached blocks can be reused. Blocks received by a CR can be encoded and cached depending on the pending Interests and the proposed caching strategy. (iii) In our model, only chunks (i.e., original blocks) and coded-from-original blocks can be cached. One coded-from-original block can satisfy multiple Interests sent by different consumers requesting a set of its component chunks. A chunk-level coding-instead-of-evicting cache replacement scheme is designed to effectively increase the caching efficiency and optimize the use of cache capacity. (iv) Our strategy is evaluated by comparison with conventional CCN and two network coding-based ICN strategies. Our experimental results demonstrate that the proposed strategy achieves the best performance in terms of average download time, server hit reduction rate, and cache hit rate.

Related Works
Network coding techniques have received much attention in a variety of network scenarios, including P2P networks [6], CDNs [7], and wireless networks [9]. Recently, several works have applied network coding in ICN. There are two categories of solutions that can ensure consumers are provided with sufficient linearly independent coded blocks: centralized strategies and distributed strategies.
Wang et al. [24] proposed an SDN-based framework to implement content caching and routing in ICN with linear network coding. The SDN controllers determine how to cache and route based on the information collected by the CRs. Thus, a near-optimal caching and routing strategy can be obtained. Sadjadpour [11] proposed an architecture based on index coding for ICN, which groups the nodes into several clusters. The central router of each cluster maintains information on which content is cached by each node. Coded blocks generated by the central router are used to satisfy Interests for different content sent by different nodes. However, this strategy does not reduce traffic the first time content is requested. Llorca et al. [14] presented a multicast scheme based on network coding to achieve maximum network efficiency; however, the proposal does not describe how to deploy the strategy in ICN. Talebifard et al. [15] proposed a method based on network coding that reduces the costs of coding and decoding by breaking the network into several clusters, with network coding performed only by selected nodes or clusters.
As well as centralized strategies based on network coding, some works obtain enough linearly independent coded blocks by sending Interests repeatedly. Zhang and Xu [21] proposed two checking strategies, called precise matching and RB matching, to guarantee that the consumer receives sufficient linearly independent coded blocks. In precise matching, each Interest carries the global coefficients X of the coded blocks that have already been received by the consumer, and each CR performs Gaussian elimination to check for linear dependencies. Precise matching is an effective way to guarantee that all blocks received by consumers are linearly independent, but it has very high communication and computation overheads. Therefore, RB matching was proposed as a more lightweight approach, where the Interest only carries the rank of the global coefficients X of the coded blocks already received by the consumer. If the number of coded blocks cached by the CR is larger than the rank of the global coefficients X, the CR can respond to the Interest with a coded block. The larger the value of |X| is, the more difficult it is to serve the Interest. Wu et al. [16] proposed a network coding and random forwarding-based caching strategy, CodingCache, to enhance the caching efficiency. To guarantee that all the blocks provided to the consumer are linearly independent, each Interest carries the global coefficients of the coded blocks already received, similar to precise matching; therefore, N rounds are required to retrieve N blocks. Nguyen et al. [20] proposed a lightweight caching and Interest aggregation strategy to ensure that all the coded blocks received by the consumer are independent. As in RB matching, the rank of the global coefficients of the coded blocks already received by the consumer is carried in the Interest packet. Saltarin et al. [19] proposed a protocol named NetCodCCN to permit Interest aggregation and pipelining.
Each node responds to an Interest once it has received enough coded blocks to recover the content, or once |X| is larger than the number of coded blocks already sent out over face i, where |X| is the rank of the global coefficients X of the coded blocks cached in the ContentStore.
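The precise-matching and rank checks described above reduce to a linear-algebra test: a cached block is innovative for a consumer only if appending its coefficient vector increases the rank of the coefficients the consumer already holds. The following sketch illustrates the idea (our own illustration: exact rational arithmetic stands in for the GF(2^8) arithmetic these schemes actually use, and the function names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a coefficient matrix via Gaussian elimination (exact rationals)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot row for this column at or below row r
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate the column everywhere else
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_innovative(received_coeffs, candidate):
    """Precise matching: serve a block only if it raises the consumer's rank."""
    return rank(received_coeffs + [candidate]) > rank(received_coeffs)
```

RB matching keeps only the scalar rank on the consumer side, which is cheaper but can produce the false negatives discussed below.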

Discrete Dynamics in Nature and Society

However, NetCodCCN shares a weakness with RB matching: it may make false negative decisions, i.e., a node may falsely decide that it cannot provide an innovative coded block for the consumer when such a block is actually available. Montpetit et al. [17] proposed an architecture based on network coding, NC3N, where each Interest retrieves one coded block; however, their method does not include a strategy to ensure that all the received blocks are independent. Liu et al. [18] proposed an ICN-NC method that guarantees that all the received blocks are provided by different CRs, to increase the probability of obtaining linearly independent blocks. Each Interest packet contains a record of the Interest exploration range of the previous round, and only CRs within a new exploration area are permitted to respond to these Interests. Several rounds are required to retrieve enough independent coded blocks, and an Interest may still retrieve linearly dependent coded blocks. The authors in [25] proposed a framework based on network coding for cache management in ICNs. Saltarin et al. [26] proposed a distributed caching strategy for network coding-enabled ICNs, which gives CRs the responsibility of estimating the popularity of contents and ensuring that the most popular content is cached near the network edge.
Most of the existing schemes require several round trips to obtain sufficient linearly independent coded blocks to recover the content. In this paper, we propose a novel content delivery strategy to ensure that enough blocks can be retrieved within a single round. An on-path non-cooperative caching and replacement strategy based on network coding is proposed to guarantee that all blocks received by consumers are linearly independent. Moreover, in our scheme, coded blocks are generated only when doing so saves traffic, rather than at the server and at every CR on the forwarding path, which reduces the cost of coding and decoding.

Method of Interest Aggregation and Separation
In ICN, chunk-based delivery strategies route chunks separately, and chunks may meet at an intermediate node on their forwarding paths to several consumers. Motivated by this, we propose a request-specific coded-block (RSCB) scheme that encodes chunks that meet during transport in order to reduce traffic.

Overview of RSNC.
The definitions given in our previous study (referred to as RSNC) will be followed here. Each Interest (S, N) requests a specific set of chunks, where S = {1, ..., N} is the set of chunk indices and N is the number of independent coded blocks required to recover the content. Since chunks may be cached by different CRs, each CR can aggregate, separate, and forward Interests. If Interest 1 requests a set of chunks S1 and Interest 2 requests a set of chunks S2, then (S, n) satisfies both Interests, where S = S1 ∪ S2 is the set of chunks used to generate n linearly independent coded blocks, n is the number of coded blocks to be sent by the upstream CR, and n = max(|S1|, |S2|). When n < |S|, the traffic required to deliver chunks from upstream is reduced.
Since an Interest sent by a consumer for multiple chunks will be copied and forwarded along a multicast tree, requests for different chunks sent by the same consumer will not meet again at a CR on the multicast tree. Therefore, the Interest aggregation operation "⊕" is defined to combine two Interests originating from at least two different consumers:

(S1, n1) ⊕ (S2, n2) = (S1 ∪ S2, max(n1, n2)). (1)

Similarly, a separation operation "Div" is defined to split an Interest into several sub-Interests:

Div(S, n) = {(S1, n1), ..., (Sk, nk)}, where S = S1 ∪ ... ∪ Sk. (2)
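As a concrete illustration of the aggregation rule, consider the following minimal sketch (our own illustration; Interests are modeled as (set, count) pairs, and the rule n = max(n1, n2) is assumed from the definitions above):

```python
def aggregate(interest1, interest2):
    """'⊕' of two Interests (S1, n1) and (S2, n2) from different consumers:
    request the union of the chunk sets, with enough coded blocks for the
    larger request; traffic is saved whenever the result has n < |S|."""
    s1, n1 = interest1
    s2, n2 = interest2
    return (s1 | s2, max(n1, n2))
```

For the running example, aggregating ({2, 4}, 2) with ({1, 3}, 2) yields ({1, 2, 3, 4}, 2), i.e., two coded blocks instead of four chunks.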

Interest Aggregation and Separation in RSCB.
In contrast with RSNC [23], RSCB includes the information on (S1, n1) and (S2, n2) in the aggregated Interest (S, n), which guarantees that linearly independent coded blocks are provided to consumers and minimizes the number of coded blocks transported in the network. To reduce the size of the Interest, the sub-Interest information is represented as a binary number. For example, for Interest 1 (S1 = {2, 4}, n1 = 2) and Interest 2 (S2 = {1, 3}, n2 = 2), the binary information of Interest 1 is b(S1) = 1010 and the binary information of Interest 2 is b(S2) = 0101, and thus b(S) = {1010, 0101}. Therefore, an Interest can be expressed as I(p, [S, b(S)], n), where p is the name of the requested content, S is the set of chunks that match the content name p, b(S) is a set of binary numbers representing the sub-Interests (each sub-Interest is a subset of S), and n is the number of linearly independent coded blocks being requested. Any n linearly independent coded blocks that contain all the chunks specified by S will satisfy the sub-Interests specified in b(S).
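The binary representation b(Si) is simply a bitmask over chunk indices; a minimal sketch (illustrative, with bit (c − 1) standing for chunk index c):

```python
def encode_subset(s):
    """b(Si): set bit (c - 1) for each chunk index c in the sub-Interest."""
    bits = 0
    for c in s:
        bits |= 1 << (c - 1)
    return bits

def decode_subset(bits):
    """Recover the chunk-index set from its binary representation."""
    return {i + 1 for i in range(bits.bit_length()) if bits >> i & 1}
```

With four chunks, {2, 4} encodes to 1010 and {1, 3} to 0101, matching the example above.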
Equation (1) can thus be further modified as

(S1, n1) ⊕ (S2, n2) = ([S, b(S)], s), where S = S1 ∪ S2 and b(S) = {b(S1), b(S2)}, (3)

where s is the minimum number of linearly independent coded blocks satisfying both Interests.

It should be noted that the binary numbers b(Si) are used to represent the subset information, which is required to guarantee that the requested number of coded blocks is minimized. When s = 1 or s = |S|, this subset information is not necessary, as shown in Figure 1(a). Moreover, if Si ⊆ Sj, the information on Si, i.e., b(Si), is deleted from b(S).
For instance, Figure 1(a) shows that CR1 receives two Interests for content Cp from different interfaces, I(p, S1 = {2, 4}, 2) and I(p, S2 = {1, 3}, 2). Before these two Interests are forwarded, CR1 aggregates the two requests into a single Interest I(p, [{1, 2, 3, 4}, {1010, 0101}], 2) using equation (3). b(S) can then be used to reconstruct the subsets {2, 4} and {1, 3}. Similarly, the separation operation used to split an Interest ([S, b(S)], s) into several sub-Interests is modified; it is used to distribute the sub-Interests over several interfaces of the CR towards different content sources:

Div([S, b(S)], s) = {([S1, b(S1)], s1), ..., ([Sk, b(Sk)], sk)}. (4)

If an Interest ([S, b(S)], s) is formed by merging multiple Interests, the subsets (Si, si) should be reconstructed based on b(S), and these subsets should then be separated into sub-subsets using equation (2). Then, new Interests of the form given in equation (4) are generated by aggregating the sub-subsets using equation (3).
This procedure is described in Algorithm 1. The complexity of Algorithm 1 is O(n), where n is the number of subsets. According to Algorithm 1, the subsets {2, 4} and {1, 3} are first separated into the sub-subsets {2}, {4}, {1}, and {3} using equation (2), and then aggregated into Interest I(p, {1, 2}, 1) and Interest I(p, {3, 4}, 1) according to equation (3). If the original blocks ob1 and ob2 are located in one direction and the original blocks ob3 and ob4 are located in another direction, the new Interests can be sent from two interfaces in two different directions, as shown in Figure 1(a). In this case, only two blocks will be transmitted, in contrast with RSNC [23], which requires four blocks to be transmitted.
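The split-then-reaggregate step of Algorithm 1 can be sketched as follows (an illustrative reconstruction, not the paper's pseudocode; chunk_to_face plays the role of the FIB lookup, and all names are ours):

```python
def split_interest(subsets, chunk_to_face):
    """'subsets' are the component sub-Interests (Si, ni) recovered from b(S);
    chunk_to_face maps each chunk to its forwarding interface. Each per-face
    Interest requests that face's chunks, with the block count set to the
    largest share any one component sub-Interest routes through that face."""
    per_face = {}
    for s_i, _ in subsets:
        for chunk in s_i:
            per_face.setdefault(chunk_to_face[chunk], set()).add(chunk)
    result = {}
    for face, chunks in per_face.items():
        # each component needs at most min(|Si ∩ chunks|, ni) blocks via this face
        n = max(min(len(s_i & chunks), n_i) for s_i, n_i in subsets)
        result[face] = (chunks, n)
    return result
```

For the running example, the aggregated Interest for {1, 2, 3, 4} with components ({2, 4}, 2) and ({1, 3}, 2) splits into ({1, 2}, 1) and ({3, 4}, 1), one per direction.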
If Interest 2 (S2, n2) arrives after Interest 1 (S1, n1) has already been sent upstream, then the aggregated pending Interest will be (S, n) = (S1, n1) ⊕ (S2, n2). Since (S2, n2) may contain some chunks that have also been requested by Interest 1 (S1, n1), these chunks should be removed from Interest 2. Therefore, we define an operation to determine the incremental Interest, based on the separation operation:

(ΔS2, Δn2) = (S2, n2)\(S1, n1), (5)

where ΔS2 = S2\ΔS1, Δn2 = min(|ΔS2|, n2), and ΔS1 ⊆ S1 ∩ S2. Since Interest 1 will return at most n1 coded blocks, Δn1 ≤ n1 is required. If |S1 ∩ S2| > n1 and n2 > n1, we let ΔS1 ⊂ S1 ∩ S2 and Δn1 = |ΔS1| = n1. Similarly, if a CR has cached a subset (W, w) of blocks, only the remaining (S′, n′) blocks need to be requested from the upstream CRs, where (S′, n′) = (S, n)\(W, w). In RSCB, the coded blocks generated from the original blocks (referred to as coded-from-original blocks) are cached by the first node en route to the consumers. The coded-from-original blocks can be used in place of the original blocks to satisfy future Interests. For example, in Figure 1(b), the coded-from-original block ocb1 = α11 ob1 + α12 ob2 can be used as the original block ob1 or ob2; when an Interest covering these chunks is received by CR2, the incremental Interest (S′ = {2}, n′ = 1) is determined and sent to the next node.
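The incremental-Interest operation can be sketched as follows (illustrative; the choice of which overlap chunks Interest 1 covers, here the first n1 in sorted order, is our own assumption):

```python
def incremental(interest1, interest2):
    """(S2, n2) \\ (S1, n1): drop from Interest 2 the overlap chunks that the
    pending Interest 1 can still deliver (at most n1 of them) and request only
    the remainder upstream."""
    s1, n1 = interest1
    s2, n2 = interest2
    overlap = s1 & s2
    # Interest 1 returns at most n1 blocks, so it covers at most n1 overlap chunks
    delta_s1 = set(sorted(overlap)[:n1])
    delta_s2 = s2 - delta_s1
    delta_n2 = min(len(delta_s2), n2)
    return delta_s2, delta_n2
```

When the overlap exceeds n1, only n1 overlap chunks are treated as already pending, matching the |S1 ∩ S2| > n1 case above.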
The benefits of RSCB are illustrated in Figures 1 and 2. In Figure 1(a), two consumers connected to routers CR5 and CR6 have requested, at the same time, content that contains four original blocks, ob1, ob2, ob3, and ob4. Each original block has a size of one unit, each CR has a two-unit cache capacity, and each link has a one-unit transmission cost. Figure 1 shows the communication and caching in RSCB: CR5 and CR6 receive two coded-from-original blocks generated by CR3 and CR4, respectively, and the coded-from-original blocks ocb1 and ocb2 are received from CR1. The total transmission cost is eight units. Figure 2 shows the conventional ICN communication. CR2 receives the four original blocks (ob1, ob2, ob3, ob4) from CR3 and CR4 and then forwards them to CR1. CR1 responds to Interest I(p, {2, 4}, 2) with two original blocks, ob2 and ob4, and to Interest I(p, {1, 3}, 2) with two further original blocks, ob1 and ob3. The total transmission cost is 12 units. Therefore, our proposed solution saves 33% of the transmission cost compared with conventional ICN, and 20% compared with our previous work [23].

Caching in RSCB.
In RSCB, the original and coded-from-original blocks are cached by CRs to respond to future Interests. In order to provide consumers with sufficient linearly independent blocks in a single round, no coded blocks that were generated from other coded blocks are cached in the network. A coded-from-original block can only be cached by a single CR, namely the immediate downstream neighbor of the CR that generated the block. Thus, the two coded-from-original blocks, ocb1 and ocb2, will be cached only by CR2 (Figure 1(b)). The coded-from-original block ocb1 can be used as the original block ob1 or ob2 to respond to future Interests. When cache replacement happens, the CR encodes several original blocks into one coded-from-original block to release caching space. This ensures that all information contained in the original blocks is retained in the CR.
Since the forwarding paths of requests for different chunks generated by the same consumer form a multicast tree, these requests will not meet at any intermediate CR on the multicast tree. Interests are responded to with chunks or with coded blocks that are linear combinations of the chunks specified by the Interest. Random linear network coding (RLNC) is used to generate the coded blocks within each generation. For convenience, in the remainder of this paper, we will not explicitly state which generation each chunk belongs to. Our model assumes that a chunk-based routing and flow control scheme is in place.
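A minimal sketch of RLNC encoding within one generation follows (illustrative: the paper only fixes the field size at 2^8, so the AES reduction polynomial 0x11B and all names here are our assumptions):

```python
import random

def gf_mul(a, b, poly=0x11B):
    """Carry-less multiply with reduction in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def rlnc_encode(chunks, rng):
    """One RLNC coded block for a generation: a random nonzero coefficient per
    chunk and the byte-wise linear combination (XOR is addition in GF(2^8))."""
    coeffs = [rng.randrange(1, 256) for _ in chunks]
    payload = bytearray(len(chunks[0]))
    for c, chunk in zip(coeffs, chunks):
        for i, byte in enumerate(chunk):
            payload[i] ^= gf_mul(c, byte)
    return coeffs, bytes(payload)
```

The coefficient vector travels with the block (in the signed info field, per the packet format below), so a consumer can decode once it holds a full-rank set for the generation.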

Interest and Data.
All communications are driven by consumers in CCN. Consumers can receive chunks of content from multiple sources, which may include the content provider and CRs. A consumer interested in Cp will send a set of requests Ip = {ip,1, ..., ip,N}, with one request for each chunk. Before these requests are forwarded, the CR determines the forwarding interface of each chunk using the forwarding information base (FIB). Requests that share the same forwarding interface are aggregated into a single Interest. Each Interest can contain multiple requests for a set of chunks Si, where Si is the set of chunk names.
There are two types of CCN packets: Interests and data. In our model, the network coding information is appended to the selector field of the Interest packet and includes the set of chunk names S, the sub-Interest information b(S), and the number of required blocks n. The coefficients of the coded block and the caching flag Fq are contained within the signed info field of the data packet. The data field of the data packet contains the original/coded block(s).
The FIB is the same as in CCN. When an Interest is received by a CR, its CS is consulted first, followed by the PIT and finally the FIB. Data packets are sent back to consumers along the reverse of the path created by the Interest.

Forwarding Interest.
When a CR receives an Interest I(p, X, x) over interface f, the first step is to check its CS. If the CS contains all the chunks, or coded-from-original blocks containing all the information in the requested set X, the CR will respond to Interest I(p, X, x) directly, as described in Algorithm 2. The complexity of Algorithm 2 is O(n), where n is the number of coded blocks. If |X| = x, the CR responds to Interest I(p, X, x) with x cached blocks (chunks or coded-from-original blocks) without coding; otherwise, the CR responds with x coded blocks generated from the blocks cached in the CS that contain the chunk information specified by set X. The caching flag Fq will be turned on, i.e., Fq = 1, if the block used to respond to the Interest is an original block or a coded-from-original block encoded by that CR; otherwise, Fq = 0. In RSCB, the CR performs network coding only if it will save transmission cost. For instance, as shown in Figure 1(a), the CR responds to Interest I(p, [{1, 2, 3, 4}, {1010, 0101}], 2) with the two coded-from-original blocks ocb1 and ocb2, which were received from CR3 and CR4, respectively, without further coding. In this case, the cost of coding is saved.
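The CS-lookup logic just described can be sketched as follows (an illustrative reconstruction, not Algorithm 2 itself; cs_blocks maps each cached block to the chunk indices it contains, and the encode callback stands in for RLNC):

```python
def respond_from_cs(interest, cs_blocks, encode):
    """Try to satisfy Interest I(p, X, x) from the Content Store.
    Returns (data_blocks, Fq) or None if the CS cannot satisfy the Interest."""
    X, x = interest
    covered = set().union(*cs_blocks.values()) if cs_blocks else set()
    if not X <= covered:
        return None                       # some requested chunk info is missing
    originals_only = all(len(s) == 1 for s in cs_blocks.values())
    if len(X) == x:
        # |X| = x: send x cached blocks uncoded
        blocks = [b for b, s in cs_blocks.items() if s & X][:x]
        return blocks, 1 if originals_only else 0
    # otherwise encode x blocks covering X; Fq = 1 only when coded from originals
    relevant = [b for b, s in cs_blocks.items() if s & X]
    return [encode(relevant) for _ in range(x)], 1 if originals_only else 0
```

The Fq rule here is a simplification of the one stated above (Fq = 1 for originals or blocks this CR itself coded from originals).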
If the Interest cannot be satisfied by the CR, the PIT-IN of the arrival interface f will be checked. If there is a matched entry PITin−f(p, Z, z), the CR will aggregate the PIT-IN entry and the Interest using equation (3) and will then update the entry. If the CS contains a subset (p, W, w) of the requested blocks, the incremental Interest (p, ΔX, Δx) = (p, X, x)\(p, W, w) will be determined. The CR will split the incremental request (p, ΔX, Δx) into several Interests using Algorithm 1. If one of these Interests, e.g., (p, Xj, xj), needs to be transmitted over interface j, the PIT-OUT of interface j will be consulted. If there is a matching PIT-OUT entry PITout−j(p, V, v), a new incremental Interest for (p, ΔXj, Δxj) = (p, Xj, xj)\(p, V, v) will be generated using equation (5) and transmitted over interface j if ΔXj ≠ ∅. The PIT-OUT entry will then be updated to PITout−j(p, V′, v′, facelist), where V′ = V ∪ Xj and v′ = max(v, xj), and interface f will be added to the facelist. Algorithm 3 describes the process used to forward an Interest. The complexity of Algorithm 3 is O(n), where n is the number of sub-Interests.

Forwarding Data.
When a data packet D(p, Y, block, Fq) is received by a CR over interface f, the PIT-OUT of interface f is checked. If there is no PIT-OUT match, the data packet D(p, Y, block, Fq) is discarded, since the CR has not requested the block; otherwise, the caching flag is checked and the matching PIT-OUT entry is updated according to Algorithm 4. If Fq = 1, the block carried by the data packet is cached in the CS; otherwise, it is temporarily cached in CACHE. The CR then checks whether more chunks can be obtained by decoding the blocks cached in the CS and CACHE. If the CR has already received enough blocks of content p to recover the content, the chunks decoded from the received blocks are cached in the CS and all blocks of content p cached in CACHE are deleted. In this case, the CR can satisfy any Interest for content p.
If interface i is included in the facelist of the matching PIT-OUT entry, the corresponding PIT-IN entry is PITin−i(p, Z, z). If Y ∩ Z ≠ ∅, the CR checks whether PITin−i(p, Z, z) can be satisfied using the blocks cached in the CS and CACHE. If it can, the CR generates z linearly independent combinations of the blocks specified by the set Z and sends z data packets over interface i, each carrying a coded block, and then removes PITin−i(p, Z, z). The network tries to deliver chunks without introducing extra traffic, in order to increase the independence of the blocks cached in CRs. When the CR receives a data packet D(p, Y, block, Fq) carrying a chunk (i.e., |Y| = 1) and z = |Z|, the data packet D(p, Y, block, Fq) is sent out over interface i without further processing or waiting for other data packets. The CR then updates PITin−i(p, Z, z) by removing Y from Z; once the entry is fully satisfied, the CR deletes the PIT-IN entry. In this case, the time to download chunk Y is reduced and the cost of coding and decoding is saved without introducing additional traffic.

Cache Policy.
Network coding-enabled ICN (NC-ICN) divides the content into n original blocks. In traditional NC-ICN, each coded block is a linear combination of the n original blocks. The n coded blocks will be cached by n′ (n′ ≥ n) CRs along their forwarding paths to a group of consumers. Figure 3(a) shows that for traditional NC-ICN, network N1 will provide m coded blocks, cb1, ..., cbm, to consumers in group G1 and network N2 will provide the remaining (n − m) coded blocks. During the process of responding to consumers in group G1, m′ (m′ ≥ m) coded blocks generated from cb1, ..., cbm will be cached by multiple CRs in network N1, while n′ (n′ ≥ (n − m)) coded blocks will be cached by multiple CRs in network N2. At a later time, when the consumers in G2 multicast their Interests for n coded blocks of content p, these Interests will first be received by CRs in network N1. Each CR will respond to the Interests with its cached coded blocks independently, and thus t (t > m) coded blocks cached in network N1 may be provided to the consumers in G2. However, the maximum number of independent coded blocks that a consumer can receive from network N1 is m. In this case, at least (t − m) blocks are of no benefit to the consumer and are a waste of resources. Therefore, the conventional in-network caching strategy is not suitable for NC-ICN.
To address this issue, we introduce a simple cache mechanism to guarantee that the blocks provided to consumers will be independent. In RSCB, only original blocks and coded-from-original blocks are cached by CRs. No coded blocks that were encoded from other coded blocks can be cached in the network. The received/decoded original blocks can be cached by any CR. A coded-from-original block can only be cached by a single CR, namely the immediate downstream neighbor of the CR that generated the block.
Thus, any n coded-from-original blocks cached in the network will be linearly independent, where n is the number of blocks required to recover the content. Fq in the data packet D(p, S, block, Fq) is a caching flag that indicates whether the block is cacheable.
Input: I(p, X, x) ← Interest arriving on interface f; CSp(W, w) ← W is the set of blocks specified by I(p, X, x) and w is the number of blocks.
(1) if |X| = x or x = w then
(2)   for each block csi in CSp do
(3)     if csi is an original block then
(4)       Fq = 1; ... generate a data packet D(p, Yi, csi, Fq) and send it over interface f;
(9)   end for
(10) else
(11)   generate x coded blocks, ob1, ..., obx, using the blocks in CSp;
(12)   if all the blocks in CSp are original blocks then
(13)     Fq = 1; ...

Since CRs have limited storage capacity, a cache replacement policy is required. When cache replacement occurs, the candidate content p to be discarded is selected using an existing content-level cache replacement policy, e.g., least recently used (LRU). Assume that t units of cache space are required to cache the newly received/decoded blocks and that the cache space used for the candidate content p is n units (one unit per block). If t ≥ n, content p is deleted and t = t − n. This step is repeated until only some of the cached blocks of the candidate content pi need to be discarded to cache the new blocks, i.e., t < ni. Chunk-level cache replacement is then introduced to discard blocks of content pi. Firstly, any original blocks whose information is contained in the cached coded-from-original blocks are discarded; e.g., the information contained in the original blocks ob1 and ob2 is also contained in the coded-from-original block ocb1 = α11 ob1 + α12 ob2. Secondly, the remaining original blocks are coded into a single coded-from-original block. Finally, the received coded-from-original blocks are randomly discarded. These three steps are performed in turn until there is sufficient space for the newly received/decoded blocks, as described in Algorithm 5. The complexity of Algorithm 5 is O(n), where n is the number of evicted contents.
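The three chunk-level replacement steps can be sketched as follows (illustrative: blocks are modeled only by the set of chunk indices they contain, coefficients are ignored, and deterministic eviction stands in for the random step):

```python
def code_instead_of_evict(cached, need):
    """'cached' is a list of blocks, each a frozenset of component chunk
    indices (a singleton is an original block). Frees at least 'need' cache
    units via the three chunk-level steps and returns the surviving blocks."""
    originals = [b for b in cached if len(b) == 1]
    coded = [b for b in cached if len(b) > 1]
    freed = 0
    # Step 1: drop originals whose information a coded-from-original block holds
    keep_orig = []
    for b in originals:
        if freed < need and any(b <= c for c in coded):
            freed += 1
        else:
            keep_orig.append(b)
    # Step 2: merge the remaining originals into one coded-from-original block
    if freed < need and len(keep_orig) > 1:
        merged = frozenset().union(*keep_orig)
        freed += len(keep_orig) - 1
        keep_orig = [merged]
    # Step 3: discard received coded-from-original blocks (random in the paper)
    while freed < need and coded:
        coded.pop()
        freed += 1
    return keep_orig + coded
```

Note that steps 1 and 2 free space without losing any chunk information, which is the point of coding instead of evicting.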
If content is rarely accessed, only the coded-from-original blocks will be cached to respond to future Interests. Since a single coded-from-original block can respond to an aggregated Interest containing multiple requests for different chunks sent by multiple consumers, our chunk-level cache replacement policy can effectively increase the cache efficiency.

Simulation
In this section, the performance of our model is investigated by comparison with three other schemes: the chunk-level CCN strategy (CCN) [4], NC-CCN [21], and CodingCache (CC) [16]. The caching strategy leave copy everywhere (LCE) is incorporated into these strategies. In LCE, each block or chunk is cached by all CRs on the forwarding path between the content provider and the consumer.
6.1. Simulation Model. BRITE [27,28] was used to generate the network topology, since it can roughly reflect the actual Internet topology. The Dijkstra algorithm was used to generate the FIB tables. All links have a bandwidth of 1 Gbps. There were a total of 1000 end hosts connected to 100 CRs, and 10 original content providers were randomly connected to the CRs. 10,000 files were equally partitioned into 400 classes. Each content was 1 GB in size and was divided into 10 generations, with each generation containing 10 chunks of 10 MB each. In our simulation, only chunks in the same generation could be encoded. The content popularity follows a Zipf distribution with α ∈ {0.3, 0.7, 1, 1.5, 2}. Interests sent by consumers follow a Poisson process. The request number was defined as the number of Interests sent by consumers during the processing period. In our simulations, each CR was configured to have a cache space of 0.1%, 0.25%, 0.5%, 1%, or 2% of the overall content catalog size; the default cache size of each CR was 10 GB, i.e., 1% of the total content catalog size. Random linear network coding (RLNC) was used for coding, over a finite field of size 2^8 [29]. The coefficient vector and the generation ID are contained within the signed info field of the data packet. The performance of all four strategies was evaluated under the same simulation environment.
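For reference, the Zipf request model used in the simulation can be reproduced as follows (the catalog size and α values are from the setup above; the sampler itself is our illustrative sketch, not the simulator's code):

```python
import random

def zipf_sampler(num_contents, alpha, rng):
    """Return a sampler for content ranks with popularity P(i) ∝ i^(-alpha);
    rank 1 is the most popular content."""
    ranks = list(range(1, num_contents + 1))
    weights = [i ** -alpha for i in ranks]
    def sample():
        return rng.choices(ranks, weights=weights)[0]
    return sample
```

With a larger α, requests concentrate on fewer contents, which is why the average download time drops as α grows in the results below.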
where w_i(t) = 0 if chunk i is sent from a cache or via an aggregated Interest; otherwise, w_i(t) = 1. N(t) is the number of chunks received by all consumers, and t indicates that the data were collected over the interval from (t − Δt) to t [20]. (v) Traffic: the total traffic to deliver the data packets over the whole request process. (vi) Average number of Interests: the average number of Interest packets that were handled by each CR for each chunk that was successfully received by the consumer, as in [21].
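Given the indicator w_i(t) defined above, the server-hit metric can be computed directly from a per-chunk delivery log. The sketch below assumes the metric is the fraction sum_i w_i(t) / N(t) (chunks that had to come from an origin server); the function name and exact normalization are our own reading of the definition, not taken verbatim from the paper:

```python
def server_hit_ratio(w):
    """Fraction of delivered chunks served by an origin server.

    w[i] = 0 if chunk i was served from a CR cache or via an aggregated
    Interest, and w[i] = 1 if it came from the content provider
    (assumed semantics of w_i(t) in the text)."""
    return sum(w) / len(w)
```

A lower value means the caches and Interest aggregation absorbed more of the load.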

Simulation Results
Due to its network coding-based content delivery and caching strategies, our proposed RSCB always achieves the best performance, in terms of the shortest average download time, the highest cache hit ratio, the highest server hit reduction ratio, and the lowest transmission volume. RSCB ensures that consumers will receive sufficient independent coded blocks within a single round and that the coded blocks cached by the CRs can be reused to serve multiple chunks. Figure 4 plots the average download time of the four caching strategies for different system parameters. The average download time decreases as the Zipf parameter α is increased, as shown in Figure 4(a), since a larger Zipf parameter α indicates that the Interests sent by consumers are concentrated on a smaller set of contents. As the number of requests increases, chunks that have already been requested will be cached on more CRs, and thus consumers can retrieve chunks directly from the CRs, which are much closer to the consumers. Therefore, the average download time will be reduced (Figure 4(b)). RSCB performs much better even for a small Zipf parameter and a low number of Interests, since it can retrieve chunks or coded blocks from multiple CRs simultaneously. Compared with the other schemes, RSCB provides consumers with enough independent coded blocks in a single round.
In RSCB, one coded-from-original block can be used to respond to an aggregated Interest for multiple chunks requested by different consumers. For instance, the coded-from-original block ocb_1 = α_11 ob_1 + α_12 ob_2 can satisfy the Interest for chunk ob_1 from consumer 1 and the Interest for chunk ob_2 from consumer 2, as shown in Figure 1(b). Thus, RSCB achieves the best caching performance in terms of average download hops (Figure 5), cache hit ratio (Figure 6), and server hit reduction ratio (Figure 7). Figure 8 shows the traffic for the different caching schemes, and it can be seen that RSCB has the lowest transmission volume. Moreover, as the number of requests increases, RSCB achieves greater traffic savings due to its Interest aggregation scheme, which saves the traffic required to deliver n − (n_1 + n_2) blocks, as per equation (3).
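The decoding step behind the ocb_1 example can be made concrete. A consumer that wants ob_1 and already holds (or separately receives) ob_2 can strip ob_2's contribution from ocb_1 and divide by α_11 in GF(2^8). The sketch below is a toy illustration with two-byte chunks and our own helper names; the inverse is found by brute force, which is fine at field size 2^8:

```python
def gf_mul(a, b, poly=0x11B):
    """GF(2^8) multiplication via shift-and-reduce."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse in GF(2^8), found by exhaustive search."""
    for b in range(1, 256):
        if gf_mul(a, b) == 1:
            return b
    raise ValueError("zero has no inverse")

# Toy chunks and coefficients (alpha_11, alpha_12).
ob1, ob2 = b'\x10\x20', b'\x30\x40'
a11, a12 = 5, 7

# The CR encodes: ocb1 = a11*ob1 + a12*ob2 (byte-wise over GF(2^8)).
ocb1 = bytes(gf_mul(a11, x) ^ gf_mul(a12, y) for x, y in zip(ob1, ob2))

# Consumer 1 holds ob2 and recovers ob1 = inv(a11) * (ocb1 - a12*ob2).
inv = gf_inv(a11)
rec = bytes(gf_mul(inv, c ^ gf_mul(a12, y)) for c, y in zip(ocb1, ob2))
```

Consumer 2 recovers ob_2 symmetrically, so the single cached block serves both Interests.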
RSCB can also aggregate Interests for different chunks into a single Interest. As shown in Figure 9, the average number of Interests processed by the CR is much lower than in the other schemes. In ICN, a consumer requests content with N chunks by sending out N Interests, and thus the CR needs to process N Interests. However, in RSCB, only one aggregated Interest containing several Interests will be processed by the CR. This can reduce the cost of transmitting and processing Interests.
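The aggregation a CR performs can be sketched as a small extension of the pending Interest table (PIT). The class below is a hypothetical in-memory model, not the CCN wire format or the paper's actual data structures: Interests for chunks of the same (content, generation) pair are merged, and only the first arrival triggers an upstream forward.

```python
from collections import defaultdict

class AggregatingPIT:
    """Toy PIT that merges Interests for chunks of the same
    (content, generation) into one aggregated Interest."""

    def __init__(self):
        self.pending = defaultdict(set)  # (content, gen) -> requested chunk ids
        self.faces = defaultdict(set)    # (content, gen) -> requesting faces

    def add_interest(self, content, gen, chunk, face):
        """Record an incoming Interest; return True only when this is the
        first pending entry for the generation (forward upstream once)."""
        key = (content, gen)
        first = not self.pending[key]
        self.pending[key].add(chunk)
        self.faces[key].add(face)
        return first

    def aggregated_interest(self, content, gen):
        """Build the single aggregated Interest covering all pending chunks."""
        return {"name": content, "gen": gen,
                "chunks": sorted(self.pending[(content, gen)])}
```

Under this model, N per-chunk Interests from different faces collapse into one upstream Interest per generation, matching the cost reduction described above.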

Conclusion and Discussion
In this paper, we have proposed a request-specific coded-block strategy to reduce the transmission volume. Additionally, a chunk-level on-path non-cooperative coded caching and replacement strategy has been proposed to improve the caching efficiency. Our method enables a consumer to multicast a set of Interests in order to obtain multiple content chunks simultaneously from multiple CRs. Traffic can be reduced by encoding chunks that meet in an intermediate CR and have been requested by different consumers. A novel Interest forwarding-responding strategy has been proposed to guarantee that the minimum number of coded blocks will be requested and that the blocks will be linearly independent. A network coding-based caching and replacement mechanism has been designed to guarantee that the cached blocks can be reused. A chunk-level coded cache replacement strategy has been proposed to discard blocks. Rather than discarding the original blocks, the CR will encode the original blocks into a single coded-from-original block to release cache space when cache replacement is required. A single coded-from-original block can satisfy multiple Interests from different consumers for a set of its component original blocks.
Therefore, this will increase the caching diversity without requiring extra cache space. The simulation results have confirmed that the RSCB scheme outperforms the other three strategies.
However, although deploying network coding in ICN offers many benefits, it also introduces additional computation and communication costs. Previous studies have shown that RLNC is a practical method with acceptable costs. Since ICN is a new architecture, many issues still need to be resolved before ICN can be deployed, such as the efficient operation of the PIT and FIB at the chunk level [30,31].

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.