Detecting Anomalous LAN Activities under Differential Privacy

Anomaly detection has emerged as a popular technique for detecting malicious activities in local area networks (LANs). Various aspects of LAN anomaly detection have been widely studied. Nonetheless, the privacy of individual LAN users and of their relationships has not been thoroughly explored in prior work. In some realistic cases, the anomaly detection analysis needs to be carried out by an external party, located outside the LAN. Thus, it is important for the LAN admin to release LAN data to this party in a private way in order to protect the privacy of LAN users; at the same time, the released data must also preserve the utility of being able to detect anomalies. This paper investigates the possibility of privately releasing ARP data that can later be used to identify anomalies in a LAN. We present four approaches and show that they satisfy different levels of differential privacy, a rigorous and provable notion for quantifying the privacy loss of a system. Our real-world experimental results confirm the practical feasibility of our approaches. With a proper privacy budget, all of our approaches preserve more than 75% of the utility of detecting anomalies in the released data.


I. INTRODUCTION
Security of local area networks (LANs) has been getting more attention in the last few decades. Traditional LAN defense mechanisms based on a firewall are no longer effective in preventing malware infection since malware can simply circumvent the firewall or infect the network through other means [1,2]. A prominent example is the recent emergence of ransomware that can infect LAN devices via phishing attacks; these attacks remain effective even if the LAN's firewall is active and configured correctly [3,4]. In addition, with the rise of the Internet-of-Things (IoT), so-called "smart" devices have become widely popular and, at the same time, are also extremely vulnerable to malware attacks [5]. These devices may be infected from the outside world and introduce malware to the LAN. To overcome this challenge, several anomaly detection techniques have been proposed to detect malicious activities in LANs. Among those, techniques based on the Address Resolution Protocol (ARP) have been shown to be promising at detecting anomalous LAN activities without requiring changes to existing devices [6,7], making them suitable for current IoT networks.
Despite this success, there remains a severe privacy concern for LAN users, which has not been thoroughly explored in previous work. Oftentimes, the anomaly detection must be performed by an entity outside the LAN [8,9,10] or by third-party software [11,12]. Thus, it is equally important to ensure the privacy of the data exposed to this external and potentially malicious entity. For instance, a LAN admin in an enterprise may choose to outsource an anomaly detection analysis to an external widely-popular service, e.g., Microsoft's Anomaly Detector [11], or the admin may simply want to release some features of the network data for transparency or academic purposes. In either case, the LAN admin must output network data (which is the input to the anomaly detection algorithm) to an untrusted party. Doing so may let this party learn privacy-sensitive information about the LAN users. For example, it may directly disclose personally identifiable information (PII), e.g., IP/MAC addresses, which can be used to uncover the identity of LAN users. It may also cause indirect information leakage by revealing access patterns (e.g., the time of day that a specific user is online) or relationships between users [13].
While it is possible to simply erase all users' sensitive information from the output data, this kind of technique does not provide strong and provable privacy guarantees. A motivated adversary may still be able to deanonymize users through other means, e.g., performing a side-channel analysis [14] or correlating the remaining network traces with physical-world data [15]. Therefore, there is a need for a technique with rigorous privacy guarantees that still preserves the utility of detecting anomalies in the LAN environment.
Contributions: To this end, the goal of this paper is to investigate the possibility of privately publishing ARP data that can later be used to identify anomalies in LAN. Our work presents the following contributions:

• Privacy Notions for ARP Publication. We identify four concrete privacy notions in the context of ARP-data publication. Each notion is defined over a different type of information that needs to be privacy-protected, as well as the probability that this protection holds. Specifically, they are derived from the widely-known differential privacy notion [16], which allows us to mathematically prove whether a specific algorithm adheres to any of these notions. We argue that this is a necessary and essential step towards designing, implementing and deploying any privacy-preserving approach in the real world; without it, it is doubtful whether any meaningful guarantee can be obtained from our approaches.

• Releasing ARP for Anomaly Detection with Various Degrees of Privacy. We present four approaches capable of privately releasing ARP data that still preserves the utility of detecting LAN anomalies. Our approaches provide a wide range of privacy-preserving degrees, making them suitable for different scenarios:
  - The first approach requires small additive perturbations to the input ARP data in exchange for privacy protection of user relationships.
  - The second approach perturbs the input data by a relatively higher amount, but it attains a stronger privacy guarantee for each individual LAN device/user.
  - The third and fourth are variants of the first two approaches that require even smaller data perturbations; however, they sacrifice some small probability that the privacy guarantee will not hold, making them an appropriate option for scenarios where data utility needs to be maximized.
• Practicality via Real-world Deployment. We demonstrate the practicality of our approaches by implementing and deploying them as part of a large-scale real-world project, called the ASEAN-Wide Cyber-Security Research Testbed Project (1). Overall, the aim of this project is three-fold: (1) to capture network data from multiple LANs across the ASEAN region, (2) to determine malware behaviors based on the captured data and (3) to make the captured data sharable in the public domain. Our work fits perfectly in this project as it fulfills the third goal by providing a privacy-preserving mechanism for releasing captured ARP data.

• Evaluation on Real-world Dataset. We evaluate our approaches on a real-world ARP dataset captured from 3 LANs over 30 weeks. The experimental results show the feasibility of our approaches, as they introduce only low error values (< 10 in the root-mean-square error) to the original data. In addition, we assess the utility of the released data by testing it on an existing LAN anomaly detector [6]. The result is promising, as our approaches can achieve a 75% anomaly detection rate.

(1) https://www.nict.go.jp/en/asean_ivo/ASEAN_IVO_2020_Project03.html

Organization: The rest of the paper is organized as follows: Section II overviews existing work related to LAN anomaly detection and differential privacy. The background on the Address Resolution Protocol and differential privacy is discussed in Section III. Section IV describes the system and adversarial models targeted in this work. Section V presents privacy notions in the context of releasing ARP data. Section VI and Section VII present four approaches and prove that they satisfy the privacy notions defined in the previous section. Experiments are carried out and reported in Section VIII. Several issues are discussed in Section IX. Finally, the paper concludes in Section X.
II. RELATED WORK

Differential privacy in anomaly detection. To the best of our knowledge, there has been no prior work that proposes a release mechanism for ARP data with differential privacy guarantees while retaining the utility of anomaly detection in the LAN setting. The closest related work can be found in [17], where the authors employ the PINQ differential privacy framework [18] to detect network-wide traffic anomalies. The main difference between our work and [17] lies in the type and magnitude of the released data, as well as the privacy guarantee. The work in [17] aims to privately release the link-level traffic volumes of an ISP, whose overall values tend to be much larger than the noise introduced by any differentially-private release mechanism. On the other hand, our work operates on a more restricted input (the ARP degree), which generally takes much smaller values, making it more noise-sensitive than ISP traffic volumes. Reducing this sensitivity is the main challenge addressed in this work. Further, the work in [17] provides no privacy protection guarantee for individual network users. Achieving this guarantee is non-trivial, as discussed in Section VI-B.
Besides the work in [17], several existing works focus on providing anomaly detection with differential privacy guarantees in non-networked settings, e.g., web browsing [19], social networks [20], health care [21], or syndromic surveillance [22]. Due to the difference in the target setting, the aforementioned techniques are not directly applicable to our work.

LAN anomaly detection.
There is a number of existing research works that aim to detect anomalies in LAN without providing privacy protection. Zhang et al. [23] present a honeypot-based approach to detect malicious LAN activities. Yeo et al. [24] propose a framework to monitor network traffic and detect anomalies in the wireless LAN (WLAN) environment via the IEEE 802.11 MAC protocol. Nonetheless, this approach is specific to wireless LANs and thus cannot be directly applied to the wired LAN setting. Our approaches are based on ARP requests, making them suitable for both wired and wireless LAN environments.
Several prior works focus on detecting LAN anomalies based on ARP-related data. Whyte et al. [25] propose an anomaly detection approach that distinguishes anomalous activities through statistical analyses of ARP traffic. Yasami et al. [7] propose to model normal ARP traffic behaviors using a Hidden Markov Model. Farahmand et al. [26] detect LAN anomalies based on four features: traffic rate, burstiness, dark space and sequential scan. Matsufuji et al. [6] present an anomaly detection algorithm based on the degree of destinations of ARP requests.

III. BACKGROUND

A. Address Resolution Protocol (ARP)
In a nutshell, ARP is a request-response protocol that provides a mapping between dynamic IP addresses and permanent link-layer addresses (also known as MAC addresses), allowing one computer to discover the MAC address of another from its IP address. This protocol is essential in a LAN environment since it enables communication between any two computers within the same sub-network, as follows: when one computer needs to connect with another, it uses ARP to broadcast a request asking for the MAC address associated with the IP address of the destination computer. Therefore, an ARP request contains the requester's IP and MAC addresses as well as the destination's IP address. Upon receiving the ARP request, every computer checks whether the received IP address matches one of its network interfaces. If it does, it unicasts an ARP response back to the requester along with its IP and MAC addresses. At the end of this process, the requester has successfully retrieved the destination's MAC address and can use this information to construct Ethernet frames for transmitting subsequent data to the target computer.
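To make the packet layout concrete, here is a minimal Python sketch of building and parsing the 28-byte ARP request payload described above. The helper names are ours; a real deployment would capture these fields with a packet-capture library rather than construct them by hand.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Build the 28-byte payload of an ARP request over Ethernet/IPv4.

    MACs are 6 raw bytes, IPs are 4 raw bytes.
    """
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,               # hardware type: Ethernet
        0x0800,          # protocol type: IPv4
        6, 4,            # hardware / protocol address lengths
        1,               # opcode: 1 = request, 2 = reply
        sender_mac, sender_ip,
        b"\x00" * 6,     # target MAC: unknown, filled in by the reply
        target_ip,
    )

def parse_arp(payload: bytes):
    """Extract (opcode, sender MAC, sender IP, target IP) from an ARP payload."""
    op = struct.unpack("!H", payload[6:8])[0]
    return op, payload[8:14], payload[14:18], payload[24:28]
```

Note that the sender's MAC and IP appear in every request, which is exactly the directly identifying information the rest of the paper aims to protect.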
Similar to other network protocols, ARP involves using sensitive data that has previously been shown to be directly (e.g., IP address) or indirectly (e.g., traffic volume [15]) linkable to the identity of network users. Hence, this privacy concern must be taken into account when designing an approach for releasing ARP data.

B. Differential Privacy (DP)
Consider a setting in which there are n users who send individual data to a trusted curator. The curator applies an algorithm M to this data and outputs the result to an untrusted party. In a strong notion of privacy, the data of an individual must be kept private even from strong adversaries, including ones who get hold of the data of all other users.
Differential privacy (DP) is a formalization of this notion given in a seminal paper by Dwork, McSherry, Nissim, and Smith [16]. First, we say that two databases X and X′ are neighboring if they differ by exactly one database entry. Differential privacy is then satisfied if changing X to X′ does not change the probability of observing any output of M by very much. With differential privacy, the presence of a single entry will not affect the published output by much. Therefore, outputs from a differentially-private algorithm cannot be used to infer much about any single entry of the input dataset.
Definition 1 (Differential Privacy). An algorithm M : X → Y satisfies (ε, δ)-differential privacy ((ε, δ)-DP) if, for every pair of neighboring datasets X and X′ and every subset S ⊆ Y,

    Pr[M(X) ∈ S] ≤ e^ε · Pr[M(X′) ∈ S] + δ,

where ε is referred to as the privacy budget. We will refer to (ε, 0)-DP as ε-DP. Intuitively, smaller values of ε and δ lead to a stronger privacy guarantee. Conversely, higher values of ε and δ imply a weaker guarantee with possibly better utility/accuracy of the released data.
A related notion is concentrated differential privacy, which aims to control the moments of the privacy loss variable:

Definition 2 (Rényi Divergence). Let P and P′ be probability densities. The Rényi divergence of order λ ∈ (1, ∞) between P and P′ is defined as:

    D_λ(P ‖ P′) = (1/(λ−1)) · log ∫ P(x)^λ P′(x)^(1−λ) dx.

Definition 3 (Concentrated Differential Privacy [27]). An algorithm M : X → Y satisfies ρ-zero-concentrated differential privacy (ρ-zCDP) if, for every pair of neighboring datasets X and X′ and every λ ∈ (1, ∞),

    D_λ(M(X) ‖ M(X′)) ≤ ρλ.

One useful property of differential privacy is that it is preserved under post-processing:

Proposition 1 (Post-processing). If M satisfies ε-DP (resp. ρ-zCDP) and P is any (randomized) function, then P ∘ M also satisfies ε-DP (resp. ρ-zCDP).
There are certain situations in which we want to apply multiple DP algorithms, e.g., when releasing continual or time-series data. In this case, the resulting algorithm is also differentially private. However, every new DP algorithm comes with a cost of privacy loss, as stated in the following proposition:

Proposition 2 (Composition). If M_1, ..., M_N satisfy ε_1-DP, ..., ε_N-DP (resp. ρ_1-zCDP, ..., ρ_N-zCDP), then their composition satisfies (Σ_i ε_i)-DP (resp. (Σ_i ρ_i)-zCDP).
To introduce one of the most ubiquitous ε-DP algorithms, we start with the ℓ1-sensitivity of an algorithm M : X → R^k, which is the maximum ℓ1 change in the output as a result of modifying a single datum. We denote this sensitivity by ∆_M and formally define it as:

    ∆_M = max_{neighboring X, X′} ‖M(X) − M(X′)‖_1.

Theorem 1 (Laplace mechanism [28]). Let M : X → R^k be an algorithm with sensitivity ∆_M, and let Y_1, ..., Y_k be noise values sampled independently from a Laplace distribution with scale ∆_M/ε, i.e., Y_i ∼ Laplace(∆_M/ε). Then the randomized algorithm A defined by

    A(X) = M(X) + (Y_1, ..., Y_k)

is ε-DP.
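As an illustration of the Laplace mechanism, the following minimal Python sketch releases a scalar query with ε-DP, sampling the Laplace noise by inverse-CDF transform (the function names are ours):

```python
import math
import random

def sample_laplace(scale: float) -> float:
    """Inverse-CDF sampling from Laplace(0, scale)."""
    u = random.random() - 0.5   # u uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """eps-DP release of a scalar query with the given l1-sensitivity."""
    return value + sample_laplace(sensitivity / epsilon)
```

For a counting query (sensitivity 1), `laplace_mechanism(count, 1.0, eps)` adds noise of scale 1/ε, so the released value concentrates around the true count as ε grows.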
In addition to the Laplace mechanism, the Gaussian mechanism is commonly used to provide ρ-zCDP:

Theorem 2 (Gaussian mechanism [27]). Let M : X → R^k be an algorithm with sensitivity ∆_M, and let Y_1, ..., Y_k be noise values sampled independently from a Gaussian distribution with variance ∆_M^2/(2ρ), i.e., Y_i ∼ N(0, ∆_M^2/(2ρ)). Then the randomized algorithm A defined by

    A(X) = M(X) + (Y_1, ..., Y_k)

is ρ-zCDP.

In view of Proposition 2, a composition of N Laplace mechanisms requires noise with scale growing linearly in N, whereas a composition of N Gaussian mechanisms only requires noise with standard deviation growing as √N. We thus see that, for successive uses of a DP mechanism, the Gaussian mechanism gives comparatively smaller noise than the Laplace mechanism. The following lemma shows how the two definitions of differential privacy are related:

Lemma 1 ([27]). If an algorithm M satisfies ρ-zCDP, then, for every δ > 0, M satisfies (ρ + 2√(ρ · log(1/δ)), δ)-DP.
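To see the √N advantage concretely, the sketch below computes the per-release noise scales needed for a t-fold composition. It assumes the zCDP-to-DP relation ε = ρ + 2√(ρ log(1/δ)) (solved for ρ) and an even split of the budget across the t releases; the function names are ours.

```python
import math

def laplace_scale(t: int, eps: float, sensitivity: float = 1.0) -> float:
    """Per-release Laplace scale b so that t releases compose to eps-DP."""
    return sensitivity * t / eps

def gaussian_sigma(t: int, eps: float, delta: float, sensitivity: float = 1.0) -> float:
    """Per-release Gaussian std so that t releases compose to (eps, delta)-DP.

    Solves eps = rho + 2*sqrt(rho*log(1/delta)) for rho, splits rho evenly
    across the t releases, and uses sigma^2 = sensitivity^2 * t / (2*rho).
    """
    log_inv_delta = math.log(1.0 / delta)
    rho = (math.sqrt(log_inv_delta + eps) - math.sqrt(log_inv_delta)) ** 2
    return sensitivity * math.sqrt(t / (2.0 * rho))
```

With t = 30 and ε = 2 (the values used later in the experiments), the Laplace scale is 15 while the Gaussian standard deviation is noticeably below the Laplace standard deviation b·√2, illustrating why Section VII trades a small δ for smaller noise.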
IV. SYSTEM AND ADVERSARIAL MODELS

Figure 1 illustrates the system model considered in this work. We consider a system in which an entity, called Admin, possesses a LAN consisting of n User-s (i.e., computing devices). In addition, Admin introduces a monitoring device to this LAN in order to observe the ARP requests of all User-s. We denote by V_jk the aggregate ARP requests originating from User k, measured and accumulated in the j-th interval. In this work, we assume the time interval to be in units of "a week", since this time scale allows us to use data collected over a long period of time without losing too much privacy budget through composition (Proposition 2). V_j denotes the result of appending the ARP requests of all User-s generated in week j, i.e., V_j = {V_j1, V_j2, ..., V_jn}.
As shown in Figure 1, our system starts by having the monitoring node (periodically) send the aggregate ARP requests V = {V_1, ..., V_t} to Admin (step 1 in Figure 1). Admin is interested in learning, in a private way, whether the LAN as a whole has had any anomalous activities during the last t weeks. Thus, in step 2, he applies a certain algorithm Algo with the goal of hiding sensitive information in the input V, and then releases the output D to an external entity Analyst in step 3. In step 4, Analyst in turn performs an anomaly detection analysis on D and returns the result O back to Admin; O contains entries O_i that allow Admin to identify whether the LAN contained an anomaly in week i. We summarize the notation used throughout the paper in Table I.

Adversarial Model: Analyst is assumed to be honest-but-curious, i.e., he always honestly applies an anomaly detection algorithm on any given input data and returns the correct output to Admin. However, during the process, he may attempt to learn sensitive information about User-s or their relationships, and use it for his own benefit.
Goal & Scope: In this work, we focus on addressing privacy concerns in the aforementioned system, where data from the LAN is exposed to an external party. Hence, we do not consider other LAN settings capable of handling and processing this data locally, e.g., LANs in a large corporation with its own internal anomaly detection tool.
The goal of this work is to design approaches that can be appropriately used as the algorithm Algo in step 2 of Figure 1. In other words, our approaches must allow the process of releasing ARP data with some level of provable privacy guarantees. Besides privacy, the utility of the privatized/released data for anomaly detection is also important. We must ensure that the privatized value does not change by a significant amount, compared to its non-privatized counterpart; otherwise, it will not be useful in detecting anomalies.

V. DP NOTIONS FOR ARP-REQUEST DATA
In this section, we describe four variants of differential privacy notions related to our system model; they are summarized in Table II. To understand privacy (i.e., what concrete information needs to be private and hidden from Analyst) in our target scenario, we first describe the characteristics of ARP-request data. Figure 2 illustrates an example of a LAN that consists of 3 User-s producing 4 ARP requests over a specific time interval. We define the (ARP-request) "degree" of User k as the number of User-s that receive ARP requests from User k. In this example, the degrees of User 1, 2 and 3 are 2, 2 and 0, respectively. Using this model, we can view V_j, the aggregate ARP-request data at week j, as a directed graph, where each User is represented by a node, and an arrow (or directed edge) from node s to node r indicates ARP request(s) generated by User s and sent to User r in the same time interval. The degree of User k is then equivalent to the number of directed edges originating from User k.
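Under this graph view, computing each User's degree from a log of ARP requests is a one-pass aggregation over distinct destinations. A small Python sketch (the (src, dst) pair format and the identifier names are our assumption):

```python
from collections import defaultdict

def arp_degrees(requests):
    """ARP-request degree of each source: the number of distinct
    destinations it sent ARP requests to within one interval.

    `requests` is an iterable of (src, dst) identifier pairs.
    """
    targets = defaultdict(set)
    for src, dst in requests:
        targets[src].add(dst)     # duplicates to the same dst count once
    return {src: len(dsts) for src, dsts in targets.items()}
```

For the Figure 2 example (3 User-s, 4 requests), a log where User 1 and User 2 each contact two distinct peers and User 3 sends nothing yields degrees 2, 2 and 0, matching the text.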
As a directed graph, V_j cannot directly be represented as a database of entries, as required by Definition 1. Thus, the aforementioned notion of differential privacy does not accurately capture the privacy guarantee in our scenario. Fortunately, there is prior work focusing on expressing differential privacy over graph databases. Specifically, the work in [29] presents notions of differential privacy between graphs by first defining two types of neighboring graphs: two graphs are edge-neighboring if they differ by a single edge; likewise, they are node-neighboring if they differ by a single node.
We now proceed to present two notions of privacy in edge-neighboring graphs:

Definition 4 ((ε, δ)-edge-DP). Let G be the set of graphs between User-s. An algorithm M : G → Y satisfies (ε, δ)-edge-differential privacy ((ε, δ)-edge-DP) if, for every pair of edge-neighboring graphs G and G′ and every subset S ⊆ Y,

    Pr[M(G) ∈ S] ≤ e^ε · Pr[M(G′) ∈ S] + δ.

Definition 5 (ε-edge-DP). An algorithm satisfies ε-edge-differential privacy (ε-edge-DP) if and only if it satisfies (ε, 0)-edge-DP.
Since an edge in our system refers to ARP requests between a pair of User-s, Definitions 4 and 5 provide privacy protection for these ARP requests. This means that an algorithm satisfying ε-edge-DP/(ε, δ)-edge-DP is guaranteed to reveal no information about the ARP requests exchanged between any pair of User-s, thus hiding the ARP relationships of all User-s. This could, for example, hide the source of an infection in the LAN, as it is common for malware to utilize ARP as the first step to discover and infect other LAN User-s.
Nonetheless, the guarantee provided by these definitions is not strong enough to protect the privacy of individual User-s. To achieve this stronger guarantee, we adopt the following notions:

Definition 6 ((ε, δ)-node-DP). Let G be the set of graphs between User-s. An algorithm M : G → Y satisfies (ε, δ)-node-differential privacy ((ε, δ)-node-DP) if, for every pair of node-neighboring graphs G and G′ and every subset S ⊆ Y,

    Pr[M(G) ∈ S] ≤ e^ε · Pr[M(G′) ∈ S] + δ.

Definition 7 (ε-node-DP). An algorithm satisfies ε-node-differential privacy (ε-node-DP) if and only if it satisfies (ε, 0)-node-DP.
Indeed, removing a node also removes all of its edges; it follows that ε-node-DP is stronger than ε-edge-DP. In our scenario, an algorithm satisfying ε-node-DP/(ε, δ)-node-DP prevents information leakage about the presence or absence of any individual User.
Remark: Recall that δ represents an upper bound on the probability that an algorithm fails to satisfy the ε-DP notion. For example, an algorithm satisfying (ε, δ)-node-DP has at most probability δ of leaking some information about an individual node in the graph. To make the (ε, δ)-edge/node-DP notions meaningful in practice, one must minimize this failure probability by ensuring that δ is negligible in the number of data points (#p) considered in the DP notion [28]. One way to achieve this is to set

    δ = δ′/#p for some small δ′.

In the (ε, δ)-node-DP notion, #p is the number of nodes, whereas in (ε, δ)-edge-DP, #p corresponds to the number of possible directed edges ≈ (#nodes)^2. Thus, it is easy to see that δ in (ε, δ)-edge-DP must be set smaller than that in (ε, δ)-node-DP in order to attain a negligible failure probability.

VI. RELEASING ARP-REQUEST DATA WITH ε-EDGE/NODE-DP
In this section, we present two approaches, called naïve and histogram-based; the former guarantees ε-edge-DP while the latter is proven to satisfy the ε-node-DP notion. Later in Section VII, we describe variants of these approaches that satisfy the more relaxed (ε, δ)-edge/node-DP notions.

A. Naïve Approach
The naïve approach is described in Algorithm 1. In the rest of this section, we discuss the non-trivial details of this approach and show that it indeed satisfies ε-edge-DP.
Theorem 3. The naïve approach as described in Algorithm 1 is ε-edge-DP.

Algorithm 1: Naïve Approach
Input: aggregate ARP requests V = {V_1, ..., V_t}; privacy budget ε
Output: privatized weekly total degrees D* = {D*_1, ..., D*_t}
1: for j = 1 to t do
2:     D_j ← total degree (number of edges) of V_j
3:     D̃_j ← D_j + Laplace(t/ε)
4:     D*_j ← max(0, D̃_j)
5:     D*_j ← round(D*_j)
6: return D* = {D*_1, ..., D*_t}

Proof. Let V_j ∈ G be the directed graph of ARP requests in week j. Let M be the algorithm that computes the weekly total degree D_j = M(V_j) (Line 2 of Algorithm 1), which also corresponds to the total number of edges in V_j. To preserve ε-edge-DP of each User's ARP requests, one can simply use the Laplace mechanism. To do so, we need an upper bound on the sensitivity ∆_M. Let V′_j be an edge-neighboring graph of V_j in week j and D′_j = M(V′_j). Then ∆_M = |D_j − D′_j| ≤ 1, and the following Laplace mechanism A (Lines 2-3) guarantees ε/t-edge-DP:

    A(V_j) = M(V_j) + Y_j,  where Y_j ∼ Laplace(t/ε).

Algorithm 1 can then be represented as

    A′(V_j) = P(A(V_j)),

where P is a post-processing function (Lines 4-5) that: (i) precludes a negative output by thresholding it at 0, and (ii) rounds a non-negative privatized value to the closest integer in order to prevent the floating-point attack [30]. By Propositions 1 and 2, we conclude that this algorithm is (t · ε/t)-edge-DP, i.e., ε-edge-DP.
To prevent excessive information loss, one needs the Laplace noise scale to be smaller than D_j, i.e., t/ε < E[D_j], or equivalently ε > t/E[D_j]. This can be achieved in realistic settings; e.g., ε = 2 in our experiment (Section VIII), where t = 30 and the lower quartile of D_j is 20.
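A minimal Python sketch of the naïve approach, under our reading of Algorithm 1 (each weekly total gets Laplace(t/ε) noise, followed by threshold-then-round post-processing; the function name is ours):

```python
import math
import random

def naive_release(weekly_totals, eps):
    """Sketch of the naive approach: eps-edge-DP release of weekly total
    degrees D_1..D_t. Each week consumes eps/t of the budget, so the noise
    scale is t/eps (edge sensitivity is 1)."""
    t = len(weekly_totals)
    scale = t / eps
    released = []
    for d in weekly_totals:
        # inverse-CDF sample from Laplace(0, scale)
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        # post-processing: clamp negatives to 0, round to the nearest integer
        released.append(max(0, round(d + noise)))
    return released
```

With a very large budget the noise vanishes and the rounded output equals the input, while small ε yields heavily perturbed (but always non-negative, integral) counts.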
On the other hand, a similar analysis for ε-node-DP results in much larger Laplace noise. Consider two node-neighboring directed graphs V_j, V′_j of n User-s. The total degrees D_j, D′_j defined as above satisfy |D_j − D′_j| ≤ n, which cannot be improved further. Thus, in order to employ the Laplace mechanism, the noise has to be sampled from Laplace(tn/ε). In contrast to the edge-DP regime, the scale of the noise comes with a factor of n. As a result, for a large number of User-s, it is no longer feasible to preserve both privacy and utility at the same time.

B. Histogram-based Approach
As seen in the previous subsection, the naïve approach cannot be used to satisfy ε-node-DP in practice due to its high sensitivity, which leads to excessively large additive noise and, in turn, significantly lowers the utility of the released data. Instead, we propose a second approach utilizing a histogram, which helps reduce the ε-node-DP sensitivity to a reasonable amount.
Our histogram-based approach is shown in Algorithm 2. The rationale behind this approach is to transform the degree data in such a way that its sensitivity is minimized when any User is removed from V_j. Naturally, a histogram is a good fit since it partitions data into disjoint groups/bins, where each bin in this case represents a range of degrees. Thus, this approach first computes the degree of each User in a specific week and uses this degree data to construct a histogram, as shown in Line 2 of Algorithm 2. The histogram representation minimizes the ε-node-DP sensitivity because removing a User affects only one bin, i.e., the one this User belongs to, and only decreases its count by one; the other histogram bins are unaffected by this change. We can then apply the Laplace mechanism to each bin (Line 3), threshold and round the resulting value to the closest integer (Lines 5-6), and finally return this noisy histogram as the output.
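A minimal Python sketch of one weekly release under the histogram-based approach. The three-bin layout (degree 1, degree 2, degree ≥ 3) is borrowed from the experiment in Section VIII, and leaving out degree-0 User-s is our assumption; function and parameter names are ours.

```python
import math
import random

def histogram_release(degrees, eps, t):
    """Sketch of the histogram-based approach for one week's degree data.

    degrees: dict User -> degree for the week.
    The weekly budget is eps/t, so each bin gets Laplace(t/eps) noise.
    """
    counts = [0, 0, 0]            # bins: degree 1, degree 2, degree >= 3
    for d in degrees.values():
        if d == 1:
            counts[0] += 1
        elif d == 2:
            counts[1] += 1
        elif d >= 3:
            counts[2] += 1        # degree-0 Users are not binned (assumption)
    scale = t / eps
    noisy = []
    for c in counts:
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(max(0, round(c + noise)))   # threshold-then-round
    return noisy
```

Note how removing one entry from `degrees` changes exactly one bin count by one, which is the low-sensitivity property the proof of Theorem 4 relies on.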
We now formally show that the histogram-based approach satisfies ε-node-DP.
Theorem 4. The histogram-based approach as described in Algorithm 2 is ε-node-DP.
Proof. Let V_j and V′_j be node-neighboring directed graphs at time j, i.e., V′_j can be obtained from V_j by adding or removing a single node. Let M : G → R^k be the algorithm that computes the histogram of the degrees, i.e., the entries of M(V_j) and M(V′_j) are the counts of nodes by their degrees. Then M(V_j) and M(V′_j) differ by one in the entry corresponding to the degree of the User who exists in only one of V_j and V′_j. Therefore,

    ∆_M = ‖M(V_j) − M(V′_j)‖_1 ≤ 1.

Observe that Lines 2-7 of Algorithm 2 can be written as a randomized algorithm A : G → R^k defined by

    A(V_j) = P(M(V_j) + (Y_1, ..., Y_k)),

where Y_i ∼ Laplace(t/ε) and P corresponds to the threshold-then-round function computed on all bin counts (Lines 5-6). It follows from Theorem 1 and Proposition 1 that A is ε/t-node-DP.
Then, we can define Algorithm 2 as a randomized algorithm A′ as follows:

    A′(V) = (A(V_1), ..., A(V_t)).

By Proposition 2, the histogram-based approach (described in Algorithm 2) is (t · ε/t)-node-DP, i.e., ε-node-DP.

VII. RELEASING ARP-REQUEST DATA WITH (ε, δ)-EDGE/NODE-DP
The approaches in the previous section require adding noise proportional to t, which may not scale well in practice when t is large. We explore an alternative that instead adopts the Gaussian mechanism in order to reduce the additive noise from O(t) to O(√t). We call these variants naïve-δ and histogram-based-δ; they guarantee (ε, δ)-edge-DP and (ε, δ)-node-DP, respectively.

A. Naïve-δ Approach
In contrast to the naïve approach (Algorithm 1), which gives a strong privacy guarantee at the cost of a considerably large amount of noise, we develop here another approach that adds less noise but provides the weaker (ε, δ)-edge-DP guarantee. The algorithm is described in Algorithm 3. Similar to Algorithm 1, we round the noisy outputs to the nearest integers to protect the data from floating-point attacks. In the rest of this section, we discuss the non-trivial details of this approach and show that it indeed satisfies (ε, δ)-edge-DP.
Theorem 5. The naïve-δ approach as described in Algorithm 3 is (ε, δ)-edge-DP.

Proof. Let V_j ∈ G be the directed graph of ARP requests in week j. Let M be the algorithm that computes the weekly total degree D_j = M(V_j) (Line 3 of Algorithm 3). As in the proof of Theorem 3, the edge-sensitivity ∆_M satisfies ∆_M ≤ 1. Observe that Lines 3-6 of Algorithm 3 can be written as a randomized algorithm A : G → R defined by

    A(V_j) = P(M(V_j) + Y_j),

where Y_j ∼ N(0, t/(2ρ)) and P corresponds to the threshold-then-round function (Lines 5-6). It follows from Theorem 2 and Proposition 1 that A is ρ/t-zCDP.
Then, we can define Algorithm 3 as a randomized algorithm A′ as follows:

    A′(V) = (A(V_1), ..., A(V_t)).

By Proposition 2, Algorithm 3 is (t · ρ/t)-zCDP, i.e., ρ-zCDP. Using Lemma 1 and recalling the definition of ρ in Line 1 of Algorithm 3, we conclude that this algorithm is also (ε, δ)-edge-DP.
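A minimal Python sketch of the naïve-δ approach, under our reading of Algorithm 3: ρ is derived from (ε, δ) via the zCDP relation ε = ρ + 2√(ρ log(1/δ)), each week receives Gaussian noise N(0, t/(2ρ)), and the result is thresholded and rounded. Function names are ours.

```python
import math
import random

def naive_delta_release(weekly_totals, eps, delta):
    """Sketch of the naive-delta approach: (eps, delta)-edge-DP release of
    weekly total degrees via the Gaussian mechanism (edge sensitivity 1)."""
    t = len(weekly_totals)
    # solve eps = rho + 2*sqrt(rho * log(1/delta)) for rho
    log_inv_delta = math.log(1.0 / delta)
    rho = (math.sqrt(log_inv_delta + eps) - math.sqrt(log_inv_delta)) ** 2
    sigma = math.sqrt(t / (2.0 * rho))   # per-week std: rho/t-zCDP each
    return [max(0, round(d + random.gauss(0.0, sigma)))
            for d in weekly_totals]
```

Compared to `Laplace(t/eps)` noise in the naïve approach, the per-week standard deviation here grows only as √t, which is the utility gain the section is after.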

B. Histogram-based-δ Approach
We aim to construct an (ε, δ)-node-DP algorithm with less noise compared to the ε-node-DP algorithm in Section VI-B. We still rely on a histogram-based approach, as it has small sensitivity upon adding/removing a node. Our histogram-based-δ approach is described in Algorithm 4.
Theorem 6. The histogram-based-δ approach as described in Algorithm 4 is (ε, δ)-node-DP.

Proof. Let V_j and V′_j be node-neighboring directed graphs at time j, i.e., V′_j can be obtained from V_j by adding or removing a single node. Let M : G → R^k be the algorithm that computes the histogram of the degrees, i.e., the entries of M(V_j) and M(V′_j) are the counts of nodes by their degrees. As in the proof of Theorem 4, the node-sensitivity ∆_M satisfies ∆_M ≤ 1. Looking at Algorithm 4, we observe that Lines 3-7 can be written as a randomized algorithm A : G → R^k defined by

    A(V_j) = P(M(V_j) + (Y_1, ..., Y_k)),

where Y_i ∼ N(0, t/(2ρ)) and P corresponds to the threshold-then-round function computed on all bin counts (Lines 6-7). It follows from Theorem 2 and Proposition 1 that A is ρ/t-zCDP.
Then, we can define Algorithm 4 as a randomized algorithm A′ as follows:

    A′(V) = (A(V_1), ..., A(V_t)).

By Proposition 2, the histogram-based-δ approach (described in Algorithm 4) is (t · ρ/t)-zCDP, i.e., ρ-zCDP. From the definition of ρ in Line 1 of Algorithm 4, we conclude using Lemma 1 that this algorithm is also (ε, δ)-node-DP.

VIII. EVALUATION
In this section, we evaluate our approaches by deploying them as part of a large-scale research project and reporting their utility on a real-world dataset extracted from this project. Independent of our work, the first phase of this project involves capturing, collecting and analyzing LAN data in Southeast Asian countries. To achieve this task, a small monitoring device, implemented atop a Raspberry Pi 3B (Figure 3), is introduced and placed into several LANs across the ASEAN region. This monitoring device observes and captures the network traffic flowing within a LAN and periodically outputs the captured data to our server, where the data is analyzed and a model of ASEAN malware is eventually created.

A. Real-world Deployment
Deployment. Our work plays an important role in the second phase of this research project. It allows us to privately share aggregate ARP data collected in the previous phase with other project members as well as with the public domain. Our approaches enable a release mechanism for ARP-request data that still retains the utility of LAN anomaly detection. To assess this utility, we evaluated our approaches on a subset of the data captured and extracted from this research project.
The extracted dataset contains all ARP-request data observed and collected from 3 real-world LANs over a 30-week period. These LANs are located in: (1) The University of Tokyo, Japan (thus, its dataset is labeled as JPN), (2) Prince of Songkla University, Phuket Campus, Thailand (HKT) and (3) Prince of Songkla University, Hatyai Campus, Thailand (HDY). Details about these monitored LANs can be found in Table III.
Parameter Selection. As we collected ARP requests over a 30-week period, t = 30. The naïve approach involves no other parameters. Meanwhile, the histogram-based approach has an additional set of parameters: the number of bins and the width of each bin. Intuitively, a larger number of bins leads to smaller bin counts; in that case, the noise injected by our approach would become too large, severely decreasing the utility of the released data. To avoid this problem, we select a relatively small number of histogram bins: 3. Specifically, the first two bins correspond to the numbers of User-s whose degrees are 1 and 2, respectively; the third bin contains the number of User-s with degree ≥ 3.
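The 3-bin scheme above can be sketched as follows (the function name is ours, for illustration only):

```python
def degree_histogram(degrees):
    """Bin per-User ARP degrees into the 3 bins used in the evaluation:
    degree == 1, degree == 2, and degree >= 3."""
    bins = [0, 0, 0]
    for d in degrees:
        if d == 1:
            bins[0] += 1
        elif d == 2:
            bins[1] += 1
        elif d >= 3:
            bins[2] += 1
    return bins
```

For example, the degree list [1, 1, 2, 3, 7] yields the histogram [2, 1, 2].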
Finally, the approaches in Section VII involve another parameter, δ. Recall from the remark in Section V that δ must be negligible with respect to the number of data points (#p); in other words, δ = δ'/#p for some small δ'. In our target system, #p corresponds to n and n^2 for the node-DP and edge-DP notions, respectively; see Table III for the number of User-s (n) in each monitored LAN. Unless stated otherwise, we use δ' = 0.01 for all experiments. Nonetheless, the impact of different δ' values on the utility is also assessed in the next subsection.
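The computation of δ from δ' is a one-liner; the sketch below (function and argument names are ours, purely illustrative) makes the node-DP vs. edge-DP distinction explicit:

```python
def delta_from_prime(delta_prime, n, notion="node"):
    """delta = delta'/#p, where #p = n under node-DP
    (data points are users) and #p = n**2 under edge-DP
    (data points are user pairs)."""
    num_points = n if notion == "node" else n ** 2
    return delta_prime / num_points
```

With δ' = 0.01 and n = 100 User-s, this gives δ = 10^-4 under node-DP and δ = 10^-6 under edge-DP.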

B. Utility Assessment: RMSE
In the context of differential privacy, one common utility metric is the error between the released privatized values z* and the non-privatized aggregates z. We adopt a similar approach and select the root-mean-square error (RMSE) as our first evaluation metric: RMSE = sqrt( (1/t) Σ_{j=1}^{t} (z*_j − z_j)^2 ).

Impact of ε. Recall that ε refers to the privacy budget in the DP notion; a lower value of ε implies stronger privacy, while possibly sacrificing utility. Figure 4 shows the impact of ε on the utility of the proposed approaches. Unsurprisingly, we achieve lower errors, and thus better utility, with a higher ε. For all 3 monitored LANs, ε = 5 seems to be a pragmatic choice in order to maintain a low error (< 10) for all approaches.
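The RMSE metric reduces to a few lines of NumPy (the function name is ours):

```python
import numpy as np

def rmse(z_star, z):
    """Root-mean-square error between the privatized release z* and the
    non-privatized aggregates z over the t time intervals."""
    z_star, z = np.asarray(z_star, dtype=float), np.asarray(z, dtype=float)
    return float(np.sqrt(np.mean((z_star - z) ** 2)))
```

A perfect release has RMSE 0; e.g., rmse([0, 0], [3, 4]) is sqrt((9 + 16)/2) = sqrt(12.5).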
Next, we show how much utility can be improved by using the approaches in Section VII instead of their counterparts in Section VI. The result, illustrated in Figure 5, suggests that both the naïve-δ and histogram-based-δ approaches enjoy higher utility (i.e., a utility gain) when ε ≤ 4. However, as ε gets larger, this utility gain shrinks; in fact, the naïve-δ approach incurs a utility loss when ε ≥ 8 for all monitored LANs. This result suggests using the approaches in Section VII only when one needs stronger privacy, i.e., small ε. Figure 5 also indicates that the histogram-based-δ approach significantly outperforms the naïve-δ approach in terms of utility gain. For ε ≤ 4, the histogram-based-δ approach provides ≥ 28% utility gain, while a smaller gain (≤ 20%) can be realized by the naïve-δ approach. This is expected because the histogram-based-δ approach introduces a smaller value of δ (see the remark in Section V), making the additive noise smaller and thus resulting in a higher utility gain.
In addition, n also has a direct impact on δ and hence on the overall utility. As seen in Figure 5, among all monitored LANs, HDY has the highest number of User-s and therefore suffers the lowest utility gain.
Impact of δ'. We now assess the impact of δ' on the utility of our approaches. Figure 6 shows the RMSE of the naïve-δ and histogram-based-δ approaches for different values of δ'. As expected, increasing δ' results in a decrease in RMSE and thus improves the utility of our approaches. This decrease is logarithmic as a function of δ'.
The utility gain of the naïve-δ and histogram-based-δ approaches with respect to their original counterparts is illustrated in Figure 7. Our approaches benefit from a higher utility gain when δ' is larger. For most δ' values, the histogram-based-δ approach provides a positive utility gain over the histogram-based approach. Meanwhile, a utility gain can be achieved from the naïve-δ approach when δ' ≥ 10^-3.
This experimental result suggests that both the naïve-δ and histogram-based-δ approaches still provide a utility advantage over their original counterparts even for δ' smaller than 10^-2 (down to 10^-3 for the naïve-δ approach and 10^-6 for the histogram-based-δ approach). In practice, one may opt for a smaller δ' if a stronger privacy guarantee is needed.

C. Utility Assessment: Anomaly Detection Accuracy
Anomaly detection algorithm. In addition to low errors, it is also essential that outputs produced by our approaches can still be useful in identifying anomalous activities in LAN. Hence, we further evaluate utility of our approaches by assessing them via a LAN anomaly detector. In this experiment, we consider our approaches to preserve the utility of anomaly detection if the anomaly detector classifies the privatized data the same way as the original (non-privatized) data.
For the anomaly detector, we choose an approach based on the exponentially weighted moving average and variance [31] proposed by Matsufuji et al. [6], since it is tailored specifically for detecting LAN anomalies based on ARP data, which is also the focus of this work. All parameter values are selected based on the recommendations in [6]. It is worth noting that the anomaly detector in [6] only supports univariate time-series input. However, the histogram-based approach and its variant produce a multivariate time-series output (i.e., a time series of histograms), which hence cannot be used directly as input to the anomaly detector. To address this issue, we perform a simple transformation that converts two consecutive histograms into a single variable using the L1 distance function; the result of this transformation is then given as input to the anomaly detector. More formally, denoting by H_j the histogram released at interval j, the transformation is defined as x_j = ||H_j − H_{j−1}||_1.

Metrics. In this experiment, we evaluate the utility of our approaches using two metrics: true positive rate (TPR) and F1 score. Let TP denote the number of intervals flagged as anomalous in both the original and the privatized data, FP the number flagged only in the privatized data, and FN the number flagged only in the original data. Based on these definitions, the TPR and F1 metrics can be formulated as TPR = TP / (TP + FN) and F1 = 2TP / (2TP + FP + FN). A high value of TPR implies that a high percentage of anomalies detected in the original data is also captured as an anomaly in the privatized data. On the other hand, a high value of F1 implies relatively small values of FP and FN compared to TP.
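The L1 transformation and the two metrics can be sketched as follows. The function names are ours, and the anomaly flags are assumed to be precomputed per interval by the detector of [6]:

```python
import numpy as np

def l1_series(histograms):
    """Collapse a time series of histograms into a univariate series:
    x_j = || H_j - H_{j-1} ||_1 for each pair of consecutive histograms."""
    H = np.asarray(histograms, dtype=float)
    return np.abs(np.diff(H, axis=0)).sum(axis=1)

def tpr_f1(orig_flags, priv_flags):
    """TPR and F1, comparing per-interval anomaly flags on the original
    data (treated as ground truth) against flags on the privatized data."""
    o = np.asarray(orig_flags, dtype=bool)
    p = np.asarray(priv_flags, dtype=bool)
    tp = int((o & p).sum())   # flagged in both
    fp = int((~o & p).sum())  # flagged only in privatized data
    fn = int((o & ~p).sum())  # flagged only in original data
    tpr = tp / (tp + fn) if tp + fn else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return tpr, f1
```

For instance, the histograms [1, 2, 3] and [2, 2, 1] collapse to the single value |1| + |0| + |−2| = 3.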
Results. Figures 8 and 9 show the utility of our approaches evaluated using the TPR and F1 metrics, respectively. First, we can see that ε does not affect the utility of the naïve and naïve-δ approaches, as both approaches still provide almost perfect utility scores in all monitored LANs.
On the other hand, the histogram-based and histogram-based-δ approaches yield low utility for small ε. The utility scores then improve as ε increases. For HKT, both approaches achieve a reasonable score of > 0.75 with ε = 5. Meanwhile, ε must be set to 6 in order to achieve the same utility score in HDY. JPN requires the highest ε (= 12) in order for the histogram-based-δ approach to reach a 75% TPR.
Lastly, the results also confirm that the histogram-based-δ approach significantly outperforms the naïve-δ approach in terms of utility. Thus, we recommend deploying the histogram-based-δ approach over the histogram-based approach when one needs to publish ARP-request data with user-level privacy protection (i.e., corresponding to the node-DP notion); if edge-DP is sufficient, the naïve approach is a more reasonable choice than the naïve-δ approach, as the former provides a stronger privacy guarantee while both achieve similar utility.
Comparison with RMSE. In most cases, the utility results from the TPR and F1 metrics are consistent with the previous results measured using RMSE in Section VIII-B. That is, a higher ε leads to higher utility, i.e., lower RMSE and higher TPR and F1. On the other hand, an extremely low value of ε (e.g., ε = 1) renders the output data useless, as it can no longer reveal anomalies due to its low TPR/F1. There is, however, one exception: the naïve and naïve-δ approaches can surprisingly still attain high TPR and F1 utility despite low ε. This indicates that these approaches are more robust to additive noise than the others.

IX. DISCUSSION
ARP Fields. Our approaches take as input ARP-degree data, which makes use of only 5 fields in ARP packets: SHA, SPA, THA, TPA, and OPER. In this work, we choose to discard the rest of the ARP fields (i.e., Hardware Type/Length (HTYPE/HLEN) and Protocol Type/Length (PTYPE/PLEN)) from our analysis. This is because, in practice, these discarded fields usually have fixed values that contain neither sensitive information nor anything meaningful to our approaches. For instance, since ARP is only applicable to IPv4, the PLEN field is always set to 4, indicating the size of an IPv4 address; similarly, HTYPE usually contains the value 1, representing the ubiquitous Ethernet hardware type. As these fields are generally constant across all ARP packets, their absence affects neither the privacy nor the utility of our approaches.

DP Mechanisms. In this work, we focus on releasing ARP-degrees in differentially private manners. Publishing degrees has sensitivity 1 (removing a user's ARP request alters the total ARP-degrees by 1), which is small compared to the number of ARP requests sent by all users. Thus, we choose the noise-perturbation methods, namely the Laplace and Gaussian mechanisms, to privatize the ARP-degrees. Another well-known differential-privacy mechanism is randomized response, whose standard deviation is O(√N/ε) [32]; this is worse than the standard deviation of the Laplace and Gaussian mechanisms, which is O(1/ε). There are also differential-privacy mechanisms based on data synthesis [33]. However, as anomaly detection algorithms look for "spiking" behaviors at a particular time interval, these data-synthesis approaches, which try to replicate the distribution of the data as a whole, will not retain the spikes as well as the perturbation mechanisms do.
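As a point of reference for the O(1/ε) claim, a generic Laplace mechanism for a sensitivity-1 count query looks as follows (the function name and defaults are ours; this is a sketch, not the exact mechanism parameterization used in our algorithms):

```python
import numpy as np

def laplace_mech(value, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism for a sensitivity-1 count query: noise drawn
    from Laplace(0, sensitivity/epsilon), whose standard deviation is
    sqrt(2) * sensitivity / epsilon = O(1/epsilon), independent of the
    number of users N."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(0.0, sensitivity / epsilon)
```

In contrast, aggregating N randomized responses accumulates per-user noise, giving the O(√N/ε) standard deviation noted above.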
Time Interval. In our evaluation, we consider the time interval for ARP-data collection to be one week. Albeit a bit long, this design choice is necessary as it allows us to incorporate all data (which spans 30 weeks) into our analysis with a higher utility rate and without spending too much privacy budget.
To illustrate this point, we conducted a new experiment on the JPN network where we aggregated and processed ARP data over a shorter period, i.e., every day instead of every week. Compared to the original experiment, we observed a drastic decrease in the utility rate for all our approaches. For example, for the naïve approach with ε = 4, the RMSE increased by a factor of 6 (from 10 to 60), while the TPR and F1 scores dropped substantially from 1.0 to ≈ 0.6.
Utility Metrics. We evaluate our approaches using two utility metrics: RMSE and Anomaly Detection Accuracy. We select the former because it is one of the most common metrics for measuring utility from a DP mechanism [34]. Intuitively, it tells us "how far apart the privatized data is from the original data". Since an anomalous activity appears as an unusual value in the data, a privacy-preserving mechanism with small RMSE would not perturb that value by much, allowing such activity to be detected from the privatized data. Besides RMSE, there are other similar metrics with the same purpose, e.g., Mean Absolute Error. Even though we do not include them in this work, we expect the results from such metrics to be in line with our current results.
Nonetheless, the RMSE does not directly indicate the "true" utility in this work since our end goal is to detect LAN anomalies, not minimize error rates. To this end, we choose to include Anomaly Detection Accuracy as our second metric. This metric realistically gives us an idea of how effective our approaches are when performing on a real-world LAN anomaly detector [6].
Finally, we do not consider other utility metrics that target different types of data publication. For example, L p -Error [35] and Hausdorff Distance [36] are geared towards measuring utility in location privacy protection. Also, informationtheoretic metrics [37] require the input to be generated from a probability distribution, which is not the case in this work.

X. CONCLUSION
This paper presents four approaches to privately releasing ARP-request data that can later be used for identifying anomalies in LAN. We prove that the naïve approach satisfies edge-differential privacy, and thus provides privacy protection at the user-relationship level. On the other hand, the histogram-based approach provides node-differential privacy, thus leaking no information about the presence of any individual user. We also propose two alternatives, named naïve-δ and histogram-based-δ, which require even smaller additive noise than their original counterparts in exchange for a small probability that the privacy guarantee will not hold. Feasibility of our approaches is demonstrated via real-world experiments, in which we show that, with a reasonable privacy-budget value, our approaches yield low errors (< 10 in RMSE) and preserve more than 75% utility of detecting LAN anomalies.