Thwarting Passive Privacy Attacks in Collaborative Filtering
Rui Chen (HKBU, Hong Kong), Min Xie (UBC, Canada), Laks V.S. Lakshmanan (UBC, Canada)

Introduction

Recently, collaborative filtering has been increasingly deployed in a wide range of applications. As a standard practice, many collaborative filtering systems (e.g., eBay, Amazon, and Netflix) release related-item lists (RILs) as a means of engaging users. Releasing RILs, however, brings substantial privacy risks under a fairly simple attack model known as a passive privacy attack. In a passive privacy attack, an adversary possesses background knowledge in the form of a set of items rated by a target user, and seeks to infer from the public RIL releases published by the recommender system whether some other item, called the target item, has been rated/bought by that user.

Fig. 1. A sample user-item rating matrix and its public RILs

Example. Consider a recommender system with the user-item rating matrix in Fig. 1. Suppose that at time T1 an attacker knows that Alice (user 5) has bought items i2, i3, i7, and i8, and intends to learn whether Alice has bought a sensitive item i6. The adversary then monitors the temporal changes of the public RILs of i2, i3, i7, and i8. Let the new ratings during (T1, T2] be the shaded ones. At T2, by comparing the RILs with those at T1, the attacker observes that i6 appears afresh or moves up in the RILs of i2, i3, i7, and i8, and consequently infers that Alice has bought i6 (a runnable sketch of this inference appears after the algorithm steps below).

Attack Model and Privacy Model

Attack model. We propose the concept of an attack window to model a real-world adversary: an adversary performs passive privacy attacks by comparing any two RIL releases within his attack window.

Fig. 2. An illustration of attack windows

Privacy model. We propose a novel inference-proof privacy notion, called δ-bound, tailored to passive privacy attacks in collaborative filtering. Let Tran(u) denote the transaction of user u, i.e., the set of items bought by u.

Definition (δ-bound). Let B be the background knowledge on user u in the form of a subset of items drawn from Tran(u), i.e., B ⊂ Tran(u). A recommender system satisfies δ-bound with respect to a given attack window W if, by comparing any two RIL releases R1 and R2 within W, Pr[i ∈ Tran(u) | B, R1, R2] ≤ δ, where i ∈ (I − B) is any item that either appears afresh or moves up in R2, and 0 ≤ δ ≤ 1 is the given privacy requirement.

Anonymization Algorithm

We anonymize RILs directly because, in real-life recommender systems, an adversary does not have access to the underlying rating matrix; this is critical to both data utility and scalability. Our solution employs two anonymization mechanisms: suppression, a popular mechanism in privacy-preserving data publishing, and permutation, a novel mechanism tailored to our problem, which permutes an item that has moved up in an RIL back to a position equal to or lower than its original position.

Identify potential privacy threats. For each item i, identify the set of items whose successive RILs at times T1 and T2 are distinguished by i. Label them with either suppress or permute, and record the permute position.

Determine anonymization locations. Identify the itemsets that can be used to infer some target item with probability > δ. Compute a set of anonymization locations by modeling the process as a weighted minimum hitting set (WMHS) problem, so as to minimize the resultant utility loss (a greedy sketch follows the steps below).

Perform anonymization operations. First suppress all items labeled suppress, then permute items labeled permute without generating new privacy threats. If no permutation that respects the privacy requirement can be found, suppression is used instead (a sketch of permutation follows below).

Extend to multiple releases. Secure a new RIL release with respect to all previous |W| − 1 RIL releases, where |W| is the size of the attack window (a sliding-window sketch follows below).
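To make the example above concrete, here is a minimal sketch of the adversary's inference over two RIL snapshots. The snapshot data and all helper names are hypothetical illustrations, not taken from the paper:

```python
# Minimal sketch of a passive privacy attack over two RIL snapshots.
# RILs are modeled as dicts mapping an item to its ranked related-item list.
# All item names and the snapshot data below are hypothetical.

def rank_of(ril, item):
    """Return the 0-based rank of `item` in `ril`, or None if absent."""
    return ril.index(item) if item in ril else None

def appears_or_moves_up(ril_t1, ril_t2, item):
    """True if `item` appears afresh or moves up between the two releases."""
    r1, r2 = rank_of(ril_t1, item), rank_of(ril_t2, item)
    if r2 is None:
        return False
    return r1 is None or r2 < r1  # lower index = higher position

def passive_attack(rils_t1, rils_t2, background, target):
    """Return the background items in whose RILs the target item appears
    afresh or moves up; the attacker treats a large fraction of hits as
    evidence that the user bought the target item."""
    return [b for b in background
            if appears_or_moves_up(rils_t1[b], rils_t2[b], target)]

# Hypothetical snapshots mimicking the Alice example (not the paper's data):
rils_t1 = {"i2": ["i1", "i4"], "i3": ["i1", "i6"],
           "i7": ["i4", "i1"], "i8": ["i1", "i4"]}
rils_t2 = {"i2": ["i6", "i1"], "i3": ["i6", "i1"],
           "i7": ["i6", "i4"], "i8": ["i6", "i1"]}

print(passive_attack(rils_t1, rils_t2, ["i2", "i3", "i7", "i8"], "i6"))
# -> ['i2', 'i3', 'i7', 'i8']: i6 appears or moves up in all four RILs.
```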
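The paper casts the choice of anonymization locations as a weighted minimum hitting set (WMHS) problem. As an illustration of that formulation only, below is a standard textbook greedy approximation for WMHS, not the paper's exact algorithm; the threat itemsets and utility-loss weights are hypothetical:

```python
# Generic greedy approximation for weighted minimum hitting set (WMHS).
# sets: collection of itemsets that can breach the privacy bound delta;
# weights: utility loss incurred by anonymizing each candidate location.

def greedy_wmhs(sets, weights):
    uncovered = [set(s) for s in sets]
    chosen = set()
    while uncovered:
        # Pick the element hitting the most uncovered sets per unit weight.
        candidates = set().union(*uncovered)
        best = max(candidates,
                   key=lambda e: sum(e in s for s in uncovered) / weights[e])
        chosen.add(best)
        uncovered = [s for s in uncovered if best not in s]
    return chosen

# Hypothetical threat itemsets and utility-loss weights:
threats = [{"i2", "i3"}, {"i3", "i7"}, {"i7", "i8"}]
weights = {"i2": 3.0, "i3": 1.0, "i7": 1.5, "i8": 2.0}
print(greedy_wmhs(threats, weights))  # e.g., {'i3', 'i7'}
```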
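Permutation moves an item that has moved up in an RIL back to a position equal to or lower than its original one. A minimal sketch under that description; the function name is hypothetical, and the actual algorithm additionally checks that the move introduces no new privacy threats:

```python
# Minimal sketch of the permutation mechanism: move `item`, which has
# moved up between releases, back to a position equal to or lower than
# its original position. Hypothetical names; the paper's algorithm also
# verifies that no new privacy threat is introduced by the move.

def permute_down(ril, item, original_pos):
    """Return a copy of `ril` with `item` moved back to `original_pos`
    (or the list's end if the RIL has shrunk); no-op if item is absent."""
    if item not in ril:
        return list(ril)
    out = [x for x in ril if x != item]
    out.insert(min(original_pos, len(out)), item)
    return out

ril_t2 = ["i6", "i1", "i4"]           # i6 moved up to the top at T2
print(permute_down(ril_t2, "i6", 2))  # -> ['i1', 'i4', 'i6']
```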
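Finally, the multiple-release extension can be pictured as a sliding-window check: a new release must satisfy δ-bound against every release still inside the attack window. A sketch, with all names hypothetical and the single-pair check abstracted away:

```python
# Sketch of the multiple-release extension: before publishing a new RIL
# release, check it against each of the previous |W| - 1 releases in the
# attack window. `is_delta_bounded` and `anonymize` stand in for the
# single-pair check and the anonymization pass; all names are hypothetical.

from collections import deque

def secure_release(history, new_release, is_delta_bounded, anonymize):
    """Anonymize `new_release` until it satisfies delta-bound against
    every release still inside the attack window, then publish it."""
    while not all(is_delta_bounded(past, new_release) for past in history):
        new_release = anonymize(new_release, history)
    history.append(new_release)  # deque(maxlen=|W| - 1) slides the window
    return new_release

# Usage: history = deque(maxlen=window_size - 1); the deque automatically
# drops releases that have fallen out of the attack window.
```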
Experimental Evaluation

Fig. 3. Utility results on: MovieLens (a)–(d); Flixster (e)–(h)
Fig. 4. Attack success probability results on: MovieLens (a)–(d); Flixster (e)–(h)
Fig. 5. Efficiency results on: MovieLens (a)–(d); Flixster (e)–(h)

Conclusion

Our work is the first remedy to passive privacy attacks in collaborative filtering. We proposed the δ-bound model for thwarting passive privacy attacks and developed anonymization algorithms that achieve δ-bound by means of a novel anonymization mechanism called permutation.

