
1 CMSC 414 Computer and Network Security Lecture 22 Jonathan Katz

2 Rest of the semester…
• Today: database privacy
• Next 2-3 lectures: PL security, buffer overflows, malicious software
  – Tuesday’s class will cover material relevant for HW4
  – Thursday’s class will be a guest lecture; the material will still be covered on the final
• Last 2-3 lectures: topics in network security
  – Network security in practice

3 Database security

4 Overview
• Want to be able to discern statistical trends without violating privacy
• Questions
  – How to obtain the raw data in the first place?
  – How to maintain privacy of the data while still making it useful for data mining?
• Serious real-world problem
  – Medical databases are protected by federal laws
  – Data mining on credit card transactions, web browsing
  – Netflix dataset

5 Obtaining sensitive data
• How do you get people to give honest answers to sensitive questions?
  – Shall we try it?

6 Randomized response
• Respondent flips a coin/rolls a die…
• …and answers the question incorrectly with some known probability q
• (The result of the die toss is not known to the interviewer)
• Why does this preserve privacy?
  – What is Pr[yes | answer yes]?
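A minimal sketch of this mechanism, assuming the simplest variant in which the respondent flips their true answer with probability q (the function name and default value are illustrative, not from the lecture):

```python
import random

def randomized_response(true_answer: bool, q: float = 0.25) -> bool:
    """Report the true answer, but flip it with (known) probability q."""
    flip = random.random() < q
    return (not true_answer) if flip else true_answer
```

Because any individual “yes” may be a forced lie, a reported answer never proves what the respondent’s true answer was.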

7 Analysis of randomized response
• Generating an estimate:
  – Say the fraction of “yes” in the population is p
  – Pr[“yes”] = p(1-q) + (1-p)q
  – Solve for p given q and Pr[“yes”]
  – E.g., q = 1/4 gives: p = 2·Pr[“yes”] – 0.5
• Shall we try it…?
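A hedged sketch of the estimator described above; it simply inverts Pr[“yes”] = p(1-q) + (1-p)q and assumes q ≠ 1/2:

```python
def estimate_p(observed_yes_fraction: float, q: float) -> float:
    """Invert Pr[yes] = p(1-q) + (1-p)q to estimate the true fraction p."""
    return (observed_yes_fraction - q) / (1.0 - 2.0 * q)

# With q = 1/4 this reduces to p = 2*Pr[yes] - 0.5, matching the slide.
```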

8 Randomized response
• Other variants that provide better estimators
• Extensions to provide estimators of other statistics
  – E.g., correlations between responses

9 “Privacy-preserving data mining”

10 The setup…
• A user (or group of users) has authorized access to certain data in a database, but not to all data
  – E.g., user is allowed to learn certain entries only
  – E.g., user is allowed to learn aggregate data but not individual data (e.g., allowed to learn the average salary but not individual salaries)
  – E.g., allowed to learn trends (i.e., data mining) but not individual data
• Note: we are assuming that authentication/access control is taken care of already…

11 Two models
• Non-interactive data disclosure
  – User given access to “all data” (possibly after the data is anonymized in some way)
• Interactive mechanisms
  – User given the ability to query the database
  – We will mostly focus on this model

12 The problem
• A user may be able to learn unauthorized information via inference
  – Combining multiple pieces of authorized data
  – Combining authorized data with “out-of-band” knowledge
    · 87% of people identified by ZIP code + gender + date of birth
    · Given ZIP+DOB, may be able to infer gender from other entries
• This is a (potentially) serious real-world problem
  – See the article by Sweeney for many examples

13 Example
• Say not allowed to learn any individual’s salary

  Name     UID  Years of service  Salary
  Alice    001  12                $65,000
  Bob      010  1                 $40,000
  Charlie  011  20                $70,000
  Debbie   100  30                $80,000
  Evan     101  4                 $50,000
  Frank    110  8                 $58,000

• Query: “Give me Alice’s salary” → Request denied!

14 Example
(same employee table as slide 13)
• Query: “Give me the list of all names” → Alice, Bob, Charlie, Debbie, Evan, Frank
• Query: “Give me the list of all salaries” → $65,000, $40,000, $70,000, $80,000, $50,000, $58,000
• If the salaries come back in the same row order as the names, the two answers can be lined up to recover each individual’s salary
• Solution: return data in an order that is independent of the table (e.g., random or sorted): $40,000, $50,000, $58,000, $65,000, $70,000, $80,000
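A minimal sketch of the “table-independent order” fix, assuming a simple list-of-dicts table (the layout and function name are illustrative):

```python
import random

def project_column(rows, column, randomize=False):
    """Return one column with its row order destroyed (sorted or shuffled)."""
    values = [row[column] for row in rows]
    if randomize:
        random.shuffle(values)  # random order, different every time
    else:
        values.sort()           # deterministic, but independent of row order
    return values
```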

15 Example
(same employee table as slide 13)
• Query: “Give me all names and UIDs” → (Alice, 001), (Bob, 010), (Charlie, 011), (Debbie, 100), (Evan, 101), (Frank, 110)
• Query: “Give me all UIDs and salaries” → (001, $65,000), (010, $40,000), (011, $70,000), (100, $80,000), (101, $50,000), (110, $58,000)
• Neither query alone links a name to a salary, but joining the two answers on UID does
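A small illustration, using the values from the slide’s table, of why these two individually harmless projections are jointly harmful: joining them on the shared UID column re-links every name to a salary.

```python
names_uids = [("Alice", "001"), ("Bob", "010"), ("Charlie", "011"),
              ("Debbie", "100"), ("Evan", "101"), ("Frank", "110")]
uids_salaries = [("001", 65000), ("010", 40000), ("011", 70000),
                 ("100", 80000), ("101", 50000), ("110", 58000)]

salary_by_uid = dict(uids_salaries)
linked = {name: salary_by_uid[uid] for name, uid in names_uids}
print(linked)  # {'Alice': 65000, 'Bob': 40000, ..., 'Frank': 58000}
```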

16 Example
(same employee table as slide 13)
• Query: “Give me all names with their years of service” → (Alice, 12), (Bob, 1), (Charlie, 20), (Debbie, 30), (Evan, 4), (Frank, 8)
• Query: “Give me the list of all salaries” (returned sorted) → $40,000, $50,000, $58,000, $65,000, $70,000, $80,000
• External knowledge: more years ⇒ higher pay, so ranking employees by years of service lines them up with the sorted salaries

17 Some solutions
• In general, an unsolved (unsolvable?) problem
• Some techniques to mitigate the problem
  – Inference control during database design
    · E.g., recognize dependencies between columns
    · Split data across several databases (next slide)
  – Inference detection at query time
    · Store the set of all queries asked by a particular user, and look for disallowed inferences before answering any query
    · Note: this will not prevent collusion among multiple users
    · Can also store the set of all queries asked by anyone, and look for disallowed inferences there
• As always, there is a tradeoff between security and functionality

18 Using several databases
• DB1 stores (name, address), accessible to all
• DB2 stores (UID, salary), accessible to all
• DB3 stores (name, UID), accessible to admin
• What if I want to add data for “start date” (and make it accessible to all)?
  – Adding it to DB2 can be problematic (why?)
  – Adding it to DB1 seems OK (can we prove this?)

19 Statistical databases
• A database that only provides data of a statistical nature (average, standard deviation, etc.)
  – Pure statistical database: only stores statistical data
  – Statistical access to an ordinary database: stores all data but only answers statistical queries
• We focus on the second type
  – The aim is to prevent inference about any particular piece of information
  – Two general approaches: query restriction and data/output perturbation

20 Query restriction
• Basic idea: only allow queries that involve more than some threshold t of users
• Example: say we want to hide individual salaries
  – Only allow queries about the average salary of a set S of people, where |S| > 20 (say)
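A minimal sketch of threshold-based query restriction, assuming a simple name-to-salary mapping (the data layout and threshold value are illustrative):

```python
def average_salary(salaries_by_name: dict, query_set: set, t: int = 20):
    """Answer an average-salary query only if the query set exceeds the threshold t."""
    if len(query_set) <= t:
        return None  # request denied: too few individuals involved
    values = [salaries_by_name[name] for name in query_set]
    return sum(values) / len(values)
```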

21 Query restriction
• Query restriction itself may reveal information
• Example: say averages are released only if there are at least 2 data points being averaged
  – Request the average salary of all employees whose GPA is ≥ X
  – No response means that there are fewer than 2 employees with GPA ≥ X
  – If query(GPA ≥ X) is answered but query(GPA ≥ X+ε) is not, there is at least one employee whose GPA lies between X and X+ε

22 Query restriction
• Query restriction alone doesn’t work when multiple queries are allowed, or when the database changes
  – Example: determine a person’s salary using two queries (see the sketch below)
  – Example: determine a person’s salary after a raise, by comparing answers before and after the change
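A hedged sketch of the two-query attack alluded to above (the numbers are made up and ignore any threshold): ask for the average over a large set S, then over S plus the target; the two answers pin down the target’s salary exactly.

```python
def recover_salary(avg_S: float, size_S: int, avg_S_plus_target: float) -> float:
    """salary = (|S| + 1) * Avg(S + target) - |S| * Avg(S)."""
    return (size_S + 1) * avg_S_plus_target - size_S * avg_S

# Example: Avg(S) = 59,800 over |S| = 25 people, Avg(S plus Alice) = 60,000
print(recover_salary(59800, 25, 60000))  # 65000.0 -- Alice's exact salary
```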

23 Query restriction
• Can use more complicated forms of query restriction based on the entire prior query history
  – E.g., if a query for a set S was already answered, do not allow a query for a set S′ that differs from S in only a few individuals
• Drawbacks
  – Maintaining the entire query history is expensive
  – Difficult to specify what constitutes a privacy “breach”
  – NP-complete (in general) to determine whether a breach has occurred...
  – Does not address the adversary’s external information
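One plausible reading of that rule, as a sketch (the minimum-difference parameter is an assumption, not from the lecture): refuse any new query set that differs from a previously answered set in only a few individuals, since comparing the two averages would isolate those few.

```python
MIN_DIFFERENCE = 3  # assumed policy parameter

def allowed(new_set: set, answered_sets: list, min_diff: int = MIN_DIFFERENCE) -> bool:
    """Reject query sets too 'close' to any previously answered set."""
    for old_set in answered_sets:
        if 0 < len(new_set ^ old_set) < min_diff:  # symmetric difference too small
            return False
    return True
```

As the next slide shows, pairwise checks like this one can still be defeated by combining three or more queries.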

24 Query restriction
• Comparing queries pairwise may not be enough
• Example
  – Say you want information about user i
  – Let S, T be arbitrary sets (that are very different), not containing i
  – Ask for Avg(S), Avg(T), and Avg(S ∪ T ∪ {i})
• Inference can be very difficult to detect and prevent…
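A worked sketch of why this defeats pairwise checks, under the simplifying assumption that S and T are disjoint (which “very different” suggests): the three averages determine user i’s value.

```python
def recover_value(avg_S: float, n_S: int, avg_T: float, n_T: int, avg_union: float) -> float:
    """x_i = (|S| + |T| + 1) * Avg(S u T u {i}) - |S| * Avg(S) - |T| * Avg(T)."""
    return (n_S + n_T + 1) * avg_union - n_S * avg_S - n_T * avg_T
```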

25 Query restriction
• Apply query restriction globally, or per-user?
  – If the former, usability is limited
  – If the latter, security can be compromised by colluding users

26 Query restriction
• Example: say we do not want an adversary to learn any value exactly
• Consider the table with x = y = z = 1, where it is known that x, y, z ∈ {0, 1, 2}
• User requests sum(x, y, z), gets response 3
• User requests max(x, y, z)
  – If the user learns the answer, they can deduce that x = y = z = 1
  – But if the request is denied, the user can still deduce that x = y = z = 1 (!!)
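A small check, not from the lecture, of the deduction above: among all tables with x, y, z in {0, 1, 2} and sum 3, the only one in which answering max(x, y, z) would pin down every value is x = y = z = 1, so a denial gives the same thing away.

```python
from itertools import product

candidates = [t for t in product(range(3), repeat=3) if sum(t) == 3]

def determined_by_max(t):
    """Would revealing max(x, y, z), on top of sum = 3, identify the table uniquely?"""
    return sum(1 for c in candidates if max(c) == max(t)) == 1

print([t for t in candidates if determined_by_max(t)])  # [(1, 1, 1)]
```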

27 Query restriction
• We can try to “look ahead”, and not respond to any query for which there is a subsequent query that will reveal information regardless of whether we respond or not
• [Slide diagram: a decision tree for sum(x, y, z) followed by max(x, y, z); since both the “respond” and “deny” branches for max would leak the values, the sum query itself is denied]

28 Query restriction with “look-aheads”
• Problems
  – May need to look more than 1 level deep
  – Computationally infeasible, even if only looking 1 level deep
  – Does it even work? Denying the request for sum(x, y, z) reveals that x = y = z
• Even if answers don’t uniquely reveal a value, they may leak lots of partial information
• What can we prove about it?

29 Query restriction
• A different approach is to use “simulatable auditing”
  – Deny a query if there is some database (consistent with the answers given so far) for which that query would leak information
• This fixes the previous problem:
  – Learning sum(x, y, z) = 3 and then seeing that max(x, y, z) is denied no longer proves that x = y = z = 1
• Even more computationally expensive
• Restricts usability
• Again, can we claim that it even works?

30 Perturbation
• Purposely add “noise”
  – Data perturbation: add noise to the entire table, then answer queries accordingly (or release the entire perturbed dataset)
  – Output perturbation: keep the table intact, but add noise to the answers
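A minimal output-perturbation sketch (not the lecture’s specific construction): answer an average query with Laplace-distributed noise added, using the fact that the difference of two i.i.d. exponentials is Laplace; the noise scale is an illustrative knob trading privacy against utility.

```python
import random

def noisy_average(values: list, noise_scale: float = 1000.0) -> float:
    """True average plus Laplace noise (difference of two exponentials)."""
    true_avg = sum(values) / len(values)
    noise = random.expovariate(1 / noise_scale) - random.expovariate(1 / noise_scale)
    return true_avg + noise
```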

31 Perturbation
• Trade-off between privacy and utility!
  – No randomization: bad privacy but perfect utility
  – Complete randomization: perfect privacy but no utility

32 Data perturbation
• One technique: data swapping
  – Substitute and/or swap values, while maintaining low-order statistics

  Original table:            Swapped table:
  Sex  Major  GPA            Sex  Major  GPA
  F    Bio    3.0            F    Bio    4.0
  F    CS     3.0            F    CS     3.0
  F    EE     4.0            F    EE     3.0
  F    Psych  3.0            F    Psych  4.0
  M    Bio    4.0            M    Bio    3.0
  M    CS     3.0            M    CS     4.0
  M    EE     3.0            M    EE     4.0
  M    Psych  4.0            M    Psych  3.0
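A hedged sketch of one very simple swapping variant (not necessarily the procedure behind the tables above): randomly permute the sensitive column, which preserves its overall distribution, a low-order statistic, while breaking the link between identities and values.

```python
import random

def swap_column(rows: list, column: str = "GPA") -> list:
    """Return a copy of the table with the given column randomly permuted."""
    values = [row[column] for row in rows]
    random.shuffle(values)
    return [{**row, column: value} for row, value in zip(rows, values)]
```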

33 Data perturbation
• Second technique: (re)generate the table based on a derived distribution
  – For each sensitive attribute, determine a probability distribution that best matches the recorded data
  – Generate fresh data according to the determined distribution
  – Populate the table with this fresh data
• Queries on the database can never “learn” more than what was learned initially (when the distribution was fit to the real data)
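A minimal sketch under the illustrative assumption that the sensitive attribute is roughly normal: fit the distribution to the real column, then publish fresh samples from the fit instead of the real values.

```python
import random
import statistics

def regenerate(values: list) -> list:
    """Replace real values with fresh samples from a fitted normal distribution."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [random.gauss(mu, sigma) for _ in values]
```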

34 Data perturbation
• Data cleaning/scrubbing: remove sensitive data, or data that can be used to breach anonymity
• k-anonymity: ensure that any “identifying information” is shared by at least k members of the database
• Example…

35 Example: 2-anonymity

  Original table:
  Race   ZIP    Smoke?  Cancer?
  Asian  02138  Y       Y
  Asian  02139  Y       N
  Asian  02141  N       N
  Asian  02142  Y       Y
  Black  02138  N       N
  Black  02139  N       Y
  Black  02141  Y       Y
  Black  02142  N       N
  White  02138  Y       Y
  White  02139  N       N
  White  02141  Y       Y
  White  02132  Y       Y

  Two anonymized views of the quasi-identifiers: one suppresses Race (keeping ZIP),
  the other keeps Race but generalizes ZIP codes to 0213x / 0214x:

  Race suppressed:        ZIP generalized:
  -  02138                Asian  0213x
  -  02139                Asian  0213x
  -  02141                Asian  0214x
  -  02142                Asian  0214x
  -  02138                Black  0213x
  -  02139                Black  0213x
  -  02141                Black  0214x
  -  02142                Black  0214x
  -  02138                White  0213x
  -  02139                White  0213x
  -  02141                White  0214x
  -  02132                White  0213x
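A minimal sketch, not the lecture’s algorithm, of the generalization step and the k-anonymity check for the quasi-identifiers (Race, ZIP): generalize ZIP codes to their first four digits and verify that every quasi-identifier combination occurs at least k times.

```python
from collections import Counter

def generalize_zip(zip_code: str) -> str:
    """Generalize e.g. '02138' to '0213x'."""
    return zip_code[:4] + "x"

def is_k_anonymous(rows: list, quasi_identifiers=("Race", "ZIP"), k: int = 2) -> bool:
    """Check that each quasi-identifier combination is shared by at least k rows."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in counts.values())
```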

36 Problems with k-anonymity
• Hard to find the right balance between what is “scrubbed” and the utility of the data
• Not clear what security guarantees it provides
  – For example, what if I know that someone Asian in ZIP code 0214x smokes? In the table above, the only Asian smoker in 0214x has cancer, so I learn that fact about them
  – Again, k-anonymity does not deal with out-of-band information

