
1 Optimizing Password Composition Policies. Jeremiah Blocki, Saranga Komanduri, Ariel Procaccia, Or Sheffet. To appear at EC 2013.


3 Password Composition Policy
[Image slide: a user's password submitted to the composition policy.]

4 How Do Users Respond?
[Image slide: a user responds to the policy with "Password1".]

5 Problem: Predictable Passwords (top-25 list)
1. password … abc123 … qwerty 6. monkey 7. letmein 8. dragon … 25. password1

6 Predictable Responses
(Same top-25 list as on the previous slide.)

7 Previous Work
Initial password composition policies were designed without empirical data [BDP, 2006].
Users respond to password composition policies in predictable ways [KSKMBCCE, 2011].
Trivial password choices vary widely across contexts [BX, 2012].
There are no theoretical models of password composition policies.

8 Our Contributions
We initiate an algorithmic study of password composition policies.
Theoretical model: user model, policy structure, security goal.

9 Outline
User Model
Policy Structure
Goal
Algorithms and Reductions
Experiments

10 Rankings Model
Each user ranks all passwords in P by preference; n = 7 (number of users).
[Table: preference lists of Users 1-7, most preferred first; entries include password, 123456, letmein, 12345, abc123, baseball, Passw0rd, iloveyou, and each list ends with qwerty1 / qwerty.]

11 Rankings Model: Example 1
The policy splits all passwords into allowed and banned.
[Same preference table as above, with the allowed passwords highlighted inside the set of all passwords.]

12 Rankings Model: Example 1
Pr[111111 | A] = 3/7, Pr[letmein | A] = 2/7, Pr[123456 | A] = Pr[12345 | A] = 1/7.
[Same preference table as above.]
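The rankings model is easy to state operationally. Below is a minimal Python sketch (our own illustration; names like induced_distribution are not from the paper) that computes Pr[w | A] from preference lists exactly as in the example above. It assumes every user's list contains at least one allowed password, which holds in the model since users rank all of P.

```python
from collections import Counter

def induced_distribution(preference_lists, allowed):
    """Rankings model: each user adopts the most-preferred password that
    the policy still allows. Returns Pr[w | A] over the chosen passwords.
    Assumes every list contains at least one allowed password."""
    counts = Counter(
        next(w for w in prefs if w in allowed)
        for prefs in preference_lists
    )
    n = len(preference_lists)
    return {w: c / n for w, c in counts.items()}
```

On the seven lists above with the Example 1 policy, this returns the distribution 3/7, 2/7, 1/7, 1/7 shown on the slide.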

13 Rankings Model: Example 2
[Same preference table as above, with a different allowed set highlighted.]

14 Warm-up
Fact: Let A' ⊆ A. Then for any w ∈ A', Pr[w | A] ≤ Pr[w | A'].
Initially one person uses letmein as their password.
[Same preference table as above.]

15 Warm-up
Fact: Let A' ⊆ A. Then for any w ∈ A', Pr[w | A] ≤ Pr[w | A'].
Every user who used letmein before is still using the same password: shrinking the allowed set only removes alternatives, so users can switch onto w but never away from it.
[Same preference table as above.]
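The fact is easy to sanity-check with the sketch above on a toy instance (the three lists below are illustrative, not the slide's data):

```python
# Banning passwords can only push more users onto a surviving password w,
# never away from it, so Pr[w | A] <= Pr[w | A'] whenever w is in A' and A' is a subset of A.
prefs = [["password", "letmein"],
         ["letmein", "password"],
         ["password", "letmein"]]
A = {"password", "letmein"}   # letmein is chosen by 1 of 3 users
A2 = {"letmein"}              # A2 is a subset of A: now letmein is chosen by all users
assert induced_distribution(prefs, A)["letmein"] <= induced_distribution(prefs, A2)["letmein"]
```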

16 Outline
User Model
Policy Structure
  - Positive Rules
  - Negative Rules
  - Singleton Rules
Goal
Algorithms and Reductions
Experiments

17 Positive Rules
The policy allows exactly the union of the active rules: A_S = ∪_{i ∈ S} R_i.

18 Positive Rules: Example
Rules R_1, …, R_m ⊆ P, e.g. R_1 = {w | Length(w) ≥ 14}.
Active rules: S ⊆ {1, …, m}.
A_{1} = {w | Length(w) ≥ 14}.

19 Negative Rules
The policy bans the union of the active rules: A_S = P - ∪_{i ∈ S} R_i.

20 Negative Rules: Example
Rules R_1, …, R_m ⊆ P, e.g. R_1 = {w | Length(w) < 8}.
Active rules: S ⊆ {1, …, m}.
A_{1} = P - {w | Length(w) < 8}.

21 Singleton Rules
Rule R_w = {w} for each w ∈ P.
Can allow/ban any individual password.
A special case of positive rules / negative rules.
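Concretely, the three rule types induce allowed sets as follows (a sketch under the same illustrative conventions as above; universe stands for the full password space P):

```python
def allowed_positive(rules, S):
    """Positive rules: A_S is the union of the active rules R_i, i in S."""
    return set().union(*(rules[i] for i in S)) if S else set()

def allowed_negative(universe, rules, S):
    """Negative rules: A_S is P minus the union of the active rules."""
    banned = set().union(*(rules[i] for i in S)) if S else set()
    return set(universe) - banned

# Singleton rules are the special case rules = [{w} for w in universe],
# which lets a policy allow or ban any individual password.
```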

22 Outline
User Model
Policy Structure
Goal
Algorithms and Reductions
Experiments

23 Online Attack
Guess limit: k-strikes policy.
p(k, A): probability of a successful untargeted attack given A.

24 The Wrong Goal
[Image slide.]

25 p(k,A): Example
p(1,A) = Pr[111111] = 3/7
p(2,A) = p(1,A) + Pr[letmein] = 5/7
p(3,A) = p(2,A) + Pr[123456] = 6/7
[Same preference table as above.]

26 Goal: Optimize p(k,A)
Goal: find a password composition policy S ⊆ {1, …, m} which minimizes p(k, A_S) for some k.
p(k, A): fraction of accounts an adversary can crack with k guesses per account given policy A.
p(1, A) determines the min-entropy, -log2 p(1, A), of the password distribution resulting from policy A.
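Given the induced distribution, p(k, A) is simply the total mass of the k most popular passwords; a one-line sketch:

```python
def p_k(distribution, k):
    """p(k, A): fraction of accounts cracked by an optimal untargeted
    attacker who guesses the k most popular passwords."""
    return sum(sorted(distribution.values(), reverse=True)[:k])
```

On Example 1 above this gives 3/7, 5/7, and 6/7 for k = 1, 2, 3, matching slide 25.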

27 Even Better Goal?
Universal approximation: find a password composition policy S ⊆ {1, …, m} such that p(k, A_S) ≤ c · p(k, A_{S'}) for some constant c and every k and S' ⊆ {1, …, m}.
Thm: Universal approximation is unachievable in general.

28 Outline
User Model
Policy Structure
Goal
Algorithms and Reductions
Experiments

29 Results (Rankings Model; parameters: n, m, |P|)
                  Constant k                   Large k
Singleton Rules   P                            NP-Hard; APX-Hard (UGC)
Positive Rules    P                            NP-Hard
Negative Rules    n^(1/3)-approx is NP-Hard    NP-Hard
This talk: k = 1, starting with the negative-rules hardness (n^(1/3)-approx is NP-Hard).

30 Negative Rules are Hard!
Theorem: Unless P = NP, no polynomial time algorithm can even approximate p(1, A_S) to a factor of n^(1/3 - ε) in the negative rules setting.

31 Reduction
Maximum Independent Set: g vertices, e edges.
Theorem [Håstad 1996]: It is NP-Hard to distinguish the following two cases: (1) every independent set has size at most K = g^ε, or (2) the maximum independent set has size g^(1-ε).

32 Reduction (Preference Lists)
Preference lists, Type 1 (g lists; the i-th list is W_1, W_2, …, W_K, B_i, …).
Observation: Unless we ban W_1, …, W_K, we have p(1, A_S) ≥ g/n (the first unbanned W_j is the top remaining choice of all g of these users).

33 Reduction (Preference Lists)
Preference lists, Type 2 (for each edge e = {u,v}, g lists; the i-th list is (u,v,i), (v,u,i), X, …).
Observation: If for any edge e = {u,v} we ban (u,v,1), …, (u,v,g) and (v,u,1), …, (v,u,g), then all g of these users fall through to the shared password X, so p(1, A_S) ≥ g/n.

34 Reduction (Preference Lists)
Preference lists, Type 3 (for each vertex v and i ≠ j ∈ [K], g lists; the ℓ-th list is (v,i,j,ℓ), (v,j,i,ℓ), X, …).
Observation: If we ban (v,i,j,1), …, (v,i,j,g) and (v,j,i,1), …, (v,j,i,g), then p(1, A_S) ≥ g/n.

35 Reduction (Rules)
[Figure: graph with vertices u, v, w, x plus auxiliary vertices s and t; candidate rules R_{u,1}, R_{v,2}, R_{w,3}, R_{x,4}; K = 4; the Type 1 lists (W_1, …, W_K, B_1, …, B_g) shown alongside.]

36 Reduction (Rules)
[Figure as above; Type 2 lists for the edge e = {u,x}: (u,x,1) … (u,x,g), (x,u,1) … (x,u,g), X … X.]
Banning the passwords of both endpoints of an edge forces p(1, A_S) ≥ g/n.

37 Reduction (Rules)
[Figure as above; Type 2 lists for the edge e = {u,s}: (u,s,1) … (u,s,g), (s,u,1) … (s,u,g), X … X.]

38 Reduction (Rules)
[Figure as above; Type 2 lists for the edge e = {s,t}: (s,t,1) … (s,t,g), (t,s,1) … (t,s,g), X … X.]

39 Reduction (Rules)
[Figure: rules R_{u,1}, R_{v,2}, R_{w,3}, R_{w,4} with K = 5; Type 1 lists as above.]
Only four of the K = 5 indices are covered, so some W_i remains allowed and p(1, A_S) ≥ g/n.

40 Reduction (Rules)
[Figure as above with R_{v,5} added: rules R_{u,1}, R_{v,2}, R_{w,3}, R_{w,4}, R_{v,5}; K = 5; Type 1 lists as above.]

41 Reduction (Rules)
[Figure as above; Type 3 lists for vertex v with i = 2, j = 5: (v,2,5,1) … (v,2,5,g), (v,5,2,1) … (v,5,2,g), X … X.]
Reusing the same vertex v for two indices (R_{v,2} and R_{v,5}) bans both groups, so p(1, A_S) ≥ g/n.

42 Reduction (Rules)
[Figure: rules R_{u,1}, R_{v,2}, R_{w,3}, R_{w,4} with K = 4; Type 3 lists for vertex w with i = 4, j = 2: (w,4,2,1) … (w,4,2,g), (w,2,4,1) … (w,2,4,g), X … X.]

43 Reduction (Rules)
Observation → Conclusion:
- Unless we ban W_1, …, W_K we have p(1, A_S) ≥ g/n. → For each i, we must ban R_{u,i} for some vertex u.
- If for any edge e = {u,v} we ban (u,v,1), …, (u,v,g) and (v,u,1), …, (v,u,g), then p(1, A_S) ≥ g/n. → For any edge e = {u,v} and i, j ∈ [K], we cannot ban both R_{u,i} and R_{v,j}.
- If we ban (v,i,j,1), …, (v,i,j,g) and (v,j,i,1), …, (v,j,i,g), then p(1, A_S) ≥ g/n. → For i ≠ j we cannot ban both R_{u,i} and R_{u,j}.
Satisfying all three conclusions is impossible unless there is an independent set of size K!

44 Reduction
Independent set of size K?    min over S ⊆ [m] of p(1, A_S)
Yes                           1/n
No                            ≥ g/n
where n = O(g^3), so distinguishing the two cases yields the n^(1/3 - ε) hardness-of-approximation gap.

45 Results
[Same results table as slide 29; parameters: n, m, |P|.]
This talk: k = 1; next, positive rules, which are in P.

46 Key Difference: Positive vs. Negative
Let S_w = {i | w ∈ R_i} (all rules R_i that contain w).
Negative rules: to ban w, activate any rule in S_w.
Positive rules: to ban w, deactivate all rules in S_w.

47 Positive Rules
Fact: Let S* ⊆ {1, …, m} denote the optimal solution, and let S ⊇ S*. Then either
(1) p(1, A_S) = p(1, A_{S*}) (S is optimal), or
(2) S - S_w ⊇ S*, where Pr[w | A_S] = p(1, A_S).
Here S_w contains all rules R_i that contain the most popular word w in A_S.

48 Positive Rules
Fact: as above.
Proof: Suppose for contradiction that w ∈ A_{S*}. Since S* ⊆ S we have A_{S*} ⊆ A_S, so by the warm-up fact Pr[w | A_{S*}] ≥ Pr[w | A_S] = p(1, A_S). Therefore p(1, A_{S*}) ≥ p(1, A_S), so S was already optimal, contradicting the assumption that case (1) fails. Hence w ∉ A_{S*}; under positive rules this means no rule of S_w is active in S*, i.e. S - S_w ⊇ S*.

49 Positive Rules Algorithm
Iterative Elimination:
Initialize: S_0 = {1, …, m}.
Repeat (ban w, the currently most popular password): S_{i+1} = S_i - S_w.
Claim: One of the S_i's must be the optimal solution!
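A direct sketch of iterative elimination, reusing induced_distribution and allowed_positive from the earlier sketches (again our own illustration, not the paper's code). By the claim, the optimal policy is one of the S_i visited, so returning the best iterate by p(1, ·) suffices:

```python
def iterative_elimination(preference_lists, rules):
    """Positive-rules algorithm: start with all rules active; repeatedly
    ban the currently most popular password w by deactivating every rule
    in S_w = {i : w in R_i}; return the best policy encountered."""
    S = set(range(len(rules)))
    best_S, best_p1 = set(S), float("inf")
    while S:
        A = allowed_positive(rules, S)
        if not A:
            break
        dist = induced_distribution(preference_lists, A)
        w, p1 = max(dist.items(), key=lambda kv: kv[1])
        if p1 < best_p1:
            best_S, best_p1 = set(S), p1
        S = S - {i for i in S if w in rules[i]}  # deactivate all of S_w
    return best_S, best_p1
```

Each round deactivates at least one rule (some active rule contains w), so the loop runs at most m + 1 times.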

50 Results
[Same results table as slide 29; parameters: n, m.] This talk: k = 1.
Question: What if we don't have access to the full preference lists of each user? What if we don't want to run in time n?

51 Results
[Same results table as slide 29; parameters: m, 1/ε, 1/δ.] This talk: k = 1.
Sampling algorithm: an ε-approximation with probability 1 - δ.

52 Sampling Algorithm
Sample: q(A) returns w with probability Pr[w | A].
Idea: Run iterative elimination; in each round, use sampling to estimate the probability of the most popular word.
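One round of the sampling variant might look like the sketch below: draw s = 100·log(m/δ)/ε² samples from the oracle q(A) (passed in as sample, an assumed callable) and ban the empirical mode. The function names are ours.

```python
import math
from collections import Counter

def num_samples(m, eps, delta):
    """s = 100 log(m/delta) / eps^2, the per-round sample size from the lemma."""
    return math.ceil(100 * math.log(m / delta) / eps ** 2)

def sample_round(sample, s):
    """Draw s samples from q(A) and return the empirically most popular
    password together with its estimated probability; by the lemma the
    estimate is within eps/2 except with small failure probability."""
    counts = Counter(sample() for _ in range(s))
    w, c = counts.most_common(1)[0]
    return w, c / s
```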

53 Sampling Lemma
Lemma: Let s = 100 log(m/δ)/ε² denote the number of samples in each round, and let BAD_i denote the event that in iteration i there exists a password w whose empirical frequency (the number of times w is sampled, divided by s) is off from Pr[w | A] by more than ε/2. Then Pr[∃i. BAD_i] ≤ δ.

54 Sampling Lemma
Partition P into buckets B_0, B_1, …, B_i, … by probability; bucket B_i contains at most 2^(i+1)/ε such passwords.

55 Sampling Lemma
With s = 100 log(m/δ)/ε² samples, apply Chernoff bounds to each password w in bucket B_i (at most 2^(i+1)/ε such passwords).

56 Sampling Lemma
Union bound over the at most 2^(i+1)/ε passwords within a bucket B_i.

57 Sampling Lemma
Union bound over the buckets B_0, B_1, …

58 Sampling Lemma
Union bound over the rounds of iterative elimination; altogether, Pr[∃i. BAD_i] ≤ δ.

59 Sampling Algorithm
[Pseudocode slide.]

60 Results
[Same results table as slide 29.] This talk: k = 1.

61 Rankings Model: Large k
Theorem: It is NP-Hard to optimize p(k, A) in the rankings model when k is a parameter.
Theorem: In the normalized probabilities model there is an efficient algorithm to optimize p(k, A) for any k.

62 Reduction (Vertex Cover)
Graph with g vertices and e edges. Question: Is there a vertex cover of size t?
Set k = g + e - t - 1, with n = 2e preference lists over e + g passwords.
[Figure: for each edge {u,v}, a pair of preference lists ranking the endpoint passwords u, v above the edge password {u,v}; "uncover" {u,v} by banning u or v.]

63 Reduction (Vertex Cover)
Question: Is there a vertex cover C of size t? (Here the graph has n vertices and m edges, so k = m + n - t - 1.)
Set A = P - C. Then p(k, A) < 1 and p(k+1, A) = 1.
[Figure: the pair of lists for edge e = {u,v}, as above.]

64 Outline
User Model
Policy Structure
Goal
Algorithms and Reductions
Experiments
  - RockYou Dataset
  - Rules
  - Results

65 RockYou Dataset
[Image slide.]

66 Normalized Probabilities
RockYou passwords give the initial distribution over P.
[Figure: the allowed set A carries 0.5 of the initial probability mass; letmein has probability 0.1 in P, and after normalizing over A its probability becomes 0.2.]
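In the normalized probabilities model the induced distribution comes from rescaling rather than from rankings; a sketch matching the letmein example above:

```python
def normalized(initial, allowed):
    """Normalized probabilities: restrict the initial distribution over P
    to the allowed set A and rescale so it sums to 1 (assumes A has
    positive initial mass)."""
    mass = sum(p for w, p in initial.items() if w in allowed)
    return {w: p / mass for w, p in initial.items() if w in allowed}

# E.g. if Pr[letmein] = 0.1 in P and the allowed set A keeps half of the
# total mass (0.5), then Pr[letmein | A] = 0.1 / 0.5 = 0.2, as on the slide.
```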

67 Normalized Probabilities
Rankings Model: [same results table as slide 29].
Normalized Probabilities Model:
                  Constant k   Large k
Singleton Rules   P            P
Positive Rules    P            NP-Hard
Negative Rules    NP-Hard

68 Rankings vs Normalized Probabilities
Let x, y ∈ A_1 ∩ A_2. Suppose that Pr[x | A_1] > Pr[y | A_1]. Is Pr[x | A_2] > Pr[y | A_2]?
Rankings model (x = password, y = 123456):
[Table: seven users' preference lists; top choices: password, 123456, letmein, password, 123456, password, letmein.]

69 Rankings vs Normalized Probabilities
Rankings model (x = password, y = 123456): No! Banning other passwords can funnel more users onto y than onto x, reversing their order.
[Same table as above.]

70 Rankings vs Normalized Probabilities
Let x, y ∈ A_1 ∩ A_2. Suppose that Pr[x | A_1] > Pr[y | A_1]. Is Pr[x | A_2] > Pr[y | A_2]?
Normalized probabilities: Yes! Both probabilities are divided by the same normalizing constant, so their relative order is preserved.


72 Baseline Results
[Chart slide.]

73 Results
[Chart slide.]

74 Discussion
The optimal solution was better under negative rules; however, sampled solutions were much better with positive rules.
Interesting directions:
- Additional rules?
- Is the normalized probabilities model reasonable?
- A general experiment in the preference list model?

75 Open Questions
Is there an efficient approximation algorithm in the negative rules setting under the normalized probabilities assumption?
Adversaries with limited background knowledge about the user (e.g., age, gender, birthday).

