
1
Regret Minimizing Audits: A Learning-theoretic Basis for Privacy Protection
Jeremiah Blocki, Nicolas Christin, Anupam Datta, Arunesh Sinha
Carnegie Mellon University

2
Motivation
Goal: treatment
◦ Rigid access control hinders treatment
◦ Permissive access control ⇒ privacy violations (breaches)

3
A real problem

4
Audits
Audits: one way to address the problem
◦ Permissive access control: if in doubt, allow access
◦ Log the accesses
◦ Human auditors review the accesses later and find violations
Ad hoc approaches in practice
◦ The FairWarning audit tool implements simple heuristics, e.g., flag all celebrity accesses

5
Desiderata
Principled study of the audit process
◦ A model for the audit process
◦ Properties of the audit mechanism
◦ An audit mechanism that provably satisfies the properties

6
Auditing Challenges
Organization’s economic tradeoff: reputation loss vs. audit cost
Employee’s incentives unknown
How to optimally allocate the auditing budget with no knowledge of the adversary’s incentives?

7
Audit Algorithm by Example
Overview | Audit Model | Low Regret Algorithm
Auditing budget: $3000 per cycle
Cost of one inspection: $100, so only 30 inspections per cycle
The auditor faces 100 accesses per cycle, divided into 2 types (internal, external): 30 and 70
Loss from 1 violation (internal, external): $250, $500

8
Audit Algorithm Choices
Only 30 inspections available
Consider 4 possible allocations of the 30 inspections across the two access types
Each allocation starts with weight 1.0
Choose an allocation probabilistically, based on the weights
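The weight-proportional choice described on this slide can be sketched as follows. The four concrete allocations and the helper name `choose_allocation` are illustrative assumptions, not values from the paper:

```python
import random

# Hypothetical allocations of the 30 inspections across the two access
# types (internal, external); the slides do not specify these splits.
ALLOCATIONS = [(30, 0), (20, 10), (10, 20), (0, 30)]

def choose_allocation(weights, rng=random.random):
    """Pick an allocation with probability proportional to its weight."""
    total = sum(weights)
    r = rng() * total
    cumulative = 0.0
    for alloc, w in zip(ALLOCATIONS, weights):
        cumulative += w
        if r <= cumulative:
            return alloc
    return ALLOCATIONS[-1]  # guard against floating-point round-off
```

With equal weights each allocation is equally likely; as weights drift apart, better-performing allocations are inspected more often.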

9
Audit Algorithm Run
The chosen allocation is inspected; given the actual violations of each type (e.g., 2 and 4), the violations caught (internal caught, external caught) determine the observed loss, while uncaught violations contribute an estimated loss
Example observed losses per allocation: $2000, $1500, $750
Updated weights: learn from experience, with weights updated using the observed and estimated loss
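The "learn from experience" step on this slide can be sketched as a multiplicative weight update; the learning rate `eta` and the loss scale `max_loss` are illustrative assumptions, not values from the paper:

```python
import math

def update_weights(weights, losses, eta=0.5, max_loss=2000.0):
    """Multiplicatively shrink each allocation's weight according to its
    (observed + estimated) loss, scaled into [0, 1] by max_loss.
    eta and max_loss are illustrative, not taken from the slides."""
    return [w * math.exp(-eta * min(loss, max_loss) / max_loss)
            for w, loss in zip(weights, losses)]

# Allocations that incurred smaller losses keep larger weights.
new_weights = update_weights([1.0] * 4, [2000, 1500, 750, 500])
```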

10
Main Contributions
A game model for the audit process
Definition of a desirable property of audit mechanisms, namely low regret
An efficient audit mechanism, RMA, that provably achieves low regret
◦ Better bound on regret than existing algorithms that achieve low regret

11
Repeated Game Model
The interaction repeats for each audit cycle (the cycles are the rounds of the repeated game)
Typical actions in one round:
◦ Emp action: (access, violate) = ([30, 70], [2, 4])
◦ Org action: inspection = ([10, 20])
In one audit cycle (round), employees access and violate, the organization inspects (with imperfect information about violations), and the organization incurs reputation loss and audit cost

12
Game Payoffs
Organization’s payoff:
◦ Audit cost depends on the number of inspections
◦ Reputation loss depends on the number of violations caught
Employee’s payoff unknown

13
Regret Intuition
Is it possible to audit as well as the best strategy in hindsight?

14
Regret by Example
Strategy: outputs an action for every round; payoffs shown are for Org only
Round 1: Emp plays (3, 1); strategy s inspects 2 (payoff 1); strategy s1 inspects 1 (payoff 2)
Round 2: Emp plays (3, 2); s inspects 1 (payoff −5); s1 inspects 1 (payoff −5)
Total payoff: Emp unknown; s totals −4; s1 totals −3
Total regret(s, s1) = (−3) − (−4) = 1, so per-round regret(s, s1) = 1/2
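The per-round regret arithmetic in this example can be checked in a few lines; `average_regret` is a hypothetical helper name, not from the paper:

```python
def average_regret(payoffs_s, payoffs_s1):
    """Per-round regret of strategy s with respect to s1: how much more
    s1 earned per round over the same sequence of employee actions."""
    rounds = len(payoffs_s)
    return (sum(payoffs_s1) - sum(payoffs_s)) / rounds

# Org payoffs from the example: s earns 1 then -5 (total -4), while
# s1 earns 2 then -5 (total -3), so regret(s, s1) = 1 / 2 = 0.5.
```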

15
Meaning of Regret
Low regret of s w.r.t. s1 means s performs as well as s1
Desirable property of an audit mechanism:
◦ Low regret w.r.t. all strategies in a given set of strategies
◦ regret → 0 as T → ∞

16
Regret Minimization
Multiplicative weight update (MWU) is a standard algorithm that achieves low regret w.r.t. all strategies in a given set
The regret bound of MWU is O(√((ln N)/T)), where:
◦ N: number of strategies in the given set
◦ T: number of rounds of the game
◦ All payoffs scaled to lie in [0, 1]
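A minimal full-information MWU loop, using the textbook learning rate √(ln N / T). Note that it assumes every strategy's payoff is revealed each round, which is exactly what fails to hold in the audit setting discussed next:

```python
import math

def run_mwu(payoff_rounds, n_strategies):
    """Multiplicative weight update over a known payoff sequence.
    payoff_rounds is a list of per-round payoff vectors in [0, 1];
    returns the algorithm's expected total payoff."""
    T = len(payoff_rounds)
    eta = math.sqrt(math.log(n_strategies) / T)
    w = [1.0] * n_strategies
    expected_payoff = 0.0
    for payoffs in payoff_rounds:
        total = sum(w)
        probs = [x / total for x in w]
        expected_payoff += sum(p * q for p, q in zip(probs, payoffs))
        # reward good strategies multiplicatively
        w = [x * math.exp(eta * p) for x, p in zip(w, payoffs)]
    return expected_payoff
```

Against a sequence where one strategy always pays 1 and the other 0, the expected total payoff trails the best strategy's 500 by only a few dozen, matching the O(√(T ln N)) total-regret flavor of the bound.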

17
Why not MWU?
Imperfect information
◦ Org never learns the true action (violation) of the employee
◦ RMA regret bound: O((ln N)/T)
◦ Best known bounds [ACFS03]: O((N^(1/3) ln N)/T^(1/3))
◦ Idea: estimate the payoff that would have been received
Sleeping strategies: unavailable strategies
◦ Some inspections unavailable due to budgetary constraints
◦ We use techniques from [BM05]

[ACFS03] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM Journal on Computing, 2003
[BM05] A. Blum and Y. Mansour, “From external to internal regret,” in COLT 2005
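The "estimate the payoff" idea is the importance-weighting trick from the bandit literature [ACFS03]: divide the observed payoff by the probability of having picked that strategy, so the estimate is unbiased in expectation. A sketch, with `estimated_payoffs` as a hypothetical helper name:

```python
def estimated_payoffs(chosen, observed_payoff, probs):
    """Unbiased payoff estimate: only the chosen strategy's payoff is
    observed; it is inflated by 1/probs[chosen], all others get 0."""
    est = [0.0] * len(probs)
    est[chosen] = observed_payoff / probs[chosen]
    return est

# E.g., if strategy 1 was chosen with probability 0.25 and paid 0.5,
# its estimated payoff is 2.0; averaged over the random choice of
# strategy, the estimate equals the true payoff vector.
```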

18
Regret Minimizing Audits (RMA)
Initially w_s = 1 for all strategies s; then, in each audit cycle:
◦ New audit cycle starts: find AWAKE, the set of available strategies
◦ Pick s in AWAKE with probability D_t(s) ∝ w_s
◦ Violation caught: obtain payoff Pay(s)
◦ Estimate the payoff vector Pay using Pay(s)
◦ Update the weights of strategies in AWAKE
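The cycle above can be sketched as one function. The learning rate `eta` is an assumption, and `payoff_of` stands in for the payoff observed during the cycle's inspections; this is a sketch of the loop structure, not the paper's exact update rule:

```python
import math
import random

def rma_cycle(weights, awake, payoff_of, eta=0.1, rng=random.random):
    """One RMA audit cycle (sketch): sample a strategy from the awake
    (available) set with probability proportional to its weight, observe
    its payoff in [0, 1], build an importance-weighted estimate of the
    full payoff vector, and update only the awake strategies' weights."""
    total = sum(weights[s] for s in awake)
    r = rng() * total
    chosen, acc = awake[-1], 0.0
    for s in awake:
        acc += weights[s]
        if r <= acc:
            chosen = s
            break
    pay = payoff_of(chosen)            # observed payoff Pay(s)
    prob = weights[chosen] / total     # D_t(chosen)
    for s in awake:                    # sleeping strategies untouched
        est = pay / prob if s == chosen else 0.0
        weights[s] *= math.exp(eta * est)
    return chosen
```

Strategies outside AWAKE keep their weights frozen, which is the sleeping-strategies handling attributed to [BM05] on the previous slide.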

19
Guarantees of RMA
With high probability, RMA achieves its regret bound, where:
◦ N is the number of strategies
◦ T is the number of rounds
◦ All payoffs scaled to lie in [0, 1]

20
Related Work
Authorization proofs recorded in the audit log [Vaughan et al. 2008]
Analyzing audit logs to detect and resolve access control policy misconfigurations [Bauer et al. 2008]
Mechanically checkable compliance proofs constructed using evidence from audit logs [Cederquist et al. 2007]
Mechanically checking policy compliance over incomplete audit logs [Garg et al. 2011]

21
Take Away Message
Learning technique for effective auditing with imperfect information
Future Work
◦ Evaluation over real hospital audit logs
◦ Analyze performance with more complex adversary models (worst case + rational)

