1 Contextual Integrity & its Logical Formalization 18739A: Foundations of Security and Privacy Anupam Datta Fall 2009

2 Privacy in Organizations
- Huge amount of personal information available to various organizations: hospitals, financial institutions, websites, universities, census bureau, social networks, ...
- Information shared within and across organizational boundaries to enable useful tasks: personal health information has to be shared to enable diagnosis, treatment, billing, ...
- Fundamental question: Do organizational processes respect privacy expectations while achieving purpose?

3 Approach
- What is privacy? Contextual Integrity (CI): transfer of personal information governed by contextual norms
- Express and enforce privacy policies: Logic of Privacy and Utility, a formalization of CI with automated methods for checking compliance with privacy regulations (HIPAA, GLBA, COPPA, ...)

4 Contextual Integrity [Nissenbaum 2004]
- Philosophical framework for privacy
- Central concept: context (examples: healthcare, banking, education)
- What is a context? A set of interacting agents in roles (roles in healthcare: doctor, patient, ...)
- Informational norms, e.g., doctors should share patient health information as per the HIPAA rules
  - Norms have a specific structure (descriptive theory)
  - Systematic method for arriving at contextual norms (prescriptive theory)
- Purpose, e.g., improve health; some interactions should happen: patients should share personal health information with doctors

5 Outline
1. Motivating Example
2. Framework
   - Model
   - Logic of Privacy and Utility
3. Workflows and Responsibility
4. Algorithmic Results
   - Workflow design assuming agents responsible
   - Auditing logs when agents irresponsible
5. Summary

6 MyHealth@Vanderbilt++ Workflow
[Workflow diagram (humans + electronic system): a Patient sends a Health Question ("Now that I have cancer, should I eat more vegetables?"), which is routed via the Secretary and Nurse (alongside Appointment Requests) to the Doctor, who returns a Health Answer ("Yes! except broccoli").]

7 Outline
1. Motivating Example
2. Framework
   - Model
   - Logic of Privacy and Utility
3. Workflows and Responsibility
4. Algorithmic Results
   - Workflow design assuming agents responsible
   - Auditing logs when agents irresponsible
5. Conclusions

8 Informational Norms “ In a context, the flow of information of a certain type about a subject (acting in a particular capacity/role) from one actor (could be the subject) to another actor (in a particular capacity/role) is governed by a particular transmission principle.” Contextual Integrity [Nis2004]

9 Model
- Communication via send actions. Example: Bob sends Alice "Now that I have cancer, should I eat more vegetables?"
  - Sender: Bob in role Patient
  - Recipient: Alice in role Nurse
  - Subject of message: Bob
  - Tag: health-question
  - Message: "Now that ..."
- Data model & knowledge evolution (sketched below): agents acquire knowledge by
  - receiving messages
  - deriving additional attributes based on the data model, e.g., health-question is part of protected-health-information
- Note the distinction: contents(msg) vs. tags(msg)
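A minimal Python sketch of this data model, not the authors' implementation; the names (Send, Agent, ATTRIBUTE_HIERARCHY) and the one-level part-of relation are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Attribute hierarchy: tag t1 is "part of" attribute t2 (assumed, one level only).
ATTRIBUTE_HIERARCHY = {
    "health-question": {"protected-health-information"},
}

def attributes_implied_by(tag: str) -> set:
    """Close a tag under the part-of relation (one level here, for brevity)."""
    return {tag} | ATTRIBUTE_HIERARCHY.get(tag, set())

@dataclass
class Send:
    sender: str          # e.g. "Bob"
    sender_role: str     # e.g. "patient"
    recipient: str       # e.g. "Alice"
    recipient_role: str  # e.g. "nurse"
    subject: str         # whom the message is about, e.g. "Bob"
    tags: set            # what the message is claimed to contain: tags(msg)
    contents: set        # what it actually contains: contents(msg)

@dataclass
class Agent:
    name: str
    knowledge: set = field(default_factory=set)

    def receive(self, msg: Send) -> None:
        # Knowledge evolves by receiving messages and deriving attributes.
        for attr in msg.contents:
            self.knowledge |= {(msg.subject, a) for a in attributes_implied_by(attr)}

alice = Agent("Alice")
alice.receive(Send("Bob", "patient", "Alice", "nurse", "Bob",
                   tags={"health-question"}, contents={"health-question"}))
# Alice now knows both the received attribute and the derived PHI attribute.
assert ("Bob", "protected-health-information") in alice.knowledge
```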

10 Model [Barth, Datta, Mitchell, Sundaram 2007]
- State determined by the knowledge of each agent
- Transitions change state: a set of concurrent send actions
- Send(p, q, m) possible only if agent p knows m
[Diagram: concurrent game structure G; a state K0 with joint actions A11, A12, A13 leading to successor states K11, K12, K13, ...]
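A self-contained sketch of this transition rule; all type and function names are my own assumptions, not the paper's:

```python
from typing import Dict, List, Set, Tuple

Knowledge = Set[Tuple[str, str]]   # (subject, attribute) pairs an agent knows
State = Dict[str, Knowledge]       # agent name -> knowledge

# A send action: (sender, recipient, subject, attributes carried by the message)
Action = Tuple[str, str, str, Set[str]]

def enabled(state: State, action: Action) -> bool:
    """Send(p, q, m) is possible only if agent p knows m."""
    sender, _recipient, subject, attrs = action
    return all((subject, a) in state[sender] for a in attrs)

def step(state: State, joint_action: List[Action]) -> State:
    """One transition of the game: a set of concurrent send actions."""
    new_state = {agent: set(k) for agent, k in state.items()}
    for act in joint_action:
        if enabled(state, act):
            _sender, recipient, subject, attrs = act
            new_state[recipient] |= {(subject, a) for a in attrs}
    return new_state

# Bob knows his own health question; the nurse learns it after one step.
state: State = {"Bob": {("Bob", "health-question")}, "Alice": set()}
state = step(state, [("Bob", "Alice", "Bob", {"health-question"})])
print(("Bob", "health-question") in state["Alice"])   # True
```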

11 Logic of Privacy and Utility
Syntax:
  φ ::= send(p1, p2, m)       p1 sends p2 message m
      | contains(m, q, t)     m contains attribute t about q
      | tagged(m, q, t)       m is tagged with attribute t about q
      | inrole(p, r)          p is active in role r
      | t ⊑ t'                attribute t is part of attribute t'
      | φ ∧ φ | ¬φ | ∃x. φ    classical operators
      | φ U φ | φ S φ | O φ   temporal operators
      | ⟨⟨p⟩⟩ φ               strategy quantifier
Semantics: formulas are interpreted over a concurrent game structure.
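For concreteness, a small abstract-syntax sketch of this grammar; the constructor names are my own, and implication is encoded with ∧ and ¬:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Send:        # send(p1, p2, m)
    p1: str; p2: str; m: str

@dataclass
class Contains:    # contains(m, q, t)
    m: str; q: str; t: str

@dataclass
class Tagged:      # tagged(m, q, t)
    m: str; q: str; t: str

@dataclass
class InRole:      # inrole(p, r)
    p: str; r: str

@dataclass
class And:
    left: "Formula"; right: "Formula"

@dataclass
class Not:
    sub: "Formula"

@dataclass
class Exists:      # ∃x. φ
    var: str; body: "Formula"

@dataclass
class Until:       # φ U ψ
    left: "Formula"; right: "Formula"

@dataclass
class Strategy:    # ⟨⟨p⟩⟩ φ : agent p has a strategy to make φ hold
    player: str; body: "Formula"

Formula = Union[Send, Contains, Tagged, InRole, And, Not, Exists, Until, Strategy]

# Example: the privacy property from the later MyHealth slide, roughly
# "if p2 receives a health question then p2 is a nurse or a doctor",
# written as ¬(send ∧ contains ∧ ¬nurse ∧ ¬doctor).
example = Not(And(And(Send("p1", "p2", "m"),
                      Contains("m", "q", "health-question")),
                  And(Not(InRole("p2", "nurse")),
                      Not(InRole("p2", "doctor")))))
```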

12 Informational Norms “ In a context, the flow of information of a certain type about a subject (acting in a particular capacity/role) from one actor (could be the subject) to another actor (in a particular capacity/role) is governed by a particular transmission principle.” Contextual Integrity [Nis2004]

13 Privacy Regulation Example (GLB Act)
"Financial institutions must notify consumers if they share their non-public personal information with non-affiliated companies, but the notification may occur either before or after the information sharing occurs."
Exactly as CI says: sender role (financial institutions), subject role (consumers), attribute (non-public personal information), recipient role (non-affiliated companies), transmission principle (notification, before or after the sharing).

14 Structure of Privacy Rules [Barth, Datta, Mitchell, Nissenbaum 2006]
Allow a message transmission if:
- at least one positive norm is satisfied; and
- all negative norms are satisfied
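A hedged sketch of this combination rule; the norm examples in the CI shape (sender role, recipient role, subject role, attribute) are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transmission:
    sender_role: str
    recipient_role: str
    subject_role: str
    attribute: str

# A norm is modeled here simply as a predicate over a contemplated transmission.
Norm = Callable[[Transmission], bool]

def allowed(t: Transmission, positive: List[Norm], negative: List[Norm]) -> bool:
    """Allow iff at least one positive norm is satisfied and all negative norms are."""
    return any(n(t) for n in positive) and all(n(t) for n in negative)

# Positive norm (assumed): nurses and doctors may receive health questions.
def pos_health_question(t: Transmission) -> bool:
    return t.attribute == "health-question" and t.recipient_role in {"nurse", "doctor"}

# Negative norm (assumed, in the spirit of the psychotherapy-notes exception):
# psychotherapy notes must not flow to secretaries.
def neg_psych_notes(t: Transmission) -> bool:
    return not (t.attribute == "psychotherapy-notes" and t.recipient_role == "secretary")

t = Transmission("patient", "nurse", "patient", "health-question")
print(allowed(t, [pos_health_question], [neg_psych_notes]))   # True
```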

15 HIPAA – Healthcare Privacy
- HIPAA consists primarily of positive norms: share PHI if some rule explicitly allows it (2), (3), (5), (6)
- Exception: a negative norm about psychotherapy notes (4)

16 COPPA – Children's Online Privacy
- COPPA consists primarily of negative norms: children can share their protected info only if parents consent
- (7) condition; (8) obligation (future requirements)

17 Specifying Privacy: MyHealth@Vanderbilt
In all states, only nurses and doctors receive health questions:
G ∀p1, p2, q, m. send(p1, p2, m) ∧ contains(m, q, health-question) ⊃ inrole(p2, nurse) ∨ inrole(p2, doctor)
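As a sanity check, a sketch that evaluates this property over a finite log of send events (a runtime/audit reading of G, not the talk's model-checking procedure); the field names are assumptions:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class SendEvent:
    sender: str
    recipient: str
    recipient_roles: Set[str]
    subject: str
    contained_attrs: Set[str]   # contains(m, q, t), per the contents/oracle view

def privacy_holds(log: List[SendEvent]) -> bool:
    """G: every health-question delivery goes to a nurse or a doctor."""
    for e in log:
        if "health-question" in e.contained_attrs:
            if not ({"nurse", "doctor"} & e.recipient_roles):
                return False
    return True

log = [
    SendEvent("Bob", "Alice", {"nurse"}, "Bob", {"health-question"}),
    SendEvent("Bob", "Candy", {"secretary"}, "Bob", {"appointment-request"}),
]
print(privacy_holds(log))   # True
```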

18 Specifying Utility: MyHealth@Vanderbilt
Patients have a strategy to get their health questions answered:
∀p. inrole(p, patient) ⊃ ⟨⟨p⟩⟩ F ∃q, m. send(q, p, m) ∧ contains(m, p, health-answer)

19 Outline
1. Motivating Example
2. Framework
   - Model
   - Logic of Privacy and Utility
3. Workflows and Responsibility
4. Algorithmic Results
   - Workflow design assuming agents responsible
   - Auditing logs when agents irresponsible
5. Conclusions
6. Differentially Private Workflows (WIP)

20 MyHealth@Vanderbilt Improved
[Same workflow diagram as before: Patient, Secretary, Nurse, and Doctor exchanging health questions, appointment requests, and health answers through the electronic system.]
Idea: assign responsibilities to roles and to the workflow engine, e.g., the Doctor should answer health questions.

21 Graph-based Workflow
- Graph (R, R × R), where R is the set of roles
- Edge-labeling function permit: R × R → 2^T, where T is the set of attributes
- Responsibility of the workflow engine (see the sketch below): allow a message from role r1 to role r2 iff tags(msg) ⊆ permit(r1, r2)
- Responsibility of human agents in roles:
  - Tagging responsibilities: ensure messages are correctly tagged
  - Progress responsibilities: ensure messages proceed through the workflow
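A sketch of the engine check; the permit table below is invented for the MyHealth example and is not taken from the slides:

```python
from typing import Dict, Set, Tuple

# permit: R x R -> 2^T
Permit = Dict[Tuple[str, str], Set[str]]

permit: Permit = {
    ("patient", "secretary"): {"appointment-request"},
    ("patient", "nurse"):     {"health-question", "appointment-request"},
    ("nurse", "doctor"):      {"health-question"},
    ("doctor", "patient"):    {"health-answer"},
}

def engine_allows(sender_role: str, recipient_role: str, tags: Set[str]) -> bool:
    """Allow msg from role r1 to role r2 iff tags(msg) is a subset of permit(r1, r2)."""
    return tags <= permit.get((sender_role, recipient_role), set())

print(engine_allows("patient", "nurse", {"health-question"}))      # True
print(engine_allows("patient", "secretary", {"health-question"}))  # False
```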

22 MyHealth Responsibilities
- Tagging: nurses should tag health questions
  G ∀p, q, s, m. inrole(p, nurse) ∧ send(p, q, m) ∧ contains(m, s, health-question) ⊃ tagged(m, s, health-question)
- Progress: doctors should answer health questions
  G ∀p, q, s, m. inrole(p, doctor) ∧ send(q, p, m) ∧ contains(m, s, health-question) ⊃ F ∃m'. send(p, s, m') ∧ contains(m', s, health-answer)
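A sketch of a monitor for the progress responsibility over a finite log, reading F as "later in this log"; the event fields are assumptions:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Event:
    sender: str
    sender_roles: Set[str]
    recipient: str
    recipient_roles: Set[str]
    subject: str
    attrs: Set[str]

def doctors_answer_questions(log: List[Event]) -> bool:
    """Every health question delivered to a doctor is later answered to its subject."""
    for i, e in enumerate(log):
        if "doctor" in e.recipient_roles and "health-question" in e.attrs:
            doctor, patient = e.recipient, e.subject
            answered = any(
                later.sender == doctor
                and later.recipient == patient
                and "health-answer" in later.attrs
                for later in log[i + 1:]
            )
            if not answered:
                return False
    return True

log = [
    Event("Alice", {"nurse"}, "Dan", {"doctor"}, "Bob", {"health-question"}),
    Event("Dan", {"doctor"}, "Bob", {"patient"}, "Bob", {"health-answer"}),
]
print(doctors_answer_questions(log))   # True
```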

23 Abstract Workflow
- Responsibility of the workflow engine: an LTL formula φ
  - Feasible (enforceable) if φ is a safety formula without the contains() predicate
- Responsibility of each role r: an LTL formula φ_r
  - Feasible if agents have a strategy to discharge their responsibilities: ∀p. inrole(p, r) ⊃ ⟨⟨p⟩⟩ φ_r
- Graph-based workflows are a special case

24 Outline
1. Motivating Example
2. Framework
   - Model
   - Logic of Privacy and Utility
3. Workflows and Responsibility
4. Algorithmic Results
   - Workflow design assuming agents responsible (abstract workflows)
   - Auditing logs when agents irresponsible (only graph-based workflows)
5. Conclusions

25 Workflow Design Results
Theorems: assuming all agents act responsibly, checking whether a workflow achieves
- privacy is in PSPACE (in the size of the formula describing the workflow)
- utility is decidable
Algorithms implemented in model checkers, e.g., SPIN, MOCHA. Potential for small-model theorems.

26 Outline
1. Motivating Example
2. Framework
   - Model
   - Logic of Privacy and Utility
3. Workflows and Responsibility
4. Algorithmic Results
   - Workflow design assuming agents responsible (abstract workflows)
   - Auditing logs when agents irresponsible (only graph-based workflows)
5. Summary

27 Auditing Results: Summary
- Definitions: accountability = causality + irresponsibility
- Design of an audit log that captures the causality relation
- Algorithm for finding agents accountable for a policy violation in graph-based workflows using the audit log
  - The algorithm uses an oracle: O(msg) = contents(msg)

28 Policy Compliance/Violation
- Strong compliance [BDMN 2006, BDMS 2007]:
  - the action does not violate current policy requirements
  - future policy requirements after the action can be met
- Formally, a judgment relating (policy, history, contemplated action) to (compliance judgment, future requirements)

29 Local Compliance
- Locally compliant policy: agents can determine strong compliance based on their local view of the history

30 Causality
- Lamport causality (the happened-before relation) [Lamport 1978]

31 Accountability & Audit Log
- Accountability = causality + irresponsibility
- Audit log design (sketched below):
  - records all Send(p, q, m) and Receive(p, q, m) events executed
  - maintains the causality structure
  - O(1) operations per event logged
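One way to realize such a log with O(1) bookkeeping per event is to record, for each event, a pointer to the agent's previous event and, for receives, the matching send; the data layout below is my own illustration, not the paper's design:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class LogEntry:
    kind: str                 # "send" or "receive"
    agent: str                # the agent performing the event
    peer: str                 # the other endpoint
    msg_id: str
    prev_same_agent: Optional[int] = None   # previous event of this agent
    matching_send: Optional[int] = None     # for receives: index of the send

@dataclass
class AuditLog:
    entries: List[LogEntry] = field(default_factory=list)
    last_event_of: Dict[str, int] = field(default_factory=dict)
    send_of_msg: Dict[str, int] = field(default_factory=dict)

    def _append(self, entry: LogEntry) -> int:
        # Constant work: one dictionary lookup and one append per event.
        entry.prev_same_agent = self.last_event_of.get(entry.agent)
        self.entries.append(entry)
        idx = len(self.entries) - 1
        self.last_event_of[entry.agent] = idx
        return idx

    def log_send(self, p: str, q: str, msg_id: str) -> int:
        idx = self._append(LogEntry("send", p, q, msg_id))
        self.send_of_msg[msg_id] = idx
        return idx

    def log_receive(self, q: str, p: str, msg_id: str) -> int:
        entry = LogEntry("receive", q, p, msg_id,
                         matching_send=self.send_of_msg.get(msg_id))
        return self._append(entry)

    def causal_predecessors(self, idx: int) -> List[int]:
        """Immediate predecessors in the happened-before graph."""
        e = self.entries[idx]
        return [i for i in (e.prev_same_agent, e.matching_send) if i is not None]

log = AuditLog()
s = log.log_send("Jorge", "Candy", "m1")
r = log.log_receive("Candy", "Jorge", "m1")
print(log.causal_predecessors(r))   # [0]: the matching send
```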

32 Auditing Algorithm

33 Central Lemma

34 Main Theorem

35 MyHealth Example
1. Policy violation: secretary Candy receives a health-question mistagged as appointment-request
2. Construct the causality graph G and search backwards using BFS (see the sketch below):
   - Candy received message m from patient Jorge
   - O(m) = health-question, but tags(m) = appointment-request
   - The patient is responsible for the health-question tag
   - Jorge is identified as accountable
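A rough sketch of this backward search; the event fields, the tagging-responsibility table, and the irresponsibility test are illustrative assumptions rather than the paper's exact algorithm:

```python
from collections import deque
from dataclasses import dataclass
from typing import Dict, List, Set

# Which roles are responsible for tagging which attributes (assumed).
TAGGING_RESPONSIBILITY: Dict[str, Set[str]] = {
    "patient": {"health-question", "appointment-request"},
    "nurse": {"health-question"},
    "secretary": set(),
}

@dataclass
class Event:
    kind: str                  # "send" or "receive"
    agent: str
    role: str
    msg_tags: Set[str]         # declared tags
    msg_contents: Set[str]     # true contents, via the oracle O(msg)
    preds: List[int]           # causal predecessors (indices into the log)

def accountable_agents(log: List[Event], violation_idx: int) -> Set[str]:
    """BFS backwards from the violation, flagging irresponsible taggers."""
    accountable: Set[str] = set()
    seen = {violation_idx}
    queue = deque([violation_idx])
    while queue:
        e = log[queue.popleft()]
        if e.kind == "send":
            mistagged = e.msg_contents - e.msg_tags
            if mistagged & TAGGING_RESPONSIBILITY.get(e.role, set()):
                accountable.add(e.agent)
        for p in e.preds:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return accountable

# Candy (secretary) receives m from Jorge (patient); O(m) = health-question
# but tags(m) = appointment-request, so Jorge is identified as accountable.
log = [
    Event("send", "Jorge", "patient",
          {"appointment-request"}, {"health-question"}, preds=[]),
    Event("receive", "Candy", "secretary",
          {"appointment-request"}, {"health-question"}, preds=[0]),
]
print(accountable_agents(log, violation_idx=1))   # {'Jorge'}
```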

36 Summary
1. Framework
   - Concurrent game model
   - Logic of Privacy and Utility (temporal logic: LTL, ATL*)
2. Organizational process as workflow
   - Role-based responsibility for human and mechanical agents
3. Algorithmic results
   - Workflow design assuming agents responsible: privacy and utility decidable (automated, via model checking)
   - Auditing logs when agents irresponsible: from policy violation to accountable agents (using the oracle)

37 Thanks Questions?

38

39 Deciding Privacy
- The PLTL model-checking problem is decidable in PSPACE
- Check: G ⊨ tags-correct ∧ agents-responsible ⊃ privacy-policy, where G is the concurrent game structure
- The result applies to finite models (bounded numbers of agents, messages, ...)

40 Deciding Utility
- ATL* model checking of concurrent game structures is decidable with perfect information, undecidable with imperfect information
- Theorem: there is a sound decision procedure for deciding whether a workflow achieves utility
- Intuition: translate imperfect information into perfect information by considering possible actions from one player's point of view

41 Local Communication Game
- Quotient structure under invisible actions, G_p
  - States: the smallest equivalence relation such that K1 ~p K2 if K1 →a K2 and action a is invisible to p
  - Actions: [K] → [K'] if there exist K1 in [K] and K2 in [K'] such that K1 → K2
- Lemma: for all LTL formulas φ visible to p, G_p ⊨ ⟨⟨p⟩⟩ φ implies G ⊨ ⟨⟨p⟩⟩ φ
- The truth value of formulas visible to p depends only on actions that p can observe, e.g., send(*, p, *), send(p, *, *). Locality again!

42 Related Languages
[Comparison table: RBAC, XACML, EPAL, P3P, and LPU compared on sender, recipient, subject, attributes, past, future, and combination; legend: unsupported / partially supported / fully supported.]
LPU fully supports attributes, combination, temporal conditions, and utility.

