
1 Inside Microsoft's Secure Windows Initiative. Steve Lipner, Director of Security Engineering Strategy, Security Business Unit, Microsoft Corporation

2 Agenda
- Who Am I?
- What is SWI?
- SD3 + C
- Secure Development Process
- Threat Models
- Relative Attack Surface
- Open Questions

3 Who is this guy?
- Been at Microsoft for 3.5 years
- Always in security
- Started working in security in 1970
- Experience includes A1 systems, firewalls, consulting, other stuff
- Pragmatic
- A chief conspirator!

4 What is SWI?
- Secure Windows Initiative
- Work across Microsoft
- Focus on securing products
- Security Features != Secure Features
- Two sub-groups: Defensive SWI and Offensive SWI

5 Building Software for People
Qualities to balance: Security, Privacy, Reliability, Supportable, Manageable, Deployable, Compatible, Affordable, International, Accessible, Usable (Features), Doable (Schedule, $, skills).
You cannot build software for people in a vacuum.

6 Building Software for People (same quality diagram as slide 5)

7 SD3 + Communications: A Security Framework
- Secure by Design: secure architecture & code; threat analysis; reduce vulnerabilities
- Secure by Default: reduce attack surface area; unused features off by default; only require minimum privilege
- Secure in Deployment: protect, detect, defend, recover, manage; process (how-tos, architecture guides); people (training)
- Communications: clear security commitment; full member of the security community; Microsoft Security Response Center

8 SD3 at Work: MS Windows Server 2003
- Unaffected: the underlying DLL (NTDLL.DLL) was not vulnerable; code was made more conservative during the Security Push
- Even if it was vulnerable: IIS 6.0 is not running by default on Windows Server 2003
- Even if it was running: IIS 6.0 doesn't have WebDAV enabled by default
- Even if it did have WebDAV enabled: the maximum URL length in IIS 6.0 is 16 KB by default (>64 KB needed)
- Even if there was an exploitable buffer overrun: it would have occurred in w3wp.exe, which now runs as network service
- Even if the buffer was large enough: the process halts rather than executes malicious code, due to buffer-overrun detection code (/GS)

9 Secure Product Development Timeline
Milestones: Concept / Requirements; Designs Complete; Test Plans Complete; Code Complete; Ship; Post-Ship.
Activities along the timeline: security questions during interviews; threat analysis; SWI review; team member training; data mutation & least-privilege tests; security sign-off criteria determined; review old defects; check-ins checked; secure coding guidelines; use tools; security push; security audit; external review; learn & refine.

10 Threat Analysis
- You cannot build secure applications unless you understand threats
- Adding security features does not mean you have secure software ("We use SSL!")
- Find issues before the code is created
- Find different bugs than code review and testing (implementation bugs vs. higher-level design issues)
- Approx. 50% of issues come from threat models

11 Threat Modeling Process
- Create a model of the app (DFD, UML, etc.)
- Build a list of assets that require protection
- Categorize threats to each attack target node with STRIDE: Spoofing, Tampering, Repudiation, Info Disclosure, Denial of Service, Elevation of Privilege
- Build a threat tree for each threat (derived from hardware fault trees)
- Rank threats by risk: Risk = Potential * Damage
- DREAD: Damage potential, Reproducibility, Exploitability, Affected users, Discoverability
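The DREAD attributes above can be folded into the deck's two-factor formula. A minimal sketch, assuming each attribute is rated 1-10 as on the DREAD rankings slide; the grouping (Damage potential and Affected users into Damage; the other three into Chance) follows slide 15, but using max() as the combiner within each group is an assumption, not the deck's stated method:

```java
// Illustrative DREAD scoring: each attribute rated 1-10, collapsed into
// Damage and Chance, then Risk = Chance * Damage (slide 16's formula).
public class DreadScore {
    final int damagePotential, reproducibility, exploitability,
              affectedUsers, discoverability;

    DreadScore(int damagePotential, int reproducibility, int exploitability,
               int affectedUsers, int discoverability) {
        this.damagePotential = damagePotential;
        this.reproducibility = reproducibility;
        this.exploitability = exploitability;
        this.affectedUsers = affectedUsers;
        this.discoverability = discoverability;
    }

    // Damage potential and Affected users fold into Damage (slide 15).
    int damage() { return Math.max(damagePotential, affectedUsers); }

    // Reproducibility, Exploitability, Discoverability fold into Chance.
    int chance() {
        return Math.max(reproducibility,
               Math.max(exploitability, discoverability));
    }

    int risk() { return chance() * damage(); } // Risk = Chance * Damage

    public static void main(String[] args) {
        DreadScore s = new DreadScore(9, 9, 5, 8, 7);
        System.out.println("risk = " + s.risk()); // prints "risk = 81"
    }
}
```

With these example ratings, Damage = max(9, 8) = 9 and Chance = max(9, 5, 7) = 9, matching the worked example on slide 16.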

12 (diagram slide; no transcript text)

13 Portion of DFD
- 1.0 User (Internet) sends a payroll request to 5.0 Service (Data Centre) and receives a payroll response
- Potentially sensitive payroll information (Info Disclosure threat; a privacy issue)
- User privilege required
- S-T-R-I-D-E applies per data flow

14 Information Disclosure Threat to Payroll Data
Threat #1 (I): View payroll data
1.0 View payroll data (I)
  1.1 Traffic is unprotected (AND)
  1.2 Attacker views traffic
    1.2.1 Sniff traffic with protocol analyzer
    1.2.2 Listen to router traffic
      1.2.2.1 Router is unpatched (AND)
      1.2.2.2 Compromise router
      1.2.2.3 Guess router password

15 Applying Risk (W.I.P.)
For the threat tree on slide 14, the DREAD attributes collapse into two factors:
- Damage potential, Affected users -> Damage
- Reproducibility, Exploitability, Discoverability -> Chance

16 Applying Risk (W.I.P.), using Risk = Chance * Damage
Damage = 9. Leaf chances: 1.1 Traffic is unprotected = 10; 1.2.1 Sniff traffic with protocol analyzer = 9; 1.2.2.1 Router is unpatched = 5; 1.2.2.2 Compromise router = 3; 1.2.2.3 Guess router password = 1.
Combine up the tree with AND = min(C1, C2, ... Cn) and OR = max(C1, C2, ... Cn):
- 1.2.2 = max(1.2.2.3, min(1.2.2.1, 1.2.2.2)) = max(1, min(5, 3)) = 3
- 1.2 = max(1.2.1, 1.2.2) = max(9, 3) = 9
- 1.0 = min(1.1, 1.2) = min(10, 9) = 9
Risk = 9 * 9 = 81. Gotta fix it!
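The AND = min / OR = max propagation above amounts to a small tree walk. A sketch reproducing the slide's numbers (the exact assignment of the chances 5, 3, and 1 to the three router leaves is inferred from the slide's ordering):

```java
import java.util.Arrays;
import java.util.List;

// Threat-tree node: leaves carry a chance rating; interior nodes combine
// children with AND = min(...) or OR = max(...), as on slide 16.
public class ThreatNode {
    enum Kind { LEAF, AND, OR }
    final Kind kind;
    final int leafChance;
    final List<ThreatNode> children;

    ThreatNode(int chance) {            // leaf constructor
        kind = Kind.LEAF; leafChance = chance; children = List.of();
    }
    ThreatNode(Kind k, ThreatNode... kids) {  // interior node
        kind = k; leafChance = 0; children = Arrays.asList(kids);
    }

    int chance() {
        switch (kind) {
            case LEAF: return leafChance;
            case AND:  return children.stream().mapToInt(ThreatNode::chance).min().getAsInt();
            default:   return children.stream().mapToInt(ThreatNode::chance).max().getAsInt();
        }
    }

    public static void main(String[] args) {
        // 1.2.2 Listen to router traffic = OR(guess password,
        //       AND(router unpatched, compromise router))
        ThreatNode listen = new ThreatNode(Kind.OR,
            new ThreatNode(1),                              // 1.2.2.3 guess router password
            new ThreatNode(Kind.AND,
                new ThreatNode(5),                          // 1.2.2.1 router is unpatched
                new ThreatNode(3)));                        // 1.2.2.2 compromise router
        // 1.2 Attacker views traffic = OR(sniff, listen)
        ThreatNode views = new ThreatNode(Kind.OR, new ThreatNode(9), listen);
        // 1.0 View payroll data = AND(traffic unprotected, attacker views traffic)
        ThreatNode root = new ThreatNode(Kind.AND, new ThreatNode(10), views);

        int damage = 9;
        System.out.println("chance = " + root.chance());        // min(10, max(9, 3)) = 9
        System.out.println("risk = " + damage * root.chance()); // 9 * 9 = 81
    }
}
```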

17 Designing to a Threat Model
Threat types have mitigation techniques:
- Spoofing: authentication (authn), good credential storage
- Tampering: authorization (authz), MAC, signing
- Repudiation: authn, authz, signing, logging, trusted third party
- Info Disclosure: authz, encryption
- Denial of Service: filtering, authn, authz
- Elevation of Privilege: don't run with elevated privs

18 Threat Mitigation Techniques & Technologies (sample row)
Threat type (STRIDE): Spoofing | Mitigation technique: Authentication | Technologies: NTLM, X.509 certs, PGP keys, Basic, Digest, Kerberos, SSL/TLS
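The table above shows only the Spoofing row; a simple lookup wiring each STRIDE category to the mitigation techniques listed on slide 17 might look like this (the map structure itself is an illustration, not part of the deck):

```java
import java.util.List;
import java.util.Map;

// STRIDE threat categories mapped to mitigation techniques, using the
// technique list from slide 17 of the deck.
public class StrideMitigations {
    enum Stride { SPOOFING, TAMPERING, REPUDIATION,
                  INFO_DISCLOSURE, DENIAL_OF_SERVICE, ELEVATION_OF_PRIVILEGE }

    static final Map<Stride, List<String>> MITIGATIONS = Map.of(
        Stride.SPOOFING,               List.of("Authentication", "Good credential storage"),
        Stride.TAMPERING,              List.of("Authorization", "MAC", "Signing"),
        Stride.REPUDIATION,            List.of("Authentication", "Authorization", "Signing",
                                               "Logging", "Trusted third party"),
        Stride.INFO_DISCLOSURE,        List.of("Authorization", "Encryption"),
        Stride.DENIAL_OF_SERVICE,      List.of("Filtering", "Authentication", "Authorization"),
        Stride.ELEVATION_OF_PRIVILEGE, List.of("Don't run with elevated privileges"));

    public static void main(String[] args) {
        // For each threat found during modeling, look up candidate mitigations.
        System.out.println(MITIGATIONS.get(Stride.SPOOFING));
    }
}
```

A fuller version would hang concrete technologies (NTLM, Kerberos, SSL/TLS, ...) off each technique, as the slide's table does.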

19 Threat Mitigation (applied to the payroll threat tree)
- Look for high-level AND clauses
- 1.1 Traffic is unprotected: encryption (SSL/TLS, WS-Security, IPSec, etc.)
- Router is unpatched: patching policy
- Guess router password: password policy
- Defense in depth

20 Coding to a Threat Model
- Threat models help you determine the most dangerous portions of the application
- Prioritize security push efforts
- Prioritize on-going code reviews
- Help determine the defense mechanisms to use
- Determine data flow
- All input is evil, until proven otherwise

21 Testing to a Threat Model
- Testers have problems: most are not security testers (read: evil); what needs testing? how do you test?
- Each threat in the model must have a test plan
- The threat model helps drive testing concepts
- Allows for whitehat and blackhat testing: prove the mitigations work; prove they don't work :-)

22 Testing to a Threat Model
Mitigation techniques have blackhat testing techniques:
- Spoofing
  - Authentication: brute-force creds, cred replay, downgrade to less secure authn, view creds on wire
  - Good credential storage: use Information Disclosure attacks
- Tampering
  - Authorization: attempt authz bypass
  - MAC, signing: tamper and re-hash? create invalid hash data; force app to use less secure protocol (no SSL)

23 Testing to a Threat Model
- Repudiation
  - Authn & authz: see Spoofing and Tampering
  - Signing: see Tampering
  - Logging: prevent auditing, spoof log entries (CR/LF)
  - Trusted third party: DoS the third party
- Info Disclosure (NOTE: is there any PII/sensitive data in the data?)
  - Authorization: see Tampering
  - Encryption: view on-the-wire data; kill the process and scavenge for sensitive data; failure leads to disclosure in error messages

24 Testing to a Threat Model
- Denial of Service
  - Filtering: flooding, malformed data
  - Authn & authz: see Spoofing and Tampering
  - Resource pressure
- Elevation of Privilege
  - Don't run with elevated privs
  - Spend more time here!

25 Threat Modeling Notes
- Scenario-driven
- Note infrastructure mitigating techniques vs. application mitigating techniques
- Determine the privilege needed to initiate a data flow (helps determine chance of attack); be wary of unauthenticated data flows
- Attackers follow the path of least resistance
- All information disclosure threats are potentially privacy issues
- Any non-mitigated threat is a potential vulnerability
- All security features must mitigate one or more threats
- Work on the higher-risk items first

26 Relative Attack Surface
- Simple way of measuring potential for attack
- Goal of a product should be to reduce attack surface: lower privilege, turn features off, defense in depth
- Does not address code quality
- Hard to compare dissimilar products
- On-going work by Microsoft Research

27 The Simple Process
Old Vulns -> Determine Attack Vector(s) -> Apply Bias -> Σ -> RASQ
Think of it as Cyclomatic Complexity for Security!

28 Sample Windows Data Points
- Open sockets
- Open RPC endpoints
- Open named pipes
- Services; services running by default; services running as SYSTEM
- Active Web handlers
- Active ISAPI filters
- Dynamic Web pages
- Executable vdirs
- Enabled accounts; enabled accounts in admin group
- Null sessions to pipes and shares
- Guest account enabled
- Weak ACLs in FS, in Registry, and on shares
- Scripting
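The "apply bias, then Σ" step from slide 27, run over data points like the ones above, reduces to a weighted sum. A sketch, where both the counts and the bias weights are made-up illustrations, not Microsoft's actual RASQ values:

```java
import java.util.Map;

// Relative Attack Surface Quotient sketch: RASQ = sum(bias_i * count_i)
// over attack-surface data points. Weights here are hypothetical.
public class Rasq {
    static int rasq(Map<String, Integer> counts, Map<String, Integer> bias) {
        return counts.entrySet().stream()
                     .mapToInt(e -> bias.getOrDefault(e.getKey(), 1) * e.getValue())
                     .sum();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of(         // measured on the system
            "open sockets", 3,
            "services running as SYSTEM", 2,
            "weak ACLs in FS", 5);
        Map<String, Integer> bias = Map.of(           // hypothetical weights
            "open sockets", 10,
            "services running as SYSTEM", 8,
            "weak ACLs in FS", 7);
        // 3*10 + 2*8 + 5*7 = 81
        System.out.println("RASQ = " + rasq(counts, bias));
    }
}
```

Because the quotient is only meaningful relative to another configuration of a similar product (as slide 26 notes), the interesting output is the delta between two runs, not the absolute number.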

29 Relative Attack Surface IIS Checklist

30 Windows Server 2003 Reduced Attack Profile
- 20+ services off by default
- 20+ services run in lower privilege
- IIS 6 off by default; minimal functionality by default; all code runs in low privilege by default
- More restrictive ACLs throughout
- Internet Explorer is an HTML 3.2 browser
- The "." (current) directory is no longer searched first
- No games installed
- UDDI server written in C#
- All Active Directory traffic is signed/sealed
- SMB packet signing for Domain Controller traffic
- Defense-in-depth measures: safer string-handling functions; OS compiled with the VC++ /GS flag (detects some kinds of stack-based buffer overruns at run time); impersonation privilege

31 Changing the Process: Our Ultimate Goal
- Not to inject security bugs into the code in the first place!
- Short term: remove existing flaws
- Longer term: don't add flaws to the code
- You can't do this through code review... or testing; they only remove existing flaws
- You have to teach people to do the right things...!
- You must change the process!

32 The Turkish-İ Problem (applies also to Azerbaijan!)
- Turkish has four letter "I"s: i (U+0069), I (U+0049), ı (U+0131), İ (U+0130)
- In the Turkish locale, UC("file") == "FİLE"

Broken (blacklist defeated by Turkish uppercasing):

    // Do not allow "FILE://" URLs
    if (url.ToUpper().Left(4) == "FILE") return ERROR;
    getStuff(url);

Fixed (culture-invariant whitelist; only allow HTTP URLs):

    // Only allow "HTTP" URLs
    if (url.ToUpper(CULTURE_INVARIANT).Left(4) == "HTTP") getStuff(url);
    else return ERROR;
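The slide's snippet is C#-flavored; the same locale trap can be reproduced in Java (a sketch, not the slide's original code), where String.toUpperCase in a Turkish locale maps i (U+0069) to dotted İ (U+0130):

```java
import java.util.Locale;

// The Turkish-I problem: in the Turkish locale, uppercasing "i" yields
// "İ" (U+0130), so a blacklist comparison against "FILE" silently fails.
// A locale-invariant comparison (Locale.ROOT) does not have this hole.
public class TurkishI {
    static final Locale TURKISH = Locale.forLanguageTag("tr");

    // Broken blacklist: misses "file://" URLs under the Turkish locale,
    // because "file".toUpperCase(TURKISH) is "FİLE", not "FILE".
    static boolean blockedByBlacklist(String url) {
        return url.toUpperCase(TURKISH).startsWith("FILE");
    }

    // Safer whitelist: locale-invariant uppercasing, allow only HTTP.
    static boolean allowedByWhitelist(String url) {
        return url.toUpperCase(Locale.ROOT).startsWith("HTTP");
    }

    public static void main(String[] args) {
        String url = "file://secret.txt";
        System.out.println("UC(\"file\") in tr = " + "file".toUpperCase(TURKISH)); // FİLE
        System.out.println("blacklist blocks it: " + blockedByBlacklist(url));     // false
        System.out.println("whitelist allows it: " + allowedByWhitelist(url));     // false
    }
}
```

The whitelist also rejects the malicious URL, but for the right reason: only explicitly allowed input gets through, regardless of what the locale does to case mapping.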

33 Summary
- Who Am I?
- What is SWI?
- SD3 + C
- Secure Development Process
- Threat Models
- Relative Attack Surface

34 How can you help?
- When is a threat model complete?
- How does privacy apply to TMs?
- A more complete taxonomy of mitigation techniques and technologies
- A more complete taxonomy of attack techniques
- Is Relative Attack Surface accurate? Is it worthwhile?

35 © 2003 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. Microsoft makes no warranties, express or implied, in this summary.

36 Backup Slides

37 DREAD Rankings
- Damage potential: Minor [1] to Complete subversion [10]
- Reproducibility: Rare [1] to Every time [10]
- Exploitability: NSA only [1] to My Mom [10]
- Affected users: 10% [1] to 100% [10]
- Discoverability: Very subtle [1] to Already on Bugtraq [10]

