1 Dependability in the Internet Era
2 Outline The glorious past (Availability Progress) The dark ages (current scene) Some recommendations
3 Preview The Last 5 Years: Availability Dark Ages. Ready for a Renaissance? Things got better, then things got a lot worse! [Chart: availability over time for computer systems, telephone systems, cell phones, and the Internet, on an axis running from 9% through 99.99% and beyond.]
4 DEPENDABILITY: The 3 ITIES. RELIABILITY / INTEGRITY: does the right thing (also MTTF >> 1). AVAILABILITY: does it now (also MTTR << 1): Availability = MTTF / (MTTF + MTTR). System availability: if 90% of terminals are up and 99% of the DB is up, then only 89% of transactions are serviced on time. Holistic vs. reductionist view. [Diagram: Security, Integrity, Reliability, Availability.]
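The slide's 90%-and-99% arithmetic can be checked in a few lines (a minimal sketch; the function name is mine, and independent failures are assumed):

```python
# Availability of serially dependent components multiplies (assuming
# independent failures): a transaction is serviced only when every
# component it touches is up.
def serial_availability(*components):
    a = 1.0
    for c in components:
        a *= c
    return a

# 90% of terminals up and 99% of the DB up:
print(round(serial_availability(0.90, 0.99), 3))  # 0.891, i.e. ~89% on time
```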
5 Fail-Fast is Good, Repair is Needed. Improving either MTTR or MTTF gives benefit; simple redundancy does not help much. Lifecycle of a module: fail-fast gives short fault latency. High availability means low UN-availability: Unavailability ≈ MTTR / MTTF.
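The approximation on this slide can be made concrete (a sketch with assumed numbers; `unavailability` is my name for it):

```python
# Availability = MTTF / (MTTF + MTTR), so
# Unavailability = MTTR / (MTTF + MTTR) ~= MTTR / MTTF when MTTF >> MTTR.
def unavailability(mttf_hours, mttr_hours):
    return mttr_hours / (mttf_hours + mttr_hours)

# Doubling MTTF and halving MTTR give the same win -- either one helps:
print(unavailability(2000.0, 1.0))  # doubled MTTF
print(unavailability(1000.0, 0.5))  # halved MTTR (same value)
```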
6 Fault Model Failures are independent, so single-fault tolerance is a big win. Hardware fails fast (dead disk, blue-screen). Software fails fast (or goes to sleep). Software is often repaired by reboot: –Heisenbugs Operations tasks are a major source of outage: –Utility operations –Software upgrades
7 Disks (RAID): the BIG Success Story. Duplex or parity masks faults. 1M-hour MTTF (~100 years), but –controllers fail and –sites have 1,000s of disks. Duplexing or parity, plus dual pathing, gives perfect disks: Wal-Mart never lost a byte (thousands of disks, hundreds of failures). Only software/operations mistakes are left.
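Why duplexing effectively gives "perfect disks" can be sketched with the classic pair-MTTF approximation (the 24-hour repair window is an assumption; only the 1M-hour disk figure is from the slide):

```python
# A mirrored pair loses data only if the second disk fails while the
# first is being repaired. With independent failures and MTTF >> MTTR,
# the classic approximation is: MTTF_pair ~= MTTF**2 / (2 * MTTR).
HOURS_PER_YEAR = 8766  # 365.25 days

def duplex_mttf_years(mttf_hours, mttr_hours):
    return mttf_hours ** 2 / (2 * mttr_hours) / HOURS_PER_YEAR

# 1M-hour disks (~100 years each) with an assumed 24-hour repair window:
print(f"{duplex_mttf_years(1_000_000, 24):,.0f} years")  # millions of years
```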
8 Fault Tolerance vs Disaster Tolerance Fault tolerance: masks local faults –RAID disks –Uninterruptible power supplies –Cluster failover Disaster tolerance: masks site failures –Protects against fire, flood, sabotage, … –Redundant system and service at a remote site.
9 Case Study - Japan. "Survey on Computer Security", Japan Info Dev Corp., March (trans: Eiichi Watanabe). 1,383 institutions reported (6/84 - 7/85): 7,517 outages, MTTF ~10 weeks, average duration ~90 MINUTES.
Cause: share of outages / MTTF per cause:
Vendor (hardware and software): 42% / 5 months
Application software: 25% / 9 months
Communications lines: 12% / 1.5 years
Operations: 9.3% / 2 years
Environment: 11.2% / 2 years
To get a 10-year MTTF, must attack ALL these areas.
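The survey's headline ~10-week MTTF can be re-derived from its own numbers (a sketch; the 56.5-week window is my reading of the 6/84 - 7/85 period):

```python
# 1,383 institutions observed for ~13 months reported 7,517 outages, so
# per-institution MTTF = (institutions * observation window) / outages.
institutions = 1383
outages = 7517
survey_weeks = 56.5  # roughly 6/84 through 7/85

mttf_weeks = institutions * survey_weeks / outages
print(round(mttf_weeks, 1))  # ~10 weeks, matching the slide
```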
10 Case Studies - Tandem Trends. MTTF improved. Shift from Hardware & Maintenance (from 50% down to 10%) to Software (62%) & Operations (15%). NOTE: systematic under-reporting of Environment, Operations errors, and Application Software.
11 Dependability Status circa 1995 ~4-year MTTF => 5 9s for a well-managed system. Fault tolerance works. Hardware is GREAT (maintenance and MTTF). Software masks most hardware faults. Many hidden software outages in operations: –New software. –Utilities. Make all hardware/software changes ONLINE. Software seems to define a 30-year MTTF ceiling. Reasonable goal: 100-year MTTF, i.e. class 4 today => class 6 tomorrow.
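The "class 4 => class 6" vocabulary can be expressed numerically (a sketch; the 20-minute automated repair time is an assumption of mine, not from the slide):

```python
import math

# Availability "class" = number of leading 9s = -log10(unavailability).
# Class 4 is 99.99% (four 9s); class 6 is 99.9999% (six 9s).
def availability_class(mttf_hours, mttr_hours):
    unavail = mttr_hours / (mttf_hours + mttr_hours)
    return -math.log10(unavail)

# ~4-year MTTF with an assumed 20-minute repair (automated failover):
print(round(availability_class(4 * 8766, 20 / 60), 1))  # about five 9s
```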
12 What's Happened Since Then? Hardware got better. Software got better (even though it is more complex). RAID is standard; snapshots are becoming standard. Cluster-in-a-box: commodity failover. Remote replication is standard.
13 Availability, by level of management:
Un-managed.
Well-managed nodes: masks some hardware failures.
Well-managed packs & clones: masks hardware failures and operations tasks (e.g. software upgrades); masks some software failures.
Well-managed GeoPlex: masks site failures (power, network, fire, move, …); masks some operations failures.
14 Outline The glorious past (Availability Progress) The dark ages (current scene) Some recommendations
15 Progress? MTTF has improved; MTTR has not improved much since 1970. Hardware and software failover and online change (pNp) are now standard. Then the Internet arrived: –No project can take more than 3 months. –Time to market is everything. –Change is good.
16 The Internet Changed Expectations 1990: Phones delivered 99.999%. ATMs delivered 99.99%. Failures were front-page news. Few hackers. Outages lasted an hour. 2000: Cellphones deliver 90%. Web sites deliver 98%. Failures are business-page news. Many hackers. Outages last a day. This is progress?
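The contrast is easier to feel as downtime per year (a sketch; the labels pair each availability figure with the era it is cited for, and five 9s for 1990 phone dialtone is the commonly cited figure):

```python
# Downtime per year implied by an availability figure.
MINUTES_PER_YEAR = 525_960  # 365.25 days

def downtime_minutes_per_year(availability):
    return (1 - availability) * MINUTES_PER_YEAR

for name, a in [("Phones, 1990", 0.99999), ("ATMs, 1990", 0.9999),
                ("Web sites, 2000", 0.98), ("Cellphones, 2000", 0.90)]:
    print(f"{name}: {downtime_minutes_per_year(a):,.0f} minutes/year")
```

Five 9s is minutes per year; 90% is more than a month per year.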
17 Why (1): Complexity. Internet sites are MUCH more complex: –NAP –Firewall/proxy/IP sprayer –Web –DMZ –App server –DB server –Links to other sites –tcp/http/html/dhtml/dom/xml/com/corba/cgi/sql/fs/os… Skill level is much reduced.
18 One of the Data Centers (500 servers)
19 A Schematic of HotMail ~7,000 servers; 100 backend stores with 120 TB (cooked); 3 data centers. Links to –Passport –Ad-rotator –Internet mail gateways –… ~1B messages per day; 150M mailboxes, 100M active; ~400,000 new per day.
20 Why (2): Velocity. No project can take more than 13 weeks. Time to market is everything. Functionality is everything. Faster, cheaper, badder. [Diagram: the trend trades Quality off against Schedule and Functionality.]
21 Why (3): Hackers. Hackers are a new, increased threat. Any site can be attacked from anywhere. Motives include ego, malice, and greed. Complexity makes it hard to protect sites. Concentration of wealth makes an attractive target. "Why did you rob banks?" Willie Sutton: "'Cause that's where the money is!" Note: Eric Raymond's "How to Become a Hacker" is the positive use of the term; here I mean malicious and anti-social hackers.
22 How Bad Is It? Connectivity is poor.
23 How Bad Is It? [Chart: median monthly % ping packet loss, from 2/99.]
24 Microsoft.Com Operations mis-configured a router; it took a day to diagnose and repair. DOS attacks cost a fraction of a day. Regular security patches.
25 BackEnd Servers are More Stable Generally deliver 99.99%. TerraServer, for example: the single back-end failed after 2.5 years; went to a 4-node cluster that fails every 2 months, with transparent failover in 30 seconds and online software upgrades. So … % in the backend. Year 1 through 18 months: down 30 hours in July (hardware stop, auto-restart failed, operations failure); down 26 hours in September (backplane failure, I/O bus failure).
26 eBay: A very honest site Publishes operations log. Has 99% of scheduled uptime. Schedules about 2 hours/week down. Has had some operations outages. Has had some DOS problems.
27 Outline The glorious past (Availability Progress) The dark ages (current scene) Some recommendations
28 Not to throw stones, but… Everyone has a serious problem. The BEST people publish their stats; the others HIDE their stats (check Netcraft to see who I mean). We have good NODE-level availability: 5-9s is reasonable. We have TERRIBLE system-level availability: 2-9s is the goal.
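The gap between 5-9s nodes and 2-9s systems falls out of serial composition (a sketch; the 1,000-dependency count is illustrative, not a measurement):

```python
# If a request must traverse n independent components in series,
# system availability is the product of the per-node availabilities.
def system_availability(node_availability, n_serial):
    return node_availability ** n_serial

print(system_availability(0.99999, 1))               # one five-9s node
print(round(system_availability(0.99999, 1000), 3))  # ~two 9s end to end
```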
29 Recommendation #1 Continue progress on back-ends. –Make management easier (AUTOMATE IT!!!) –Measure. –Compare best practices. –Continue to look for better algorithms. Live in fear: –We are at 10,000-node servers. –We are headed for 1,000,000-node servers.
30 Recommendation #2 The current security approach is unworkable: –Anonymous clients. –The firewall is clueless. –Incredible complexity. We can't win this game! So change the rules (redefine the problem): –No anonymity. –Unified authentication/authorization model. –Single-function devices (with simple interfaces). –Only one kind of interface (uddi/wsdl/soap/…).
31 References
Adams, E. (1984). "Optimizing Preventative Service of Software Products." IBM Journal of Research and Development 28(1).
Anderson, T. and B. Randell (1979). Computing Systems Reliability.
Garcia-Molina, H. and C. A. Polyzois (1990). "Issues in Disaster Recovery." 35th IEEE Compcon.
Gray, J. (1986). "Why Do Computers Stop and What Can We Do About It." 5th Symposium on Reliability in Distributed Software and Database Systems.
Gray, J. (1990). "A Census of Tandem System Availability Between 1985 and 1990." IEEE Transactions on Reliability 39(4).
Gray, J. and A. Reuter (1993). Transaction Processing: Concepts and Techniques. San Mateo, Morgan Kaufmann.
Lampson, B. W. (1981). "Atomic Transactions." Distributed Systems - Architecture and Implementation: An Advanced Course. ACM, Springer-Verlag.
Laprie, J. C. (1985). "Dependable Computing and Fault Tolerance: Concepts and Terminology." 15th FTCS.
Long, D. D., J. L. Carroll, and C. J. Park (1991). "A Study of the Reliability of Internet Sites." Proc. 10th Symposium on Reliable Distributed Systems, Pisa, September 1991.
Long, D., A. Muir, and R. Golding (1995). "A Longitudinal Study of Internet Host Reliability." Proc. Symposium on Reliable Distributed Systems, Bad Neuenahr, Germany: IEEE, September 1995, pp. 2-9.
They have even better for-fee data as well, but the for-free data is really excellent. eBay is an excellent benchmark of best Internet practices. The network traffic/quality report is dated, but the others have died off!