
1 Heisenbugs: A Probabilistic Approach to Availability
Jim Gray, Microsoft Research
(Half the slides are hidden; view with PPT to see them all.)
Outline: Terminology and empirical measures. General methods to mask faults. Software-fault tolerance. Summary.

2 Heisenbugs: A Probabilistic Approach to Availability
There is considerable evidence that (1) production systems have about one bug per thousand lines of code, (2) these bugs manifest themselves stochastically: failures are due to a confluence of rare events, and (3) system mean-time-to-failure has a lower bound of a decade or so. To make highly available systems, architects must tolerate these failures by providing instant repair (unavailability is approximated by repair_time/time_to_fail, so cutting the repair time in half makes things twice as good). Ultimately, one builds a set of standby servers with both design diversity and geographic diversity. This minimizes common-mode failures.

3 Dependability: The 3 ITIES
Reliability / Integrity: does the right thing. (Also large MTTF.)
Availability: does it now. (Also small MTTR.) Availability ≈ MTTF / (MTTF + MTTR).
System Availability: if 90% of terminals are up & 99% of the DB is up, then ~89% of transactions are serviced on time.
Holistic vs. Reductionist view.
Security, Integrity, Reliability, Availability.

4 High Availability System Classes Goal: Build Class 6 Systems
System Type: Unmanaged | Managed | Well Managed | Fault Tolerant | High-Availability | Very-High-Availability | Ultra-Availability
Unavailable (min/year): 50,000 | 5,000 | 500 | 50 | 5 | .5 | .05
Availability: 90.% | 99.% | 99.9% | 99.99% | 99.999% | 99.9999% | 99.99999%
Class: 1 | 2 | 3 | 4 | 5 | 6 | 7
UnAvailability = MTTR/MTTF; can cut it in half by halving MTTR or doubling MTBF.
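The availability arithmetic behind this table can be sketched in a few lines of Python. The helper names are hypothetical (not from the deck); "class" is the count of leading nines of availability:

```python
import math

# A sketch of the slide's availability arithmetic: availability is
# 1 - downtime/year, and "class N" counts the leading nines,
# i.e. -log10(unavailability).
MINUTES_PER_YEAR = 525_600  # 365 * 24 * 60

def availability(unavailable_min_per_year):
    return 1.0 - unavailable_min_per_year / MINUTES_PER_YEAR

def availability_class(unavailable_min_per_year):
    unavail = unavailable_min_per_year / MINUTES_PER_YEAR
    return int(-math.log10(unavail))

for mins in [50_000, 5_000, 500, 50, 5, 0.5, 0.05]:
    print(f"{mins:>8} min/yr -> {availability(mins):.7%}  class {availability_class(mins)}")
```

Running it reproduces the table's classes 1 through 7, including the class-5 row (5 unavailable min/year, 99.999%).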

5 Demo: looking at some nodes
Look at Internet node availability: 92% mean, 97% median.
Darrell Long (UCSC), ftp://ftp.cse.ucsc.edu/pub/tr/
"A Study of the Reliability of Internet Sites" (ucsc-crl ps.Z)
"Estimating the Reliability of Hosts Using the Internet" (ucsc-crl ps.Z)
"A Study of the Reliability of Hosts on the Internet" (ucsc-crl ps.Z)
"A Longitudinal Survey of Internet Host Reliability" (ucsc-crl ps.Z)

6 Sources of Failures
MTTF / MTTR:
Power Failure: 2,000 hr / 1 hr
Phone Lines: Soft: >.1 hr / .1 hr; Hard: hr / 10 hr
Hardware Modules: 100,000 hr / 10 hr (many failures are transient)
Software: 1 bug / 1,000 lines of code (after vendor-user testing) => thousands of bugs in the system!
Most software failures are transient: dump & restart the system.
Useful fact: 8,760 hr/year ~ 10k hr/year.
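Each MTTF/MTTR pair above converts into expected downtime per year via the "useful fact" of ~8,760 hr/year. A sketch with the power and hardware-module figures (the helper name is hypothetical):

```python
# Unavailability ~ MTTR / (MTTF + MTTR); expected downtime is that
# fraction of the 8,760 hours in a year. Only the power and
# hardware-module rows from the slide are shown.
HOURS_PER_YEAR = 8_760

def downtime_hours_per_year(mttf_hr, mttr_hr):
    unavailability = mttr_hr / (mttf_hr + mttr_hr)
    return unavailability * HOURS_PER_YEAR

print(downtime_hours_per_year(2_000, 1))      # power: ~4.4 hr/year
print(downtime_hours_per_year(100_000, 10))   # hardware module: ~0.9 hr/year
```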

7 To Get 10 Year MTTF, Must Attack All These Areas
Case Study - Japan: "Survey on Computer Security", Japan Info Dev Corp., March (trans: Eiichi Watanabe).
Outage causes: Vendor 42%, Telecommunication lines 12%, Environment 11.2%, Application software 25%, Operations 9.3%.
Reported MTTF by cause: Vendor (hardware and software): Months. Application software: Months. Communications lines: Years. Operations: Years. Environment: Years.
1,383 institutions reported (6/84 - 7/85): 7,517 outages, MTTF ~ 10 weeks, avg duration ~ 90 MINUTES.
To get a 10-year MTTF, one must attack all these areas.

8 Case Studies - Tandem Trends Reported MTTF by Component
Reported MTTF by component: SOFTWARE: Years; HARDWARE: Years; MAINTENANCE: Years; OPERATIONS: Years; ENVIRONMENT: Years; SYSTEM: Years.
Problem: systematic under-reporting.

9 Many Software Faults are Soft
After design review, code inspection, alpha test, beta test, and 10k hours of gamma test (production), most software faults are transient.
Soft-to-hard fault ratios: MVS Functional Recovery Routines :1; Tandem Spooler :1; Adams >100:1.
Terminology: Heisenbug: works on retry. Bohrbug: faults again on retry.
Adams, "Optimizing Preventative Service of Software Products", IBM J. R&D, 28(1), 1984.
Gray, "Why Do Computers Stop", Tandem TR 85.7, 1985.
Mourad, "The Reliability of the IBM/XA Operating System", 15th ISFTCS, 1985.

10 Summary of FT Studies
Current situation: ~4-year MTTF => fault tolerance works.
Hardware is GREAT (maintenance and MTTF).
Software masks most hardware faults.
Many hidden software outages in operations: new software, utilities. Must make all software ONLINE.
Software seems to define a 30-year MTTF ceiling.
Reasonable goal: 100-year MTTF; class 4 today => class 6 tomorrow.

11 Fault Tolerance vs Disaster Tolerance
Fault-Tolerance: masks local faults. RAID disks. Uninterruptible power supplies. Cluster failover.
Disaster Tolerance: masks site failures. Protects against fire, flood, sabotage, etc. Redundant system and service at a remote site. Use design diversity.
A variety of technologies have been introduced to address the growing need for high-availability servers. The simplest of these is Data Mirroring, which continuously duplicates all disk writes onto a mirrored set of disks, possibly at a remote disaster-recovery site. Today you can get Data Mirroring products for Windows NT Server from a few vendors, including Octopus (http://www.octopus.com) and Vinca (http://www.vinca.com). These solutions provide excellent protection for your data, even in the event of a metropolis-wide disaster. However, they are not high-availability solutions: they cannot detect all types of hardware or software failure, and they have at best limited abilities to automatically restart applications. (For example, users must manually reconnect to the new server, and any applications running on the recovery server are canceled as if it had been the server that failed.)
Server Mirroring, like Novell SFT III (Server Fault Tolerance), is a high-availability capability that both protects your data and provides automatic detection of failures plus restart of selected applications. Server Mirroring provides excellent reliability, but at a very high cost, since it requires an idle "standby" server that does no productive work except when the primary server fails. There are also very few applications that can take advantage of proprietary server-mirroring solutions like Novell SFT III.
At the high end are true "fault tolerant" systems like the excellent "NonStop" systems from Tandem. These systems can detect and almost instantly recover from virtually any single hardware or software failure. Most bank transactions, for example, run on this type of system. This level of reliability comes with a very high price tag, however, and each solution is based on a proprietary, single-vendor set of hardware. And, finally, there is another high-availability technology which seems to offer the best of all these capabilities: clustering...

12 Outline
Terminology and empirical measures
General methods to mask faults
Software-fault tolerance
Summary

13 Fault Model
Failures are independent, so single-fault tolerance is a big win.
Hardware fails fast (blue-screen).
Software fails fast (or goes to sleep).
Software is often repaired by reboot: Heisenbugs.
Operations tasks are a major source of outage: utility operations, software upgrades.

14 Fault Tolerance Techniques
FAIL-FAST MODULES: work or stop.
SPARE MODULES: instant repair time.
INDEPENDENT MODULE FAILS by design. MTTF_pair ~ MTTF² / MTTR (so want tiny MTTR).
MESSAGE-BASED OS: fault isolation; software has no shared memory.
SESSION-ORIENTED COMM: reliable messages detect lost/duplicate messages; coordinate messages with commit.
PROCESS PAIRS: mask hardware & software faults.
TRANSACTIONS: give A.C.I.D. (simple fault model).
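The pair-MTTF approximation MTTF_pair ~ MTTF²/MTTR can be checked numerically. The helper name is hypothetical, and exact Markov analyses add a constant factor near 2; this is only a sketch of the slide's order-of-magnitude claim:

```python
# A pair fails only when the mate dies inside the repair window,
# which happens with probability ~ MTTR/MTTF per failure, so
# MTTF_pair ~ MTTF^2 / MTTR.
def mttf_pair(mttf_hr, mttr_hr):
    return mttf_hr ** 2 / mttr_hr

# 10,000-hr modules with 10-hr repair -> 10^7 hr (over 1,000 years),
# which is why tiny MTTR matters so much.
print(mttf_pair(10_000, 10))
```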

15 Example: the FT Bank
Modularity & repair are KEY:
von Neumann needed 20,000x redundancy in wires and switches; we use 2x redundancy.
Redundant hardware can support peak loads (so it is not wasted).

16 Fail-Fast is Good, Repair is Needed
Lifecycle of a module: fail-fast gives short fault latency.
High availability is low UN-availability.
Unavailability ≈ MTTR / MTTF.
Improving either MTTR or MTTF gives benefit.
Simple redundancy does not help much.

17 Hardware Reliability/Availability (how to make it fail fast)
Comparator strategies:
Duplex: Fail-fast: fail if either fails (e.g. duplexed CPUs) vs. Fail-soft: fail if both fail (e.g. disc, ATM, ...). Note: in recursive pairs, the parent knows which is bad.
Triplex: Fail-fast: fail if 2 fail (triplexed CPUs). Fail-soft: fail if 3 fail (triplexed fail-fast CPUs).

18 Redundant Designs have Worse MTTF!
THIS IS NOT GOOD: variance is lower but MTTF is worse. Simple redundancy does not improve MTTF (it sometimes hurts). This is just an instance of the airplane rule: a two-engine airplane has twice as many engine problems as a one-engine airplane.
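A Monte Carlo sketch of why repair-less redundancy hurts: a fail-fast duplex dies at the first of two failures, halving the mean time to failure. Exponential lifetimes are assumed for illustration only:

```python
import random

# Without repair, a fail-fast duplex fails as soon as EITHER module
# fails, so its MTTF is roughly half the simplex MTTF.
random.seed(1)
MTTF = 1_000.0
N = 20_000

def mean(xs):
    return sum(xs) / len(xs)

simplex = [random.expovariate(1 / MTTF) for _ in range(N)]
duplex = [min(random.expovariate(1 / MTTF), random.expovariate(1 / MTTF))
          for _ in range(N)]

print(mean(simplex))  # ~1000 hr
print(mean(duplex))   # ~500 hr: redundancy made MTTF worse
```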

19 Add Repair: Get 10⁴ Improvement

20 When To Repair? Chances Of Tolerating A Fault are 1000:1 (class 3)
A 1995 study: processor & disc rated at ~10k-hr MTTF.
Observed failures vs. computed double fails:
10k processor fails, 14 double: ratio ~ 1000:1.
40k disc fails, 26 double: ratio ~ 1000:1.
Hardware maintenance: on-line maintenance "works" 999 times out of 1000. The chance a duplexed disc will fail during maintenance? 1:1000. Risk is 30x higher during maintenance => do it off peak hours.
Software maintenance: repair only virulent bugs; wait for the next release to fix benign bugs.
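The ~1000:1 ratio above is roughly the chance that the surviving module of a pair fails inside the repair or maintenance window, p ~ window/MTTF. A minimal sketch with the slide's 10k-hr figure (the helper name is hypothetical):

```python
# Probability that the mate of a duplexed pair fails while the other
# half is out for a repair/maintenance window of the given length.
def p_mate_fails_during_window(mttf_hr, window_hr):
    return window_hr / mttf_hr

print(p_mate_fails_during_window(10_000, 10))       # 0.001, i.e. 1:1000
print(p_mate_fails_during_window(10_000, 10) * 30)  # 30x risk during maintenance
```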

21 OK: So Far ? HOW DO WE GET RELIABLE EXECUTION?
Hardware fail-fast is easy.
Redundancy plus repair is great (class 7 availability).
Hardware redundancy & repair is via modules. How can we get instant software repair?
We know how to get reliable storage: RAID, or dumps and transaction logs.
We know how to get available storage: fail-soft duplexed discs (RAID 1...N).
? HOW DO WE GET RELIABLE EXECUTION?
? HOW DO WE GET AVAILABLE EXECUTION?

22 Outline
Terminology and empirical measures
General methods to mask faults
Software-fault tolerance
Summary

23 Key Idea
Architecture: software masks hardware faults, environmental faults, distribution, and maintenance.
Software automates / eliminates operators.
So, in the limit there are only software & design faults.
Software-fault tolerance is the key to dependability. INVENT IT!

24 Software Techniques: Learning from Hardware
Recall that most outages are not hardware. Most outages in fault-tolerant systems are SOFTWARE.
Fault avoidance techniques: good & correct design. After that:
Software fault tolerance techniques:
Modularity (isolation, fault containment)
Design diversity
N-version programming: N different implementations
Defensive programming: check parameters and data
Auditors: check data structures in background
Transactions: to clean up state after a failure
Paradox: need fail-fast software.

25 Fail-Fast and High-Availability Execution
Software N-plexing: design diversity. N-version programming: write the same program N times (N > 3), compare the outputs of all programs, and take a majority vote.
Process pairs: instant restart (repair). Use defensive programming to make a process fail-fast. Have a restarted process ready in a separate environment; the second process "takes over" if the primary faults. The transaction mechanism can clean up distributed state if takeover happens in the middle of a computation.
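The process-pair takeover idea can be sketched as a toy checkpointing pair. The classes and the bank-balance state are hypothetical illustrations, not Tandem's actual protocol:

```python
# Primary checkpoints its state to the backup after every operation;
# if the primary faults, the backup takes over from the last checkpoint.
class Backup:
    def __init__(self):
        self.state = None

    def checkpoint(self, state):
        self.state = dict(state)   # copy, so the backup is independent

    def take_over(self):
        return dict(self.state)

class Primary:
    def __init__(self, backup):
        self.backup = backup
        self.state = {"balance": 0}

    def apply(self, delta):
        self.state["balance"] += delta
        self.backup.checkpoint(self.state)  # checkpoint after each op

backup = Backup()
primary = Primary(backup)
primary.apply(100)
primary.apply(-30)
# primary crashes here; the backup resumes from the last checkpoint
print(backup.take_over())  # {'balance': 70}
```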

26 What Is MTTF of N-Version Program?
First fails after MTTF/N; second fails after MTTF/(N-1); ... so MTTF_total = MTTF × (1/N + 1/(N-1) + ... + 1/2).
The harmonic series goes to infinity, but VERY slowly: for example, 100-version programming gives only ~4x the MTTF of 1-version programming. It does reduce variance.
N-version programming needs REPAIR: if a program fails, its state must be reset from the other programs => programs need a common data/state representation.
How does this work for database systems? Operating systems? Network systems? Answer: I don't know.
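The harmonic-series arithmetic can be checked directly; this sketch sums MTTF × (1/N + 1/(N-1) + ... + 1/2), i.e. MTTF × (H_N - 1), with a function name that is not from the deck:

```python
# Total lifetime of repair-less N-plexing, in units of single-version
# MTTF: each surviving version fails after MTTF/k when k remain.
def n_version_mttf_factor(n):
    return sum(1.0 / k for k in range(2, n + 1))

print(n_version_mttf_factor(100))  # ~4.19: 100 versions buy only ~4x MTTF
```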

27 Why Process Pairs Mask Faults: Many Software Faults are Soft
After design review, code inspection, alpha test, beta test, and 10k hours of gamma test (production), most software faults are transient.
Soft-to-hard fault ratios: MVS Functional Recovery Routines :1; Tandem Spooler :1; Adams >100:1.
Terminology: Heisenbug: works on retry. Bohrbug: faults again on retry.
Adams, "Optimizing Preventative Service of Software Products", IBM J. R&D, 28(1), 1984.
Gray, "Why Do Computers Stop", Tandem TR 85.7, 1985.
Mourad, "The Reliability of the IBM/XA Operating System", 15th ISFTCS, 1985.

28 Process Pair Repair Strategy
If the software fault (bug) is a Bohrbug, then there is no repair: "wait for the next release", or "get an emergency bug fix", or "get a new vendor".
If the software fault is a Heisenbug, then repair is reboot and retry, or switch to the backup process (instant restart).
PROCESS PAIRS tolerate hardware faults and Heisenbugs. Repair time is seconds; it could be milliseconds if time is critical.
Flavors of process pair: Lockstep; Automatic; State checkpointing; Delta checkpointing; Persistent.
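Heisenbug repair by retry can be sketched as a small wrapper. Both the `retry()` helper and the simulated flaky operation are hypothetical, illustrating "works on retry" vs. "faults again on retry":

```python
# A transient (Heisen-) fault is masked by re-running the operation;
# a Bohrbug fails every attempt and still surfaces after retries.
def retry(operation, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as e:  # production code would catch specific faults
            last_error = e
    raise last_error            # a Bohrbug exhausts all attempts

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] == 1:         # fail once, then succeed: a Heisenbug
        raise RuntimeError("transient fault")
    return "ok"

print(retry(flaky))  # ok
```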

29 How Takeover Masks Failures
The server resets at takeover. But what about application state? Database state? Network state?
Answer: use transactions to reset state! Abort the transaction if the process fails.
Keeps the network "up". Keeps the system "up". Reprocesses some transactions on failure.

30 PROCESS PAIRS - SUMMARY
Transactions give reliability. Process pairs give availability.
Process pairs are expensive & hard to program.
Transactions + persistent process pairs => fault-tolerant sessions & execution.
When Tandem converted to this style: saved 3x messages, saved 5x message bytes, and made programming easier.

31 SYSTEM PAIRS FOR HIGH AVAILABILITY
Primary / Backup: programs, data, and processes replicated at two sites.
The pair looks like a single system; the system becomes a logical concept. Like process pairs: system pairs.
The backup receives the transaction log (spooled if the backup is down).
If the primary fails or the operator switches, the backup offers service.

32 SYSTEM PAIR CONFIGURATION OPTIONS
Mutual backup: each site has 1/2 of the database & application.
Hub: one site acts as backup for many others.
In general, the configuration can be any directed graph.
Stale replicas: lazy replication.
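The log shipping with spooling described on the previous slide ("spooled if backup down") can be sketched with toy classes. The names are hypothetical; this is a sketch of the idea only:

```python
# The primary ships each committed log record to the backup; while the
# backup is down, records are spooled and drained on reconnect, so the
# backup never misses a record.
class BackupSite:
    def __init__(self):
        self.up = True
        self.applied = []

    def apply(self, record):
        self.applied.append(record)

class PrimarySite:
    def __init__(self, backup):
        self.backup = backup
        self.spool = []

    def commit(self, record):
        if self.backup.up:
            for r in self.spool:        # drain spooled records first
                self.backup.apply(r)
            self.spool.clear()
            self.backup.apply(record)
        else:
            self.spool.append(record)   # spool while the backup is down

b = BackupSite()
p = PrimarySite(b)
p.commit("T1")
b.up = False
p.commit("T2")   # spooled
b.up = True
p.commit("T3")   # drains T2, then applies T3
print(b.applied)  # ['T1', 'T2', 'T3']
```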

33 SYSTEM PAIRS FOR: SOFTWARE MAINTENANCE
Step 1: Both systems are running V1.
Step 2: Backup is cold-loaded as V2.
Step 3: SWITCH to backup.
Step 4: The new backup is cold-loaded as V2.
Similar ideas apply to: database reorganization; hardware modification (e.g. add discs, processors, ...); hardware maintenance; environmental changes (rewire, new air conditioning); moving the primary or backup to a new location.

34 SYSTEM PAIR BENEFITS
Protects against ENVIRONMENT: weather, utilities, sabotage.
Protects against OPERATOR FAILURE: two sites, two sets of operators.
Protects against MAINTENANCE OUTAGES: work on backup software/hardware; install/upgrade/move...
Protects against HARDWARE FAILURES: backup takes over.
Protects against TRANSIENT SOFTWARE ERRORS.
Allows design diversity (different sites have different software/hardware).

35 Key Idea
Architecture: software masks hardware faults, environmental faults, distribution, and maintenance.
Software automates / eliminates operators.
So, in the limit there are only software & design faults. Many are Heisenbugs.
Software-fault tolerance is the key to dependability. INVENT IT!

36 References
Adams, E. (1984). "Optimizing Preventative Service of Software Products." IBM Journal of Research and Development, 28(1).
Anderson, T. and B. Randell (1979). Computing Systems Reliability.
Garcia-Molina, H. and C. A. Polyzois (1990). Issues in Disaster Recovery. 35th IEEE Compcon.
Gray, J. (1986). Why Do Computers Stop and What Can We Do About It. 5th Symposium on Reliability in Distributed Software and Database Systems.
Gray, J. (1990). "A Census of Tandem System Availability between 1985 and 1990." IEEE Transactions on Reliability, 39(4).
Gray, J. N. and A. Reuter (1993). Transaction Processing: Concepts and Techniques. San Mateo, Morgan Kaufmann.
Lampson, B. W. (1981). Atomic Transactions. In Distributed Systems -- Architecture and Implementation: An Advanced Course. ACM, Springer-Verlag.
Laprie, J. C. (1985). Dependable Computing and Fault Tolerance: Concepts and Terminology. 15th FTCS.
Long, D. D., J. L. Carroll, and C. J. Park (1991). A Study of the Reliability of Internet Sites. Proc. 10th Symposium on Reliable Distributed Systems, Pisa, September 1991.
Long, D., A. Muir, and R. Golding (1995). "A Longitudinal Study of Internet Host Reliability." Proc. Symposium on Reliable Distributed Systems, Bad Neuenahr, Germany: IEEE, September 1995, pp. 2-9.

