On Cosmic Rays, Bat Droppings, and What to Do About Them
David Walker, Princeton University
with Jay Ligatti, Lester Mackey, George Reis, and David August

A Little-Publicized Fact: 1 + 1 = 23 (sometimes, on hardware suffering a soft fault)

How Do Soft Faults Happen?
- A high-energy particle passes through a device and collides with a silicon atom.
- The collision generates an electric charge that can flip a single bit.
- "Galactic particles" are high-energy particles that penetrate to Earth's surface, through buildings and walls.
- "Solar particles" affect satellites; they cause under 5% of terrestrial problems.
- Alpha particles can also come from bat droppings.

How Often Do Soft Faults Happen?
[Chart: IBM soft-fail-rate study of mainframes, 1983-86, showing failure rates rising with altitude across NYC; Tucson, AZ; Denver, CO; and Leadville, CO]
Some data points [Ziegler-Puchner 2004]:
- 1983-86: Leadville, CO (the highest incorporated city in the US): 1 failure every 2 days
- 1983-86: a subterranean experiment under 50 ft of rock: no failures in 9 months
- 2004: 1 failure per year for a laptop with 1 GB of RAM at sea level
- 2004: 1 failure per trans-Pacific round trip

How Often Do Soft Faults Happen?
[Chart: soft error rate trends (Shekhar Borkar, Intel, 2004), annotated "we are approximately here" and "6 years from now"]
Soft error rates go up as:
- voltages decrease
- feature sizes decrease
- transistor density increases
- clock rates increase
All of these are the direction of future manufacturing trends.

Mitigation Techniques
Hardware: error-correcting codes; redundant hardware.
- Pros: fast for a fixed policy.
- Cons: the fault-tolerance policy is decided at hardware design time; mistakes cost millions; one-size-fits-all policy; expensive.
Software and hybrid schemes: replicate computations.
- Pros: immediate deployment; policies customized to the environment and application; reduced hardware cost.
- Cons: for the same universal policy, slower (but not as much as you'd think), and it may not actually work! Much research in the HW/compilers community is completely lacking proof. A sketch of the replicate-and-vote idea follows.
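To make the replicate-computations idea concrete, here is a minimal OCaml sketch (my own illustration, not code from the talk): each value is computed three times and an atomic majority vote picks the output, so a single corrupted copy is outvoted. The fault injector zap_one is hypothetical.

(* Majority vote over three copies of a value. *)
let vote (a, b, c) =
  if a = b || a = c then a
  else if b = c then b
  else failwith "double fault: no majority"

(* Hypothetical fault injector: corrupt the third copy. *)
let zap_one (a, b, _c) = (a, b, 23)

let () =
  let x = (2, 2, 2) in                 (* replicated: let x = 2       *)
  let y =
    let (a, b, c) = zap_one x in       (* one copy silently corrupted *)
    (a + a, b + b, c + c)              (* replicated: let y = x + x   *)
  in
  print_int (vote y)                   (* prints 4: the vote discards 46 *)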

Agenda
Answer basic scientific questions about software-controlled fault tolerance:
- Do software-only or hybrid SW/HW techniques actually work?
- For what fault models? How do we specify them?
- How can we prove it?
Build compilers that produce software that runs reliably on faulty hardware.
Moreover: let's not replace faulty hardware with faulty software.

Lambda Zap: A Baby Step
Lambda Zap [ICFP '06] is:
- a lambda calculus that exhibits intermittent data faults, plus operators to detect and correct them
- a type system that guarantees the observable outputs of well-typed programs do not change in the presence of a single fault
- expressive enough to implement an ordinary typed lambda calculus
End result: the foundation for a fault-tolerant typed intermediate language.

The Fault Model
Lambda Zap models simple data faults only: v1 ---> v2.
Not modeled:
- memory faults (better protected using ECC hardware)
- control-flow faults (i.e., faults during control-flow transfer)
- instruction faults (i.e., faults in instruction opcodes)
Goal: construct programs that tolerate 1 fault, in the sense that observers cannot distinguish between fault-free and 1-fault runs. A small executable rendering of this goal follows.
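As a minimal OCaml sketch (my own rendering, with hypothetical names), the fault model and the goal can be phrased operationally: a single fault rewrites one stored value to arbitrary garbage, and a program tolerates one fault if every such run is observably identical to the fault-free run.

(* Enumerate the fault-free run plus every single-fault variant. *)
let one_fault_runs (regs : int array) (garbage : int) : int array list =
  let faulty i =
    let r = Array.copy regs in
    r.(i) <- garbage;                  (* the fault: v1 ---> v2 *)
    r
  in
  regs :: List.init (Array.length regs) faulty

(* Tolerating 1 fault: observers cannot distinguish any of the runs. *)
let tolerates_one_fault observe regs garbage =
  List.for_all
    (fun r -> observe r = observe regs)
    (one_fault_runs regs garbage)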

Lambda to Lambda Zap: The Main Idea
Source program:
  let x = 2 in
  let y = x + x in
  out y
Translation (replicate instructions; atomic majority vote + output):
  let x1 = 2 in
  let x2 = 2 in
  let x3 = 2 in
  let y1 = x1 + x1 in
  let y2 = x2 + x2 in
  let y3 = x3 + x3 in
  out [y1, y2, y3]
Now suppose a single fault corrupts one copy, turning x3 = 2 into x3 = 7. The corrupted value is copied and percolates through the computation (y3 becomes 14), but the final output is unchanged: the majority vote discards the odd value out.
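Here is a minimal OCaml sketch of this translation (my own reconstruction, not the paper's definition): each let-bound computation is copied three ways, with variables renamed per copy, and out becomes a voting output over the three copies.

type exp =
  | Int of int
  | Var of string
  | Add of exp * exp
  | Let of string * exp * exp
  | Out of exp                     (* source form: output one value *)
  | OutVote of exp * exp * exp     (* target form: majority vote + output *)

(* Rename every variable in a source expression to one copy's name. *)
let rec copy c = function
  | Int n -> Int n
  | Var x -> Var (x ^ c)
  | Add (e1, e2) -> Add (copy c e1, copy c e2)
  | Let (x, e1, e2) -> Let (x ^ c, copy c e1, copy c e2)
  | Out e -> Out (copy c e)
  | OutVote _ -> assert false      (* target-only form *)

(* Replicate each binding into three copies; vote at the output. *)
let rec translate = function
  | Let (x, e1, e2) ->
      Let (x ^ "1", copy "1" e1,
        Let (x ^ "2", copy "2" e1,
          Let (x ^ "3", copy "3" e1, translate e2)))
  | Out e -> OutVote (copy "1" e, copy "2" e, copy "3" e)
  | e -> e

Applied to the source program above, translate produces exactly the replicated target shown on the slide.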

Lambda to Lambda Zap: Control Flow
Source:
  let x = 2 in
  if x then e1 else e2
Translation:
  let x1 = 2 in
  let x2 = 2 in
  let x3 = 2 in
  if [x1, x2, x3] then [[ e1 ]] else [[ e2 ]]
The translation takes a majority vote on each control-flow transfer and recursively translates the subexpressions. (Function calls replicate the arguments, the results, and the function itself.)

Almost too easy. Can anything go wrong?...

Faulty Optimizations
Common subexpression elimination (CSE) rewrites
  let x1 = 2 in
  let x2 = 2 in
  let x3 = 2 in
  let y1 = x1 + x1 in
  let y2 = x2 + x2 in
  let y3 = x3 + x3 in
  out [y1, y2, y3]
into
  let x1 = 2 in
  let y1 = x1 + x1 in
  out [y1, y1, y1]
In general, optimizations eliminate redundancy, while fault tolerance requires redundancy.

The Essential Problem
Bad code: the voters depend on a common value x1.
  let x1 = 2 in
  let y1 = x1 + x1 in
  out [y1, y1, y1]
Good code: the voters do not depend on a common value (red depends only on red, green only on green, blue only on blue).
  let x1 = 2 in
  let x2 = 2 in
  let x3 = 2 in
  let y1 = x1 + x1 in
  let y2 = x2 + x2 in
  let y3 = x3 + x3 in
  out [y1, y2, y3]

A Type System for Lambda Zap
Key idea: types track the "color" of the underlying value and prevent interference between colors.
  Colors C ::= R | G | B
  Types  T ::= C int | C bool | C (T1, T2, T3) -> (T1', T2', T3')

Sample Typing Rules
Judgment form: G |--z e : T, where z ::= C | .
Simple value typing rules:

  (x : T) in G
  ------------
  G |--z x : T

  G |--z C n : C int

  G |--z C true : C bool

Sample Typing Rules (continued)
Sample expression typing rules:

  G |--z e1 : C int    G |--z e2 : C int
  --------------------------------------
  G |--z e1 + e2 : C int

  G |--z e1 : R bool    G |--z e2 : G bool    G |--z e3 : B bool
  G |--z e4 : T    G |--z e5 : T
  --------------------------------------------------------------
  G |--z if [e1, e2, e3] then e4 else e5 : T

  G |--z e1 : R int    G |--z e2 : G int    G |--z e3 : B int
  G |--z e4 : T
  -----------------------------------------------------------
  G |--z out [e1, e2, e3]; e4 : T
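These rules translate almost directly into a checker. Below is a minimal OCaml sketch (my own rendering of the three rules above, with hypothetical constructor names): addition requires both operands to share one color, while if demands one red, one green, and one blue guard copy, which is what rules out voters that depend on a common value.

type color = R | G | B
type ty = TInt of color | TBool of color

type exp =
  | Num of color * int
  | Bool of color * bool
  | Var of string
  | Add of exp * exp
  | If of exp * exp * exp * exp * exp   (* if [e1,e2,e3] then e4 else e5 *)

exception Type_error of string

let rec check (g : (string * ty) list) : exp -> ty = function
  | Num (c, _) -> TInt c
  | Bool (c, _) -> TBool c
  | Var x -> (try List.assoc x g with Not_found -> raise (Type_error x))
  | Add (e1, e2) ->
      (* Both operands must carry the same color. *)
      (match check g e1, check g e2 with
       | TInt c1, TInt c2 when c1 = c2 -> TInt c1
       | _ -> raise (Type_error "add: colors must match"))
  | If (e1, e2, e3, e4, e5) ->
      (* One guard copy of each color, so no shared dependence. *)
      (match check g e1, check g e2, check g e3 with
       | TBool R, TBool G, TBool B ->
           let t = check g e4 in
           if check g e5 = t then t
           else raise (Type_error "if: branch types differ")
       | _ -> raise (Type_error "if: guards must be R, G, B"))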

Theorems
Theorem 1: Well-typed programs are safe, even when there is a single error.
Theorem 2: Well-typed programs executing with a single error simulate the output of well-typed programs with no errors [with a caveat].
Theorem 3: There is a correct, type-preserving translation from the simply-typed lambda calculus into Lambda Zap [that satisfies the caveat].

Conclusions
Semiconductor manufacturers are deeply worried about how to deal with soft faults in future architectures (10+ years out).
It's a killer app for proofs and types.

end!

The Caveat

Bad, but well-typed code: out [2, 3, 3]
- After no faults, it outputs 3.
- After 1 fault (say, out [2, 3, 3] becomes out [2, 2, 3]), it outputs 2.
Goal: 0-fault and 1-fault executions should be indistinguishable.
Solution: the three computations must be independent, but equivalent.

The Caveat
Modified typing rule:

  G |--z e1 : R U    G |--z e2 : G U    G |--z e3 : B U
  G |--z e4 : T
  G |--z e1 ~~ e2    G |--z e2 ~~ e3
  -----------------------------------------------------
  G |--z out [e1, e2, e3]; e4 : T

For the details, see Lester Mackey's 60-page TR (a single-semester undergrad project).
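One plausible reading of the e1 ~~ e2 premise, as a minimal OCaml sketch (my own guess at the idea; the TR has the real definition): two voter inputs are acceptable when they are the same computation performed over different copies, i.e. structurally identical up to the copy index on variables.

type exp =
  | Int of int
  | Var of string * int            (* base name plus copy index 1/2/3 *)
  | Add of exp * exp

(* e1 ~~ e2: equivalent computations over independent copies. *)
let rec equiv e1 e2 =
  match e1, e2 with
  | Int m, Int n -> m = n
  | Var (x, i), Var (y, j) -> x = y && i <> j   (* same source var, different copy *)
  | Add (a1, b1), Add (a2, b2) -> equiv a1 a2 && equiv b1 b2
  | _, _ -> false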

Function Operational Semantics Follows

Lambda Zap: Triples
Introduction form: [e1, e2, e3]
- a collection of 3 items, not a pointer to a struct
- each of the 3 is stored in a separate register
- a single fault affects at most one of them
Elimination form: let [x1, x2, x3] = e1 in e2
"Triples" (as opposed to tuples) make the typing and translation rules very elegant, so we baked them right into the calculus.

Lambda to Lambda Zap: Control Flow
Source:
  let f = \x.e in f 2
Translation (majority vote on the control-flow transfer):
  let [f1, f2, f3] = \x. [[ e ]] in [f1, f2, f3] [2, 2, 2]
Operational semantics:
  (M; let [f1, f2, f3] = \x.e1 in e2) ---> (M, l = \x.e1; e2[l/f1][l/f2][l/f3])
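The allocation rule above also fits in a few lines of OCaml. This is a minimal sketch (my own rendering, with hypothetical constructor names): a replicated function binding allocates a single closure l in the store M and substitutes l for all three names f1, f2, f3, so a later fault can corrupt at most one of the three call sites.

type exp =
  | Var of string
  | Lam of string * exp
  | LetTriple of string * string * string * exp * exp
      (* let [f1, f2, f3] = \x.e1 in e2 *)

(* Capture-naive substitution of label l for variable x. *)
let rec subst x l = function
  | Var y -> if y = x then Var l else Var y
  | Lam (y, e) -> if y = x then Lam (y, e) else Lam (y, subst x l e)
  | LetTriple (f1, f2, f3, e1, e2) ->
      LetTriple (f1, f2, f3, subst x l e1,
                 if x = f1 || x = f2 || x = f3 then e2 else subst x l e2)

(* Fresh store labels l1, l2, ... *)
let fresh = let n = ref 0 in fun () -> incr n; Printf.sprintf "l%d" !n

(* (M; let [f1,f2,f3] = \x.e1 in e2) ---> (M, l=\x.e1; e2[l/f1][l/f2][l/f3]) *)
let step (store : (string * exp) list) = function
  | LetTriple (f1, f2, f3, fn, body) ->
      let l = fresh () in
      ((l, fn) :: store, subst f3 l (subst f2 l (subst f1 l body)))
  | e -> (store, e)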

Related Work Follows

Software Mitigation Techniques
Examples: N-version programming, EDDI, CFCSS [Oh et al. 2002], SWIFT [Reis et al. 2005], etc.
Hybrid hardware-software techniques: watchdog processors, CRAFT [Reis et al. 2005], etc.
Pros:
- immediate deployment: if your system is suffering soft-error-related failures, you may deploy new software immediately (this would have benefited Los Alamos Labs, among others)
- policies may be customized to the environment and application
- reduced hardware cost
Cons:
- for the same universal policy, slower (but not as much as you'd think)
- IT MIGHT NOT ACTUALLY WORK!