
1 AEGIS: A Single-Chip Secure Processor G. Edward Suh Massachusetts Institute of Technology.


1 1 AEGIS: A Single-Chip Secure Processor G. Edward Suh Massachusetts Institute of Technology

2 2 New Security Challenges
Computing devices are becoming distributed, unsupervised, and physically exposed
–Computers on the Internet (with untrusted owners)
–Embedded devices (cars, home appliances)
–Mobile devices (cell phones, PDAs, laptops)
Attackers can physically tamper with devices
–Invasive probing
–Non-invasive measurement
–Installing malicious software
Software-only protections are not enough

3 3 Distributed Computation
How can we “trust” remote computation?
Example: distributed computation on the Internet (SETI@home, etc.)

DistComp() {
  x = Receive();
  result = Func(x);
  Send(result);
}
Receive() { … }
Send(…) { … }
Func(…) { … }

Need a secure platform
–Authenticate “itself (device)”
–Authenticate “software”
–Guarantee the integrity and privacy of “execution”

4 4 Existing Approaches
Tamper-proof package: IBM 4758
–Sensors to detect attacks
–Expensive
–Continually battery-powered
Trusted Platform Module (TPM)
–A separate chip (TPM) for security functions
–Decrypted “secondary” keys can be read out from the bus

5 5 Our Approach
Build a secure platform with a “single-chip” processor (AEGIS) as the only trusted hardware component
–A single chip is easier and cheaper to protect
The processor authenticates itself, identifies the security kernel (the trusted part of an OS), and protects program state in off-chip memory and I/O

6 6 Contributions
Integration of Physical Random Functions (PUFs)
–Cheap and secure way to authenticate the processor
Architecture to minimize the trusted code base
–Efficient use of protection mechanisms
–Reduce the code to be verified
Off-chip protection mechanisms
–Should be secure, yet efficient
–No existing trusted platform provides proper off-chip protection
–Integrity verification and encryption algorithms
A fully-functional RTL implementation
–Area estimate
–Performance measurement

7 7 Physical Uncloneable Functions

8 8 Authenticating the Processor
Each processor should be unique (it has a secret key) so that a user can authenticate the processor
–Sign/MAC a message with the key → a server can authenticate the processor
–Encrypt for the processor’s key → only the processor can decrypt
Challenge: embedding secrets in the processor in a way that is resistant to physical attacks

9 9 Problem
Storing digital information in a device in a way that is resistant to physical attacks is difficult and expensive
–Adversaries can physically probe EEPROM/ROM and extract secret keys while the processor is off
–A trusted party must embed and test secret keys in a secure location
–EEPROM adds additional complexity to manufacturing

10 10 Our Solution: Physical Random Functions (PUFs)
Generate keys from a complex physical system: what can be hard to predict, but easy to measure?
–A challenge (c bits) configures the physical system; characterizing it yields a response (n bits) used as a secret
–Hard to fully characterize or predict
Security advantages
–Keys are generated on demand → no non-volatile secrets
–No need to program the secret
–Can generate many secrets (multiple master keys) by changing the challenge

11 11 Silicon PUF – Concept [Gassend et al.]
Because of random process variations, no two integrated circuits are identical, even with the same layout
–Variation is inherent in the fabrication process
–Hard to remove or predict
–Relative variation increases as the fabrication process advances
Experiments in which identical circuits with identical layouts were placed on different ICs show that path delays vary enough across ICs to use them for identification
A combinatorial circuit maps a c-bit challenge to an n-bit response

12 12 A (Simple) Silicon PUF [VLSI’04]
Each challenge creates two paths through the circuit that are excited simultaneously: a rising edge races through a chain of challenge-controlled switch stages
The digital response of 0 or 1 is produced by an arbiter (a latch) that compares the two path delays: 1 if the top path is faster, else 0
We can obtain n-bit responses from this circuit by either duplicating the circuit n times or using n different challenges
Only standard digital logic is used → no special fabrication
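The race described above can be sketched in software. This is a toy model, not the real circuit: a seeded random generator stands in for per-chip manufacturing variation, and each challenge bit flips the sign of one stage's delay contribution, mimicking the path swap.

```python
import random

def make_puf(chip_seed, n_stages=64):
    """Model one chip's arbiter PUF. Each stage has a fixed random delay
    difference between its two paths; the seeded RNG stands in for
    uncontrollable process variation."""
    rng = random.Random(chip_seed)
    deltas = [rng.gauss(0.0, 1.0) for _ in range(n_stages)]

    def respond(challenge_bits):
        # A challenge bit of 1 swaps the two paths at that stage,
        # flipping the sign of its delay contribution.
        total = 0.0
        for delta, c in zip(deltas, challenge_bits):
            total += delta if c == 0 else -delta
        return 1 if total > 0 else 0  # arbiter: which path won the race

    return respond

def random_challenges(seed, count, n_stages=64):
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n_stages)] for _ in range(count)]

def response_bits(puf, challenges):
    """n-bit response by applying n different challenges (the slide's
    second option)."""
    return [puf(c) for c in challenges]
```

Two "chips" built from different seeds give responses that agree on only about half of the bits, while re-measuring the same chip is perfectly repeatable in this noise-free model.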

13 13 PUF Experiments
Fabricated 200 “identical” chips with PUFs in TSMC 0.18μm technology on 5 different wafer runs
Security
–What is the probability that a challenge produces different responses on two different PUFs?
Reliability
–What is the probability that a PUF output for a challenge changes with temperature?
–With voltage variation?

14 14 Inter-Chip Variation Apply random challenges and observe 100 response bits Measurement noise for Chip X = 0.9 bits Distance between Chip X and Y responses = 24.8 bits Can identify individual ICs
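The identification test above is just a Hamming-distance comparison: re-measurement noise (~0.9 bits of 100) is far below inter-chip distance (~24.8 bits), so a threshold between the two separates "same chip" from "different chip". A minimal sketch; the threshold of 12 is an illustrative choice, not from the slides.

```python
def hamming(bits_a, bits_b):
    """Number of positions where two equal-length response strings differ."""
    assert len(bits_a) == len(bits_b)
    return sum(a != b for a, b in zip(bits_a, bits_b))

def same_chip(reference, observed, threshold=12):
    """Classify a fresh 100-bit measurement against an enrolled reference.
    Any threshold between the noise (~1 bit) and the inter-chip
    distance (~25 bits) works; 12 is an arbitrary midpoint."""
    return hamming(reference, observed) < threshold
```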

15 15 Environmental Variations What happens if we change voltage and temperature? Measurement noise at 125C (baseline at 20C) = 3.5 bits Measurement noise with 10% voltage variation = 4 bits Even with environmental variation, we can still distinguish two different PUFs

16 16 Reliable PUFs
PUFs can be made more secure and reliable by adding extra control logic around the PUF (c-bit challenge in, n-bit response out)
–A one-way hash function (SHA-1, MD5) on the n-bit response produces the new response; this precludes PUF “model-building” attacks, since to obtain the raw PUF output an adversary has to invert a one-way function
–An error correcting code (ECC) can eliminate the measurement noise without compromising security: BCH encoding produces an (n − k)-bit public syndrome for calibration, and BCH decoding uses the syndrome for re-generation
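The calibrate/re-generate flow can be sketched as a code-offset construction. This is a toy stand-in: a 5x repetition code with majority decoding replaces the BCH code, and SHA-256 replaces SHA-1; the helper data plays the role of the public syndrome.

```python
import hashlib

R = 5  # repetition factor: stand-in for the BCH code on the slide

def rep_encode(key_bits):
    return [b for b in key_bits for _ in range(R)]

def rep_decode(noisy):
    # Majority vote within each group of R bits corrects up to 2 flips.
    return [1 if sum(noisy[i:i + R]) * 2 > R else 0
            for i in range(0, len(noisy), R)]

def calibrate(response, key_bits):
    """Enrollment: helper (the public 'syndrome') = response XOR codeword."""
    code = rep_encode(key_bits)
    return [r ^ c for r, c in zip(response, code)]

def regenerate(noisy_response, helper):
    """In the field: XOR off the helper, then decode away measurement noise."""
    noisy_code = [r ^ h for r, h in zip(noisy_response, helper)]
    return rep_decode(noisy_code)

def hash_key(bits):
    # The one-way hash step from the slide (SHA-1 there; SHA-256 here).
    return hashlib.sha256(bytes(bits)).hexdigest()
```

A noisy re-measurement with a flipped bit or two still regenerates exactly the enrolled key, so the hashed output is bit-stable like ordinary digital logic.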

17 17 Security Discussion What does it take to find PUF secrets? Open the chip, completely characterize PUF delays –Invasive attacks are likely to change delays Read on-chip register with a digital secret from PUF while the processor is powered up –More difficult proposition than reading non-volatile memory Still, this only reveals ONE secret from PUF –Use two keys with only one key present on-chip at any time PUF is a cheap way of generating more secure secrets

18 18 Processor Architecture

19 19 Authentication
The processor identifies the security kernel by computing the kernel’s hash (on the l.enter.aegis instruction)
–Similar to ideas in TCG TPM and Microsoft NGSCB
–The security kernel in turn identifies application programs (e.g., DistComp) by their hash
H(SKernel) is included in a signature by the processor; the security kernel includes H(App) in the signature
H(SKernel), H(App) in a signature → a server can authenticate the processor, the security kernel, and the application
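The attestation chain can be sketched as follows. This is a hedged simplification: an HMAC under a per-processor key stands in for the processor's signature, and the two hashes are simply concatenated into the signed message, which is one plausible encoding, not necessarily the AEGIS wire format.

```python
import hashlib
import hmac

def h(data: bytes) -> str:
    """Identity hash of a code image (SHA-256 here; AEGIS-era work used SHA-1)."""
    return hashlib.sha256(data).hexdigest()

def attest(processor_key: bytes, skernel: bytes, app: bytes, msg: bytes) -> str:
    """Processor binds H(SKernel) into the tag; the security kernel
    contributes H(App). HMAC stands in for the signature/MAC."""
    bound = (h(skernel) + h(app)).encode() + msg
    return hmac.new(processor_key, bound, hashlib.sha256).hexdigest()

def verify(processor_key, skernel_hash, app_hash, msg, tag) -> bool:
    """Server side: recompute the tag from the expected identities."""
    bound = (skernel_hash + app_hash).encode() + msg
    expect = hmac.new(processor_key, bound, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, tag)
```

A server that expects a particular kernel and application hash rejects a tag produced over any other software stack, which is the point of including both identities in the signature.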

20 20 Protecting Program States On-chip registers and caches –Security kernel handles context switches –Virtual memory permission checks in MMU Off-chip memory –Memory Encryption [MICRO36] Counter-mode encryption –Integrity Verification [HPCA’03,MICRO36,IEEE S&P ’05] Hash trees and Log-hash schemes Swapped pages in secondary storage –Security kernel encrypts and verifies the integrity

21 21 A Simple Protection Model
How should we apply the authentication and protection mechanisms?
What to protect?
–All instructions and data: program code, initialized data, and uninitialized data (stack, heap) in the memory space are encrypted and integrity verified
–Both integrity and privacy
What to trust?
–The entire program code: its hash → the program identity
–Any part of the code can read/write protected data

22 22 What Is Wrong?
Large trusted code base
–Difficult to verify to be bug-free
–How can we trust shared libraries?
Applications/functions have varying security requirements
–Do all code and data need privacy?
–Do I/O functions need to be protected?
→ Unnecessary performance and power overheads
The architecture should provide flexibility so that software can choose the minimum required trust and protection

23 23 Distributed Computation Example

DistComp() {
  x = Receive();
  result = Func(x);
  sig = sys_pksign(x, result);
  Send(result, sig);
}

If Func(x) is a private computation
–Needs both privacy and integrity
Calling a signing system call into the security kernel
–Only needs integrity
Receiving the input and sending the result (I/O)
–No need for protection
–No need to be trusted

24 24 AEGIS Memory Protection
The architecture provides five different memory regions; applications choose how to use them
Static (read-only)
–Integrity verified (e.g., DistComp() and Func() code)
–Integrity verified & encrypted
Dynamic (read-write)
–Integrity verified (e.g., DistComp() data)
–Integrity verified & encrypted (e.g., Func() data)
Unprotected (e.g., Receive(), Send() and their data)
Only code in the verified regions is authenticated

25 25 Suspended Secure Processing (SSP)
Two security levels within a process
–Untrusted code such as Receive() and Send() should have less privilege
The architecture ensures that SSP mode cannot tamper with secure processing
–No permission for protected memory
–Can only resume secure processing at a specific point
Execution starts in standard (STD) mode, enters a secure mode (TE/PTR) by computing the program hash, and can suspend to the insecure SSP mode and later resume

26 26 Memory Integrity Verification

27 27 Microsoft Xbox
Xbox is a trusted PC platform
–All executables are digitally signed and verified before execution
The trusted (encrypted) bootloader/OS prevents
–Using copied or imported games
–Online game cheating
–Using the Xbox as a stock PC

28 28 Inside Xbox: How Was It Broken? (from Andrew “Bunnie” Huang’s webpage)
The Southbridge ASIC holds the secure boot code and a 128-bit secret (RC4) key; Flash ROM holds the encrypted bootloader/OS
Broken by tapping the bus between them
–Read the 128-bit key
–Cost about $50
Observation: an adversary can easily read anything on an off-chip bus

29 29 Off-Chip Memory Protection
The processor encrypts/decrypts and integrity-verifies all traffic to external memory
Privacy protection [MICRO36]
–Encrypt private information in off-chip memory
–Counter-mode encryption
Integrity verification [HPCA’03, MICRO36, IEEE S&P ’05]
–Check that a value read from external memory is the most recent value stored at that address by the processor
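Counter-mode encryption works by encrypting a counter and XORing the result with the data, so the keystream can be computed while the memory access is still in flight. The sketch below illustrates the mode only: a SHA-256 of key-plus-counter stands in for the AES block cipher a real design would use.

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    # Stand-in for E(key, counter); real hardware uses AES here.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def ctr_crypt(key: bytes, start_ctr: int, data: bytes) -> bytes:
    """Counter mode: XOR data with E(key, ctr), incrementing ctr per
    32-byte block. Encryption and decryption are the same operation,
    and the keystream for a memory line can be precomputed before the
    line arrives from off-chip RAM."""
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = keystream_block(key, start_ctr + i // 32)
        block = data[i:i + 32]
        out.extend(b ^ k for b, k in zip(block, ks))
    return bytes(out)
```

Note that counters must never repeat under one key: reusing a counter reuses the keystream, which is why the real scheme tracks a per-block counter in protected state.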

30 30 Previous Work (1): Use Message Authentication Code (MAC)
Hash functions
–Functions such as MD5 or SHA-1
–Given an output, hard to find what input produced it
–Also, hard to find two inputs with the same output
A Message Authentication Code (MAC) can be seen as a keyed hash function: it takes an input and a secret key
–Only an entity with the correct key can generate a valid MAC
–An entity with the correct key can verify the MAC

31 31 Previous Work (1): MAC-based Integrity Verification Algorithm
Store MAC(address, value) on writes, and check the MAC on reads (used by the XOM architecture by David Lie)
Does NOT work → replay attacks
–Example: the processor writes 124 and later 120 to address 0x45; untrusted RAM can ignore the update and replay the stale pair (124, MAC(0x45, 124)), which still verifies
Need to securely remember the off-chip memory state
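The replay weakness is easy to demonstrate. In this sketch (a toy model of the MAC-only scheme, not of XOM itself), the MAC check accepts any (value, MAC) pair the processor ever produced for an address, so a snapshot-and-restore adversary wins.

```python
import hashlib
import hmac

def mac(key: bytes, addr: int, value: int) -> str:
    """MAC over (address, value), as on the slide."""
    msg = addr.to_bytes(4, "big") + value.to_bytes(4, "big")
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

class MacOnlyMemory:
    """Untrusted RAM guarded only by MAC(address, value). The check
    cannot tell a stale pair from the current one, because the MAC
    carries no notion of memory state or write ordering."""
    def __init__(self, key):
        self.key = key
        self.ram = {}          # adversary-controlled storage

    def write(self, addr, value):
        self.ram[addr] = (value, mac(self.key, addr, value))

    def read(self, addr):
        value, tag = self.ram[addr]
        assert tag == mac(self.key, addr, value), "MAC check failed"
        return value
```

The test below plays the adversary: it snapshots the old pair, lets the processor overwrite the location, restores the snapshot, and the read still verifies with the stale value.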

32 32 Previous Work (2): Hash Trees (Merkle Trees)
Construct a hash tree over the data values in untrusted memory: for values V1..V4, h1 = h(V1.V2), h2 = h(V3.V4), root = h(h1.h2)
On a cache miss, the processor reads the value and verifies each hash on the path up to the on-chip root
On-chip storage: 16–20 bytes
Read/write cost: logarithmic → 10x slowdown
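A minimal Merkle-tree sketch of the scheme above, assuming a power-of-two number of leaves for simplicity. Only the root must live in trusted on-chip storage; verifying a leaf recomputes the logarithmic path to it using sibling hashes fetched from untrusted memory.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Build a binary hash tree; returns levels, leaf hashes first.
    Assumes len(leaves) is a power of two."""
    levels = [[h(v) for v in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def root(levels):
    return levels[-1][0]          # the only value that must stay on-chip

def siblings_for(levels, index):
    """Sibling hashes along the path from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_leaf(value, index, siblings, expected_root):
    """Recompute the path; cost is logarithmic in the number of blocks."""
    node = h(value)
    for sib in siblings:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == expected_root
```

Any tampering with a leaf (or a stale replay of one) changes the recomputed root, so the on-chip root pins the entire external memory state.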

33 33 Cached Hash Trees [HPCA’03]
Cache hashes on-chip → the on-chip cache is trusted → verification can stop at the first hash on the path that is already cached, instead of continuing all the way to the root

34 34 Hardware Implementation Issue [ISCA’05]
What happens if a new block evicts a dirty one? → a write-back
If we cache all hashes, each access can cause another read and a write-back (the parent hash must be read, updated, and eventually written back) → need a large buffer
Solution: cache hashes only if space is available in the integrity-verification buffer

35 35 Hiding Verification Latency
Integrity verification takes at least one hash computation
–SHA-1 has 80 rounds → 80 cycles to compute
Should we wait for integrity verification to complete?
No need for precise integrity-violation exceptions; they should never happen, and if they do, we simply abort
Memory verification can therefore be performed in the background: loaded data is used while verification completes (e.g., load r0,(r1); addi r0,r0,#3; store (r1),r0), so no latency is added except for instructions that can compromise security

36 36 Implementation & Evaluation

37 37 Implementation
Fully-functional system on an FPGA board [ISCA’05]
–AEGIS (Virtex2 FPGA), memory (256MB SDRAM), I/O (RS-232)
–Based on the OpenRISC 1200 (a simple 4-stage pipelined RISC)
–AEGIS instructions are implemented as special traps

38 38 Area Estimate
Synopsys DC with a TSMC 0.18μm library
New instructions and the PUF add 30K gates and 2KB of memory (1.12x larger)
Off-chip protection adds 200K gates and 20KB of memory (1.9x larger in total)
Component areas (as listed): Core 0.512 mm²; I-Cache (32KB) 1.815 mm²; D-Cache (32KB) 2.512 mm²; I/O (UART, SDRAM ctrl, debug unit) 0.258 mm²; IV unit (5 SHA-1) 1.075 mm²; encryption unit (3 AES) 0.864 mm²; cache (16KB) 1.050 mm²; 0.086 mm²; cache (4KB) 0.504 mm²; code ROM (11KB) 0.138 mm²; scratch pad (2KB) 0.261 mm²; PUF 0.027 mm²
The area can be further optimized

39 39 EEMBC/SPEC Performance
Performance overhead comes from the off-chip protections
5 EEMBC kernels and 1 SPEC benchmark
EEMBC kernels have negligible slowdown
–Low cache miss-rate
–Only ran 1 iteration
SPEC twolf also has reasonable slowdown

Benchmark: Integrity / Integrity + Privacy slowdown (%)
routelookup: 0.0 / 0.3
ospf: 0.2 / 3.3
autocor: 0.1 / 1.6
conven: 0.1 / 1.3
fbital: 0.0 / 0.1
twolf (SPEC): 7.1 / 15.5

40 40 High Performance Processors
Simulation assuming the entire program and data are protected, with 64B-block L2 caches; results are normalized performance (IPC)
Integrity verification alone: most cases < 25% degradation, worst case 50%
If we add encryption, the overhead is 31% on average and 59% in the worst case

41 41 Related Projects XOM (eXecution Only Memory) –Stated goal: Protect integrity and privacy of code and data –Operating system is completely untrusted –Memory integrity checking does not prevent replay attacks –Privacy enforced for all code and data TCG TPM / Microsoft NGSCB / ARM TrustZone –Protects from software attacks –Off-chip memory integrity and privacy are assumed AEGIS provides “higher security” with “smaller Trusted Computing Base (TCB)”

42 42 Security Review
Have we built a secure computing system?
–The kernel in Flash is secure: verified using hashes
–Off-chip SDRAM is secure: integrity verification and encryption
–Processor secrets are protected by PUFs
Remaining attacks must be carried out while the processor is running
–Probing on-chip memory
–Side channels (address bus, power, electromagnetic fields)
→ Can be further secured using techniques from smartcards

43 43 Applications Critical operations on computers w/ untrusted owners –Commercial grid computing, utility computing –Mobile agents –Software IP protection, DRM Collaboration of mutually mistrusting parties –Peer-to-peer applications –Trusted third party computation Securing physically exposed devices (embedded and mobile devices) –Secure sensor networks –Cell phones, laptops, automobiles, etc.

44 44 Summary Physical attacks are becoming more prevalent –Untrusted owners, physically exposed devices –Requires secure hardware platform to trust remote computation The trusted computing base should be small to be secure and cheap –Hardware: single-chip secure processor Physical random functions Memory protection mechanisms –Software: suspended secure processing There are many exciting applications that need to be investigated

45 45 Questions?

46 46 Extra Slides

47 47 Performance Slowdown
Performance overhead comes from the off-chip protections
Synthetic benchmark
–Reads a 4MB array with a varying stride
–Measures the slowdown for a varying cache miss-rate
Slowdown is reasonable for realistic miss-rates
–Less than 20% for integrity
–5-10% additional for encryption

D-Cache miss-rate: Integrity / Integrity + Privacy slowdown (%)
6.25%: 3.8 / 8.3
12.5%: 18.9 / 25.6
25%: 31.5 / 40.5
50%: 62.1 / 80.3
100%: 130.0 / 162.0

48 48 Part IV Ongoing and Future Work

49 49 Summary Security challenges will dominate the design, implementation and use of the Expanding Internet AEGIS can potentially be a building block that improves security in diverse platforms There are many exciting problems that need to be solved that span computer security, architecture, cryptography, and operating systems

50 50 Secure Sensor Networks
Wide deployment of sensor networks for sensitive applications requires physical security: physically secure nodes and gateways connect sensors/actuators to the Internet
Platform
–Power efficiency
–Security kernel
Programming the system
–Distributed
–Multiple security levels
–Upgrades and debugging
Heterogeneity
–Secure nodes and insecure nodes

51 51 Other Research Challenges Bugs in applications –Hardware support to protect against malicious software attacks [ASPLOS’04] Secure storage –Log hash algorithm can be very efficient to check a long sequence of memory accesses [MICRO36] Uses a new cryptographic primitive [Asiacrypt’03] –Adaptive algorithm can give the best of both hash trees and log hashes [IEEE Security and Privacy 2005] –Integrity verification of databases and distributed storages

52 52 My Other Research Interests High Performance Memory Systems –Intelligent SRAM: DAC’03 –Caching and Job Scheduling Analytical cache models: ICS’01 Cache partitioning: PDCS’01, the Journal of Supercomputing 2004 Memory-aware scheduling: JSSPP 2001 Cache monitors: HPCA-8, 2002

53 53 Controlled PUFs

54 54 Attacks on a PUF
How can we predict a response without a PUF?
Duplication: make more PUFs from the original layout and hope for a match → only a 23% chance per bit
Brute-force attack: exhaustively characterize the PUF by trying all challenges → an exponential number of them
Direct measurement: open the PUF and attempt to directly measure its delays → difficult to precisely measure delays
Model-building attack: try to build a model of the PUF that has a high probability of outputting the same value → a possibility for a simple PUF circuit

55 55 Improving PUFs [ISCA’05]
PUFs can be made more secure and reliable by adding extra control logic around the PUF (c-bit challenge in, n-bit response out)
–A one-way hash function (SHA-1, MD5) on the response produces the new response; this precludes PUF “model-building” attacks, since to obtain the raw PUF output an adversary has to invert a one-way function
–An error correcting code (ECC) can eliminate the measurement noise without compromising security: ECC encoding produces an (n − k)-bit public syndrome for bootstrapping, and ECC decoding uses the syndrome for re-generation

56 56 ECC Security Discussion
Suppose we choose an (n, k, d) code
–Code word length n = length of the PUF response
–Dimension k: number of code words = 2^k
–Minimum distance d = 2t + 1
Syndrome = n − k bits → public; reveals information about responses
Security parameter is k → at least k bits remain secret after revealing the syndrome
–BCH (255, 107, 45) will correct 22 errors out of 255 response bits and effectively produce a 107-bit secret key
–Can get arbitrary-size keys
–Theoretical study in fuzzy extractors [reference]

57 57 PUF Secret Sharing Protocols
Now we know how to extract a secret key from an IC using PUFs
–A trusted party can securely embed a private key using the PUF secret
Gassend et al. developed a PUF protocol for secret sharing
–Each user can share a unique challenge-response pair (CRP) with a PUF
–Using a CRP, a user can share a secret with a specific program running on the processor with a PUF
PUFs can be used for any current symmetric-key or public/private-key application

58 58 Controlled PUFs (CPUFs) Restrict access to the PUF using digital control logic to produce a Controlled PUF (CPUF) –Different control algorithms provide different capabilities –Many types of CPUFs possible, each tailored to a particular application –Will describe two types of CPUFs A user can share a secret with a remote CPUF that has been bootstrapped Error correction ensures PUF reliability equivalent to digital logic CPUFs can be used for any current symmetric key or public/private key application

59 59 Controlled PUF Implementations Will describe two useful CPUF implementations First Controlled PUF Implementation: –Initial bootstrap step to “express” key returns an “access code” –Secret generation in the field by applying public “access code”

60 60 A Controlled PUF
Datapath: when RorS = 1, a PreChall is hashed by the one-way function (OWF) into the Challenge; when RorS = 0, the Challenge is applied directly. The PUF response, after error correction (EC), is XORed with the AccessCode to produce a volatile key, which drives a decryption unit that turns encrypted data into decrypted data
Note: everything but the PUF is conventional digital logic and can also be implemented in software

61 61 One-Way Function Hash function such as MD5 or SHA-1 Given an output, hard to find what input produced it One-Way Function (OWF) Challenge PreChall Given PreChall easy to compute Challenge Given Challenge, hard to find PreChall that produces that Challenge

62 62 Bootstrapping (RorS = 1)
Prior to CPUF deployment (with no eavesdroppers):
1. Apply PreChall to obtain the Response from the CPUF (internally, SHA-1 maps PreChall to the challenge)
2. User software computes Chall = OWF(PreChall)
3. Choose Key; compute AccessCode = Response XOR Key
PreChall and Key are secret; Chall and AccessCode are public
Chall and Key can be the same for different CPUFs, but Response and AccessCode will be different for each CPUF

63 63 Secret Key Generation (RorS = 0)
When the CPUF is in a remote location:
1. The user encrypts a Message with Key to get E_Key(Message)
2. The user sends [E_Key(Message), Chall, AccessCode] to the CPUF
3. The CPUF chip takes Chall and AccessCode, regenerates the Response (with error correction), and generates Key internally
4. The CPUF uses Key to decrypt the Message
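The bootstrap/regenerate pair above can be sketched end to end. This is a toy model: a keyed SHA-256 stands in for the physical PUF (deterministic per chip, unpredictable without it), and the error-correction step is omitted since the model is noise-free.

```python
import hashlib

def toy_puf(chip_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the physical system: deterministic per chip."""
    return hashlib.sha256(chip_secret + challenge).digest()

def owf(prechall: bytes) -> bytes:
    """Chall = OWF(PreChall), as on the bootstrapping slide."""
    return hashlib.sha256(prechall).digest()

def bootstrap(chip_secret, prechall, key):
    """Trusted phase: derive the public Chall and AccessCode.
    PreChall and Key stay secret with the user."""
    challenge = owf(prechall)
    response = toy_puf(chip_secret, challenge)
    access_code = bytes(r ^ k for r, k in zip(response, key))
    return challenge, access_code

def regenerate_key(chip_secret, challenge, access_code):
    """In the field: the chip re-derives the Response and XORs off the
    AccessCode to recover the volatile Key (no EC needed in this
    noise-free model)."""
    response = toy_puf(chip_secret, challenge)
    return bytes(r ^ a for r, a in zip(response, access_code))
```

The same Chall and AccessCode applied to a different chip yield garbage, which is exactly why the pair can be public: only the physical chip that was bootstrapped can express the key.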

64 64 Controlled PUF Implementations Will describe two useful CPUF implementations Second Controlled PUF Implementation: –Initial bootstrap step does not expose the PUF response (or key). This is accomplished using public- key encryption. –Secret generation is a matter of applying a challenge

65 65 Public Key Cryptography Have a public key PK and a private key SK that form a pair If a message is encrypted with PK, then only SK can decrypt it properly If a message is signed with SK, then the signature can be verified using PK

66 66 An Asymmetric Controlled PUF: Bootstrapping Allowing Eavesdroppers
The challenge for each chip is Challenge = Hash(SP public key)
During bootstrapping, the device applies the challenge to the PUF (n-bit response), computes the public (n − k)-bit Syndrome(Response) via syndrome encoding, derives the k-bit Key, and simply writes out Syndrome(Response) and E_SPpk(Key), the key encrypted under the service provider’s public key
Only the SP can decrypt E_SPpk(Key) with its private key to obtain the Key for its public-key challenge for each chip

67 67 An Asymmetric Controlled PUF: Secret Sharing
In the field, the challenge Challenge = Hash(SP public key) is applied again; the noisy Response′ is corrected by syndrome decoding using the public Syndrome, and the recovered Key is XORed with the public Access Code to produce Specific_Key
The Key is re-generated reliably using the Syndrome and was never exposed at any time
Only the SP can decrypt E_SPpk(Key) with its private key to obtain the Key for its public-key challenge for each chip

68 68 Discussion
A device cannot impersonate a CPUF device
To detect fake devices that pretend to have PUFs, a public list of legitimate E_SPpk(Key) identifiers can be maintained (other options are possible)

69 69 Applications for CPUFs

70 70 Applications
Anonymous computation: Alice wants to run computations on Bob’s computer and wants to make sure that she is getting correct results. A certificate is returned with her results to show that they were correctly executed.
Software licensing: Alice wants to sell Bob a program which will only run on Bob’s chip (identified by a PUF). The program is copy-protected so it will not run on any other chip.
How can we enable the above applications by trusting only a single-chip processor that contains a silicon PUF?

71 71 Sharing a Secret with a Silicon PUF
Suppose Alice wishes to share a secret with the silicon PUF
She has a challenge-response pair that no one else knows, which can authenticate the PUF: she sends (1) a challenge and a task, and the PUF returns (2) the response
Problem: anyone can see the challenge and ask the PUF for the response, and anyone can see the response if it is not encrypted

72 72 Restricting Access to the PUF
To prevent the attack, the man in the middle must be prevented from finding out the response
Alice’s program must be able to establish a shared secret with the PUF; the attacker’s program must not be able to get the secret
→ Combine the response with the hash of the running program. The PUF can only be accessed via the GetSecret function:
Secret = Hash(Hash(Program), Response), where Response = PUF(Challenge)

73 73 Getting a Challenge-Response Pair
Now Alice can use a challenge-response pair to generate a shared secret with the PUF-equipped device
But Alice can’t get a challenge-response pair in the first place, since the PUF never releases responses directly
→ An extra function that can return responses is needed

74 74 Getting a Challenge-Response Pair - 2
Let Alice use a pre-challenge, with the program hash mixed in to prevent eavesdroppers from using it. The PUF gets a GetResponse function:
Challenge = Hash(Hash(Program), Pre-Challenge); Response = PUF(Challenge)

75 75 Controlled PUF Implementation
GetResponse(Pre-Challenge): Challenge = Hash(Hash(Program), Pre-Challenge); Response = PUF(Challenge)
GetSecret(Challenge): Secret = Hash(Hash(Program), PUF(Challenge))
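The two control functions can be sketched together. As before, a keyed hash is a toy stand-in for the physical PUF; the raw PUF is kept inside a closure so that, as on the slide, it can only be reached through GetResponse and GetSecret, both of which mix in the hash of the running program.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Hash of concatenated parts (SHA-256 stands in for the slide's hash)."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

def make_cpuf(chip_secret: bytes):
    """Build the controlled-PUF interface for one chip."""
    def puf(challenge: bytes) -> bytes:
        # Inaccessible raw PUF: deterministic per chip, never exported.
        return H(chip_secret, challenge)

    def get_response(program: bytes, prechallenge: bytes):
        # Challenge = Hash(Hash(Program), Pre-Challenge)
        challenge = H(H(program), prechallenge)
        return challenge, puf(challenge)

    def get_secret(program: bytes, challenge: bytes) -> bytes:
        # Secret = Hash(Hash(Program), PUF(Challenge))
        return H(H(program), puf(challenge))

    return get_response, get_secret
```

A program that knows the (Challenge, Response) pair can predict its own GetSecret output, but a different program asking about the same challenge derives a different secret, because its hash is folded into the result.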

76 76 Challenge-Response Pair Management: Bootstrapping
When a CPUF has just been produced, the manufacturer wants to generate a challenge-response pair:
1. The manufacturer provides a Pre-challenge and Program to the CPUF.
2. The CPUF produces the Response.
3. The manufacturer gets the Challenge by computing Hash(Hash(Program), Pre-challenge).
4. The manufacturer has a (Challenge, Response) pair where Challenge, Program, and Hash(Program) are public, but the Response is not known to anyone else, since the Pre-challenge is thrown away.

77 77 Software Licensing

Program(Ecode, Challenge):
  Secret = GetSecret(Challenge)
  Code = Decrypt(Ecode, Secret)
  Run Code

Ecode has been encrypted with Secret by the manufacturer
Secret is known to the manufacturer because he knows the Response to the Challenge and can compute Secret = Hash(Hash(Program), Response)
An adversary cannot determine Secret because he does not know the Response or Pre-Challenge
If an adversary tries a different program, a different secret will be generated, because Hash(Program) is different

78 78 Certified Execution

Program(Code, Challenge):
  Result = Run Code
  Secret = GetSecret(Challenge)
  Cert = (Result, MAC_Secret(Result, Hash(Code)))
  Output Cert

Alice can verify Cert because she knows the Response, and hence the Secret, and she knows Hash(Code)
An adversary cannot fake Cert because he does not know Secret
Note: Secret does not depend on Code, but does depend on the outer Program
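The certificate check can be sketched directly from the slide's formula. HMAC stands in for MAC_Secret; the shared secret is assumed to have been established via the CPUF protocol above.

```python
import hashlib
import hmac

def certify(secret: bytes, result: bytes, code: bytes):
    """Cert = (Result, MAC_Secret(Result, Hash(Code))), as on the slide."""
    code_hash = hashlib.sha256(code).digest()
    tag = hmac.new(secret, result + code_hash, hashlib.sha256).hexdigest()
    return result, tag

def check_cert(secret: bytes, cert, code: bytes) -> bool:
    """Alice's side: she knows the Secret and Hash(Code), so she can
    recompute the MAC and confirm the result came from her code."""
    result, tag = cert
    code_hash = hashlib.sha256(code).digest()
    expect = hmac.new(secret, result + code_hash, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, tag)
```

Binding Hash(Code) into the MAC is what stops a host from running different code and attaching the honest result's certificate to it.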

79 79 CRP Management & Security Discussions

80 80 Challenge-Response Pair Management
How does Alice get challenge-response pairs? Protocols:
–Bootstrapping: the manufacturer or certifier has the CPUF in their possession and can obtain challenge-response pairs without fear of eavesdroppers
–Introduction: a certifier gives Alice a challenge-response pair over a secure channel
–Renewal: given a challenge-response pair, Alice shares a secret with the remote CPUF and creates a secure channel; new challenge-response pairs can now be obtained
Alice can now “license” software to the CPUF

81 81 Challenge-Response Pair Management: Introduction
Alice wants to get a challenge-response pair from a certifier who already has one. (The certifier can do renewal before introduction if he doesn’t want to give away his own pair.)
1. The certifier and Alice set up a secure connection.
2. The certifier sends Alice a challenge-response pair.

82 82 Challenge-Response Pair Management: Renewal
Alice, who has a challenge-response pair, wants to generate new challenge-response pairs over an untrusted network:
1. The initial CRP is used to establish an encrypted and authenticated link from the CPUF to Alice.
2. Alice provides a pre-challenge.
3. The CPUF generates a CRP.
4. The CRP is returned to Alice.

83 83 Challenge-Response Pair Management: Private Renewal
Alice got a challenge-response pair from a certifier. Alice trusts the certifier not to actively tamper with her communication, but does not trust him not to eavesdrop. Alice can use a public-key system to generate a challenge-response pair that the certifier does not know:
1. The CRP and Alice’s public key are used to establish an encrypted and authenticated link from the CPUF to Alice.
2. Same as normal renewal.

84 84 Putting It All Together
The manufacturer bootstraps the CPUF and introduces it to a certifier; the certifier introduces it to Alice; Alice and her applications then use renewal and private renewal over the untrusted network to obtain their own challenge-response pairs

85 85 CPUFs vs. Digital Keys CPUFs “express” volatile keys when “addressed” by public challenges and access codes –Volatile keys much harder to extract from running chip –Challenges and access codes can be stored in exposed form on-chip, on a different chip, or sent across the network –Crypto-processor does not need EEPROM CPUFs can express many independent “master” keys –Isolate service provider secrets – reduce/eliminate dependence of secondary keys on single master key –Update (potentially compromised) master keys in the field by addressing the PUF with a different challenge and access code Asymmetric CPUFs can be used in a manner that ensures that secret keys never have to be exposed Error correction is required, and produces reliability equivalent to digital logic

86 86 Security Discussion
Suppose we choose an (n, k, d) code
–Code word length n = length of the PUF response
–Dimension k: number of code words = 2^k
–Minimum distance d = 2t + 1
–Syndrome = n − k bits
–Security parameter is k
–BCH (255, 107, 45) will correct 22 errors out of 255 response bits and effectively produce a 107-bit secret key
–Can get arbitrary-size keys
The hash function precludes PUF “model-building” attacks, since to obtain the PUF output an adversary has to invert a one-way function

87 87 Measurement Attacks and Software Attacks Can an adversary create a software clone of a given PUF chip? Distance between Chip X and Y responses = 23 Measurement noise for Chip X = 0.5 At 125C measurement noise for chip X = 9

88 88 Distance between Chip X and Y responses = 23 “Best” model for Chip X has error = 14 Measurement Attacks and Software Attacks Can an adversary create a software clone of a given PUF chip? Measurement noise for Chip X = 0.5 At 125C measurement noise for chip X = 9 Model-building appears hard even for simple circuits

