Memory Protection in Resource Constrained Sensor Nodes Ram Kumar Rengaswamy Ph.D. Defense.


Slide 1: Memory Protection in Resource Constrained Sensor Nodes. Ram Kumar Rengaswamy, Ph.D. Defense.

Slide 2 (10/20/06, PhD Defense): Memory Corruption in Mote-class Sensor Nodes
- Sensor node address space: run-time stack, globals, and heap (applications, drivers, OS)
- No protection: a single address space CPU is shared by applications, drivers, and the OS
- Many bugs in deployed systems come from memory corruption
- Corrupted nodes trigger network-wide failures

Slide 3: Programming Errors Cause Corruption
- Motes are difficult to program
- Error in a "well-tested" sampling module in SOS:

    hdr_size = SOS_CALL(s->get_hdr_size, proto);
    s->smsg = (SurgeMsg*)(pkt + hdr_size);
    s->smsg->type = SURGE_TYPE_SENSORREADING;

- The return value is not checked: SOS_CALL fails under some conditions and returns -1; the error code is then used as an offset into a buffer, causing corruption
- Protection mechanisms prevent such corruption; memory protection is required for building robust sensor software
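A minimal sketch of the fix the slide implies: validate the return value of the dynamic call before using it as a buffer offset. The function name `build_surge_msg`, the packet-length parameter, and the `0x01` type constant are illustrative, not SOS APIs.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for the SOS message type on the slide. */
typedef struct { uint8_t type; } SurgeMsg;

/* Sketch: check the header-size result before computing the message
 * pointer, propagating the error instead of corrupting memory. */
int8_t build_surge_msg(int8_t hdr_size, uint8_t *pkt, size_t pkt_len,
                       SurgeMsg **out) {
    if (hdr_size < 0 || (size_t)hdr_size + sizeof(SurgeMsg) > pkt_len)
        return -1;                 /* reject the failed SOS_CALL result */
    *out = (SurgeMsg *)(pkt + hdr_size);
    (*out)->type = 0x01;           /* SURGE_TYPE_SENSORREADING (placeholder) */
    return 0;
}
```

With this guard, the -1 error code can no longer be silently interpreted as a negative buffer offset.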

Slide 4: Desired Protection Domains
- Domains: logical partitions of the address space (Kernel, Driver, App. 1, App. 2)
- Data RAM: non-contiguous partitions; program FLASH: contiguous partitions
- A module is loaded into a single domain
- Protect each domain from corruption by other domains
- Harbor: create and enforce protection domains

Slide 5: MMU in Micro-Controllers
- An MMU can provide protection domains, but embedded micro-controllers have no MMU
- MMU hardware requires a lot of RAM, increases area and power consumption, and gives poor performance (high context-switch overhead)
- Microcontroller design constraints: minimize price, power, and memory; meet real-time performance requirements

    Core       MMU  Cache  Area (mm^2, 0.13u)  mW/MHz
    ARM7-TDMI  No   -      0.25                0.1
    ARM720T    Yes  8 KB   2.40 (~10x)         0.2 (~2x)

Slide 6: Software-based Fault Isolation
- Coarse-grained protection: check all memory accesses
- Checks are in-lined by a binary rewriter and separately verified
- Challenge: ensure the checks are not circumvented
- Sandboxing [wahbe92sfi]: designed for RISC architectures; static partitioning of the virtual address space; dedicated sandbox registers ensure checks are not skipped
- XFI [xfi06osdi]: control flow integrity using static analysis and run-time checks
- t-kernel [gu06tkernel]: naturalization rewrites the binary on the mote to make it safe; virtual memory for heap accesses provides protection

Slide 7: Challenge - Fault Isolation on a Mote
- SFI partitions the virtual address space, leveraging the large address space an MMU provides (4 GB per process on x86)
- Static, contiguous, and large domains: checks have low overhead and minimal state is maintained per domain
- Static partitioning is impractical on motes: limited address space, no MMU, severely limited physical memory, and waste through internal fragmentation
- (Figure: static kernel flanked by kernel extensions #1 and #2)

Slide 8: Software-based Approaches
- Application-specific virtual machines: interpreted code is safe and efficient, but high-level VM instructions are not type-safe [levis05active] [balani06dvm]
- Type-safe languages: fine-grained protection, but low-level code needs non-type-safe extensions, and compiler output is difficult to verify [titzer06virgil] [necula02ccured]
- Static analysis for resource management: exchange-heap ownership tracking in Singularity [sing06lang]; dynamic memory ownership tracking in SOS [roy07lighthouse]

Slide 9: Harbor Design Goals
- Fault isolation as a building block for protection: sandbox for ASVM instructions; static analysis for performance enhancement
- Motivation: memory protection in the SOS operating system
- Provide coarse-grained memory protection: protect the OS from applications, and applications from one another
- Targeted at resource-constrained systems: low RAM usage, minimal performance overhead
- Amenable to both software and hardware implementations

Slide 10: Thesis Contributions
- Designed the Harbor memory protection primitives, suited to mote-class sensor nodes
- Memory map: fine-grained ownership and layout information; enables protection without static partitioning of RAM
- Control flow manager: preserves control flow integrity; cross domain calls [wahbe92sfi]; safe stack [xfi06osdi]; stack bounds to prevent run-time stack corruption
- Designed and evaluated two systems based on Harbor: software-based memory protection for SOS, and a hardware-enhanced memory protection unit for the AVR

Slide 11: Outline
- Introduction
- Harbor Primitives: Memory Map
- Sandbox for SOS Modules
- UMPU
- Conclusion

Slide 12: Memory Map - Protection without Static Partitions
- Fine-grained layout and ownership information: partition the address space into blocks; allocate memory in segments (sets of contiguous blocks)
- Store information for all blocks; encoded information per block: ownership (kernel/user ID) and layout (start-of-segment bit)
- Block size is 8 bytes on the Mica2

Slide 13: Using the Memory Map for Protection
1. Ownership information in the memory map is current
2. Only the block owner can free a block or transfer its ownership
3. Only trusted domains can access the memory map API
4. The memory map is stored in protected memory
- Easy to incorporate into existing systems: modify the memory allocators (malloc, free) and track function calls that pass memory across domains; changes to the SOS kernel amount to ~1%
- The scheme is portable to other OSes and MCUs

Slide 14: Memmap Checker
- Enforces the protection model: write access to a block is granted only to its owner
- Invoked before every write access
- Operations performed by the checker: look up the memory map entry for the write address, then verify that the currently executing domain owns the block
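The two checker operations above can be sketched in C. This assumes the 4-bit record encoding described later in the talk (top three bits = owner domain ID, low bit = start-of-segment) and a high-nibble-first packing of two records per byte; the packing order is an assumption, not taken from the slides.

```c
#include <stdint.h>

#define BLOCK_BITS 3              /* 8-byte blocks, as on the Mica2 */

/* Memory map: one 4-bit record per block, two records packed per
 * byte (high nibble = even block, an assumed layout). */
static uint8_t memmap[512];

/* Sketch of the memmap checker invoked before every write: look up
 * the record for the target block and verify that the currently
 * executing domain owns it. */
int memmap_check_write(uint16_t addr, uint8_t cur_dom) {
    uint16_t blk  = addr >> BLOCK_BITS;               /* block number   */
    uint8_t  byte = memmap[blk >> 1];                 /* 2 records/byte */
    uint8_t  rec  = (blk & 1) ? (byte & 0x0F) : (byte >> 4);
    return ((rec >> 1) & 0x07) == cur_dom;            /* owner check    */
}
```

In the real system this logic runs in the trusted domain (software sandbox) or as a hardware functional unit (UMPU); the C version only illustrates the lookup.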

Slide 15: Memory Map Tradeoffs
- More protection domains → more bits per block → larger memory map
- Smaller protected address range → smaller memory map
- Block size: a larger block size shrinks the memory map but increases internal fragmentation
- Match the block size to the size of typical memory objects: 8 bytes on the Mica2, 128 bytes on the Cyclops

    Memory map size for 8-byte blocks:
    Addr. range   0B - 4096B    256B - 3072B
    2 domains     128 B (~3%)   88 B (~2%)
    4 domains     256 B (~6%)   176 B (~4%)

Slide 16: Outline
- Introduction
- Harbor Primitives: Control Flow Manager
- Sandbox for SOS Modules
- UMPU
- Conclusion

Slide 17: Managing Control Flow
- Harbor must manage control flow to prevent circumvention of run-time checks and to track the currently active domain
- Enforced restriction: enter and exit a domain only through designated points; these restrictions can be statically verified
- Control flow can still become corrupt at run time: calls through corrupt function pointers, returns on a corrupted stack
- The memory map cannot prevent such corruption

Slide 18: Cross Domain Call
- Domain A calls an exported function foo in domain B through a jump table in program memory: call fooJT; fooJT: jmp foo; ...; foo_ret:
- Registering an exported function adds its jump-table entry
- Cross domain call stub: verify the call lands in the jump table, compute the callee's domain ID, and determine the return address
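The stub's first two steps can be illustrated as address arithmetic. The table base, 4-byte entry size, and fixed number of exported functions per domain are illustrative assumptions; the slides only say the stub verifies the call target and computes the callee domain.

```c
#include <stdint.h>

#define JT_BASE       0x1000u    /* hypothetical jump-table base      */
#define JT_ENTRY      4u         /* hypothetical bytes per table slot */
#define FUNCS_PER_DOM 8u         /* hypothetical exports per domain   */

/* Cross domain call stub (sketch): verify the call lands on a
 * jump-table entry boundary, then derive the callee's domain ID
 * from the slot index. Returns -1 when the call is rejected. */
int cdc_resolve(uint16_t call_addr, uint16_t jt_end, uint8_t *callee_dom) {
    if (call_addr < JT_BASE || call_addr >= jt_end) return -1;  /* outside table  */
    if ((call_addr - JT_BASE) % JT_ENTRY != 0)      return -1;  /* mid-entry jump */
    uint16_t slot = (call_addr - JT_BASE) / JT_ENTRY;
    *callee_dom = (uint8_t)(slot / FUNCS_PER_DOM);
    return 0;
}
```

Because only table entries are valid call targets, a corrupted function pointer cannot transfer control to an arbitrary address in another domain.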

Slide 19: Stack Bounds
- A single run-time stack is shared by all domains (stack base, stack bound, stack pointer; caller and callee domain stack frames)
- The bound is set during a cross domain call
- Protection model: no writes beyond the stack bound, preventing cross-domain stack corruption
- Enforced by the checker before all writes
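A minimal sketch of the bound check, assuming the AVR convention of a downward-growing stack so the caller's frame sits at higher addresses than the callee's. The function names and the strict-inequality convention at the bound are illustrative.

```c
#include <stdint.h>

/* Recorded at cross-domain call time: the stack pointer value that
 * separates the caller's frame from the callee's. */
static uint16_t stack_bound;

void set_stack_bound(uint16_t sp_at_call) { stack_bound = sp_at_call; }

/* Check applied before every stack write: writes at or above the
 * bound would land in the caller's frame and are rejected. */
int stack_write_ok(uint16_t addr) {
    return addr < stack_bound;
}
```

On a cross domain return the previous bound is restored from the saved call frame, so each domain can only ever scribble on its own portion of the shared stack.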

Slide 20: Safe Stack
- An extra stack maintained in protected memory
- Used for storing Harbor state information, e.g. return address protection
- Sits above the heap and globals and grows up in memory toward the run-time stack; the two stacks grow toward one another

Slide 21: Return Address Protection
- On call foo, func_entry_stub copies the return address (foo_ret) from the run-time stack to the safe stack
- Before ret, func_exit_stub restores the return address from the safe stack
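The two stubs can be sketched as a push/pop pair on the protected safe stack. The fixed-depth array and function names are illustrative; the real stubs run in assembly against the hardware return address.

```c
#include <stdint.h>

static uint16_t safe_stack[32];   /* protected memory (illustrative depth) */
static uint8_t  ss_top;

/* func_entry_stub (sketch): save the return address out of the
 * corruptible run-time stack into the protected safe stack. */
void save_ret(uint16_t ret_addr) { safe_stack[ss_top++] = ret_addr; }

/* func_exit_stub (sketch): restore the saved address; the function
 * returns through this value, so smashing the run-time stack
 * cannot redirect control flow. */
uint16_t restore_ret(void) { return safe_stack[--ss_top]; }
```

Nesting works because calls and returns pair up in LIFO order, matching the safe stack's push/pop discipline.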

Slide 22: Outline
- Introduction
- Harbor Primitives
- Sandbox for SOS Modules
- UMPU
- Conclusion

Slide 23: SOS Sandbox System Overview
- On the desktop: a binary re-writer transforms the raw binary into a sandboxed binary
- On the sensor node: a binary verifier checks the sandboxed binary, yielding a memory-safe binary that runs against the memory map and control flow manager

Slide 24: System Design Choices
- Binary re-writer: independent of programming language and compiler; binary output is easier to decode and verify
- Only the verifier implementation needs to be correct
- Verification at every node: no need to trust the source of a binary; safety is ensured by properties of the binary itself; secure code dissemination protocols are not needed

Slide 25: Binary Re-Writer
- Sandbox stores and computed control flow; for example, st Z, Rsrc is rewritten as:

    push X
    push R0
    movw X, Z
    mov R0, Rsrc
    call memmap_checker
    pop R0
    pop X

- The sequence is re-entrant and works in the presence of interrupts; the push/pop pairs can be removed by using dedicated registers
- Insert calls to run-time checks: the checks live in the trusted domain; inlining them would increase module size
- Preserve the original control flow, e.g. branch targets

Slide 26: Verifier Operation
- Single in-order pass through the instruction stream, processing one basic block at a time with constant state maintained
- For each basic block, the verifier ensures: stores are sandboxed; computed control flow instructions are checked; static jump/call/branch targets lie within domain bounds; certain instructions (e.g. SPM) are disallowed

Slide 27: Multi-Word Instructions
- Multi-word instructions (st, call) pose a problem for the verifier: it cannot detect jumps into the middle of an instruction, whose trailing word could decode as a store instruction and circumvent the checks
- The re-writer inserts special marker symbols to ensure all control transfers land at the start of a basic block
- The marker symbol decodes as a NOP, is not a valid instruction in the ISA, cannot equal any valid immediate argument, and appears only at the start of basic blocks

Slide 28: Comparison with t-kernel

    Harbor                           t-kernel
    Weak isolation: memory writes,   Strong isolation: memory writes,
    limited control flow             stronger restrictions
    restrictions                     (e.g. on infinite loops)
    Multi-domain protection          Single application support only
    Rewrite on PC, verify on mote    Rewrite on mote, requires external flash
    No virtual memory                Virtual memory for heap accesses

- The overheads of the two systems are comparable

Slide 29: Outline
- Introduction
- Harbor Primitives
- Sandbox for SOS Modules: Evaluation
- UMPU
- Conclusion

Slide 30: Sandbox Evaluation
- The sandbox is feasible on mote-class sensors: modest resource usage, minimal overhead for typical sensor workloads
- Code memory usage (blank SOS kernel compiled for the Mica2 with avr-gcc -Os): jump table = 2 KB; checkers + memory map API ~ 4 KB

    Raw code size   Increase (2 domains)   Increase (8 domains)
    41796 B         +6146 B (+14.7%)       +6228 B (+14.9%)

Slide 31: Data Memory Usage
- Flexible data structure: trade RAM for protection
- Memory map RAM usage: 70 - 256 bytes (the chart, not shown, reports per-configuration overheads of 9.5%, 5.1%, 5.8%, and 3.4%)
- Additional constant overhead: 28 bytes

Slide 32: Code Size of Sandboxed Modules
- Significant increase in code size: 30% - 65%
- Each store is rewritten into a long sequence that calls the checker
- The overhead can be reduced with dedicated registers
- Static analysis can eliminate redundant checks

Slide 33: Memory Map Overhead
- Overhead is introduced in the dynamic memory API because the memory map must be updated
- free and change_own have higher overheads due to added checks that prevent illegal freeing or ownership transfer
- Averaged over a long simulation (average allocation size = 16 bytes); measured using Avrora for the Mica2 platform

    API         Normal  Protected  Increase
    malloc      363     622        82%
    free        138     446        238%
    change_own  55      270        418%

Slide 34: Run-Time Checker Overhead

    Operation          Cycles
    memmap_checker     65 + 14 (40x)
    cross_domain_call  65 + 12 (19x)
    cross_domain_ret   28 + 4 (8x)
    func_entry_stub    38 + 8
    func_exit_stub     38 + 4

- The checks incur high overhead despite being implemented in assembly
- The checks lie on the critical path and directly impact application performance

Slide 35: CPU Intensive Benchmarks
- FFT: fixed-point operation on 64 samples
- Outlier: distance-based threshold on 4 samples
- Buffer Writer: averaged over buffer sizes of 16 - 96 bytes
- Worst-case workload for Harbor: 100% CPU utilization executing the benchmarks, with checks introduced on the critical path, directly impacting performance

Slide 36: Sandbox versus Virtual Machine
- Virtual machines provide fine-grained memory protection
- Average interpretation overhead is 115 [ASVM], significantly higher than sandboxing
- Sandboxing the VM itself slows it down by a factor of 2.6: the VM provides fine-grained protection to scripts, while Harbor provides coarse protection to the system running the VM

Slide 37: Data Collector Application
- Experiment setup: 3-hop linear network simulated in Avrora for 30 minutes; Tree Routing and Surge modules inserted into the network; data packets transmitted every 4 seconds, control packets every 20 seconds
- 1.7% increase in relative CPU utilization (absolute increase from 8.41% to 8.56%)
- 164 run-time checks introduced, executed ~20000 times

Slide 38: SOS Sandbox Conclusions
- Sandboxing is feasible on mote-class sensors
- The memory map is flexible and can be tuned to protection needs
- Low performance impact for typical workloads: run-time checks have high per-check overhead, but the sandbox is still cheaper than a virtual machine
- A building block for enhanced protection schemes

Slide 39: SOS Sandbox Future Work
- Verifier design tradeoff: static analysis eliminates redundant checks and improves overall system performance, but increases verifier complexity, making it infeasible on the mote
- Incorporate the network into the system: trusted code sources within the network; sandboxing on micro-servers or the backend; secure code dissemination to mote clusters

Slide 40: Outline
- Introduction
- Harbor Primitives
- Sandbox for SOS Modules
- UMPU
- Conclusion

Slide 41: UMPU Design Goals
- Exploit hardware to improve Harbor's performance
- Provide coarse-grained memory protection
- Target resource-constrained processors: low RAM and ROM usage; minimize the area of the protection logic
- Practical system: no modifications to processor instructions; customizable via a soft IP core and configuration registers

Slide 42: Memory Protection Unit (MPU)
- Low-cost hardware-assisted protection
- Static partitioning of the address space: base and bound registers define partitions (max. 8 per processor); access permissions are defined for every region, and memory accesses are validated against them
- Simple protection features only: two domains, static partitioning, limited control flow protection

Slide 43: UMPU Overview
- Hardware/software co-design, trading complexity for efficiency
- Hardware extensions: memory map checker and stack bounds checker (STORE); cross domain call and safe stack (CALL & RET); domain bounds checker and stack overflow checker
- Software components: memory map API, cross domain linking, UMPU device driver

Slide 44: Memory Map Controller
- Functional unit that validates store operations, triggered by the fetch decoder on a store instruction
- Computes the memory map byte address for the issued write address, retrieves the memory map permission, and validates the store
- The map is maintained in protected RAM; RAM access is the bottleneck (minimum one cycle latency)
- Signals between the fetch decoder, memory map controller, and data RAM: CPU_ADDR, CPU_WR_EN, CPU_STALL, DATA_BUS, ST_INSTR

45 10/20/06 PhD Defense 45 CPU_WR_ADDR MMC_RD_ADDR CLK CPU_ADDR CPU_WR_EN MMC_ADDR MMC_WR_EN CPU_STALL MMC Store Operation Cycle 1Cycle 3Cycle 2 Regular Mode Protected Mode

Slide 46: Cross Domain Call Unit
- Block diagram: the fetch decoder raises CALL_INSTR/CALL_ADDR to the cross domain call trigger, which asserts CDC_EN to the cross domain call state machine; the domain bounds checker consumes PC and CURR_DOM_ID; the safe stack in data RAM stores RET_ADDR via SS_PTR and SSP_WR_EN

Slide 47: Cross Domain Call Unit
- The jump table is set up by software; its location is configurable through UMPU registers
- The trigger checks jmp_tbl_base <= call_addr < jmp_tbl_bnd on every call instruction from the fetch decoder and asserts CDC_EN
- The cross domain call state machine operates like the software implementation; the bottleneck is the push operation to the safe stack

Slide 48: Domain Bounds Checker
- Restricts control flow to the address range of a domain: every domain occupies an address range [dom_lb, dom_ub]
- Implemented as simple combinational logic in hardware: dom_lb <= PC <= dom_ub; raises BND_PANIC on violation
- Domain bounds must be initialized (by the loader, or at link time in a static image), stored (register array or RAM - an area/performance tradeoff), and updated on cross domain calls
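The combinational check mirrors a one-line predicate; this C sketch (hypothetical type and function names) shows the exact condition the hardware evaluates each cycle.

```c
#include <stdint.h>

/* Per-domain code bounds: each domain occupies a contiguous
 * program-memory range [lb, ub]. */
typedef struct { uint16_t lb, ub; } dom_bounds_t;

/* Software mirror of the combinational logic: BND_PANIC is raised
 * whenever the PC leaves the current domain's range. */
int bounds_panic(const dom_bounds_t *b, uint16_t pc) {
    return !(b->lb <= pc && pc <= b->ub);
}
```

In hardware both comparisons run in parallel on the current PC, which is why the check costs no extra cycles on the fetch path.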

Slide 49: UMPU Micro-Benchmarks (cycles)

    Instruction           AVR  UMPU  SW
    Store                 2    3     79
    Cross Domain Call     4    9+4   77
    Cross Domain Return   4    9     32
    Interrupt Dispatch    4    5+13  -
    Interrupt Return      4    5+9   -
    Save Return Addr.     4    4     46
    Restore Return Addr.  4    4     42

Slide 50: UMPU Logic Area
- Synthesized the microcontroller using Xilinx tools, targeting a Virtex2 FPGA: 64% increase in core area, measured in equivalent logic gates
- Spec sensor node [Jason Hill] comparison (identical processing core): chip area = 6.25 mm^2; core area = 0.39 mm^2 (utilization = 6.3%); UMPU-enhanced core = 0.64 mm^2 (utilization = 10.3%)
- Area utilization of the core increases by 4%, and the logic packs without changing the die size; by contrast, adding an MMU brings a ~10x increase in chip area

Slide 51: UMPU Application Overhead
- Installed SOS on the UMPU-enhanced AVR
- Overhead measured using the ModelSim simulator
- UMPU overhead is significantly lower than the SOS sandbox

Slide 52: UMPU Conclusion
- Hardware/software co-design for memory protection combines software flexibility with hardware efficiency
- Practical system with widespread applications: low resource utilization, low performance overhead, binary compatible with existing software and tool-chains, useful for embedded operating systems

Slide 53: UMPU Future Work
- Recovery from corrupted software
- UMPU supports an exception mechanism: a non-maskable, highest-priority interrupt that records system state during a panic

Slide 54: Outline
- Introduction
- Harbor Primitives
- Sandbox for SOS Modules
- UMPU
- Conclusion

Slide 55: Conclusion
- Memory protection is an enabling technology for reliable software systems: introduced in mainframes in the 1960s, page-based protection reached the desktop in the 1980s, and embedded design constraints now motivate new techniques
- Explored fault isolation on micro-controllers: protection primitives for a limited address space; a flexible design that trades protection for resources; a building block for more advanced techniques

Slide 56: Thank You! Ram Kumar, PhD Defense, April 4th, 2007

Slide 57: References
- [wahbe92sfi] Robert Wahbe, Steven Lucco, Thomas E. Anderson, and Susan L. Graham. Efficient Software-Based Fault Isolation. In SOSP 1993.
- [small97misfit] Christopher Small. MiSFIT: A Tool for Constructing Safe Extensible C++ Systems. In Proc. 3rd USENIX Conference on Object-Oriented Technologies, 1997.
- [witchel02mmp] Emmett Witchel, Josh Cates, and Krste Asanovic. Mondrian Memory Protection. In ASPLOS 2002.
- [necula02ccured] George C. Necula, Scott McPeak, and Westley Weimer. CCured: Type-Safe Retrofitting of Legacy Code. In POPL 2002.
- [levis05active] Phil Levis, David Gay, and David Culler. Active Sensor Networks. In NSDI 2005.
- [regehr06utos] John Regehr, Nathan Cooprider, Will Archer, and Eric Eide. Memory Safety and Untrusted Extensions for TinyOS. Univ. of Utah Tech. Report, 2006.
- [mccamant06pittsfield] Stephen McCamant and Greg Morrisett. Evaluating SFI for a CISC Architecture. In USENIX Security Symposium 2006.
- [balani06dvm] Rahul Balani, Ram Kumar, Simon Han, Ilias Tsigkogiannis, and Mani Srivastava. Multi-level Software Reconfiguration for Sensor Networks. In EMSOFT 2006.
- [gu06tkernel] Lin Gu and John A. Stankovic. t-kernel: Providing Reliable OS Support to Wireless Sensor Networks. In SenSys 2006.
- [xfi06osdi] Ulfar Erlingsson, Martin Abadi, Michael Vrable, Mihai Budiu, and George C. Necula. XFI: Software Guards for System Address Spaces. In OSDI 2006.
- [divya06ccured] Divya Arora, Anand Raghunathan, and Niraj K. Jha. Architectural Support for Safe Software Execution on Embedded Processors. In CODES+ISSS 2006.
- [titzer06virgil] Ben L. Titzer. Virgil: Objects on the Head of a Pin. In OOPSLA 2006.
- [sing06lang] Manuel Fahndrich, Mark Aiken, Chris Hawblitzel, Orion Hodson, Galen Hunt, James R. Larus, and Steven Levi. Language Support for Fast and Reliable Message-Based Communication in Singularity OS. In EuroSys 2006.
- [roy07lighthouse] Roy Shea, Shane Markstrum, Todd Millstein, Rupak Majumdar, and Mani Srivastava. Static Checking for Dynamic Resource Management in Sensor Network Systems.

Slide 58: Price for Protection

    Name                        RAM   ROM          FMAX  $/100 units  MMU   Core
    AT91SAM7S128 + AT29BV010A   8K    None + 128K  33    8.84 + 1.7   None  ARM7TDMI
    AT91RM9200                  16K   128K         180   14.43        Yes   AT91
    ATMEGA128                   4K    128K         16    8.75         None  AVR
    ATMEGA1281                  8K    128K         16    8.75         None  AVR
    ML67Q500x                   32K   256K         60    ??           None  ARM7TDMI
    iMOTE2                      32M   -            416   299          Yes   PXA271
    MicaZ                       4K    128K         8     125          No    AVR

Slide 59: Reliable Sensor Networks
- Reliability is a broad and challenging goal
- Data integrity: how do we trust data from our sensors?
- Network integrity: how do we make the network resilient to failures?
- System integrity: how do we develop robust software for sensors?

Slide 60: Memory Protection in Desktops
- Hardware-based protection has been designed since the early 1960s
- The Intel iAPX 432, introduced in 1975, offered sophisticated architectural support for protection
- Page-based protection, introduced with the Intel 80386 in 1985, is ubiquitous in the desktop world today
- What about embedded processors?

Slide 61: SOS Operating System
- Dynamically loaded modules: Tree Routing module, Data Collector application, Photosensor module
- Static SOS kernel - kernel components: dynamic memory, message scheduler, dynamic linker; SOS services: sensor manager, messaging I/O, system timer; device drivers: radio, I2C, ADC

Slide 62: SOS Memory Layout
- Static kernel state (starting at 0x0000): accessed only by the kernel
- Dynamically allocated heap (starting at 0x0200): shared by the kernel and applications
- Run-time stack: shared by the kernel and applications

Slide 63: Updating the Memory Map
- Block size on the Mica2 is 8 bytes; records are efficiently encoded using 4 bits per block
- Encoding: xxx0 marks the start-of-segment block, xxx1 a later block of the segment, where xxx is the 3-bit domain ID
- Example: pA = malloc(KERN, 16); pB = malloc(USER, 30); mem_chown(pA, USER); blocks move between the Free, Kernel, and User states

Slide 64: Memmap API
- memmap_set(Blk_ID, Num_blk, Dom_ID) updates the memory map: Blk_ID is the starting block of a segment, Num_blk the number of blocks in the segment, and Dom_ID the owner's domain ID (e.g. USER / KERN)
- Dom_ID = memmap_get(Blk_ID) returns the owner of a given block
- The memory map API itself must be protected
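A sketch of the two API calls over the 4-bit-per-block encoding (xxx0 = start of segment, xxx1 = later block, xxx = 3-bit domain ID). The two-records-per-byte, high-nibble-first packing is an assumption; the real SOS implementation is not shown on the slides.

```c
#include <stdint.h>

static uint8_t memmap[256];       /* two 4-bit records per byte */

static void set_rec(uint16_t blk, uint8_t rec) {
    uint8_t *p = &memmap[blk >> 1];
    if (blk & 1) *p = (*p & 0xF0) | rec;          /* odd block: low nibble   */
    else         *p = (*p & 0x0F) | (rec << 4);   /* even block: high nibble */
}

/* memmap_set sketch: mark a segment of nblk blocks starting at blk
 * as owned by dom; the first block gets xxx0, the rest xxx1. */
void memmap_set(uint16_t blk, uint16_t nblk, uint8_t dom) {
    set_rec(blk, (uint8_t)(dom << 1));                 /* start of segment */
    for (uint16_t i = 1; i < nblk; i++)
        set_rec(blk + i, (uint8_t)((dom << 1) | 1));   /* later blocks     */
}

/* memmap_get sketch: return the 3-bit domain ID of a block's owner. */
uint8_t memmap_get(uint16_t blk) {
    uint8_t byte = memmap[blk >> 1];
    uint8_t rec  = (blk & 1) ? (byte & 0x0F) : (byte >> 4);
    return rec >> 1;
}
```

Calling `memmap_set(4, 2, 3)` marks blocks 4 and 5 as owned by domain 3, with block 4 flagged as the segment start.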

Slide 65: Address-to-Memory-Map Lookup
- Split the 12-bit address: bits 11-3 give the 9-bit block number, bits 2-0 the offset within the block
- The block number indexes the memmap table; one byte holds two memmap records

Slide 66: Optimizing the Memmap Checker
- Minimize the performance overhead of the checks
- The address-to-memory-map lookup requires multiple multi-bit shifts, but micro-controllers support only single-bit shift operations
- A FLASH-based look-up table gives a 4x speedup, from 32 to 8 clock cycles
- Total overhead of a software check: 89 cycles
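One way a look-up table can replace the shift chain, sketched in C. The actual SOS table layout is not given on the slides; this illustrates the idea of precomputing the high-byte contribution to the memmap byte index so the per-check work becomes one table read plus cheap single-bit arithmetic. All names here are illustrative.

```c
#include <stdint.h>

/* Shift-based lookup (slow on AVR, which has only 1-bit shifts):
 * block number = addr >> 3, byte index = block >> 1, i.e. addr >> 4. */
uint16_t memmap_index_shift(uint16_t addr) {
    return (addr >> 3) >> 1;
}

/* Table-based variant: idx_hi precomputes (hi_byte << 8) >> 4 for
 * each value of address bits 11..8. Stored in FLASH on the mote. */
static uint8_t idx_hi[16];

void build_index_table(void) {
    for (uint8_t hi = 0; hi < 16; hi++)
        idx_hi[hi] = (uint8_t)(((uint16_t)hi << 8) >> 4);   /* hi * 16 */
}

uint16_t memmap_index_table(uint16_t addr) {
    return (uint16_t)idx_hi[(addr >> 8) & 0x0F] + ((addr & 0xFF) >> 4);
}
```

Both functions compute the same index; on an AVR the table version avoids a 4-position shift of a 16-bit value, which is the kind of saving behind the reported 32-to-8-cycle improvement.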

Slide 67: Control Flow Manager Overview
- Cross domain call: enforces control flow restrictions and tracks the current domain ID
- Stack bounds: prevent stack corruption across domains, enabling safe sharing of a single stack
- Safe stack: a separate stack in protected memory that stores cross domain call frames and return addresses

Slide 68: Cross Domain Call Frame
- Cross domain calls can be chained; call frames are stored on the safe stack, each holding a return address, stack bound, and domain ID
- The current state is in the top frame: pushed during a call, popped during the return
- Frames are protected from corruption; each frame is 5 bytes

Slide 69: Cross Domain Return
- The cross domain return stub: verify the return address, restore the previous frame, and return

Slide 70: Control Flow Checkers
- Checkers enforce the control flow restrictions: control flow cannot enter or exit a domain arbitrarily; only cross domain calls can cause a domain change
- Static control flow instructions (CALL, JMP, BRANCH, RCALL, etc.) are verified at load time
- Computed control flow is rewritten: the ret instruction becomes jmp ret_checker; the indirect-call instruction icall becomes call icall_checker

Slide 71: UMPU Extensions
- Block diagram: the processor fetch decoder feeds the memory map checker (store; stall processor), the safe stack (call/ret), and the domain tracker (call; update domain ID to the MMC); a bus arbiter mediates access to the data SRAM

Slide 72: UMPU Initialization
- UMPU is configurable through registers that are protected and read-only: block size, size of memory map records
- Trusted domain: a special domain not subject to any checks; the processor boots into it on reset; it initializes and updates the protected registers and maintains the memory map

Slide 73: Control Flow Controller
- A set of functional units that enforce the control flow restrictions (same structure as the cross domain call unit on slide 46: fetch decoder, trigger, state machine, domain bounds checker, and safe stack in data RAM)

Slide 74: Interrupt Controller
- An interrupt is an asynchronous change to the current domain
- All interrupts originate in the trusted domain: the verifier scans the interrupt table during boot-up, and the controller switches state to the trusted domain; ISRs use cross domain calls
- Design alternative with better performance: register interrupts with a domain, at the cost of memory overhead for the extra state

Slide 75: UMPU Exception Controller
- Run-time checkers generate panic signals; the controller raises an interrupt on panic
- The UMPU exception has the highest priority and is non-maskable
- The controller stores state information (panic condition, program counter, etc.) to aid the development of exception routines

Slide 76: Other Harbor Primitives
- Safe stack: protected safe stack pointer register
- Run-time stack protection: stack bounds set up during cross domain calls; stack overflow protection in hardware
- Return address protection: preserves control flow integrity within a domain

Slide 77: UMPU Area Breakup
- The domain bounds checker occupies the most area: a register array stores the domain bounds, improving the performance of cross domain calls
- The complex hardware state machines are the memory map controller and the cross domain call unit

Slide 78: Latency Sensitive Operations
- Isolated the SOS serial stack (HDLC framing)
- Constant interrupt dispatch overhead: 23 clock cycles
- The Rx interrupt incurs higher malloc overhead: regular SOS pre-allocates a message header pool, while SOS for UMPU performs in-line allocation

Slide 79: Robust Sensor Network Systems
- WSN applications (emergency response, building automation, asset tracking) demand high availability and must support multiple users
- Current software technology is inadequate for long-term deployments
