Wireless Sensor Networks


1 Wireless Sensor Networks
Lecture 8 – CS252

2 Sensor Networks: The Vision
Push connectivity out of the PC and into the real world
Billions of sensors and actuators EVERYWHERE!!!
Zero configuration
Build everything out of CMOS so that each device costs pennies
Enable wild new sensing paradigms

3 Why Now?
Combination of:
Breakthroughs in MEMS technology
Development of low-power radio technologies
Advances in low-power embedded microcontrollers

4 Real World Apps…

5 Vehicle Tracking

6 Cory Energy Monitoring/Mgmt System
50 nodes on the 4th floor
5-level ad hoc net
30-second sampling
250K samples to the database over 6 weeks

7 Structural performance due to multi-directional ground motions (Glaser & CalTech)
[Figures: mote layout and mote infrastructure; comparison of results; wiring for traditional structural instrumentation plus a truckload of equipment]

8 Node Localization

9 Sensor Network Algorithms
Directed Diffusion – data-centric routing (Estrin, UCLA)
Sensor Network Query Processing (Madden, UCB)
Distributed data aggregation
Localization in sensor networks (UCLA, UW, USC, UCB)
Multi-object tracking / Pursuer-Evader (UCB, NEST)
Security

10 Recipe For Architectural Research
Take a known workload
Analyze its performance on current systems
Form a hypothesis on ways of improving “performance”
Build a new system based on the hypothesis
Re-analyze the same workload on the new system
Publish results

11 Our Approach…
Hypothesize about requirements based on potential applications
Explore the design space based on these requirements
Develop a hardware platform for experimentation
Build test applications on top of the hardware platform
Evaluate performance characteristics of the applications
GOTO step 1 (hopefully you’ll come up with a better set of requirements)

12 Sensor Node Requirements
Low Power, Low Power, Low Power…
Support multi-hop wireless communication
Self-configuring
Small physical size
Can reprogram over the network
Meets research goals:
Operating system exploration
Enables exploration of the algorithm space
Instrumentation
Network architecture exploration

13 First Decision: The central controller
What will control the device? A modern microcontroller.
Sidebar: what’s inside a microcontroller today? What peripheral equipment do you need to support one? How do you program one?

14 Major Axes of Microcontroller Diversity
Flash-based vs. SRAM-based (combining flash and CMOS logic is difficult)
Internal vs. external memory
Memory size
Digital-only vs. on-chip ADC
Operating voltage range
Operating current, power states, and wake-up times
Physical size
Support circuitry required (external clocks, voltage references, RAM)
Peripheral support (SPI, USART, I2C, One-Wire)
Cycle counters
Capture and analog compare
Tool chain

15

16

17 Second Decision: Radio Technologies
Major RF devices:
Cordless phones – digital/analog, single channel
Cellular phones – multi-channel, base-station controlled
802.11 “wireless Ethernet”
Bluetooth – emerging, low-power frequency hopping

18 What is in your cell phone?
Texas Instruments’ TCS2500 chipset
ARM9, 120 MHz + DSP
>> kbps

19 RFM TR1000 Radio
916.5 MHz fixed carrier frequency
No bit timing provided by the radio
5 mA RX, 10 mA TX
Receive signal digitized based on analog thresholds
Able to operate in OOK (10 kb/s) or ASK (115 kb/s) mode
10 kb/s design using programmed I/O
SPI-based circuit designed to drive the radio at full speed on the TI MSP, 50 kb/s on the ATmega
Improved: digitally controlled TX strength (DS1804) – 1 ft to 300 ft transmission range, 100 steps
Receive signal strength detector

20 TR 1000 internals

21 Why not use federation of CPUs?
Divide app, RF, storage, and sensing across CPUs, reproducing the PC I/O hierarchy
A dedicated communications processor could greatly reduce protocol stack overhead and complexity
But providing physical parallelism creates a partition between applications and communication protocols
Isolating applications from protocols can prove costly
Flexibility is key to success
[Diagram: Apps CPU, Sensing CPU, Storage CPU, and RF CPU attached to sensors, storage, and radio]

22 Can you do this with a single CPU?

23 The RENE Architecture
51-pin I/O expansion connector (digital I/O x8, analog I/O x8, programming lines)
Atmel AT90LS8535: 4 MHz 8-bit CPU, 8 KB instruction memory, 512 B RAM; 5 mA active, 3 mA idle, <5 µA powered down
32 KB external EEPROM: 1–4 µJ/bit read/write
RFM TR1000 radio: programmed I/O, 10 kb/s OOK
Network programmable
GCC-based tool/programming chain
1.5” x 1” form factor
[Diagram: AT90LS8535 microcontroller, transmission power control, coprocessor, SPI bus, TR1000 radio transceiver, 32 KB external EEPROM]

24 What is the software environment?
Do I run Jini? Java? What about a real-time OS? IP? Sockets? Threads? Why not?

25 TinyOS
OS/runtime model designed to manage the high levels of concurrency required
Gives up IP, sockets, and threads
Uses state-machine-based programming concepts to allow fine-grained concurrency
Provides the primitive of low-level message delivery and dispatching as the building block for all distributed algorithms

26 Key Software Requirements
Capable of fine-grained concurrency
Small physical size
Efficient resource utilization
Highly modular
Self-configuring

As a starting point, we came up with a set of key characteristics that distinguish tiny networked devices from traditional architectures. The most obvious and fundamental is that these devices face serious constraints on physical size and power consumption, which makes it impractical to have a dedicated controller for each I/O stream. The traditional controller hierarchy seen in most systems must be replaced by a direct-to-device architecture in which the single central processor communicates directly with each I/O device. This is in contrast to having a specialized controller – such as a disk drive controller – buffer and preprocess data coming from the device. Additionally, the unsynchronized nature of the I/O streams forces the system to deal simultaneously with multiple flows of data arriving in real time. This fine-grained concurrency is not required when dedicated controllers buffer the data streams. The system must be able to continually flow data through it, never shutting down one I/O channel to deal with another: while taking a sensor reading, the CPU must simultaneously be able to read data off the network, or data packets will be lost.

A third characteristic is that these devices will be incredibly diverse. Each sensor network will have a different purpose and will be customized to fit the task, likely in the form of customized hardware platforms as well as application-specific software and protocols. A successful design must therefore be highly modular, so that system components can be composed to meet an application’s needs efficiently. The diverse array of hardware will also vary in how much functionality is provided by hardware and how much must be implemented in software; a truly adaptive system must support this migration of the hardware/software boundary. One device may have a hardware bus controller while another implements the bus protocol in software. Finally, once deployed, these devices will be unattended and numerous, so a system crash is most likely fatal. The user cannot be expected to run around changing batteries once a week or resetting individual devices. While the large number of devices provides natural redundancy, correlated failures due to software errors could be catastrophic. This makes robust operation a priority, and narrow interfaces to well-defined components are a tried-and-true method for achieving it.

27 State Machine Programming Model
System composed of state machines
Command and event handlers transition modules from one state to another
Quick, low-overhead, non-blocking state transitions
Many independent modules efficiently share a single execution context

To achieve the necessary levels of concurrency, we designed our system around a state-machine-based programming model rather than a thread-based one. By making each component or service a state machine, we make very efficient use of CPU and memory resources: instead of allocating a stack for each running application or service, we share a single execution context among multiple state machines. Each component uses events and commands to transition quickly from state to state. Logically, these state transitions are thought of as instantaneous, requiring very few CPU operations; each component is temporarily allocated the execution context for the duration of the state change. Interestingly, the same paradigm is being used to increase efficiency in high-performance systems such as Microsoft’s Pipeline server and Berkeley’s Ninja project.

We then add the notion of tasks, which allow components to request the CPU execution context in order to perform long-running computations. Tasks are scheduled later and run to completion; while they execute atomically with respect to other tasks, they can be periodically interrupted by higher-priority events. Currently we use a simple FIFO queue for scheduling, though an alternative scheduling mechanism could easily be added if necessary. A secondary advantage of structuring the programming model after finite state machines is that it propagates hardware abstractions into software: just as a hardware-based state machine responds to changes on its I/O pins, our components respond to events and commands on their interfaces. This similarity allows us to plan for the evolution of hardware – specifically, we can easily have software components implemented in hardware, or vice versa.
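The run-to-completion FIFO task model described above can be sketched in a few lines of C. This is an illustrative sketch, not actual TinyOS source: components post tasks, and the scheduler runs each to completion from a single shared execution context.

```c
#include <assert.h>

/* Minimal run-to-completion FIFO task queue in the spirit of the model
 * above (a sketch, not actual TinyOS code). Tasks posted by components
 * run atomically with respect to each other; when the queue is empty
 * the CPU may be put to sleep. */
#define MAX_TASKS 8
typedef void (*task_t)(void);

static task_t task_queue[MAX_TASKS];
static int q_head = 0, q_tail = 0, q_count = 0;

int post_task(task_t t) {               /* returns 0 if the queue is full */
    if (q_count == MAX_TASKS) return 0;
    task_queue[q_tail] = t;
    q_tail = (q_tail + 1) % MAX_TASKS;
    q_count++;
    return 1;
}

int run_next_task(void) {               /* returns 0 when the queue is empty */
    if (q_count == 0) return 0;
    task_t t = task_queue[q_head];
    q_head = (q_head + 1) % MAX_TASKS;
    q_count--;
    t();                                /* runs to completion */
    return 1;
}

/* example of a task a component might post */
static int samples_processed = 0;
static void process_sample(void) { samples_processed++; }
```

Note there is no per-task stack and no preemption between tasks: the single shared stack and run-to-completion semantics are exactly what keeps the memory footprint small.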

28 TinyOS Concepts
Scheduler + graph of components
Constrained two-level scheduling model: threads + events
Component: commands, event handlers, frame (storage), tasks (concurrency)
Constrained storage model: frame per component, shared stack, no heap
Very lean multithreading
Efficient layering
[Diagram: Messaging Component with internal state and internal thread; commands in: init, power(mode), send_msg(addr, type, data), TX_packet(buf); events out: msg_rec(type, data), msg_send_done, TX_packet_done(success), RX_packet_done(buffer)]
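The component pictured above bundles command entry points, event callbacks, and a private frame. A sketch of that shape in plain C follows; all names and signatures are illustrative, not actual TinyOS interfaces.

```c
#include <assert.h>
#include <stdint.h>

/* A component as commands + event callbacks + a private frame, loosely
 * mirroring the Messaging component pictured above. Illustrative only. */
typedef struct {
    void (*msg_send_done)(int success);   /* event signalled up the graph */
} msg_events_t;

typedef struct {                          /* frame: static per-component state */
    int busy;
    msg_events_t up;                      /* wired to the user at composition time */
} messaging_frame_t;

/* command entry point: split-phase, returns immediately */
int send_msg(messaging_frame_t *f, uint16_t addr, uint8_t type,
             const uint8_t *data) {
    (void)addr; (void)type; (void)data;
    if (f->busy) return 0;                /* non-blocking: reject if mid-send */
    f->busy = 1;
    /* ...transmission would happen here; when the lower layer finishes: */
    f->busy = 0;
    f->up.msg_send_done(1);               /* signal the completion event upward */
    return 1;
}

/* tiny demo "application" wired on top of the component */
static int sends_completed = 0;
static void on_send_done(int success) { if (success) sends_completed++; }

int demo(void) {
    messaging_frame_t f = { 0, { on_send_done } };
    uint8_t payload[4] = { 1, 2, 3, 4 };
    send_msg(&f, 0x0001, 0, payload);
    return sends_completed;
}
```

The split-phase pattern (a command returns immediately, a later event signals completion) is what lets many components share one execution context without blocking.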

29 Application = Graph of Components
Example: ad hoc, multi-hop routing of photo sensor readings
[Component-graph diagram, SW down to HW:
application: route map, router, sensor appln
packet: Active Messages, Radio Packet, Serial Packet
byte: Radio byte, UART byte, ADC (temp, photo)
bit: RFM, clocks]
3450 B code, 226 B data
Graph of cooperating state machines on a shared stack

30 System Analysis
After building apps, the system is highly memory constrained
Communication bandwidth is limited by CPU overhead at key times; communication has bursty phases
Where did the energy/time go?
50% of CPU used when searching for packets
With 1 packet per second, >90% of energy goes to RX!
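The “>90% to RX” claim follows from a back-of-the-envelope model: with the radio always listening, RX current flows for nearly the whole second while TX is active only for one short packet. The sketch below uses the TR1000 currents from the earlier slide; the 30-byte packet size is an assumption for illustration.

```c
#include <assert.h>

/* Fraction of radio energy spent receiving/listening, for one packet
 * per second with the radio always on. RX/TX currents are the TR1000
 * figures; the packet size is an assumed example. */
double rx_energy_fraction(double rx_ma, double tx_ma,
                          double bitrate_bps, int packet_bytes) {
    double tx_time_s = (packet_bytes * 8.0) / bitrate_bps; /* TX per second */
    double rx_time_s = 1.0 - tx_time_s;                    /* idle listening */
    double rx_charge = rx_ma * rx_time_s;  /* charge ~ energy at fixed voltage */
    double tx_charge = tx_ma * tx_time_s;
    return rx_charge / (rx_charge + tx_charge);
}
```

With 5 mA RX, 10 mA TX, 10 kb/s, and a 30-byte packet, the RX share comes out above 95%, which is why the architectural fixes on the next slide target idle listening rather than transmission.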

31 Architectural Challenges
Imbalance between memory, I/O, and CPU
Increase memory (program and data) by selecting a different CPU
Time/energy spent waiting for reception. Solution: low-power listening software protocols
Peak CPU usage during transmission. Solution: hardware-based communication accelerator

32 The MICA Architecture
51-pin I/O expansion connector (digital I/O x8, analog I/O x8, programming lines)
Atmel ATmega103: 4 MHz 8-bit CPU, 128 KB instruction memory, 4 KB RAM; 5.5 mA active, 1.6 mA idle, <1 µA powered down
4 Mbit external flash (AT45DB041B): SPI interface, 1–4 µJ/bit read/write
RFM TR1000 radio: 50 kb/s ASK, communication-focused hardware acceleration
Network programmable
Analog compare + interrupts
GCC-based tool/programming chain
2xAA form factor – cost-effective power source
[Diagram: ATmega103 microcontroller, power regulation (MAX1678, 3 V), DS2401 unique ID, hardware accelerators, transmission power control, coprocessor, SPI bus, TR1000 radio transceiver, 4 Mbit external flash]

33 Wireless Communication Phases
Transmission:
Transmit command provides data and starts the MAC protocol
Data to be transmitted -> encode processing -> encoded data to be transmitted -> MAC delay -> start symbol transmission -> transmitting encoded bits (bit modulations)
Reception:
Radio samples -> start symbol search -> start symbol detection -> synchronization -> receiving individual bits -> encoded data received -> decode processing -> data received
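The start-symbol search phase above amounts to shifting each digitized radio sample into a register and comparing against a known pattern. A minimal sketch, assuming an arbitrary 8-bit pattern (0xCC is an example, not the symbol actually used on these platforms):

```c
#include <assert.h>
#include <stdint.h>

/* Start-symbol search: shift each digitized radio sample (one bit per
 * sample) into a register and compare against the pattern. */
#define START_SYMBOL 0xCCu   /* example pattern, not the real symbol */

int detect_start(const uint8_t *samples, int n) {
    uint8_t shift = 0;
    for (int i = 0; i < n; i++) {
        shift = (uint8_t)((shift << 1) | (samples[i] & 1u));
        if (shift == START_SYMBOL)
            return i;        /* index of the last bit of the symbol */
    }
    return -1;               /* no start symbol in this sample window */
}
```

Doing this comparison on every sample is exactly the per-bit CPU load that the slides identify as the bottleneck, and that the accelerator approach later offloads.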

34 Radio Interface
Highly CPU intensive: CPU limited, not RF limited, in low-power systems
Example implementations:
RENE node: 19,200 bps RF capability; 10,000 bps implementation on a 4 MHz Atmel AVR
Chipcon example: 9,600 bps RF capability; 1,200 bps implementation with 8x oversampling on a 16 MHz Microchip PICmicro (Chipcon application note AN008)

35 Node Communication Architecture Options
Classic protocol processor: application controller + protocol processor + RF transceiver, with a narrow, refined chip-to-chip interface
Direct device control: application controller drives the raw RF interface of the transceiver
Hybrid accelerator: application controller with serialization and timing accelerators on the memory/I/O bus, driving the RF transceiver

36 Accelerator Approach
Standard interrupt-based I/O performs start symbol detection
Timing accelerator employed to capture precise transmission timing; edge capture performed to +/- 1/4 µs
Timing information fed into the data serializer; exact bit timing performed without using the data path
CPU handles data byte-by-byte
[Diagram: hybrid accelerator – application controller, serialization accelerator, timing accelerator, memory/I/O bus, RF transceiver]

37 Results from accelerator approach
Bit clocking accelerator: 50 kb/s transmission rate – 5x over the RENE implementation, >8x reduction in peak CPU overhead
Timing accelerator: edge captured to +/- 1/4 µs (vs. +/- 50 µs on RENE); CPU data path not involved

38 Power Optimization Challenge
Scenario: 1000-node multi-hop network
The deployed network should be “dormant” until an RF wake-up signal is heard
After sleeping for hours, the network must wake up within 20 seconds
Goal: minimize power consumption

39 What are the important characteristics?
Transmit power?
Receive power consumption of the radio?
Clock skew?
Radio turn-on time?

40 Solutions
Minimize the time spent checking for the “wake-up” message
The “check” time must be at least as long as the wake-up message: if data packets are used as the wake-up signal, the check time must exceed the packet transmission time
Instead, use a long wake-up tone

41 Tone-based wake-up protocol
Each node turns on its radio for 200 µs and checks for RF noise
If noise is present, the node continues to listen to confirm the tone
If not, the node goes back to sleep for 4 seconds
Resulting duty cycle: 200 µs / 4 s = 0.005% (the 200 µs is set by the wake-up time of the radio)
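The duty-cycle arithmetic above is just the listen window divided by the sleep interval:

```c
#include <assert.h>
#include <math.h>

/* Duty cycle of the tone check: awake time as a percentage of the
 * check interval (200 us of listening every 4 s on the slide above). */
double wakeup_duty_cycle_percent(double check_time_s, double sleep_interval_s) {
    return 100.0 * check_time_s / sleep_interval_s;
}
```

200e-6 / 4.0 gives 0.00005, i.e. 0.005%, matching the slide.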

42 Project Ideas
TOS_sim++: RF usage modeling, cycle-accurate simulation, nano-joule-accurate simulation
Tiny application-specific VM: source program language, intermediate representation, mobile code story, communication model
Analysis of CPU multithreading / radical core architectures
Federated architecture alternative

43 Project Ideas (2) Closed loop system analysis
Closed-loop system analysis: simulation of closed-loop systems, impact of design decisions on latency
Channel characterization, error correction
Stable, energy-efficient multi-hop communication implementation
Scalable reliable multicast
Analog sensor-network-specific CPU design
“Passive vigilance” circuits
Power harvesting
Correct architectural balance (memory : I/O : CPU)
Self-diagnosis/watchdog architecture
Cryptographic support
Alternate scheduling models – perhaps periodic real-time
Explore query processing / content-based routing
Design and build your own X

