1 The Computer is the Network: The Emergence of Programmable Network Elements
Randy H. Katz Computer Science Division Electrical Engineering and Computer Science Department University of California, Berkeley Berkeley, CA

2 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

3 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

4 Managing Edge Network Services and Applications
Not shrink-wrap software, but cascaded "appliances"
Data center in a box: blade servers, network storage
Brittle to traffic surges and shifts, yielding network disruption
[Diagram: edge-network middleboxes (traffic shaper, IDS, firewall, server load balancer, egress checker) and edge-network blades sitting between the servers and the wide area network]

5 Appliances Proliferate: Management Nightmare!
Packeteer PacketShaper: traffic monitor and shaper
Network Appliance NetCache: localized content delivery platform
F5 Networks BIG-IP LoadBalancer: web server load balancer
Ingrian i225: SSL offload appliance
Cisco SN 5420: IP-SAN storage gateway
Nortel Alteon Switched Firewall: CheckPoint firewall and L7 switch
Extreme Networks SummitPx1: L2-L7 application switch
Cisco IDS 4250-XL: intrusion detection system
NetScreen 500: firewall and VPN
These are special-purpose boxes deployed at single points in the network, i.e., between networks; they are not really designed to be composed.

6 Network Support for Tiered Applications
[Diagram: today's datacenter network(s), with firewall, load balancer, web, app tier, database, and egress checker between the LAN and the wide area network, consolidated onto a unified LAN of blades with servers on demand, a combined load balancer + firewall, and an egress checker]
Configure servers, storage, connectivity, and network functionality as needed.

7 “The Computer is the Network”
Emergence of Programmable Network Elements:
- Network components where network services/applications execute
- Virtualization (hosts, storage, nets) and flow filtering (blocking, delaying)
Computation-in-the-Network is NOT unlimited:
- Packet handling complexity is limited by latency/processing overhead
- NOT arbitrary per-packet programming (a.k.a. active networking)
- Allocate general computation, like proxies, to network blades
Beyond per-packet processing: network flows
- Managing/configuring the network for performance and resilience
- Adaptation based on Observe (Monitor), Analyze (Detect, Diagnose), Act (Redirect, Reallocate, Balance, Throttle)

8 Key Technical Challenge: Network Reliability
Berkeley campus network:
- Unanticipated traffic surges render the network unmanageable (and may cause routers to fail)
- Denial-of-service attacks, the latest worm, and the newest file-sharing protocol are largely indistinguishable
- The in-band control channel is starved, making it difficult to manage and recover the network
Berkeley EECS department network (12/04):
- Suspected denial-of-service attack against DNS
- A poorly implemented/configured spam appliance added to the DNS overload
- Traffic surges made it impossible to access the Web or mount file systems
Network problems contribute to the brittleness of distributed systems:
- Configuration is too complex for the operator
- (Semi-)automate configuration and reconfiguration in response to traffic changes

9 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

10 Overlays and Active Services for Inter-networked Storage

11 Vision
Better network management of services/applications to achieve good performance and resilience, even in the face of network stress.
Self-aware network environment:
- Observing and responding to traffic changes
- While sustaining the ability to control the network
Packet-flow manipulations at L4-L7 are made possible by new programmable network elements.
New service building blocks:
- Flow extraction
- Packet annotation
- Packet translation
- Resource management

12 Generic Network Element Architecture
[Diagram: input ports and output ports with buffers, connected by an interconnection fabric; classification processors (CPs) with "tag" memory and action processors (APs) driven by rules & programs]

13 OASIS Framework
[Layered diagram, spanning edge to core: net apps (SAN, FW, IDS, traffic shaper, L7 switch, ...); a network service protection control plane (A-layer + event manager); cPredicates for packet/flow segregation, deeper inspection, and generalized actions on an extended router; RouterVM for generalized filtering and composition on a software router (Linux-based, Click, etc.) and packet-processing software (NPUs, blades); hardware assist for generalized inspection (TCAM, FPGA, state machine); and a "basic" router (forwarding, IP filtering)]

14 E.g., Application-Specific QoS for Three-Tier Systems
Composable building blocks to build web services:
- Web containers, various app/EJB containers, persistent state via automatically managed DB pools
Problem: an open control loop; requests are driven by users
- Flash traffic and increased workload can overload components of the web service
- Hard to provision; hard to make performance guarantees; seemingly broken behavior to the end user
Traffic/operation classification:
- Ants: lightweight processing, touch few components
- Elephants: heavyweight processing, cross-layer, many components
- Hard to distinguish statically, but can be determined through statistical techniques
- Correlate request patterns with measured system parameters (web + DB CPU, disk activity, OS process switches, etc.) to identify elephants vs. ants (a sketch follows)
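As a rough sketch of the statistical idea above (not the project's actual implementation), the following Python snippet correlates per-window request counts for each URL with a measured backend metric; request types whose counts track backend load are flagged as elephants. The window data, db_cpu values, and 0.8 threshold are invented for illustration; the URLs are the RUBiS request types shown on the next slide.

```python
# Sketch: flag "elephant" request types by correlating their per-window
# arrival counts with a measured backend metric (e.g., database CPU).
# Hypothetical data and threshold; not the OASIS implementation.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Per-window request counts for each URL, plus DB CPU utilization per window.
windows = [
    {"/lightRequest.php": 120, "/dbLookup.php": 5,  "/storeBid.php": 3,  "db_cpu": 22.0},
    {"/lightRequest.php": 110, "/dbLookup.php": 40, "/storeBid.php": 25, "db_cpu": 71.0},
    {"/lightRequest.php": 130, "/dbLookup.php": 8,  "/storeBid.php": 6,  "db_cpu": 25.0},
    {"/lightRequest.php": 115, "/dbLookup.php": 35, "/storeBid.php": 30, "db_cpu": 68.0},
]

def classify(windows, metric="db_cpu", threshold=0.8):
    """Return {request_type: 'elephant' | 'ant'} by correlation with the metric."""
    metric_series = [w[metric] for w in windows]
    labels = {}
    for rt in (k for k in windows[0] if k != metric):
        r = correlation([w[rt] for w in windows], metric_series)
        labels[rt] = "elephant" if r >= threshold else "ant"
    return labels

print(classify(windows))
# -> {'/lightRequest.php': 'ant', '/dbLookup.php': 'elephant', '/storeBid.php': 'elephant'}
```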

15 Identifying RUBiS Elephant Flows for Admission Control/QoS
[Diagram: with no network visibility, the only option is to slow down all traffic to the servers; with HTTP header visibility, heavyweight requests (/dbLookup.php, /storeBid.php) can be slowed down while lightweight ones (/lightRequest.php) pass through]

16 Request Time Distribution with Preferential Scheduling
[Chart: request-time distributions, uncontrolled vs. controlled]
Blocking phenomenon: session time goes up, but the worst case goes way down.

17 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

18 RouterVM (Mel Tsai)
- High-level specification environment for describing packet processing
- Virtualized: an abstracted view of the underlying hardware resources of target PNEs
- Portable across diverse architectures; simulate before deployment
- Services, policies, and standard routing functions are managed through composed packet filters
- Generalized packet filter: a trigger + action bundle; cascaded filters allow new services and policies to be implemented/configured through a GUI
- New services can be implemented without new code through library building blocks (a sketch follows)
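To make the trigger + action idea concrete, here is a minimal Python sketch of a GPF cascade evaluated in numeric order, where one filter's action can tag the packet to skip a downstream filter (as in the load-balancer example on slides 22-23). This is not RouterVM code; the packet representation, field names, and actions are assumptions made for illustration.

```python
# Sketch: a generalized packet filter (GPF) as a trigger + action bundle,
# with filters evaluated in numeric order. Hypothetical packet format and
# actions; not the actual RouterVM implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Packet = Dict[str, object]   # e.g. {"proto": "tcp", "dst_port": 80, "payload": b"..."}

@dataclass
class GPF:
    number: int                                # position in the cascade
    name: str
    trigger: Callable[[Packet], bool]          # classification predicate
    actions: List[Callable[[Packet], None]] = field(default_factory=list)

def run_cascade(filters: List[GPF], pkt: Packet) -> None:
    """Evaluate filters in numeric order; a matching filter's actions run unless an
    earlier action tagged it to be skipped or already dropped the packet."""
    skip = pkt.setdefault("skip", set())
    for f in sorted(filters, key=lambda f: f.number):
        if f.name in skip or pkt.get("verdict") == "drop":
            continue
        if f.trigger(pkt):
            for act in f.actions:
                act(pkt)

# Library building blocks are composed by parameterization, not by writing new code:
filters = [
    GPF(5, "Server Load Balancer",
        trigger=lambda p: p["proto"] == "tcp" and p["dst_port"] == 80,
        actions=[lambda p: p.update(verdict="slb nat"),
                 lambda p: p["skip"].add("Yahoo Messenger Filter")]),  # in-band "skip" tag
    GPF(10, "Yahoo Messenger Filter",
        trigger=lambda p: p.get("payload", b"").startswith((b"ymsg", b"ypns", b"yhoo")),
        actions=[lambda p: p.update(verdict="limit 1 kbps")]),
]

pkt: Packet = {"proto": "tcp", "dst_port": 80, "payload": b"ymsg..."}
run_cascade(filters, pkt)
print(pkt["verdict"])   # -> "slb nat" (the downstream P2P filter was skipped by the SLB's tag)
```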

19 Extended Router Architecture
- Virtualized components representative of a "common" router implementation; structure independent of particular hardware
- Virtual backplane shuttles packets between line cards
- A virtual line card is instantiated for every port required by the application
- CPU handles routing protocols and management tasks
- Blue: "standard" components; yellow: components added and configured per application
- Compute engines perform complex, high-latency processing on flows
- Filters are key to flexibility
(Mel Tsai)

20 Generalized Packet Filters
Key to flexibility: extends the router "filter" concept
- A small number of GPF building blocks yields a large number of apps
- Compose/parameterize filters; no code writing
- Supports flexible classification, computation, and actions
- Executed in numeric order
- How "complete" is this formalization?
Examples: traffic shaping and monitoring; L7 traffic detection (Kazaa, HTTP, AIM, POP3, etc.); QoS and packet scheduling; NAT; intrusion detection; protocol conversion (e.g., IPv6); content caching; load balancing; router/server health monitoring; storage (Fibre Channel to IP, iSCSI); XML preprocessing; TCP offload (TOE); mobile host management; encryption/compression; VPNs; multicast, overlays, DHTs
[Diagram: packet filter 1, filter 2, ..., filter n cascaded ahead of a default path and an L2 switching engine with ARP]
(Mel Tsai)

21 GPF Example: A Server Load Balancer and L7 Traffic Detector
[Diagram: clients on 10.35.x.x reach a group of servers (behind an external "virtual" IP) through an IP router engine, backplane, QoS module, GPF 5 (SLB), GPF 10 (P2P), an L2 switching engine with ARP, and a control processor]

22 GPF Example: A Server Load Balancer and L7 Traffic Detector
GPF 5 Setup:
name - Server Load Balancer
algorithm - equal flows
flowid - sip, sport
sip - any
smask -
dip -
dmask -
proto - udp, tcp
action1 - slb nat
action2 - log connections, file = log.txt
action3 - tag "skip Yahoo Messenger Filter"
The server load balancer filter only processes packets heading to its destination IP, the "virtual" address of the server group comprising the real servers. The "flowid" parameter specifies that it maintains state about the source IP address and source port, which uniquely identify the clients accessing the servers and allow subsequent flows from the same client to always reach the same real server. The algorithm, "equal flows," simply attempts to equally balance the number of new flows (connections) established to the two real servers. Three actions are performed on any packet matching this filter. First, the filter load-balances (using the specified algorithm) across the two real servers, which necessarily involves a network address translation step (hence "slb nat"). Second, the filter logs new connections into the file "log.txt". Third, it tags each packet with an in-band message instructing RouterVM to skip the downstream filter named "Yahoo Messenger" (which happens to be GPF 10).
[Diagram as in slide 21]

23 GPF Example: A Server Load Balancer and L7 Traffic Detector
GPF 10 Setup:
name - Yahoo Messenger Filter
type - yahoomessenger
pattern - ^(ymsg|ypns|yhoo).?.?.?.?.?.?.?(w|t).*\xc0\x80
timeout - 10 min
flowid - sip, dip, sport, dport
sip - any
smask -
dip - 10.35.x.x
dmask -
proto - tcp
action1 - limit 1 kbps
action2 - root
This Yahoo Messenger filter matches TCP packets heading to the client address space of 10.35.x.x. The goal is to rate-limit Yahoo Messenger traffic to 1 kbps (action1) and also to notify the root administrator of the traffic (action2). Yahoo Messenger traffic is classified using a regular-expression search. The user does not have to type in this regular expression; it is included with a library of P2P and L7 traffic types. The user simply selects the "type" as "yahoomessenger" (RouterVM can list the available predefined types) and the "pattern" is filled in automatically. If they want, however, users can type in the pattern directly.
[Diagram as in slide 21]

24 GPF “Fill-in” Specification
RouterVM generalized packet filter (type L7): the "packet filter" as a high-level, programmable building block for network-appliance apps.
Traditional filter, classification parameters and action (FILTER 19 SETUP):
NAME - example
SIP - any
SMASK -
DIP -
DMASK -
PROTO - tcp,udp
SRC PORT -
DST PORT - 80
VLAN - default
ACTION - drop

25 HTTP Switch Example

26 GPF Action Language
Basic set of assignments, evaluations, expressions, control-flow operators, and "physical" actions on packets/flows:
- Control flow: if-then-else, if-not
- Evaluation: ==, <=, >=, !=
- Packet flow control: allow, unallow, drop, skip filter, jump filter
- Packet manipulation: field rewriting (e.g., ip_src = ..., tcp_src = ...), truncation, header additions
- Actions: NAT, loadbalance, ratelimit (perhaps others)
- Meta actions: packet generation, logging, statistics gathering
A higher level of abstraction than C/C++ or a packet-processing language. (An illustrative sketch follows.)
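As an illustration only (the real RouterVM action syntax is not shown in this deck), the sketch below models a few of these primitives, control flow over header fields, field rewriting, rate limiting, and a logging meta action, as Python functions over a dict-based packet. All field names and helpers are hypothetical.

```python
# Sketch: the kinds of primitives a GPF action body exposes, modeled in Python.
# Hypothetical fields and helpers; not the actual RouterVM action language.
import time

def rewrite(pkt, field, value):          # packet manipulation: field rewriting
    pkt[field] = value

def drop(pkt):                           # packet flow control
    pkt["verdict"] = "drop"

def ratelimit(pkt, kbps):                # "physical" action on the flow
    pkt.setdefault("actions", []).append(f"ratelimit {kbps} kbps")

def log(pkt, logfile="log.txt"):         # meta action: logging
    with open(logfile, "a") as f:
        f.write(f"{time.time()} {pkt.get('ip_src')} -> {pkt.get('ip_dst')}\n")

def action_body(pkt):
    """If-then-else over header fields, then actions, as the slide describes."""
    if pkt.get("tcp_dst") == 80:                 # evaluation (==)
        rewrite(pkt, "ip_dst", "10.0.0.2")       # NAT-style rewrite
        ratelimit(pkt, 1)
        log(pkt)
    else:
        drop(pkt)

pkt = {"ip_src": "192.0.2.7", "ip_dst": "203.0.113.1", "tcp_dst": 80}
action_body(pkt)
print(pkt.get("verdict", pkt.get("actions")))    # -> ['ratelimit 1 kbps']
```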

27 Implemented GPF Libraries
Basic Filter: simple L2-L4 header classifications; any RouterVM actions
L7 Filter: adds regular expressions, TCP termination, ADU reconstruction
NAT Filter: adds a few more capabilities beyond the simple NAT action available to all GPFs
Content Caching: builds on the L7 filter functionality
WAN Link Compression: relatively simple to specify, but requires lots of computation
IP-to-FC Gateway: requires its own table format and processing
XML Preprocessing: not very well documented, and its difficulty is unknown

28 GPF Expressiveness Analysis
Expressiveness at the application layers depends on the breadth of the GPF library and on GPFs for specific apps.

29 GPF Performance: Complex Filters
[Test packet: L2-L4 headers followed by the string "Retreat", 25 bytes of the character 'X', and 'X' padding]
Lessons:
- Try to use start-of-buffer indicators (^) and avoid '*'s
- Many apps can be identified with simple start-of-buffer expressions
- Regex matching involves payload copying, which might be avoidable
(A small illustration of the anchoring point follows.)
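As a rough illustration of the anchoring lesson (the payload, patterns, and timing harness are made up, not the slide's benchmark), an expression with a leading wildcard drags the matcher across the whole payload, while a ^-anchored one gives up at the start of the buffer:

```python
# Sketch: anchored vs. unanchored matching on a non-matching packet payload.
# Hypothetical payload and patterns; not the benchmark from the slide.
import re
import timeit

payload = b"Retreat" + b"X" * 1400               # header string followed by 'X' padding

anchored   = re.compile(rb"^(ymsg|ypns|yhoo)")   # can only match at the buffer start
unanchored = re.compile(rb".*(ymsg|ypns|yhoo)")  # leading .* drags the scan across the payload

print(timeit.timeit(lambda: anchored.search(payload), number=10_000))
print(timeit.timeit(lambda: unanchored.search(payload), number=10_000))
# The unanchored form is typically much slower on non-matching payloads.
```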

30 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

31 Observations and Motivations
The Internet is reasonably robust to point problems like link and router failures ("fail stop").
It successfully operates under a wide range of loading conditions and over diverse technologies.
On 9/11/01, the Internet worked reasonably well under heavy traffic conditions and with some major facility failures in Lower Manhattan.

32 Why and How Networks Fail
Complex phenomenology of failure: recent Berkeley experience suggests that traffic surges render enterprise networks unusable.
Indirect effects of DoS traffic on network infrastructure, and the role of unexpected traffic patterns:
- Cisco Express Forwarding: random IP addresses flood the route cache, forcing all traffic through the router's slow path; high CPU utilization yields an inability to manage routing table updates
- Route summarization: a powerful misconfigured peer overwhelms a weaker peer with too many routing table entries
- SNMP DoS attack: overwhelm the SNMP ports on routers
- DNS attack: response-response loops in DNS queries generate traffic overload

33 COPS: Checking, Observing, and Protecting Services

34 Check: Checkable Protocols
Maintain invariants and techniques for checking and enforcing protocols:
- Listen & Whisper: well-formed BGP behavior
- Traffic rate control: Self-Verifiable Core-Stateless Fair Queueing (SV-CSFQ)
Existing work requires changes to protocol end points or to routers on the path; it is difficult to retrofit checkability to existing protocols without embedded processing in PNEs.
Building blocks for new protocols:
- Observable protocol behavior
- Cryptographic techniques
- Statistical methods

35 Protect: Protect Crucial Services
Minimize and mitigate the effects of attacks and traffic surges.
Classify traffic into good, bad, and ugly (suspicious):
- Good: standing patterns and operator-tunable policies
- Bad: evolves faster, harder to characterize
- Ugly: that which cannot immediately be determined as good or bad
Filter the bad, slow the suspicious, and maintain resources for the good (e.g., control traffic).
Sufficient to reduce false positives: some suspicious-looking good traffic may be slowed down, but it won't be blocked. (A policy sketch follows.)
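A minimal sketch of the good/bad/ugly policy, assuming invented classification rules (COPS derives them from observed traffic and operator policy, which this deck does not detail):

```python
# Sketch: mapping the good/bad/ugly classification onto actions
# (forward, drop, rate-limit). The rules below are placeholders.
from enum import Enum

class Verdict(Enum):
    GOOD = "forward"        # matches standing patterns / operator-tunable policy
    BAD = "drop"            # filter the bad
    UGLY = "ratelimit"      # slow the suspicious, never block it outright

def classify(flow):
    """Toy rules standing in for COPS's statistical/operator-driven classification."""
    if flow.get("is_control_traffic"):              # preserve resources for control traffic
        return Verdict.GOOD
    if flow.get("matches_known_attack_signature"):
        return Verdict.BAD
    if flow.get("rate_pps", 0) > 10_000:            # hypothetical surge threshold
        return Verdict.UGLY
    return Verdict.GOOD

flow = {"rate_pps": 50_000}
print(classify(flow).value)    # -> ratelimit
```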

36 Observe: Observation (and Action) Points
Network points where control is exercised, traffic is classified, and resources are allocated: routers + end hosts + inspection-and-action boxes (iBoxes).
iBoxes are:
- Prototyped on commercial PNEs
- Placed at the Internet and server edges of the enterprise network
- Cascaded with existing routers to extend their functionality

37 iBox Placement for Observation and Action
iBoxes strategically placed near entry/exit points within the Enterprise network

38 Annotation Layer: Between Routing and Transport
Marking vs. rewriting approach:
- E.g., mark packets as internally vs. externally sourced using IP header options
- Prioritizing internal vs. external access to services solves some, but not all, traffic surge problems

39 Annotation Layer: iBox Piggybacked Control Plane
Problem: control-plane starvation.
Use the A-layer for iBox-to-iBox communication:
- Passively piggyback on existing flows; "busy" parts of the network have lots of control-plane bandwidth
- Actively inject control packets into less active parts
- Embedded control information is authenticated and sent redundantly
- Priority is given to packets carrying control information when the network is under stress
The network monitoring, statistics collection, and dissemination subsystem is currently being designed and prototyped. (A sketch of the piggybacking idea follows.)
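As a sketch of the piggybacking idea only, here is one way a small, authenticated control blob could be appended to a packet of an existing flow. The encoding, the shared-key handling, and the JSON message format are assumptions; the A-layer's real format is not specified in this deck.

```python
# Sketch: piggybacking a small, authenticated control message on an existing
# packet. Hypothetical annotation format and key management.
import hashlib
import hmac
import json
from typing import Optional

SHARED_KEY = b"ibox-demo-key"          # placeholder; real keys would be provisioned

def annotate(packet: bytes, control: dict) -> bytes:
    """Append a length-prefixed, HMAC-authenticated control blob to the packet."""
    blob = json.dumps(control, separators=(",", ":")).encode()
    tag = hmac.new(SHARED_KEY, blob, hashlib.sha256).digest()
    return packet + len(blob).to_bytes(2, "big") + blob + tag

def extract(annotated: bytes, payload_len: int) -> Optional[dict]:
    """Verify and return the piggybacked control blob, or None if the HMAC fails."""
    blob_len = int.from_bytes(annotated[payload_len:payload_len + 2], "big")
    blob = annotated[payload_len + 2 : payload_len + 2 + blob_len]
    tag = annotated[payload_len + 2 + blob_len :]
    expected = hmac.new(SHARED_KEY, blob, hashlib.sha256).digest()
    return json.loads(blob) if hmac.compare_digest(tag, expected) else None

payload = b"...existing flow data..."
msg = {"type": "stats", "dropped": 42, "ibox": "edge-1"}
wire = annotate(payload, msg)
print(extract(wire, len(payload)))     # -> {'type': 'stats', 'dropped': 42, 'ibox': 'edge-1'}
```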

40 Observe and Protect
iBoxes are implemented on commercial PNEs.
Don't: route or implement (full) protocol stacks.
Do: protect routers and shield network services:
- Classify packets
- Extract flows
- Redirect traffic
- Log, count, collect statistics
- Filter/shape traffic

41 iBoxes and Indirection
Trade off network traffic loading against management complexity: Stoica's Internet Indirection Infrastructure (I3) gives a degree of freedom in the placement and management of iBoxes.

42 Presentation Outline: Motivation, OASIS Project, RouterVM, COPS, Summary and Conclusions

43 OASIS
Processing-in-the-network is real:
- Networking plus processing in switched and routed infrastructures
- Configuration and management of packet processing cast onto programmable network elements (network appliances and blades)
Unifying framework:
- Methods to specify functionality and processing
- RouterVM: filtering, redirecting, transformation
- cPredicates: control extraction and execution based on packet/application content
- Application-specific network processing based on session extraction

44 COPS
- PNEs: the foundation of a pervasive infrastructure for observation and action at the network level
- iBoxes: observation and action points
- Annotation layer for marking and control
- Check-Observe-Protect paradigm for protecting critical resources when the network is under stress
- Functionality eventually migrates into future generations of routers (e.g., blades embedded in routers)

45 The Computer is the Network: The Emergence of Programmable Network Elements Randy H. Katz Thank You!

