Managing the performance of multiple radio Multihop ESS Mesh Networks.



Presentation transcript:

1 Managing the Performance of Multiple-Radio Multihop ESS Mesh Networks
March 13, 2004
Francis daCosta, Meshdynamics, (408)

2 One-Radio Ad Hoc Mesh Networks
- Severe bandwidth constraints at each hop
- Vulnerable to both inter- and intra-channel interference
- Client mobility adds to the complexity of the problem
- Not an enterprise-class solution

3 One-Radio Ad Hoc vs. Two-Radio Infrastructure
[Diagram: a one-radio relay chain (AP 0,1 → AP 1,1 → AP 2,1, serving STA1 and STA2, with client bandwidth falling to 1/2 and then 1/4 at successive hops) beside a two-radio tree with an Ethernet link, relay links RL 1,1 and RL 2,1, and a routing path plus an alternate routing path.]
ONE-RADIO SYSTEM:
- Bandwidth halved at each level
- Routing paths are fixed
- Not redundant or re-configurable
- Not scalable or self-managing
TWO-RADIO SYSTEM:
- Bandwidth conserved at each level
- Flexible routing paths
- Redundant and re-configurable
- Scalable and self-managing
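The bandwidth contrast above can be sketched numerically. This is an illustrative model, not Meshdynamics code: it assumes a nominal 54 Mbps raw link rate and ignores protocol overhead. A single-radio relay must share airtime between receiving and re-transmitting, so usable bandwidth halves at every hop; separate uplink and downlink radios relay concurrently.

```python
RAW_MBPS = 54.0  # assumed raw link rate at every hop (illustrative)

def one_radio_bandwidth(hops: int) -> float:
    """Single radio shares airtime between receive and re-transmit,
    so usable bandwidth is halved at every relay hop."""
    return RAW_MBPS / (2 ** hops)

def two_radio_bandwidth(hops: int) -> float:
    """Dedicated backhaul-up and backhaul-down radios relay
    concurrently, so bandwidth is preserved at each level."""
    return RAW_MBPS

for h in range(4):
    print(f"hop {h}: 1-radio {one_radio_bandwidth(h):5.2f} Mbps, "
          f"2-radio {two_radio_bandwidth(h):5.2f} Mbps")
```

At three hops the one-radio chain delivers 54/8 = 6.75 Mbps while the two-radio backhaul still delivers the full rate, which is the "1/2, 1/4" degradation the diagram shows.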

4 A Two-Radio Infrastructure Mesh
[Diagram: two ROOT nodes, RELAY nodes beneath them, and stations ST1–ST9. Each node uses one radio for backhaul uplink and a second radio for backhaul downlink and client access.]

5 Supports Multiple Routing Paths
[Diagram: same ROOT/RELAY/station topology as the previous slide, with an alternate path highlighted.]
- Ensures the same available bandwidth at different levels
- Self-managed performance
- Dynamic load balancing

6 Meshed ESS is the Wireless Equivalent of 802.1d Switch Stack

7 Hub vs. Switch Topologies: Switch requires similar radios.

8 Mesh Topologies: Ad Hoc, Hybrid, Infrastructure

9 ACG Product Offering
1. 60 KB control-layer embedded software upgrade to devices
2. Communications interface with the NMS, used to monitor the network and manage settings

10 Monitor Network

11 Manage Settings

12 Backhaul throughput = 50, backhaul latency = 1.0
[Diagram: low-latency network configuration, with all nodes connected as directly as possible toward the ROOT.]
- Low latency for all nodes
- Poor throughput for distant nodes
- Throughput is sacrificed for latency
Signal strength varies inversely with distance, and devices connect based on best local signal strength, not best throughput. In the WLAN network depicted, signal strength therefore limits overall network throughput (= 50.0).

13 Backhaul throughput = 64, backhaul latency = 1.6
[Diagram: high-throughput network configuration.]
- Distant nodes connect through nearer nodes
- More hops are required for distant nodes
- Latency is now sacrificed for throughput
The ACG software control layer enables each AP to make routing decisions that increase overall throughput (a 20% increase, to 64). The higher throughput comes at the cost of higher latency: the average number of hops increases to 1.6.

14 Backhaul throughput = 59, backhaul latency = 1.2
- The NMS can "tune" the network between the two extremes
- A directive is sent to the control layer in each device
- Devices change associations per the configuration setting
The ACG software layer in each AP dynamically reconfigures the network to satisfy latency objectives while minimizing throughput degradation.

15 Backhaul throughput = 55, backhaul latency = 1.1
- At the 37% tradeoff setting, the backhaul is more latency-centric
- The NMS directive is sent to the control layer in each device, and devices change associations per the configuration setting
Increasing the tradeoff incentive to 37 reduces the latency to 1.1.

16 Backhaul throughput = 50, backhaul latency = 1.0
At the 49% tradeoff setting, the backhaul is at its low-latency extreme: at 49, the cost of connecting to a parent further removed from the root (in number of hops) is too high. The system can thus be tuned to anything between low latency and high throughput.
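The tuning described across slides 12–16 can be sketched as parent selection under a single knob. This is a hypothetical cost function, not the published ACG algorithm: `latency_bias` (0–100, like the 37 and 49 settings above) shifts the choice between fewest hops (low latency) and best link rate (high throughput). All names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    hops_to_root: int   # latency proxy: backhaul hops via this parent
    link_mbps: float    # throughput proxy: achievable link rate

def connect_cost(c: Candidate, latency_bias: int) -> float:
    """Lower is better: weight hop count when latency-biased,
    weight link-rate shortfall when throughput-biased."""
    w = latency_bias / 100.0
    return w * c.hops_to_root + (1 - w) * (54.0 - c.link_mbps) / 54.0

def choose_parent(candidates, latency_bias):
    return min(candidates, key=lambda c: connect_cost(c, latency_bias))

# A distant node's two options: the root directly (weak link, 1 hop)
# or a nearby relay (strong link, 2 hops).
near_root = Candidate("root", hops_to_root=1, link_mbps=6.0)
relay     = Candidate("relay", hops_to_root=2, link_mbps=48.0)

print(choose_parent([near_root, relay], latency_bias=90).name)  # root
print(choose_parent([near_root, relay], latency_bias=10).name)  # relay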

17 Small-Footprint Control Layer Also Provides: Dynamic Load Balancing
[Diagram: a congested backhaul with a congested node.]
- Dynamic load balancing
- Automatic discovery and self-healing
- Switching and automatic channel allocation
As the load on a node increases, its children seek alternate routes based on an increased cost-to-connect value.

18 Dynamic Load Balancing (continued)
[Diagram: Connectivity Cost = 7.]
As the load on a node increases, children seek alternate routes based on the increased cost-to-connect value.

19 Dynamic Load Balancing (continued)
[Diagram: Connectivity Cost = 8.]
As the load on a node increases, it encourages its children to find alternate routes by progressively increasing the connect cost.
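The load-shedding mechanism in slides 17–19 reduces to a simple rule that can be sketched as follows. This is an illustration of the described behavior, not the actual ACG protocol; node names and cost values are assumptions.

```python
# Advertised connect costs for two candidate parents (illustrative).
parents = {"relay_a": 5, "relay_b": 7}

def best_parent(costs: dict) -> str:
    """A child associates with the lowest-cost parent it can hear."""
    return min(costs, key=costs.get)

print(best_parent(parents))  # relay_a

# relay_a becomes congested and progressively raises its advertised
# cost (5 -> 7 -> 8, as in the Connectivity Cost slides above), until
# children drift to the alternative.
parents["relay_a"] = 8
print(best_parent(parents))  # relay_b
```

Because the cost rises progressively rather than all at once, only as many children re-associate as are needed to relieve the congested node.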

20 Multiple Roots for More Redundancy and Throughput
[Diagram: topology with two ROOT nodes.]
The system supports multiple roots for redundancy and increased bandwidth.

21 The Backhaul Is Self-Healing
[Diagram: two nodes turned off; the remaining nodes have reconnected around them.]
Nodes self-configure in case of node failure; the system is inherently redundant and fail-safe.

22 CoS-Based Data Flow (March 13, 2004)
[Chart: buffer depletion with Weighted Fair Queuing, QoS on.]

23 [Image-only slide]

24 [Image-only slide]

25 [Image-only slide]

26 Implementation of algorithms on Hardware

27 Network Monitor

28 Software Integration

29 Demonstration on hardware available on request.
Meshdynamics, 1299 Parkmoor Ave, San Jose, CA 95126. Phone: (408)


