Modelling Robustness Part 1: Prologue Fabrice Saffre
© British Telecommunications plc. A few words about modelling. There are basically two techniques: - Analysis (mathematical and/or numerical). - Simulation (typically Monte Carlo). Most of the time they complement each other nicely, so you're likely to need both (independently of your preferences!). Both require abstracting the problem at hand to some extent (i.e. getting rid of insignificant details).
A few words about… me. I got my PhD modelling the swarming behaviour of social spiders (no kidding!), so by any practical definition I'm a biologist, not a computer scientist. My job now involves developing biologically-inspired algorithms to: - model topological robustness. - manage dynamic response to changing conditions (in a network security context).
The context. Network resilience to attack/failure is a growing concern. There are several routes toward improving robustness: - Increase the ability to withstand damage. - Find new ways to limit damage. The first step, however, is to find a suitable measurement of network state (with respect to dependability/QoS).
Part 2: Complex networks
What is a complex network? You tell me... A fancy but (let's face it) meaningless expression... A (poor) designation for something very real and very widespread, combining elements of graph theory with self-organisation and complex systems. But then, what are complex systems?
Where do you find them? Everywhere... In physics and chemistry (crystals, reaction chains...). In physiology and morphology (neural nets, cellular interactions in the embryo...). In ecology (food webs). In sociology (small worlds, collaboration networks...). In technology (power grids, telecoms...).
What we mean when talking about complex networks: a collection of nodes and links... featuring global invariant properties (diameter, clustering coefficient, etc.)... even though it is produced using local probabilistic connection rules.
Building a hierarchy with the preferential attachment rule (1/4). A scale-free network can be generated if vertices are sequentially added and connected to the existing structure (Barabasi et al., 1999). To obtain the desired architecture, each new vertex needs only select its connection on the basis of the current degree distribution within the network.
Building a hierarchy with the preferential attachment rule (2/4). This can be done simply by attributing to each existing node i a probability of being selected, P_i, depending on its degree k_i: P_i = k_i^α / Σ_j k_j^α. Repeating the connection process for all vertices using this expression (with α = 1) is enough to generate a realistic network.
Building a hierarchy with the preferential attachment rule (3/4). What's meant by "realistic" is that the resulting topology appears similar to that of real networks like the Internet. Faloutsos et al. (1999) found the degree distribution profile of the Internet to obey a power law. Because of the built-in amplification process, the algorithm of Barabasi et al. (1999) also generates such a distribution (with the appropriate slope for α = 1).
Building a hierarchy with the preferential attachment rule (4/4). In other words: a very basic growth algorithm can be used to generate a variety of plausible network models on which to run numerical experiments.
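The growth algorithm described above can be sketched in a few lines. This is an illustrative implementation (the function name is mine, not from the original work) of the α = 1 case: keeping a list in which each node appears once per link endpoint makes a uniform draw from that list automatically degree-proportional.

```python
import random

def preferential_attachment(n, m=1, seed=0):
    """Grow a graph by sequential node addition (Barabasi et al., 1999).

    Each new node makes m links, choosing targets with probability
    proportional to current degree (the alpha = 1 case).
    """
    rng = random.Random(seed)
    edges = [(0, 1)]     # seed graph: two connected nodes
    targets = [0, 1]     # each node listed once per link endpoint
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            # uniform pick from `targets` == degree-proportional pick
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges
```

Because each vertex attaches to the existing structure as it arrives, the resulting graph is connected by construction, which is exactly the "global cohesion" property discussed on the next slide.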
However... As usual, things are not that simple... Global network cohesion (i.e. a path exists between any two nodes) is only guaranteed as long as vertices are added one at a time. The sequential aspect of the connection process is also responsible for the emergence of hierarchy if attractiveness (P_i) grows linearly (α = 1) with node degree (k_i).
The delayed attachment rule (1/2). Connections are initiated when all nodes are already present. For each link, the origin is chosen at random and the target by preferential attachment.
The delayed attachment rule (2/2). This generates a very different topology. Global cohesion is lost (not all nodes are within the giant component). The hierarchy (degree distribution profile) is modified... because the advantage enjoyed by older vertices is lost.
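A minimal sketch of the delayed attachment rule (function name is mine). One detail not specified in the slides is how a target can be selected while all degrees are still zero; this sketch assumes a baseline weight of 1 per node, so isolated nodes remain selectable.

```python
import random

def delayed_attachment(n, n_links, seed=0):
    """All n nodes exist up front; for each link the origin is picked
    uniformly at random and the target by preferential attachment.

    Assumption of this sketch: target weight = 1 + degree, so that
    nodes with degree zero can still be chosen as targets.
    """
    rng = random.Random(seed)
    degree = [0] * n
    edges = []
    for _ in range(n_links):
        origin = rng.randrange(n)
        weights = [1 + degree[j] for j in range(n)]
        weights[origin] = 0                      # forbid self-loops
        target = rng.choices(range(n), weights=weights)[0]
        edges.append((origin, target))
        degree[origin] += 1
        degree[target] += 1
    return edges
```

Unlike sequential growth, nothing here guarantees that every node ends up connected, which is why the giant component no longer spans the whole graph.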
Why consider delayed attachment? Because there are no hidden (implicit) non-linear effects (α = 1 does not generate a power law). But also, more importantly, because it is a better model for highly dynamic architectures like ad hoc networks. Continuous re-mapping of the graph topology also limits (removes?) the advantage enjoyed by older vertices.
What is robustness (in networks)? The ability to sustain accidental damage and remain operational. The ability to sustain intentional damage and remain operational. The ability to survive topological changes, which can often be treated as a special form of accidental damage (nodes move out of range, a router is overloaded...).
How do we measure it? Most authors tend to use a (reductive) practical definition: being robust is being able to maintain most surviving nodes within the giant component after having sustained damage. Accordingly, network robustness is simply inversely proportional to the rate of decay of the giant component's relative size as a function of damage extent.
However (again)... This is obviously a very unrefined view. Maintaining network cohesion (keeping nodes within the giant component) is necessary but not sufficient... because structural changes can cause congestion or routing failure, which can in turn prevent normal operation of the presumably intact network. But we'll have to live with that for now...
What we knew (1/2). Scale-free networks are very robust to accidental node failure... because the strong hierarchy makes it unlikely that random events hit the few high-degree nodes responsible for global cohesion (Albert et al., 2000). As a result, the average size of the giant component decays gracefully with cumulative node failure (up to a point...).
What we knew (2/2). "Graceful" doesn't mean slow, only that catastrophic events are rare (the relative size of the giant component doesn't drop faster than that of the network as a whole). Initiating more than one connection per node is an obvious way of increasing robustness (multiple paths available between graph regions).
What we found (1/5). The decay of the giant component's average size can be approximated using a simple non-linear expression (whatever the connection rules), in which X_c and an exponent are constants, while x is a function of the fraction of nodes that have been removed.
What we found (2/5). Sequential addition of nodes, with variable α (A) and average degree (B).
What we found (3/5). Delayed attachment, with variable α (A) and average degree (B).
What we found (4/5). The fitted constants (X_c and the exponent) vary as a function of average degree, which could become the basis for a predictive tool(?).
What we found (5/5). But being able to estimate the decay of the average size of the giant component isn't necessarily useful... There are regions of the parameter space for which the distribution is strongly bimodal and the average is virtually never observed!
Conclusions. A huge variety of potential architectures can be simulated simply by applying different local connection rules. The study of complex networks provides tools for a quantitative description of those architectures' properties. But we are still a long way from an efficient and robust network design... especially if the topology is to be dynamic!
Part 3: RAn (Robustness Analyser)
What RAn can do for you: simulate cumulative node failure and plot the evolution of the largest component's relative size for any given topology. Conduct basic statistical analysis of the numerical data. Compute a set of global variables summarising the network's behaviour under stress. All in a matter of seconds (for N up to 10^4).
What RAn cannot (yet) do: take into account the additional effects topological changes can have beyond affecting the relative size of the largest component. This unfortunately includes the formation of bottlenecks due to re-routing of traffic through surviving nodes. But we are working on it...
Summary: RAn is a lightweight, easy-to-use network robustness analyser. Its primary purpose is to quickly obtain a rough evaluation of (and comparison between) alternative topologies. It is therefore a powerful tool in the early stages of network design (or audit), but it is meant to be used in conjunction with other, more specific simulators.
Example: robust small worlds? Most people belong to a highly clustered social network: I know most of my friends' friends... But many have a few acquaintances outside the dense local mesh: I have never met some of my friends' friends... Hence the very popular "small world" effect.
Small worlds have well-known interesting properties: their diameter grows as a logarithmic function of their size... even when the rewiring probability (the proportion of long-range connections) is relatively low. They are notoriously difficult to navigate, though: possible routing problems for otherwise very appealing network applications!
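The "rewiring probability" above refers to the classic Watts–Strogatz construction: a ring where each node is linked to its k nearest neighbours, with each link redirected to a random target with probability p. A minimal sketch (function and parameter names are mine):

```python
import random

def small_world(n, k=4, p=0.05, seed=0):
    """Watts-Strogatz-style ring lattice: each node linked to its k
    nearest neighbours (k/2 on each side), then each link rewired to
    a uniformly random target with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for hop in range(1, k // 2 + 1):
            j = (i + hop) % n                 # regular short-range link
            if rng.random() < p:              # rewire the far endpoint
                j = rng.randrange(n)
                while j == i or (min(i, j), max(i, j)) in edges:
                    j = rng.randrange(n)
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)
```

With p = 0 this is the brittle "not-so-small world" ring lattice of the next slide; raising p towards the 1%–10% range used below adds the long-range shortcuts that give the small-world effect.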
But how robust are they?
Obviously, it depends on several factors: how far do short-range links go (~ how many people are there in a local cluster)? - At one end of the spectrum is the not-so-small world, that is, a simple ring (local connections limited to 1 hop, no long-range links): fairly brittle! - At the other end is the fully connected mesh (everybody knows everybody): unbreakable! In between (i.e. true SW networks), a key question seems to be: what is the proportion of long-range links?
Benchmark: the basic ring. Best fit, X_c ~
Basic ring + 2-hop connections (no long-range links...): X_c ~ 0.064
Classic small world (rewiring probability = 1%): X_c ~ 0.21
Classic small world (rewiring probability = 5%): X_c ~ 0.37 (best fit shown)
Classic small world (rewiring probability = 10%): X_c ~ 0.43
Evolution of X_c as calculated by RAn
Evolution of the fitted exponent (more puzzling!): a maximum?
Equivalent scale-free network (same size and average degree): X_c ~ 0.61
Equivalent scale-free network under attack: X_c ~ 0.44 (best fit shown)
Conclusions. There are indications that a scale-free network featuring extensive redundancy (2 connections per node) is more robust to node failure than a small world... But when under attack, the behaviour of the scale-free architecture appears remarkably similar to that of its counterpart... which is understandable, considering that a small world cannot really be attacked!
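The difference between "failure" and "attack" throughout these slides is only the removal order: uniformly random for accidental failure, highest degree first for a targeted attack (which is why a small world, having no hubs, "cannot really be attacked"). A sketch of the two orderings (function names are mine); feeding growing prefixes of either list into a giant-component routine yields decay curves like those above.

```python
import random
from collections import defaultdict

def attack_order(edges):
    """Nodes sorted by descending degree: the removal order for a
    targeted attack on the hubs."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sorted(degree, key=degree.get, reverse=True)

def failure_order(edges, seed=0):
    """Uniformly random removal order: accidental node failure."""
    nodes = list({u for e in edges for u in e})
    random.Random(seed).shuffle(nodes)
    return nodes
```

In a scale-free graph the two orderings produce very different curves (hubs fall early under attack); in a near-regular small world every node looks much the same, so attack and failure nearly coincide.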
Practical demonstration(s)...