Lecture 3 - Why the Internet only just works - What can we do about it? D.Sc. Arto Karila Helsinki Institute for Information Technology (HIIT) T – Special Course in Future Internet Technologies M.Sc. Mark Ain Helsinki Institute for Information Technology (HIIT)
*** NOTICE *** The following readings are now mandatory: DONA, I3, SEATTLE The following lectures are cancelled: Tue, Mon, Mon
Computer networking Developed for mainframes (left: ENIAC; right: IBM S/360) Sharing devices with addresses: computers, mass storage, printers, etc. Point-to-point traffic between two devices or network interfaces The old paradigm still lives on even though the world around it has completely changed Picture source: IDG News Service
Contents What’s wrong with the Internet today What can we do about it?
What’s wrong with the Internet today? 1) Sender empowerment 2) Endpoint-centrism 3) Infrastructure trustworthiness 4) Application deployment 5) Congestion control 6) Inter-domain routing 7) Multi-homing 8) Address space 9) Identifier-locator unification / mobility 10) QoS 11) Multicast and caching
Sender empowerment In the 1960s… Computers were large and very resource-limited by modern standards Data was stored, input, and output by physical means, e.g. punch cards; you took your data with you ARPANET was organized to address the need for efficient resource sharing amongst the computers of the time (NOT content sharing!) The send-receive communication paradigm was simple, arguably obvious, and well suited to the purposes of the time
Sender empowerment Today… Computers are small, resources are abundant, and content is at the forefront of (user) attention Send-receive may no longer be optimal Result: spam, DoS, concealment (firewalls, middleboxes, etc.)
Quick look: spam (2009) Estimates… Upwards of $130 billion USD in global losses (2009 USD, average EUR-USD exchange rate, unadjusted for inflation as of 2012) ~62 trillion messages per year Server-side filtering could hypothetically save ~135 TWh of energy per year ≈ ~17 million metric tons of CO2 emissions
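A quick sanity check of the slide's energy and CO2 figures (the inputs are the slide's own estimates, not measurements):

```python
# Back-of-the-envelope check of the slide's figures.
energy_saved_twh = 135             # TWh/year saved by server-side filtering
co2_saved_tonnes = 17e6            # metric tons of CO2/year

kwh = energy_saved_twh * 1e9       # 1 TWh = 1e9 kWh
implied_intensity = co2_saved_tonnes * 1000 / kwh   # kg CO2 per kWh

print(f"implied grid carbon intensity: {implied_intensity:.3f} kg CO2/kWh")
# prints ~0.126 kg/kWh, a plausible grid-mix figure for the period,
# so the two numbers on the slide are at least mutually consistent
```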
Endpoint-centrism The future: content-centrism Get “x” simply by asking for “x”… the network finds “x” and delivers it innately based on “x”
Endpoint-centrism The reality: endpoint-centrism Get “x” by asking WHERE is “x”, receiving response “y”, then fetching “x” from “y”
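The two-step indirection can be sketched as follows; the names, addresses, and tables are invented stand-ins for DNS and an origin server.

```python
# Toy illustration of endpoint-centrism: to get "x" the client first asks
# WHERE "x" is (name -> locator), then fetches "x" FROM that locator.
# All names and tables here are invented for illustration.

DNS = {"example.org": "192.0.2.10"}               # name -> locator
HOSTS = {"192.0.2.10": {"/x": b"contents of x"}}  # locator -> content store

def fetch(name: str, path: str) -> bytes:
    locator = DNS[name]           # step 1: resolve WHERE "x" is -> "y"
    return HOSTS[locator][path]   # step 2: fetch "x" FROM "y"

# A content-centric network would collapse both steps into one request
# for the data itself: get("x"), wherever "x" happens to live.
print(fetch("example.org", "/x"))  # b'contents of x'
```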
Infrastructure trustworthiness Trust is not fully rational, but it can be given a mathematical foundation The Internet was developed for a community where everybody was assumed trustworthy Now that the Internet is used by everybody, it is vital to enable communication between parties that do not trust each other We need mechanisms by which people and companies can build and evaluate trust Good reputation can be made an asset worth protecting Combining privacy and reputation is challenging
Application deployment “There is a vicious circle – application developers will not use a new protocol (even if it is technically superior) if it will not work end-to-end; OS vendors will not implement a new protocol if application developers do not express a need for it; NAT and firewall vendors will not add support if the protocol is not in common operating systems; the new protocol will not work end-to-end because of lack of support in NATs and firewalls.” - M. Handley
Application deployment The E2E principle should in theory make it easy to deploy applications over many hosts without worrying about interaction problems across the network Unfortunately, this is not the case, as evolutionary developments and patchwork solutions (e.g. NAT) have broken E2E on many levels
Application deployment Internet stagnancy feedback loop
Internet stagnancy feedback loop A classic “chicken-and-egg” problem Discussion: which happens first?
Congestion control Congestion control was implemented at the transport layer (TCP, mid-to-late 1980s) because it was already too late in the Internet’s development to change the core protocol stack TCP congestion control is largely successful but incremental, and plagued by shortcomings (next slide)
Congestion control TCP problems… 1. Only reacts to congestion, does not proactively prevent it; convergence times are insufficient 2. Changing application and per-flow requirements lead to a variety of security, performance, and compatibility problems 3. Poor performance over links with a high bandwidth-delay (B*D) product; too slow to converge, too aggressive backoff 4. Not designed for wireless environments; TCP reacts to packet loss as though it were congestion
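Problem 1 above, the purely reactive additive-increase/multiplicative-decrease (AIMD) behaviour of TCP congestion avoidance, can be sketched in a few lines. This is a toy simulation, not real TCP: the window units, loss pattern, and parameters are invented for illustration.

```python
# Minimal AIMD (additive-increase, multiplicative-decrease) sketch of
# TCP congestion avoidance. Units are segments per RTT.

def aimd(rounds, loss_at, cwnd=1.0, incr=1.0, decr=0.5):
    """Return the congestion-window trace over `rounds` RTTs."""
    trace = []
    for rtt in range(rounds):
        if rtt in loss_at:
            cwnd = max(1.0, cwnd * decr)   # multiplicative decrease on loss
        else:
            cwnd += incr                   # additive increase otherwise
        trace.append(cwnd)
    return trace

trace = aimd(rounds=10, loss_at={5})
print(trace)  # [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Note how the window only shrinks *after* a loss and then needs several RTTs to grow back one segment at a time; on a high bandwidth-delay-product path that linear recovery is exactly the slow convergence the slide complains about.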
Inter-domain routing The current inter-operator routing protocol, BGP-4, does not fulfil modern requirements, but no successor is in sight Tier-1 operators (AT&T, MCI, Sprint, C&W, etc.) are a group of about a dozen global operators with mutual peering agreements In practice they form a cartel that wants to cement the market and does not advocate development
Inter-domain routing BGP policy routing mechanisms were a reaction to the growth in users and the potential for commercial competition BGP operation is centered on the assumptions that… ASes are separate and equal Route-path information is commercially sensitive BGP therefore attempts to avoid unnecessarily releasing route-path information, and is subject to misconfiguration, vulnerabilities, slow convergence, etc.
Multi-homing Reliability, transparent failover, and load sharing often necessitate multi-homed connections Problem: the mere presence of multiple IP prefix announcements on a wide scale removes the benefits of hierarchical IP aggregation
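The aggregation problem can be illustrated with Python's standard `ipaddress` module: adjacent prefixes reached through a single provider collapse into one routing-table entry, while a multi-homed prefix must stay visible on its own. The prefixes below are documentation addresses chosen purely for illustration.

```python
# Why multi-homing hurts hierarchical aggregation: a provider can
# advertise adjacent customer prefixes as one aggregate, but a
# multi-homed customer's prefix must be announced separately by
# every provider and so cannot be folded into the aggregate.
import ipaddress

customers = [ipaddress.ip_network("198.51.100.0/25"),
             ipaddress.ip_network("198.51.100.128/25")]

# Single-homed: both /25s collapse into one routing-table entry.
aggregated = list(ipaddress.collapse_addresses(customers))
print(aggregated)  # [IPv4Network('198.51.100.0/24')]

# Multi-homed: if the second /25 is also announced via another provider,
# it must remain a separate, globally visible entry, defeating the
# hierarchy that keeps the default-free routing table small.
```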
Address space IPv4 was once thought inexhaustible Despite evolutionary patchwork (NAT, DHCP, improved allocation policies, reclamation projects, etc.), IPv4 is exhausted IPv6???
IPv6 IPv6 was defined in 1995 and was expected to spread quickly It is still hardly used in Western countries The main improvement of IPv6 is moving from 32-bit to 128-bit addresses IPv6 was defined at a time when nobody could foresee all of the uses and needs of the Internet that we have now The transition to IPv6 will be a long one, and it will not solve most of the problems (Figure: planned vs. actual IPv6 adoption)
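The scale of the 32-bit to 128-bit jump is easy to understate; a quick calculation makes it concrete:

```python
# Address-space sizes implied by the slide's 32-bit vs 128-bit figures.
ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(f"IPv4 addresses: {ipv4:,}")        # 4,294,967,296 (about 4.3 billion)
print(f"IPv6 addresses: {ipv6:.3e}")      # 3.403e+38

ratio = ipv6 // ipv4                      # 2**96
print(f"IPv6 offers {ratio:.2e} times as many addresses")
```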
Identifier-locator unification Mobility raises 5 fundamental problems… 1) Locating the mobile host and/or service 2) Preserving communication 3) Disconnecting gracefully 4) Hibernating efficiently 5) Reconnecting quickly The root cause of problems 1 and 2 is the semantic overload of the IP address, i.e. identifier-locator unification; the last three are largely unaddressed!
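A minimal sketch of the identifier-locator split that would address problems 1 and 2: sessions bind to a stable host identifier, and a separate mapping layer (in the spirit of HIP or LISP, not their actual protocols) resolves the identifier to its current locator. All names and addresses below are invented for illustration.

```python
# Toy identifier-locator split: communication addresses a stable
# identifier; only the identifier -> locator mapping changes on movement.

mapping = {"host-id-42": "192.0.2.10"}    # identifier -> current locator

def send(identifier: str, payload: bytes) -> str:
    """Deliver payload to wherever the identifier currently points."""
    return f"sent {len(payload)} bytes to {mapping[identifier]}"

print(send("host-id-42", b"hello"))       # sent 5 bytes to 192.0.2.10

mapping["host-id-42"] = "203.0.113.7"     # host moves: only the mapping updates
print(send("host-id-42", b"hello"))       # sent 5 bytes to 203.0.113.7
```

Because the upper layers never saw the locator, the session survives the move; with today's unified IP address, the "name" of the peer itself changed and the connection breaks.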
QoS DiffServ and IntServ are neither built into the network nor protocol-independent DiffServ does not provide end-to-end guarantees IntServ requires cooperation among providers and per-flow network state How do we provide protocol-independent QoS, built into the architecture, preserving E2E, without necessarily requiring network state?
Multicast and Caching
In summary… No major changes have been made to the core protocols of the Internet since 1993 (CIDR) The core protocols of the Internet are ossified while the needs have developed significantly
Contents What’s wrong with the Internet today What can we do about it?
Evolution vs. revolution The Internet has developed since the 1970s in an evolutionary way, with no big changes As concluded before, this has led to a situation where it is very hard to make changes to the core protocols Among researchers and developers of the Internet, there is a growing opinion that something fundamental has to be done at some point If the Internet were designed from scratch, it would probably turn out very different from what it has evolved into
Evolution vs. revolution Various clean-slate solutions are current research topics, and some of them may lead to a new Internet It is possible that all the protocol layers, including the Internet Protocol, will change However, it looks like any new solution would have to be able to operate as an overlay above the existing IP infrastructure in order to have a chance to proliferate The publish/subscribe paradigm (pub/sub) is one of the most promising new paradigms
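A minimal sketch of the pub/sub paradigm mentioned above: receivers declare interest in topics, and a rendezvous point delivers matching publications, so senders never address receivers directly. This is a toy in-process broker; the API and topic names are invented for illustration.

```python
# Toy publish/subscribe broker: delivery is driven by receiver interest,
# not by sender-chosen destination addresses.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, data):
        for cb in self.subs[topic]:     # deliver only to interested parties
            cb(data)

broker = Broker()
received = []
broker.subscribe("weather/helsinki", received.append)
broker.publish("weather/helsinki", "-5 C")
broker.publish("weather/oslo", "-2 C")  # no subscribers: silently dropped
print(received)                         # ['-5 C']
```

The inversion is the point: unwanted traffic such as spam or DoS floods has nowhere to go unless a receiver has expressed interest, which directly attacks the sender-empowerment problem from the first half of the lecture.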
Microeconomics Over the past ten years, microeconomics has grown in importance We need economic mechanisms that encourage people to do good for the community The Internet was developed with public funds for research and education, without any commercial considerations If we want to inject resources into the network, the party paying for them must also be able to receive (some of) the revenues We need to create ways for companies and people to improve their own economies by doing things beneficial for the community
For tomorrow… READ: Van Jacobson, Diana K. Smetters, James D. Thornton, Michael F. Plass, Nicholas H. Briggs, and Rebecca L. Braynard. Networking Named Content. In Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '09). ACM, New York, NY, USA.
Thank you for your attention! Questions? Comments?