SEDCL: Stanford Experimental Data Center Laboratory

Presentation transcript:

1 SEDCL: Stanford Experimental Data Center Laboratory

2 Tackle Data Center Scaling Challenges with Stanford’s research depth and breadth

3 Data Center Scaling
A network of data centers and web services is the key building block for future computing.
Factors contributing to data center scaling challenges:
– Explosive growth of data with no locality of any kind
– Legal requirements to back up data in geographically separated locations, a big concern for the financial industry
– Emergence of mobile and cloud computing
– Massive "interactive" web applications
– Energy as a major new factor and constraint
– Increasing capex and opex pressures
Continued innovation is critical to sustain this growth.

4 Stanford Research Themes
RAMCloud: main-memory-based persistent storage
– Extremely low-latency RPC
Networking
– Large, high-bandwidth, low-latency network fabric
– Scalable, error-free packet transport
– Software-defined data center networking with OpenFlow
Servers and computing
– Error- and failure-resilient design
– Energy-aware and energy-proportional design
– Virtualization and mobile VMs

5 Major research topics of SEDCL
RAMCloud: scalable DRAM-based storage
– Scalable nvRAM
– All data in DRAM all the time
Interconnect fabric
– Bufferless networks: low-latency, high-bandwidth network
Packet transport
– Reliable delivery of packets: R2D2 (L2.5)
– Congestion management: QCN (IEEE 802.1Qau), ECN-HAT, DCTCP (a QCN sketch follows this list)
– Programmable bandwidth partitioning for multi-tenant DCs: AF-QCN
– Low-latency 10GBase-T
Related projects
– OpenFlow
– Energy-aware and energy-proportional design
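To make the congestion-management entries concrete, here is a minimal Python sketch of QCN-style feedback in the spirit of IEEE 802.1Qau: the congestion point derives a feedback value from the queue's offset from its setpoint and its growth rate, and the reaction point decreases its sending rate multiplicatively. The parameter values (Q_EQ, W, GD) and function names are illustrative assumptions, not taken from the standard or the slides, and the standard's frame-sampling machinery is omitted.

```python
# Sketch of QCN-style congestion notification (in the spirit of IEEE 802.1Qau).
# Constants are illustrative assumptions, not the standard's normative values.

Q_EQ = 26.0   # equilibrium queue setpoint, in packets (assumed)
W = 2.0       # weight on queue growth rate (assumed)
GD = 1 / 128  # rate-decrease gain (assumed)

def congestion_feedback(q_len, q_old):
    """Congestion point: feedback from queue offset plus queue velocity."""
    q_off = q_len - Q_EQ      # how far the queue is above its setpoint
    q_delta = q_len - q_old   # how fast the queue is growing
    return -(q_off + W * q_delta)

def react(rate, fb):
    """Reaction point: multiplicative rate decrease on negative feedback."""
    if fb < 0:
        rate *= max(0.0, 1 - GD * abs(fb))
    return rate

# Example: queue at 40 packets and rising from 30 -> strong negative feedback.
fb = congestion_feedback(q_len=40, q_old=30)   # -(14 + 2*10) = -34
print(react(rate=10e9, fb=fb))                 # ~7.34 Gb/s after the decrease
```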

6 Experimentation is Key to Success
Many promising ideas and technologies
– Will need iterative evaluation at scale with real applications
Interactions of subsystems and mechanisms are not clear
– Experimentation is the best way to understand the interactions
It is difficult to experiment with the internal mechanisms of a DC
– No experimental facilities, and that is a big barrier to innovation
Ongoing efforts to enable experimentation
– Facebook, Microsoft, NEC, Yahoo!, Google, Cisco, Intel, …

7 Overview of Research Projects
RAMCloud
Packet transport mechanisms
– Reliable data delivery: R2D2 (L2.5)
– ECN-HAT, DCTCP: collaboration with Microsoft
Data center switching fabric
– Extremely low latency, low errors and congestion (bufferless)
– High port density with very large bisection bandwidth (project just initiated)

8 RAMCloud Overview
Lead: John Ousterhout
Storage for datacenters
– 1,000-10,000 commodity servers
– 64 GB DRAM/server
– All data always in RAM
– Durable and available
– Low-latency access: 5 µs RPC
– High throughput: 1M ops/sec/server
[Diagram: application servers issuing RPCs to storage servers within a datacenter]
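To make the storage model concrete, here is a hypothetical sketch of what a RAMCloud-style client interface could look like: tables of key-value pairs held entirely in server DRAM, reached over low-latency RPC. The class and method names (RamCloudClient, create_table, write, read) are invented for illustration and are not RAMCloud's actual API; the in-process dictionary stands in for the remote storage servers.

```python
# Hypothetical sketch of a RAMCloud-style client: all data lives in the
# DRAM of storage servers and is reached via very low-latency RPC.
# Names here are illustrative, not the real RAMCloud API.

class RamCloudClient:
    def __init__(self):
        # table name -> {key: value}; stands in for RPCs to remote DRAM servers
        self.tables = {}

    def create_table(self, name):
        self.tables.setdefault(name, {})

    def write(self, table, key, value):
        # In a real system this is a ~5 microsecond RPC, with the write also
        # made durable (e.g., logged/replicated) before the call returns.
        self.tables[table][key] = value

    def read(self, table, key):
        # Served entirely from DRAM on the storage server.
        return self.tables[table][key]

client = RamCloudClient()
client.create_table("users")
client.write("users", "alice", b"profile-bytes")
print(client.read("users", "alice"))
```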

9 RAMCloud Research Issues
Data durability and availability
Low-latency RPC: 5 microseconds
– Need a suitable network!
Data model
Concurrency/consistency model
Data distribution, scaling
Automated management
Multi-tenancy
Client-server functional distribution

10 Layer 2.5: Motivation and use cases

11 L2.5 Research Issues
Determine a simple signaling method
– Simplify (or get rid of) headers/tags for L2.5 encapsulation
Develop and refine the basic algorithm for TCP
– In the kernel
– In hardware (NICs)
Develop the algorithm for storage (FC, FCoE)
Deploy in a large testbed
Collaborate on standardization

12 DCTCP
DCTCP: TCP for data centers
– Operates with really small buffers
– Optimized for low latency
– Uses ECN marking
→ With Mohammad Alizadeh and Greenberg et al. at Microsoft
→ Influenced by ECN-HAT (with Abdul Kabbani)

13 DCTCP: Transport Optimized for Data Centers
1. High throughput
– Creating multi-bit feedback at TCP sources
2. Low latency (milliseconds matter)
– Small buffer occupancies due to early and aggressive ECN marking
3. Burst tolerance
– Sources react before packets are dropped
– Large buffer headroom for bursts
Key ideas:
1. Use the full information in the stream of ECN marks
2. Adapt quickly, and in proportion to the level of congestion
[Diagram: packet buffer with marking threshold K; packets are ECN-marked when the queue exceeds K]

ECN marks            | DCTCP             | TCP
1 0 1 1 1            | Cut window by 40% | Cut window by 50%
0 0 0 0 0 0 0 0 0 1  | Cut window by 5%  | Cut window by 50%

[Plots: DCTCP reduces variability and queuing in incast and queue-buildup scenarios]
A sketch of the window rule follows.
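The window cuts in the table follow from DCTCP's rule of reducing cwnd in proportion to the estimated fraction of ECN-marked packets (a cut of alpha/2): four marks out of five gives a 40% cut, one mark in ten gives a 5% cut. Below is a minimal Python sketch; the EWMA gain g = 1/16 is a typical choice assumed here, not a value stated on the slide, and the examples take alpha as fully converged to the marked fraction.

```python
# Minimal sketch of DCTCP's multi-bit feedback: maintain alpha, a running
# estimate of the fraction of ECN-marked packets, and cut cwnd by alpha/2.

G = 1 / 16  # EWMA gain for alpha; a typical value, assumed rather than sourced

def update_alpha(alpha, marks):
    """marks: list of 0/1 ECN marks observed over roughly the last RTT."""
    frac = sum(marks) / len(marks)     # F: fraction of packets marked
    return (1 - G) * alpha + G * frac  # alpha <- (1 - g)*alpha + g*F

def new_cwnd(cwnd, alpha):
    """Cut the window in proportion to the level of congestion."""
    return cwnd * (1 - alpha / 2)

# Reproducing the slide's examples, with alpha converged to the marked fraction:
print(new_cwnd(100, alpha=4 / 5))    # marks 1 0 1 1 1 -> 40% cut -> 60.0
print(new_cwnd(100, alpha=1 / 10))   # one mark in ten -> 5% cut  -> 95.0
```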

14 Research Themes and Teams
– Networking: N. McKeown, B. Prabhakar, G. Parulkar
– Virtualization (server and network): M. Rosenblum, N. McKeown
– Energy aware: B. Prabhakar, P. Levis, K. Kozyrakis
– Web app framework: J. Ousterhout
– Resilient systems: M. Rosenblum, S. Mitra, N. McKeown
– Storage: J. Ousterhout, M. Rosenblum, D. Mazieres

