Memory Consistency Models in Wide-area Storage System – Or What Do They Mean?
CS258 Spring 2002
Mark Whitney and Yitao Duan

Motivations
- Global-scale computing is approaching
- Wide-area storage is becoming a reality
- The greed for processing power calls for a marriage of the two
- Traditional approach to large-scale data processing: hierarchy
- What if new algorithms need to touch more data? Scale up SMP?
- Use OceanStore as a testbed

Data and Computation Hungry Applications
- Quantum Chromodynamics
- Biomolecular Dynamics
- Weather Forecasting
- Cosmological Dark Matter
- Biomolecular Electrostatics
- Electric and Magnetic Molecular Properties

Data Grid for High Energy Physics - CalTech
[Diagram: the tiered LHC data grid. The online system sends ~100 MBytes/sec (out of ~PBytes/sec of raw detector data) to the CERN Computer Centre offline processor farm, ~20 TIPS (Tier 0, with HPSS mass storage). Tier 1 regional centres (FermiLab ~4 TIPS; France, Italy, Germany) connect at ~622 Mbits/sec, or Air Freight (deprecated). Tier 2 centres such as Caltech (~1 TIPS) connect at ~622 Mbits/sec; institutes (~0.25 TIPS, each with a physics data cache) feed Pentium II 300 MHz physicist workstations at ~1 MBytes/sec (Tier 4). 1 TIPS is approximately 25,000 SpecInt95 equivalents.]
- There is a "bunch crossing" every 25 nsecs
- There are 100 "triggers" per second
- Each triggered event is ~1 MByte in size
- Physicists work on analysis "channels"; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server
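A quick consistency check on the diagram's rates: 100 triggers/sec × ~1 MByte per triggered event ≈ 100 MBytes/sec, which matches the ~100 MBytes/sec link into the CERN Computer Centre.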

Background
What is OceanStore?
- A global persistent data store, scalable to billions of users
- High availability, fault tolerance, security
- Caching to reduce network congestion and guarantee availability and performance
- Flexible consistency semantics
Observations and questions
- Remarkable resemblance to an MP memory system: replica = cache, client = processor, data object = memory item
- OceanStore consistency semantics are typically those of a file system. What do they mean to a program?

Running Parallel Applications on OceanStore
Why do we try this?
- Distributed computing: the Grid, world-wide computing
- A new programming paradigm? OceanStore is a new phenomenon; will it bring out new applications?
- Where will computing infrastructure go, given the advances in networking, storage, and parallel processing?

Shared Virtual Memory Space
[Diagram: layered stack, with the parallel application (ParaApp) on top of the OS kernel, which sits on top of the OceanStore client (OClient).]

Running SMP Apps on OceanStore
- OceanStore data objects are globally identified
- Virtual addresses in the application address space are mapped to OceanStore object IDs
- Shared-memory accesses are turned into OceanStore requests
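A rough illustration of the mapping described above; this is a minimal sketch, and the page-per-object layout and the os_read/os_write client calls are hypothetical stand-ins, not the actual OceanStore API:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12                  /* assume 4 KB sharing granularity */
#define PAGE_SIZE  ((size_t)1 << PAGE_SHIFT)

typedef uint64_t os_guid_t;            /* hypothetical OceanStore object ID */

/* Hypothetical OceanStore client calls; the real API differs. */
extern int os_read(os_guid_t id, size_t offset, void *buf, size_t len);
extern int os_write(os_guid_t id, size_t offset, const void *buf, size_t len);

/* Map a shared virtual address to (object ID, offset): here one
 * OceanStore object per page, with the ID derived from the page number. */
static os_guid_t addr_to_guid(uintptr_t va)   { return (os_guid_t)(va >> PAGE_SHIFT); }
static size_t    addr_to_offset(uintptr_t va) { return (size_t)(va & (PAGE_SIZE - 1)); }

/* A shared-memory load becomes an OceanStore read request... */
uint32_t shared_load32(uintptr_t va)
{
    uint32_t val = 0;
    os_read(addr_to_guid(va), addr_to_offset(va), &val, sizeof val);
    return val;
}

/* ...and a store becomes an OceanStore update. */
void shared_store32(uintptr_t va, uint32_t val)
{
    os_write(addr_to_guid(va), addr_to_offset(va), &val, sizeof val);
}
```

Once loads and stores are funneled through these two calls, the consistency the application observes is exactly whatever the store provides for reads and updates, which is what makes the memory-model question concrete.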

Consistency Models
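The transcript preserves only the slide title here. For context, the canonical example of why the model matters; a minimal sketch in plain C (a real implementation would need volatile or atomics, and under OceanStore each access would be a replica read or update):

```c
/* Classic publish/consume race between two threads (here, two
 * OceanStore clients sharing replicated objects). */
int data = 0;
int flag = 0;

void producer(void)
{
    data = 42;   /* write the payload */
    flag = 1;    /* then publish it   */
}

void consumer(void)
{
    while (flag == 0)
        ;        /* spin until published */

    /* Under sequential consistency, data == 42 is guaranteed here.
     * Under a relaxed model, or when replica updates propagate
     * independently, the two writes may be observed out of order,
     * so data can still read 0 unless a fence or a synchronizing
     * store operation orders them. */
    int r = data;
    (void)r;
}
```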

Performance Evaluation
- OceanStore …(# of inner rings, …)
- Nachos++! MIPS R3000 processor w/FP
- Stanford SPLASH-2 benchmark suite
- 4 x 4 matrix LU decomposition
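For reference, the kernel being measured; a minimal unblocked LU sketch in plain C, not the actual SPLASH-2 LU code, which is blocked and parallel:

```c
#include <stdio.h>

#define N 4

/* In-place unblocked LU factorization without pivoting: afterwards,
 * the strict lower triangle of a holds L (unit diagonal implied)
 * and the upper triangle holds U. */
void lu_decompose(double a[N][N])
{
    for (int k = 0; k < N; k++) {
        for (int i = k + 1; i < N; i++) {
            a[i][k] /= a[k][k];                 /* multiplier l_ik */
            for (int j = k + 1; j < N; j++)
                a[i][j] -= a[i][k] * a[k][j];   /* update trailing submatrix */
        }
    }
}

int main(void)
{
    double a[N][N] = {
        {4, 3, 2, 1},
        {3, 4, 3, 2},
        {2, 3, 4, 3},
        {1, 2, 3, 4},
    };
    lu_decompose(a);
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%8.4f ", a[i][j]);
        printf("\n");
    }
    return 0;
}
```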

Computation Time
[Chart: number of cycles]

Network Latency
[Chart: milliseconds]

Open Questions
- Programming model
- Cache policy
- Consistency models
- Sharing granularity

Conclusion and Future Work
- Matrix decomposition runs on OceanStore!
- Wide-area distributed synchronization is expensive (not surprising)
- A better memory model is needed if we want to run shared-memory applications
- Message passing? Seems to be a better match (use explicit OceanStore APIs)
- A new programming model?
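A sketch of what the message-passing direction could look like; the os_put/os_get calls and the one-object-per-channel convention are hypothetical illustrations, not an actual OceanStore API:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint64_t os_guid_t;  /* hypothetical OceanStore object ID */

/* Hypothetical explicit client calls; the real OceanStore API differs. */
extern int os_put(os_guid_t id, const void *buf, size_t len);
extern int os_get(os_guid_t id, void *buf, size_t len); /* blocks for next update */

/* One OceanStore object per channel: a send is one explicit
 * wide-area update, a receive waits for one. */
void channel_send(os_guid_t chan, const void *msg, size_t len)
{
    os_put(chan, msg, len);
}

void channel_recv(os_guid_t chan, void *msg, size_t len)
{
    os_get(chan, msg, len);
}
```

The point is that each send becomes one explicit, visible wide-area operation instead of coherence traffic hidden behind ordinary loads and stores.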