Dissemination-based Data Delivery Using Broadcast Disks.

Communication Asymmetry
- Dissemination of data to a large number of clients
  - Stock prices, news, sports scores, software distribution
- Access by millions of users
- The data volume from server to clients is much larger than from clients to server
  - Network asymmetry: far more bandwidth from server to clients than on the uplink
  - Slow uplink
  - Many more clients than servers
  - Updates and new information originate at the server
- Point-to-point protocols such as HTTP and FTP limit the number of connections a server can support

Broadcast Disk
- Proposes a mechanism called Broadcast Disks to provide database access to mobile clients.
- The server continuously and repeatedly broadcasts data; a mobile client picks up the items it needs as they go by.
- Multiple disks of different sizes are superimposed on the broadcast medium.
- Exploits client storage resources for caching data.

Topics of Discussion
- Server Broadcast Programs
- Client Cache Management
- Prefetching
- Read/Write Case

Server Broadcast Programs
- The data server must construct a broadcast program that meets the needs of the client population.
- The simplest approach is for the server to take the union of the required items and broadcast the resulting set cyclically.
- Such a flat broadcast acts as a single additional layer in a client's memory hierarchy.
- In a flat broadcast the expected wait for an item on the broadcast is the same for all items.
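
To make the last point concrete (a standard property of cyclic broadcasts rather than something stated on the slide): in a flat broadcast of N equal-size pages, every page appears exactly once per cycle, so a request arriving at a random instant waits on average half a cycle, whichever page it wants.

```latex
E[\text{wait}] \;=\; \frac{N}{2}\ \text{page-broadcast times, for every page}
```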

Server Broadcast Programs

- Broadcast Disks are an alternative to flat broadcasts.
- The broadcast is structured as multiple disks of varying sizes, each spinning at a different rate.

Server Broadcast Programs (figure: Flat Broadcast, Skewed Broadcast, Multi-disk Broadcast)

Server Broadcast Programs
- For uniform access probabilities, a flat broadcast has the best expected performance.
  - Increasing the broadcast rate of one data item decreases the broadcast rate of the others, since the broadcast is continuous and of fixed capacity.
- For increasingly skewed access probabilities, non-flat programs perform better.
- Multi-disk programs perform better than randomly skewed programs.
  - If the inter-arrival time of a page (its broadcast rate) is fixed, the expected delay for a request arriving at a random time is half the gap between successive broadcasts of the page. If there is variance in the inter-arrival time, the gaps between broadcasts have different lengths, and a request is more likely to arrive during a large gap, which raises the expected delay.
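
One way to see the variance argument is to compute the expected delay directly: if a page is broadcast with gaps g_1, ..., g_k summing to the cycle length C, a request arriving uniformly at random lands in gap g_i with probability g_i / C and then waits g_i / 2 on average, so the expected delay is the sum of g_i^2 / (2C), which is minimized when all gaps are equal. A minimal Python sketch (the gap values below are made-up examples):

```python
def expected_delay(gaps):
    """Expected wait for a page broadcast with the given inter-arrival gaps.

    A request arriving uniformly at random falls into gap g with probability
    g / sum(gaps) and then waits g / 2 on average.
    """
    cycle = sum(gaps)
    return sum(g * g for g in gaps) / (2 * cycle)

# Same average broadcast rate (two copies per 8-slot cycle), different regularity:
print(expected_delay([4, 4]))  # evenly spaced -> 2.0 slots
print(expected_delay([1, 7]))  # bursty        -> 3.125 slots
```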

Server Broadcast Programs: Generating a Multi-disk Broadcast
- The number of disks (num_disks) determines the number of different frequencies with which pages are broadcast.
- For each disk i, the number of pages it holds and its relative frequency of broadcast (rel_freq(i)) are specified.

Example
- Assume a list of pages partitioned into 3 disks.
- Pages in Disk 1 are to be broadcast twice as frequently as pages in Disk 2 and four times as frequently as pages in Disk 3.
- rel_freq(1) = 4, rel_freq(2) = 2, rel_freq(3) = 1
- max_chunks = 4 (the least common multiple of the relative frequencies), so num_chunks(1) = 4/4 = 1, num_chunks(2) = 4/2 = 2, num_chunks(3) = 4/1 = 4
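
The slides summarize the chunking-and-interleaving scheme used to generate a multi-disk program; the sketch below is a hedged Python reconstruction of it, driven by the example's relative frequencies. The page names are made up for illustration.

```python
from math import lcm

def generate_broadcast(disks, rel_freq):
    """Interleave 'disks' (lists of pages, hottest disk first) into one broadcast cycle.

    Disk i is broadcast rel_freq[i] times per cycle: it is split into
    num_chunks[i] = max_chunks / rel_freq[i] chunks, and each minor cycle
    broadcasts one chunk from every disk.
    """
    max_chunks = lcm(*rel_freq)
    num_chunks = [max_chunks // f for f in rel_freq]
    # Split each disk into equal chunks (assumes the page count divides evenly).
    chunks = [[pages[k * (len(pages) // n):(k + 1) * (len(pages) // n)] for k in range(n)]
              for pages, n in zip(disks, num_chunks)]
    program = []
    for i in range(max_chunks):                 # one minor cycle per iteration
        for d in range(len(disks)):
            program.extend(chunks[d][i % num_chunks[d]])
    return program

# The 3-disk example from the slides: rel_freq = (4, 2, 1).
disk1 = ["A"]                    # broadcast 4x per cycle
disk2 = ["B", "C"]               # broadcast 2x per cycle
disk3 = ["D", "E", "F", "G"]     # broadcast 1x per cycle
print(generate_broadcast([disk1, disk2, disk3], [4, 2, 1]))
# -> ['A', 'B', 'D', 'A', 'C', 'E', 'A', 'B', 'F', 'A', 'C', 'G']
```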

Server Broadcast Programs

Client Cache Management
- Improving the broadcast for one access probability distribution will hurt the performance of clients with different access distributions.
- Therefore client machines need to cache pages obtained from the broadcast.

Client Cache Management
- With traditional caching, clients cache the data most likely to be accessed in the future.
- With Broadcast Disks, traditional caching may lead to poor performance if the server's broadcast is poorly matched to the client's access distribution.

Client Cache Management
- In the Broadcast Disk system, clients cache those pages for which the local probability of access is higher than the frequency of broadcast.
- This leads to the need for cost-based page replacement.

Client Cache Management
- One cost-based page replacement strategy, PIX, replaces the page with the lowest ratio between its probability of access (P) and its frequency of broadcast (X).
- PIX requires:
  1. Perfect knowledge of access probabilities.
  2. Comparison of PIX values for all cache-resident pages at replacement time.
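
A minimal sketch of PIX-style eviction, assuming the client somehow knows each page's access probability P and broadcast frequency X (exactly the assumptions the slide lists as requirements):

```python
def pix_victim(cache, prob, freq):
    """Return the cached page with the lowest PIX = P / X value.

    cache: set of page ids currently held by the client
    prob:  page -> probability of access (P), assumed perfectly known
    freq:  page -> broadcast frequency (X), e.g. copies per broadcast cycle
    """
    return min(cache, key=lambda page: prob[page] / freq[page])

def cache_after_demand_fetch(page, cache, capacity, prob, freq):
    """After fetching a requested page off the broadcast, cache it,
    evicting the lowest-PIX resident page if the cache is full."""
    if page in cache:
        return
    if len(cache) >= capacity:
        cache.discard(pix_victim(cache, prob, freq))
    cache.add(page)
```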

Client Cache Management
- Another page replacement strategy adds the frequency of broadcast to an LRU-style policy; this policy is known as LIX.
- LIX maintains a separate list of cache-resident pages for each logical disk.
- Each list is ordered by an approximation of the access probability (L) of each page.
- A LIX value is computed by dividing L by X, the frequency of broadcast; the page with the lowest LIX value is replaced.
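
A hedged sketch of an LIX-style cache: one insertion-ordered list per logical disk, a running estimate L per page, and eviction of the coldest entry with the lowest L / X. The exponential-average update used for L below is an illustrative stand-in, not the exact estimator from the original work.

```python
from collections import defaultdict

class LIXCache:
    """Broadcast-aware LRU variant: evict the page with the lowest LIX = L / X
    among the coldest entry of each per-disk list."""

    def __init__(self, capacity, page_disk, freq, alpha=0.25):
        self.capacity = capacity
        self.page_disk = page_disk      # page -> logical disk id
        self.freq = freq                # page -> broadcast frequency (X)
        self.alpha = alpha              # weight for the running L estimate
        self.lists = defaultdict(dict)  # disk -> {page: L}, insertion-ordered

    def access(self, page):
        pages = self.lists[self.page_disk[page]]
        hit = page in pages
        l_value = pages.pop(page, 0.0)
        if not hit and self._size() >= self.capacity:
            self._evict()
        # Re-insert at the hot end of its disk's list with an updated L estimate.
        pages[page] = self.alpha + (1.0 - self.alpha) * l_value
        return hit

    def _size(self):
        return sum(len(pages) for pages in self.lists.values())

    def _evict(self):
        best = None                               # (lix, disk, page)
        for disk, pages in self.lists.items():
            if not pages:
                continue
            coldest = next(iter(pages))           # oldest entry of this list
            lix = pages[coldest] / self.freq[coldest]
            if best is None or lix < best[0]:
                best = (lix, disk, coldest)
        del self.lists[best[1]][best[2]]
```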

Prefetching
- An alternative, proactive approach to obtaining pages from the broadcast.
- The goal is to improve the response time of clients that access data from the broadcast.
- Methods of prefetching: Tag Team Caching and a simple Prefetching Heuristic.

Prefetching
- Tag Team Caching: pages continually replace each other in the cache.
- For example, with two pages x and y being broadcast, the client caches x as it arrives on the broadcast, then drops x and caches y when y arrives, and so on.
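
A toy sketch of the two-page tag team, assuming a single cache slot dedicated to the pair (the page names and broadcast sequence are invented for illustration):

```python
def tag_team(broadcast, team=("x", "y")):
    """Yield (page, cached) pairs: whichever team member arrived most recently
    occupies the shared cache slot, replacing the other."""
    cached = None
    for page in broadcast:
        if page in team:
            cached = page   # drop the previous team member, keep this one
        yield page, cached

for page, cached in tag_team(["x", "a", "b", "y", "c", "x", "d", "y"]):
    print(f"broadcast {page!r:>4} -> cache holds {cached!r}")
```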

Prefetching
- Simple Prefetching Heuristic (PT)
- For each page that arrives on the broadcast, compute a value from the page's probability of access (P) and the time that will elapse before the page comes around again (T).
- If the PT value of the page being broadcast is higher than the lowest PT value among the cached pages, that cached page is replaced.
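
A minimal sketch of the PT heuristic as described, where P comes from an assumed-known access-probability table and T from a hypothetical schedule-lookup function next_broadcast_time(page, now):

```python
def pt_value(page, now, prob, next_broadcast_time):
    """P * T: probability of access times the time until the page reappears."""
    return prob[page] * (next_broadcast_time(page, now) - now)

def maybe_prefetch(page, now, cache, capacity, prob, next_broadcast_time):
    """Prefetch the page currently going by on the broadcast if its PT value
    beats the lowest PT value among the cached pages."""
    if page in cache:
        return
    if len(cache) < capacity:
        cache.add(page)
        return
    worst = min(cache, key=lambda q: pt_value(q, now, prob, next_broadcast_time))
    if pt_value(page, now, prob, next_broadcast_time) > pt_value(worst, now, prob, next_broadcast_time):
        cache.discard(worst)
        cache.add(page)
```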

Read/Write Case
- With a dynamic broadcast there are three kinds of change that have to be handled:
  1. Changes to the values of the objects being broadcast.
  2. Reorganization of the broadcast.
  3. Changes to the contents of the broadcast.

Conclusion
- The Broadcast Disks project investigates the use of data broadcast and client storage resources to provide improved performance, scalability, and availability in networked applications with asymmetric communication capabilities.