On Fairness, Optimizing Replica Selection in Data Grids Husni Hamad E. AL-Mistarihi and Chan Huah Yong IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS,


1 On Fairness, Optimizing Replica Selection in Data Grids Husni Hamad E. AL-Mistarihi and Chan Huah Yong IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 20, NO. 8, AUGUST 2009 Presented by Chen, Ting-Wei

2 Table of Contents
–Introduction
–System Requirements and Design
–Performance Metrics and Evaluation
–Results and Discussion
–Conclusions and Future Works

3 Introduction
Problem
–How to select the best replica location from among many replica locations with minimum response time and a high level of QoS?
–How to establish fairness among users when selecting the replica location, such that each user gains an equitable portion of QoS and response time relative to other users?

4 Introduction (cont.)
Replica selection
–One of the major functions of data replication; it decides which replica location is best for the users based on some criteria
Replicas for Grid users should provide
–Minimum response time
–High level of Quality of Service (QoS)
–Fair allocation among the users

5 Introduction (cont.)
Criteria in the selection decision
–Response time
–Security
–Reliability
  • The criteria conflict with one another
  • The criteria are heterogeneous

6 Introduction (cont.)
The proposed system achieves the following objectives
–Provides Grid users with the required replica in minimum response time and with maximum QoS
–Establishes fairness among users by providing a new method for resource allocation

7 Introduction (cont.)
–Provides an elaborated method that generates the decision-maker preferences (weights) automatically, termed the "fairness method"
–Deploys the AHP model in the replica selection engine

8 Introduction (cont.)
Evaluation
–Own simulator, an extension of the OptorSim simulator
–Compared with the random algorithm
  • Because there is no previous work similar to theirs
–Fairness among users measured
  • By calculating the Standard Deviation (SD) across Grid users for each criterion value

9 System Requirements and Design
Focus on
–The replica selection decision
–Establishing fairness among users
–The most important resource

10 System Requirements and Design (cont.)
[Diagram: a data file replicated as Copies 1–5 on Grid sites 1–5; each site is annotated with its reliability, security, and response time]

11 System Requirements and Design (cont.)
The selection engine decides which site is best
–The most secure site
–The most reliable site
–The lower response time between the local site and the remote site
  • Best replica: the one offering the highest level of QoS

12 System Requirements and Design (cont.)
Analytical Hierarchy Process (AHP)
–The weighted sum approach
  • Step 1: Rate the underlying criteria; pair-wise comparisons are made and converted into quantitative values

  Criterion      | Scale of measurement
  Response Time  | 30–2000 minutes: Excellent = 30–100; Very Good = 101–500; Good = 501–1000; Indifferent = 1001–1500; Bad = 1501–2000
  Reliability    | 30–100: Excellent = 90–100; Very Good = 80–89; Good = 65–79; Indifferent = 50–64; Bad = 30–49
  Security       | 1–5: Excellent = 5; Very Good = 4; Good = 3; Indifferent = 2; Bad = 1
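The Step 1 scales above translate directly into rating functions. A minimal sketch; the function names are illustrative, not from the paper:

```python
# Map raw criterion values to the five-level qualitative scale of slide 12.

def rate_response_time(minutes):
    """Rate a response time (30-2000 minutes)."""
    if 30 <= minutes <= 100:
        return "Excellent"
    if minutes <= 500:
        return "Very Good"
    if minutes <= 1000:
        return "Good"
    if minutes <= 1500:
        return "Indifferent"
    return "Bad"

def rate_reliability(value):
    """Rate a reliability value (30-100)."""
    if value >= 90:
        return "Excellent"
    if value >= 80:
        return "Very Good"
    if value >= 65:
        return "Good"
    if value >= 50:
        return "Indifferent"
    return "Bad"

def rate_security(level):
    """Rate a security level (1-5); higher is better."""
    return {5: "Excellent", 4: "Very Good", 3: "Good",
            2: "Indifferent", 1: "Bad"}[level]
```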

13 System Requirements and Design (cont.)
  • Step 2
–The pair-wise comparisons are organized into a reciprocal comparison matrix (a_ji = 1/a_ij)
–The matrix is multiplied by itself to form the judgment matrix
–The sum of each row of the judgment matrix, normalized by the total, produces the AHP_Eigenvector value (the weight)
  • Step 3
–For each criterion, the relative importance among the alternatives is organized into a matrix of the same form, and Steps 1 and 2 are repeated
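Steps 2 and 3 can be sketched as a small function that squares the comparison matrix and normalizes the row sums, the approximation the slides describe:

```python
# AHP eigenvector approximation: judgment matrix = comparison matrix
# squared; the normalized row sums of the judgment matrix are the weights.

def ahp_eigenvector(matrix):
    """Return the normalized row-sum eigenvector of matrix @ matrix."""
    n = len(matrix)
    judgment = [[sum(matrix[i][k] * matrix[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    row_sums = [sum(row) for row in judgment]
    total = sum(row_sums)
    return [s / total for s in row_sums]

# The security comparison matrix from slide 24, used here as input.
weights = ahp_eigenvector([[1.0, 0.4, 0.67],
                           [2.5, 1.0, 1.67],
                           [1.5, 0.6, 1.0]])
```

For a (near-)consistent comparison matrix, squaring does not change the normalized result, so this reproduces the eigenvector 0.2 / 0.5 / 0.3 of the slide 24 example.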

14 System Requirements and Design (cont.)
  • Step 4
–Take the local ratings of the alternatives
–Multiply them by the criteria weights from the judgment matrix (the first matrix)
–Aggregate to obtain the global ratings
–The decision is made for the highest-ranked alternative site
  • Disadvantages of AHP
–Error prone
–Hinders the dynamic nature of autonomous Grid systems
–The Fairness Method is proposed to overcome these disadvantages

15 System Requirements and Design (cont.)
Fairness
–Contributes toward the replication management system in the Grid
–Contributes to other domains that face a similar optimization problem of selecting one solution among many

16 System Requirements and Design (cont.)
System Detailed Design
–Data Grid architecture

17 System Requirements and Design (cont.)
The system consists of two main components
–Replica Manager (RM)
  • Manages the historical data file
  • Queries the Replica Location Service for the related physical file names and their site locations
  • Queries the NWS and GridFTP for site-related information and network status
–Replica Selector (RS)
  • Located at each Grid site (node); receives the requests from the user's jobs

18 System Requirements and Design (cont.)
–RS gets the related information from the RM in order to make the appropriate decision
–RS computes the fairness values and produces the best replica-location decision

19 System Requirements and Design (cont.)
Implementation steps (Fairness method)
–Step 1: Calculate the User Criteria Average (UCA) from the historical data file for each criterion

20 System Requirements and Design (cont.)
–Step 2: Calculate the System Criteria Average (SCA) over all users in the Grid system

21 System Requirements and Design (cont.)
–Step 3: User Fairness (UF) is calculated for each criterion

22 System Requirements and Design (cont.)
–Step 4: Calculate the correlated criteria weights
  • The equation is computed nine times, varying both i and j, to fill in the weights
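The transcript omits the equations for Steps 1–3, so the following is only a hypothetical sketch under assumed definitions: UCA as the per-user historical mean of a criterion, SCA as the mean of the UCAs over all users, and UF as the ratio of the two. The helper names and sample data are illustrative, not from the paper:

```python
# Hypothetical sketch of fairness-method Steps 1-3 (assumed formulas).

def user_criteria_average(history):
    """Step 1 (UCA): mean of one user's historical values for a criterion."""
    return sum(history) / len(history)

def system_criteria_average(ucas):
    """Step 2 (SCA): mean of the UCA values over all Grid users."""
    return sum(ucas) / len(ucas)

def user_fairness(uca, sca):
    """Step 3 (UF): user's average relative to the system average.
    UF < 1 suggests the user has received less than the average share."""
    return uca / sca

histories = {"u1": [80, 90], "u2": [60, 70], "u3": [70, 80]}
ucas = {u: user_criteria_average(h) for u, h in histories.items()}
sca = system_criteria_average(list(ucas.values()))
uf = {u: user_fairness(v, sca) for u, v in ucas.items()}
```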

23 System Requirements and Design (cont.)
Implementation steps (AHP)
–Step 5: Produce the matrices
  • Fairness Matrix
  • Security Matrix
  • Reliability Matrix
  • Response Time Matrix

24 System Requirements and Design (cont.)
–Step 6: Calculate the AHP_Eigenvector

  Security Matrix      | Row sum | Eigenvector
  1     0.4   0.67     | 2.07    | 0.2
  2.5   1     1.67     | 5.17    | 0.5
  1.5   0.6   1        | 3.1     | 0.3
                 Total = 10.34
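The slide 24 numbers can be reproduced directly: the row sums of the security comparison matrix, normalized by their total, give the eigenvector.

```python
# Reproduce the slide 24 worked example.
security = [
    [1.0, 0.4, 0.67],
    [2.5, 1.0, 1.67],
    [1.5, 0.6, 1.0],
]
row_sums = [sum(row) for row in security]              # 2.07, 5.17, 3.1
total = sum(row_sums)                                  # 10.34
eigenvector = [round(s / total, 1) for s in row_sums]  # 0.2, 0.5, 0.3
```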

25 System Requirements and Design (cont.)
–Step 7
  • Aggregate the AHP_Eigenvectors for reliability, security, and response time into one matrix
  • Multiply this matrix by the AHP_Eigenvector of the fairness matrix
  • The result is a one-dimensional rank array
  • The site with the maximum value in the rank array is the best site
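Step 7 can be sketched as a weighted sum over the per-criterion eigenvectors (one row per candidate site, one column per criterion); all the numbers below are illustrative, not from the paper:

```python
# Sketch of Step 7: rank candidate sites by multiplying the aggregated
# per-criterion eigenvectors with the fairness-matrix weights.

def rank_sites(local_ratings, criteria_weights):
    """Return (best_site_index, rank_array) from the weighted sum."""
    rank = [sum(r * w for r, w in zip(row, criteria_weights))
            for row in local_ratings]
    return max(range(len(rank)), key=rank.__getitem__), rank

# Rows: sites 1-3; columns: reliability, security, response time.
ratings = [
    [0.2, 0.2, 0.5],
    [0.5, 0.5, 0.3],
    [0.3, 0.3, 0.2],
]
weights = [0.3, 0.5, 0.2]   # hypothetical fairness-matrix eigenvector
best, rank = rank_sites(ratings, weights)
```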

26 Performance Metrics and Evaluation
Evaluate the system performance
–Measure
–Analyze
–Compare with other models
Quality of Service (QoS) and Response Time
–High level of security
–The security value specified on each site is used in the replica selection decision

27 Performance Metrics and Evaluation (cont.)
Fairness Metric
–Measures the portion of resources gained by a specified user
–The SD metric is appropriate for measuring the fairness level: a lower SD of a criterion value across users means a more even, and therefore fairer, allocation
Evaluation
–OptorSim, with some changes made to suit this case
–Compared with the random algorithm
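The fairness metric on this slide is just the standard deviation of a criterion's values across Grid users:

```python
from math import sqrt

def fairness_sd(values):
    """Population standard deviation of one criterion across users.
    A smaller SD means the criterion is shared more evenly (more fairly)."""
    mean = sum(values) / len(values)
    return sqrt(sum((v - mean) ** 2 for v in values) / len(values))
```

For example, `fairness_sd([4, 4, 4, 4])` is 0 (perfectly fair), while a spread such as `[2, 4, 4, 6]` yields a positive SD.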

28 Results and Discussion
Test case (1): Fairness
–Before / After (charts)

29 Results and Discussion (cont.)
–Before / After (charts)

30 Results and Discussion (cont.)
Test case (2): Best Replica Selection and Scalability

31 Results and Discussion (cont.)

32 Results and Discussion (cont.)

33 Results and Discussion (cont.)

34 Results and Discussion (cont.)
–Overall performance of the fairness algorithm compared with the random algorithm

35 Conclusions and Future Works
Conclusions
–Best replica selection
–Establishing fairness among users
–Advantages
  • The system allows Grid users to participate in and share Grid resources fairly
  • The system achieves better satisfaction for Grid users
–Reliability and security are maximized
–Response time is minimized

36 Conclusions and Future Works (cont.)
Future Works
–Improve the replica selection process by involving the users in determining their preferences
–Create another system component that provides searching and matching services for the users
–Adapt the stock-market model of shares so that each user can sell or buy fairness values from other users

37 Conclusions and Future Works (cont.)
–Expand the system
–Propose a new replication strategy
  • Supporting replica management
  • Replica deletion
  • Replica placement
  • Reducing both job execution time and network traffic
–The future replication strategy will be compared with OptorSim

38 Thank You for Your Attention

