
1 DataGrid Prototype 1 - A. Ghiselli, with the contribution of F. Donno, L. Gaido, F. Prelz, M. Sgaravatto, INFN, Italy. TNC 2002, Limerick, 5 June 2002

2 Outline
- The HEP Environment and the GRID scenario
- The DataGrid components:
  - Authentication and Authorization
    - Virtual Organizations
    - One-time login
    - VO servers
  - DataGrid Resources
    - The DataGrid fabric
    - Grid Information Service
  - DataGrid Services
    - Grid scheduling and workload management
    - Data Management
- DataGrid Release 1 Description
- Testbed 1
- Conclusions

3 The DataGrid Environment and the GRID scenario
- Research communities: High Energy Physics (HEP), Earth Observation (EO), Bio-Informatics
- HEP experiment characteristics:
  - Huge number of users: thousands of physicists from hundreds of institutions, labs and universities for one experiment at CERN
  - Distributed resources: pools of computing, storage and network resources to analyse petabytes (10^15 bytes) of data
  - Natural data parallelism based on the experiment events (HTC)
  - Computing systems based on clusters (farms) with a high number of PCs

4 A very large distributed community. Just as an example, CMS: 1800 physicists, 150 institutes, 32 countries.

5 On-line System
- Large variety of triggers and thresholds; multi-level trigger; online reduction of ~10^7; keep only highly selected events
- 40 MHz (equivalent to 1000 TB/sec) -> Level 1 - Special Hardware
- 75 kHz (75 GB/sec, fully digitised) -> Level 2 - Embedded Processors
- 5 kHz (5 GB/sec) -> Level 3 - Farm of commodity CPUs
- 100 Hz (100 MB/sec) -> Data Recording & Offline Analysis
- (A digital telephone line, for comparison: 1-2 kB/sec)

6 HEP computing, key parameters
- All LHC experiments at CERN: 10 PB/yr data storage; disk: 2 PB
- Multi-experiment Tier 1: 3 PB/yr; disk: 0.5 PB
- Tier 0 & 1 at CERN: 2 MSI95 (a PC today is ~20 SI95)
- Multi-experiment Tier 1: 0.9 MSI95
- Networking, Tier 0 -> Tier 1: 622 Mbps (4 Gbps); black fibre today: 1 Tbps

7 Regional Centres - a Multi-Tier Model
[Diagram: CERN as Tier 0 (and a Tier 1), connected at 2.5 Gbps / 622 Mbps to the Tier 1 centres (FNAL, INFN, IN2P3), then at 622 Mbps - 1 Gbps to Tier 2 sites (site a, site b, site c ... site n), down to Tier 3 desktops.]
- Organising software: "Middleware"
- "Transparent" user access to applications and all data

8 GRID: an extension of the Web concept
- Web: uniform access to information
- Grid: flexible and high-performance access to all kinds of resources (computers, data stores, software catalogues, sensor nets, colleagues)
- On-demand creation of powerful virtual computing and data systems

9 DataGrid Authentication & Authorization
- Grid refers to both the technologies and the infrastructure that enable coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations (VOs), such as the ones considered in the DataGrid project.
- The Grid does not prevent local access to computing resources.
- The dynamic nature of VOs brings the need for specific VO services to authenticate and authorize users to access the Grid resources, to manage grid access policies, etc.

10 One-time login based on Identity Certificates
- Users do not log in to grid computers with a login name and password; instead they must own an X.509 Identity Certificate issued by a Certification Authority (CA) trusted by the DataGrid project (each CA has to adopt the set of agreed rules).
- CAs publish their certificates in LDAP directories (if a CA does not have or want one, a batch alternative is possible).
- Trusted CAs:
  - CERN CA: http://globus.home.cern.ch/globus/ca/
  - DOESG CA: http://pki1.doesciencegrid.org/
  - INFN CA: ldap://security.fi.infn.it
  - DutchGrid and NIKHEF CA: ldap://certificate.nikhef.nl
  - UK HEP Testbed CA: http://www.gridpp.ac.uk/ca/
  - CNRS DataGrid-Fr: http://marianne.in2p3.fr/datagrid/wp6-fr/ca/ca2-fr.html
  - Spanish DataGrid CA: http://www.ifca.unican.es/datagrid/ca/
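As a rough illustration of what such an identity certificate carries (not taken from the slides), its subject and issuer can be inspected with standard OpenSSL tools; the file name usercert.pem below is only an assumed location for the user's certificate:

    # Show who the certificate identifies and which CA signed it
    # (usercert.pem is a hypothetical path to the user's X.509 certificate)
    openssl x509 -in usercert.pem -noout -subject -issuer

    # Check the certificate's validity period
    openssl x509 -in usercert.pem -noout -dates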

11 One-time login / authentication process
- DataGrid security is based on the Globus Security Infrastructure (GSI).
- GSI implements the standard Generic Security Service Application Program Interface (GSS-API), based on RFC 2078/2743.
- GSS-API requires the ability to pass user/host/service authentication information to a remote site so that further authenticated connections can be established (one-time login).
- A proxy is the entity empowered to act as the user at the remote site.
  - The remote end is able to verify the proxy certificate by descending the certificate signature chain and thus authenticate the certificate signer.
  - The signer's identity is established by trusting the CA (in DataGrid the CA must be one of the trusted CAs).
  - The proxy certificate has a short expiration time (default value: 1 day).
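In the Globus toolkit that DataGrid builds on, this one-time login step is typically performed with the standard proxy tools; a minimal sketch, with no DataGrid-specific options shown:

    # Create a short-lived proxy certificate derived from the user's own certificate;
    # the private-key pass phrase is typed once here ("one-time login")
    grid-proxy-init

    # Inspect the resulting proxy: subject, issuer and remaining lifetime
    grid-proxy-info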

12 One-time login / authorization
- User authorization is the last remaining step.
  - User access is granted by checking the proxy certificate subject (the X.500 Distinguished Name) and looking it up in a list (the so-called grid-mapfile) maintained at the remote site.
  - The grid-mapfile links a DN to a local resource username, so that the requesting user inherits all the rights of that local user. Many DNs can be linked to the same local user.
- The real authorization is specifically granted by the VO the user belongs to, in agreement with the owners of the grid resources.
- This requires a VO authorization management server and tools.
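To make the DN-to-local-account mapping concrete, here is a hedged sketch of grid-mapfile entries; the DNs and account names are invented for illustration (the personal names are borrowed from a later slide), and only the general format matters: a quoted certificate subject followed by a local user name.

    # Illustrative grid-mapfile entries (the file usually lives under /etc/grid-security/)
    "/C=IT/O=INFN/OU=Personal Certificate/L=CNAF/CN=Mario Rossi"  cms001
    "/O=dutchgrid/O=users/O=nikhef/CN=Franz Elmer"                atlas001
    "/O=Grid/O=UKHEP/OU=People/CN=John Smith"                     cms001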

13 Basic elements of the GSI one-time-login model. [Diagram]

14 Authorization Structure
- Each VO manages an LDAP directory containing:
  - members (ou=People), with for each member:
    - the URI of the certificate on the CA LDAP directory;
    - the Subject of the user's certificate (to speed up grid-mapfile generation);
  - groups (e.g. ou=tb1): every user must belong to at least one group.
- Available VOs:
  - Alice: ldap://grid-vo.nikhef.nl/o=alice,dc=eu-datagrid,dc=org
  - Atlas: ldap://grid-vo.nikhef.nl/o=atlas,dc=eu-datagrid,dc=org
  - CMS: ldap://grid-vo.nikhef.nl/o=cms,dc=eu-datagrid,dc=org
  - CDF: ldap://ldap-vo.cnaf.infn.it/o=cdf,dc=eu-datagrid,dc=org
- Grid-mapfiles are generated from the VO directories.
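A VO directory such as the ones listed above can be browsed with a plain LDAP client; a hedged sketch using the CMS VO URI from this slide, where the ou=People branch of the base DN is assumed from the structure described above and the attributes actually returned depend on the VO schema:

    # Anonymously list the member entries of the CMS VO directory
    ldapsearch -x -H ldap://grid-vo.nikhef.nl \
        -b "ou=People,o=cms,dc=eu-datagrid,dc=org" \
        "(objectClass=*)"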

15 grid-mapfile generation
[Diagram: the mkgridmap tool reads the VO Directory (o=testbed,dc=edg,dc=org, with ou=People, ou=tb1 and ou=Admin branches holding entries such as CN=Franz Elmer and CN=John Smith) and an Authorization Directory (o=xyz,dc=edg,dc=org, holding the local users and a ban list), together with the authentication certificates (e.g. CN=Mario Rossi), and produces the grid-mapfile.]

16 The DataGrid fabric
- The DataGrid fabric consists of farms of centrally managed machines with multiple functionalities: Computing Elements (CE), Worker Nodes (WN), Storage Elements (SE), Information Service (IS), network links, ... all automatically configured.
- A site consists of one or more software and profile servers and a number of clients. Both clients and servers can be automatically configured by getting the appropriate profile from one of the profile servers. The first profile server needs to be installed and configured manually.
[Diagram: an LCFG server with RPM and profile repositories configuring the CE/WN PC cluster and the SE (GDMP).]

17 DataGrid resources and Information Service
- Computing Element (CE): identifies large computing farms of commodity PCs.
  - Access to the grid resources is based on the Globus Resource Allocation Manager (GRAM) service. It is responsible for operating a set of resources under the same site-specific allocation policy, handled by a Local Resource Allocation Manager (LRAM) such as LSF, PBS or Condor.
- Storage Element (SE): identifies any storage system with the ability to provide direct file access and transfer via FTP, NFS or other protocols using the grid security mechanism.
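Access to a CE through GRAM can be pictured with the standard Globus job-submission client; a minimal sketch, where ce.example.org is only a placeholder for a real Computing Element hostname:

    # Run a trivial command on a remote Computing Element via GRAM,
    # authenticated with the GSI proxy created earlier
    globus-job-run ce.example.org /bin/hostname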

18 Grid Information Service
- The Information Service (IS) plays a fundamental role in the DataGrid environment, since resource discovery and decision making are based on the information service infrastructure. Basically, an IS is needed to collect and organize, in a coherent manner, information about grid resources and their status and make it available to the consumer entities.
- EDG Release 1 adopted the Globus Information Service (MDS), which is based on the LDAP directory service.
- Advantages:
  - A well defined data model and a standard, consolidated way to describe data
  - A standardized API to access data in the directory servers
  - A distributed topological model that allows data distribution and delegation of access policies among institutions
- Disadvantages:
  - Not designed to store dynamic information such as the status of computing, storage and network resources. For this reason, other mechanisms are under test.
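Because MDS is LDAP-based, the information published by a resource can be read with an ordinary LDAP search; a hedged sketch, where the hostname is a placeholder and port 2135 and the base DN are the values conventionally used by the Globus GRIS:

    # Dump everything the local GRIS of a Computing Element publishes
    ldapsearch -x -h ce.example.org -p 2135 \
        -b "mds-vo-name=local,o=grid" \
        "(objectClass=*)"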

19 GRAM Architecture
[Diagram: the Globus GRAM architecture, with an EDG-specific information provider, an EDG-specific schema and an EDG-specific LRAM.]

20 Resources and service schemas
- The schema (the data structure describing the grid resources and their status) is what makes data valuable to Grid tools and applications. DataGrid defined and implemented its own CE, SE and NE schemas.
- The EDG CE schema describes a queue, since this is the most suitable data structure to model a cluster of homogeneous PCs locally managed by schedulers such as PBS, LSF and Condor.
- The definition and standardization of the information about grid resources is largely work in progress within the Global Grid Forum.
- Work on a common schema for CE, SE and NE is in progress between EDT/EDG, iVDGL and Globus. This will allow both EU and US users to access both US and EU grids.
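To give a feel for such a CE schema, the sketch below lists attributes under the names used in the JDL examples later in this talk (OpSys, LRMSType, TotalCPUs, FreeCPUs, AverageSI00, RunTimeEnvironment); the CEId, the values and the flat rendering are invented for illustration and are not the actual EDG LDIF:

    # Illustrative extract of what a CE might publish in the Information Service
    CEId: grid001.cnaf.infn.it:2119/jobmanager-pbs-long
    OpSys: Linux RH 6.2
    LRMSType: PBS
    TotalCPUs: 16
    FreeCPUs: 5
    AverageSI00: 380
    RunTimeEnvironment: CMS3.2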

21 Scheduling on the grid
- One of the most important aspects of the project is the workload management of applications dealing with large amounts of data and clusters with a high number of machines.
- The scheduler is one of the most critical components of the resource management system, since it has the responsibility of assigning resources to jobs in such a way that the application requirements are met, and of ensuring that the resource usage limits granted to the user are not exceeded. Although scheduling is a traditional area of computer science research, the particular characteristics of the DataGrid project, and of computational grids in general, make traditional schedulers inappropriate.

22 Which properties for the EDG scheduler?
- Distributed organization. Given that several user communities (also called virtual organizations) have to co-exist in DataGrid, it is reasonable to assume that each of them will want to use a scheduler that best fits its particular needs (a community scheduler). However, when a number of independent schedulers operate simultaneously, a lack of coordination among their actions may result in conflicting and performance-hampering decisions. The need for coordination among these peer, independent schedulers naturally calls for a distributed organization.
- Predictive state estimation, in order to deliver adequate performance even in the face of dynamic variation of the resource status.
- Ability to interact with the resource information system. At the moment, all existing schedulers require that the user specify the list of machines that he or she has permission to use. However, a fully functional grid scheduler should be able to autonomously find this information by interacting with the grid-wide information service.

23 Which properties for the EDG scheduler? (cont.)
- Ability to optimize both system and application performance, depending on the needs of DataGrid users. As a matter of fact, DataGrid users needing high throughput for batches of independent jobs (such as the HEP community) have to co-exist with users requiring low response times for individual applications (e.g. the bio-medical community). In this case, neither a purely system-oriented nor a purely application-oriented scheduling policy would be sufficient.
- Submission reliability. Grids are characterized by extreme resource volatility, that is, the set of available resources may dynamically change during the lifetime of an application. The scheduler should be able to resubmit, without requiring user intervention, an application whose execution cannot continue as a consequence of the failure or unavailability of the machine(s) on which it is running.
- Allocation fairness. In a realistic system, different users will have different priorities that determine the amount of Grid resources allocated to their applications.

24 WMS components
- The EDG Workload Management System (WMS) has been designed with the above properties of a grid scheduler in mind.
- The Resource Broker is the core component. Its tasks are:
  - to find a CE that best matches the requirements and preferences of a submitted job, considering also the current distribution of load on the grid;
  - once a suitable Computing Element is found, to pass the job to the job submission service for the actual submission.
- These tasks include interacting with the DataGrid Data Management Services to resolve logical data set names and to find a preliminary set of sites where the required data are stored.

25 Other WMS components
- The Logging and Bookkeeping service is responsible for storing and managing logging and bookkeeping information generated by the various components of the WMS. It collects information about the scheduling system and about active jobs.
- A user can submit jobs and retrieve their output through the User Interface. The description of a job is expressed in the Job Description Language (JDL), which is based on the classified advertisement (ClassAd) scheme developed by the Condor project:
  - It is a semi-structured data model: no specific schema is required.
  - Symmetry: all entities in the grid, in particular applications and computing resources, can be expressed in the same language.
  - Simplicity of both syntax and semantics.

26 Dg-job-submit
dg-job-submit jobad6.jdl -o jobs_list -n elisabetta.ronchieri@cnaf.infn.it

Executable = "WP1testC";
StdInput = "sim.dat";
StdOutput = "sim.out";
StdError = "sim.err";
InputSandbox = {"/home/wp1/HandsOn-0409/WP1testC", "/home/wp1/HandsOn-0409/file*",
                "/home/wp1/DATA/*"};
OutputSandbox = {"sim.err", "test.out", "sim.out"};
Rank = other.AverageSI00;
Requirements = (other.OpSys == "Linux RH 6.1" || other.OpSys == "Linux RH 6.2") &&
               (other.RunTimeEnvironment == "CMS3.2");
InputData = "LF:test10096-0009";
ReplicaCatalog = "ldap://sunlab2g.cnaf.infn.it:2010/rc=WP2 INFN Test Replica Catalog,dc=sunlab2g, dc=cnaf, dc=infn, dc=it";
DataAccessProtocol = "gridftp";

27 Dg-job-submit
Executable = "WP1testF";
StdOutput = "sim.out";
StdError = "sim.err";
InputSandbox = {"/home/datamat/sim.exe", "/home/datamat/DATA/*"};
OutputSandbox = {"sim.err", "sim.out", "testD.out"};
Rank = other.TotalCPUs * other.AverageSI00;
Requirements = other.LRMSType == "PBS" &&
               (other.OpSys == "Linux RH 6.1" || other.OpSys == "Linux RH 6.2") &&
               self.Rank > 10 && other.FreeCPUs > 1;
RetryCount = 2;
Arguments = "file1";
InputData = "LF:test10099-1001";
ReplicaCatalog = "ldap://sunlab2g.cnaf.infn.it:2010/rc=WP2 INFN Test Replica Catalog,dc=sunlab2g, dc=cnaf, dc=infn, dc=it";
DataAccessProtocol = "gridftp";
OutputSE = "grid001.cnaf.infn.it";

28 UI commands
- Submission of a job for execution on a remote Computing Element, including:
  - automatic resource discovery and selection
  - staging of the application and data (input sandbox)
- Selection of a list of suitable resources for a specific job
- Cancellation of one or more submitted jobs
- Retrieval of the output file(s) produced by a completed job (output sandbox)
- Retrieval and display of bookkeeping information about submitted jobs
- Retrieval and display of logging information about jobs
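Only dg-job-submit actually appears in the JDL examples above; the other command names in the sketch below (dg-job-status, dg-job-get-output, dg-job-cancel, dg-job-list-match) are recalled from memory as an illustration of the functionality listed and may not match the release exactly:

    # Submit the job described in job.jdl (command shown on the previous slides)
    dg-job-submit job.jdl

    # Illustrative (names assumed): follow the job, fetch its output sandbox,
    # or cancel it, using the job identifier returned by dg-job-submit
    dg-job-status <job_id>
    dg-job-get-output <job_id>
    dg-job-cancel <job_id>

    # Illustrative (name assumed): list the CEs matching the job's requirements
    dg-job-list-match job.jdl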

29 A Job Submission Example
[Diagram: the user submits a JDL job from the UI; after authentication/authorization, the Resource Broker queries the Information Service and the Data Management Services (LFN -> PFN) to choose a Computing Element; the Job Submission Service sends the job, its input sandbox and the brokerinfo to the CE, which accesses the Storage Element; job status and events are recorded in the Logging & Bookkeeping service, and the output sandbox is returned to the UI.]

30 Data management
- In EDG the technique adopted for optimizing data access and providing fault tolerance is data replication.
- The data management architecture is focused on file replication services, and the main objectives include optimized data access, caching, file replication and file migration. The most important tasks are:
  - management of a universal namespace for files (using replica catalogues)
  - secure and efficient data transfer between sites
  - synchronization of remote copies
  - (optimized) wide-area data access/caching
  - management of meta-data like indices and file meta-data
  - interface to mass storage systems

31 The building blocks of the DM architecture
- The replica catalogue is a fundamental building block in data grids.
  - It addresses the common need to keep track of multiple copies of a single logical file by maintaining a mapping from logical file names to physical locations.
  - It is imported from Globus and based on OpenLDAP v2 with an EDG schema, but is migrating to a more distributed architecture with a relational database as backend.
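Because the Release 1 replica catalogue is an LDAP directory, its content can in principle be listed with a plain LDAP query; a hedged sketch reusing the catalogue URI from the JDL examples above (the exact entry layout and attribute names depend on the Globus/EDG replica catalogue schema and are not shown here):

    # List the entries of the WP2 INFN test replica catalogue used in the JDL examples
    ldapsearch -x -H ldap://sunlab2g.cnaf.infn.it:2010 \
        -b "rc=WP2 INFN Test Replica Catalog,dc=sunlab2g,dc=cnaf,dc=infn,dc=it" \
        "(objectClass=*)"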

32 The building blocks of the DM architecture
- The Replica Manager: its main tasks are to securely and efficiently copy files between two Storage Elements and to update the replica catalogue when the copy process has successfully terminated.
- The File Copier (also called Data Mover) is an efficient and secure file transfer service that has to be available on each Storage Element. Initially this service is based on the GridFTP protocol.
- The Consistency Service has to guarantee the synchronization of file replicas when an update occurs. This service is provided on top of the Replica Manager.
- The first implementation is based on GDMP (Grid Data Mirroring Package), a file replication tool that implements most of the Replica Manager functionality.
- Access to the replicated files shall be optimized by using performance information such as the current network throughput and latency and the load on the Storage Elements. The Replica Optimizer's duty is to select the best replicas.
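Since the File Copier is GridFTP-based, the underlying transfer can be pictured with the standard Globus GridFTP client; a minimal sketch, where both Storage Element hostnames and the file path are invented for illustration:

    # Copy a file from one Storage Element to another over GridFTP,
    # authenticated with the user's GSI proxy (hostnames and paths are placeholders)
    globus-url-copy \
        gsiftp://se1.example.org/flatfiles/cms/test10096-0009 \
        gsiftp://se2.example.org/flatfiles/cms/test10096-0009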

33 The Storage Element
- Data Management in DataGrid has to deal with the heterogeneity of storage systems and thus of Storage Elements (SE). The interface to an SE has to be uniform regardless of the underlying storage technology.
- The SE is the basic storage resource in DataGrid and also defines the smallest granularity for a compound storage system. A Storage Element can either be a large disk pool or a Mass Storage System (MSS) with its own internal disk pool. The current storage system implementations include MSSs like HPSS and Castor as well as distributed file systems like AFS or NFS.

34 Architecture [Diagram]

35 DataGrid Release 1 summary
- UI (User Interface): lightweight component for accessing the workload management system.
- RB (Resource Broker): the core component of the workload management system, able to find a resource matching the user's requirements (including data location).
- LB (Logging and Bookkeeping): repository for events occurring in the lifespan of a job.
- JSS (Job Submission Services): the result of the integration of Condor-G and Globus services to achieve reliable submission of jobs via the Globus GRAM protocol.
- II (Information Index): caching information index (based on Globus MDS-2) directly connected to the RB, to achieve control of the cache times and to prevent blocking failures when accessing the information space.
- Replica Manager / GDMP: to consistently copy (replicate) files from one Storage Element to another and register the replicas.

36 DataGrid Release 1 summary (cont.)
- Replica Catalog: stores information about the physical files on all the Grid Storage Elements. A centralized replica catalog has been chosen for prototype 1.
- MDS (GRIS and information providers): the Grid Information System used by the Resource Broker. The first implementation is based on the Globus MDS system, with resource schema and information providers defined and implemented by DataGrid.
- Authentication and Authorization Services: VO directory configuration and tools to periodically generate authorization lists.
- Automatic installation and configuration management: the Large Scale Linux Configuration (LCFG) tool for very large computing fabrics (CE).

37 EDG Release 1 services [Diagram]

38 Definition of the EDG release (1/2)
- The hardest task of the Integration Team, a working group in charge of integrating the different pieces of software, has been to collect all the EDG software packages together with Globus and Condor, to study their functionality and interdependencies and the requirements for correct operation of the testbed, and to come up with a topology and precise installation and configuration instructions for deployment.
- Grid Elements: in order to achieve these goals it has been necessary to construct a node-centric view of the testbed deployment, specifying the profile of each node type (the grid elements).
- RPM: the EDG release has been packaged as Linux RPMs (Red Hat Package Manager) because of the requirements coming from the installation and distribution tools provided for the farms. In particular, the main Globus subcomponents were packaged in separate RPMs that can be installed and configured independently.
- Each grid element is thus defined as a list of software RPMs and configuration RPMs. During this work it was necessary to specify detailed configuration instructions and requirements that allow these elements to interoperate.

39 Definition of the EDG release (2/2)
- LCFG for automatic installation and configuration: for each of the grid elements an LCFG template provides:
  - the RPM lists of the packages required for that specific element
  - a typical LCFG configuration that needs to be customized for the specific testbed site
  - a specific set of instructions on configuring the grid element
- Once the definition of the profile for a grid element has been optimized, a small test suite is used to verify that all the required functionality is present and working.
- All of the above constitutes the content of the edg-release package available in the CVS repository of the DataGrid Release.
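On an installed grid element, the result of such RPM lists can be checked with the standard RPM tooling; a minimal sketch, where the package name patterns are only assumptions about how the EDG and Globus RPMs are named:

    # List installed packages whose names suggest they come from the EDG or Globus distributions
    rpm -qa | grep -i -e edg -e globus

    # Show the metadata of a single package (querying edg-release by this exact name is an assumption)
    rpm -qi edg-release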

40 Middleware integration
[Diagram: the grid elements (UI, RB, L&B, CE, SE, WN) are built from the EDG services (Grid Scheduler, RM/GDMP, Globus-EDG GIS, Replica Catalogue, II), which are layered on the basic Grid services provided by Condor and Globus.]

41 Testbed 1
- The first prototype of the EDG (European DataGrid) grid infrastructure (Testbed 1) was deployed in December 2001 using the official, tagged EDG software Release 1, based on Globus 2 beta 21.
- Testbed 1 was initially made up of a limited number of grid elements in 5 European countries (CERN, FR, UK, IT, NL), but it is currently being extended to about 30 sites all over Europe, including some other countries such as the Czech Republic, ES, PT, DE and the Nordic countries.
- The common grid elements (User Interfaces, Computing Elements, Worker Nodes and Storage Elements) have been installed and configured at each site, while the grid elements devoted to the central grid services (Resource Brokers, Information Indexes and Logging and Bookkeeping servers) have been set up at CERN and INFN-CNAF (IT). Some other dedicated servers (the Virtual Organization LDAP servers and the VO Replica Catalogues) are hosted at NIKHEF (NL) and INFN-CNAF.

42 The prototype DataGrid testbed
[Map of testbed sites: CERN, NIKHEF, London, RAL, CC-Lyon, Paris, Prague, Barcelona, Madrid, Lisbon, and the Italian sites Catania, Bologna/CNAF, Padova/LNL, Torino, Cagliari, Roma, Milano; with links to the USA and to Russia/Japan.]

43 Validation of Release 1
- It is essential for the success of the project that the developed middleware satisfies the users' initial requirements. For this reason, a software validation phase performed by the user communities, using real applications in a large-scale environment, is crucial within the project.
- The validation activity will be carried out progressively during the whole project lifetime, in order to test each middleware release.
- Short-term use cases have been used at the beginning, and real applications are going to be used later on.

44 DataTAG project
[Map: the European research networks (SURFnet in NL, SuperJANET4 in the UK, GARR-B in IT, GEANT, CERN) linked through New York / STAR-TAP / STAR-LIGHT to the US networks Abilene, ESNET and MREN.]
- Two main focuses:
  - Grid-applied network research: a 2.5 Gbps lambda to Star-Light for network research
  - Interoperability between Grids in the EU and US; US partnership: the iVDGL project
- Main partners: CERN, INFN, UvA (NL), PPARC (UK), INRIA (FR)

45 Conclusions
- The first DataGrid prototype (Release 1) is in place in Europe as a result of the collaboration of all the actors of a distributed computing environment: resource owners, middleware developers, scientific application programmers and scientific application users.
- The testing phase demonstrated the power of the grid, whose basic functionality allows the most appropriate CE-SE pair to be selected by just specifying the job characteristics and the related input/output data in a high-level language.
- The major project release foreseen for September 2002 will introduce important new services such as support for dependent, parallel and partitionable jobs, resource co-allocation, advance reservation and accounting, as well as more efficient information and monitoring services.
- DataGrid documents can be found at http://eu-datagrid.web.cern.ch/

