
CBPF J. Magnin, LAFEX-CBPF

Outline What is the GRID? Why GRID at CBPF? What are our needs? Status of GRID at CBPF

What is the GRID? The Web is a service for sharing information over the Internet; the Grid is a service for sharing computing power and data storage capacity over the Internet. The five big ideas:
- Resource Sharing: direct access to remote software, computers and data.
- Secure Access: access policy, authentication and authorization.
- Resource Use: you should be able to calculate the optimal allocation of resources.
- The Death of Distance: high-speed connections between computers.
- Open Standards: applications made to run on one resource will run on all others.

LCG – Global GRID for High Energy Physics. LCG, the LHC Computing Grid project, was launched in 2002 at CERN. Mission: to integrate thousands of computers worldwide to store and analyze the huge amount of data that will be produced by the LHC. The LHC will produce ~15 Petabytes of data (15x10^6 GB) each year. The data should be available to the thousands of scientists independent of their location. Today LCG involves more than 200 sites in over 30 countries worldwide.
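To put the 15 PB/year figure in perspective, here is a quick back-of-the-envelope check in Python. This is only a sketch added for illustration; the single input is the number quoted on the slide above.

```python
# Sanity-check the LHC data-volume figure quoted on the slide.
PB = 10**15                          # bytes (SI prefix)
annual_bytes = 15 * PB               # ~15 PB of data per year

print(annual_bytes / 10**9)          # 1.5e7 -> 15x10^6 GB, as on the slide
rate = annual_bytes / (365 * 24 * 3600)
print(rate / 10**6)                  # ~476 MB/s sustained, averaged over a year
```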

LCG is part of EGEE (Enabling Grids for E-sciencE).
- In April 2004 it was decided to build a permanent Grid infrastructure for scientific applications in Europe. The work has been carried out by a collaboration led by CERN.
- By the end of 2006, EGEE engineers and scientists were managing ~ CPUs across 39 countries and 5 PB of data storage.
- Six major scientific fields are covered by EGEE: physics, earth observation, climate prediction, petroleum exploration, astronomy and drug discovery.
- From Oct 2004 to Oct 2005, two million jobs were successfully run on this Grid.
EELA is a project related to EGEE.
- Initiated in January 2006, coordinated by CIEMAT (Spain).
- Mission: to bring the e-Infrastructures of Latin American countries to the level of those of Europe.
- Will benefit from the ALICE project and the RedCLARA network.
- Will focus on Grid infrastructure and related e-Science applications, identifying and promoting a sustainable framework for e-Science in Latin America.
ALICE (America Latina Interconectada Com Europa):
- Project set up in 2003 to develop the RedCLARA network.
- 80% funded by the European Commission.
- 19 Latin American and 4 European partners.
CLARA (Cooperación Latino Americana de Redes Avanzadas):
- Initiated in 2003.
- Linked to GÉANT (the European advanced research network).

Why GRID at CBPF? CBPF has two groups participating in large experiments at the LHC at CERN: LHCb and CMS. Both groups require huge computational resources in terms of processing power and data storage. The CBPF computational facility has to be a dedicated resource for LHCb and CMS, but possibly open to other LHC experiments. The CBPF computational facility has to meet all the requirements of the CERN Data GRID.

What are our needs/wishes?
- CERN (Tier-0): production center; distributes RAW data in quasi real time to the Tier-1s.
- Tier-1s: will hold a copy of the RAW data; responsible for all the production and processing phases associated with real data, including (user) data analysis.
- Tier-2s: primarily MC production centers; eventually, in the future, data analysis.
Total CPU requirements for 2008: MSI2k·years (1000 Intel Xeon 3.06 GHz = 6 TF = 1.1 MSI2k), split CERN 7%, Tier-1s 34%, Tier-2s 59%.
Disk requirements for 2008: ~3.3 PB in total: CERN 0.8 PB, Tier-1s 2.4 PB, Tier-2s 0.1 PB. Data rate: 1.1 MB/s year average.
CBPF wants to be a Tier 2.
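Since SI2k units are not self-explanatory, here is a small conversion using only the calibration quoted on the slide (1000 Xeon 3.06 GHz = 1.1 MSI2k). The helper function and the example CPU count are added for illustration only.

```python
# The slide calibrates: 1000 Intel Xeon 3.06 GHz = 6 TF = 1.1 MSI2k,
# i.e. roughly 1.1 kSI2k per CPU of that class.
KSI2K_PER_CPU = 1.1

def capacity_msi2k(n_cpus: int) -> float:
    """Approximate capacity of n Xeon-3.06-GHz-class CPUs in MSI2k."""
    return n_cpus * KSI2K_PER_CPU / 1000.0

# Example: the ~38 CPUs CBPF plans for the near future (see below)
print(capacity_msi2k(38))   # ~0.042 MSI2k
```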

Status of GRID at CBPF
Hardware:
- 11 dual-CPU, dual-core servers, each with a 160 GB hard disk.
- 1 dual-core server with four 320 GB hard disks and two Gigabit network interface cards.
- All CPUs are 64-bit Intel Xeon.
Initial GRID CBPF, with SL (32-bit) installed on all machines:
- 1 Storage Element (SE) server with 1.2 TB of disk
- 1 Monitor (MON)
- 1 Computing Element (CE)
- 9 Worker Nodes (WN)
Total cost, including a 24-port switch, two racks and a 10 kVA UPS: ~120 kR$.
[Diagram: the CE handles job request processing via a Torque server and scheduler and distributes jobs to the Worker Nodes for processing; the SE provides data storage.]
Middleware: CE LCG 2.7, SE gLite 3.0, MON gLite 3.0, WN LCG 2.7.
Status:
- All computers certified.
- Software installed and configured.
- GRID node linked to a 1 Gb/s network (RedeRio).
- Onsite tests done and passed.
- Waiting for EELA tests and approval to be integrated into the EELA Virtual Organization (VO).
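For context, user jobs reach a site like this through the middleware's workload tools. Below is a hypothetical sketch of the user side for the LCG 2.x generation named above: the JDL attributes and the edg-job-submit command are standard for that middleware era, while the script and file names are invented for illustration.

```python
# Hypothetical user-side job submission against an LCG 2.x site (sketch).
# Assumes a grid UI machine with a valid proxy certificate already set up.
import subprocess
import textwrap

# A minimal JDL (Job Description Language) file: what to run, and which
# files travel with the job in each direction.
jdl = textwrap.dedent("""\
    Executable    = "analysis.sh";
    StdOutput     = "std.out";
    StdError      = "std.err";
    InputSandbox  = {"analysis.sh"};
    OutputSandbox = {"std.out", "std.err"};
""")

with open("job.jdl", "w") as f:
    f.write(jdl)

# edg-job-submit was the LCG 2.x submission command; the resource broker
# then schedules the job onto a CE such as the one described above.
subprocess.run(["edg-job-submit", "job.jdl"], check=True)
```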

Near future: New servers will be bought in the very near future, in the next two or three weeks (~ machines). The new servers will be dual-CPU, dual-core, probably Intel Xeon 5050 or better. The whole system will be installed in a definitive location. ~38 CPUs in the near future.

CBPF is a Registration Authority (RA), authorized by the Certification Authority (CA) at UFF. What an RA does: handles user registration; handles registration of computational resources.
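Registration with an RA presupposes that the user has generated a key pair and a certificate request for the CA to sign. As an illustration only (the subject fields, file names and passphrase below are made up, and real CA procedures differ), this is roughly what that step looks like with recent versions of the Python cryptography package:

```python
# Sketch: generate a private key and a certificate signing request (CSR).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 2048-bit RSA key pair for the user.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# CSR with illustrative subject fields; a real CA prescribes these.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, u"BR"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"CBPF"),
        x509.NameAttribute(NameOID.COMMON_NAME, u"Jane Doe"),
    ]))
    .sign(key, hashes.SHA256())
)

# The CSR goes to the CA (via the RA); the encrypted key stays with the user.
with open("usercert_request.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
with open("userkey.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.BestAvailableEncryption(b"change-me"),
    ))
```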

"The world will only need five computers attributed to Thomas J. Watson, IBM "640 kilobytes is all the memory you will ever need" attributed to Bill Gates, Microsoft The end