Construction of Computational Segment at TSU HEPI Erekle Magradze Zurab Modebadze.

Introduction This presentation describes the current situation in Georgia with regard to high performance computing and GRID technologies.

The Institute of High Energy Physics and Informatization of Tbilisi State University (IHEPI TSU) is the leading institution in Georgia for high performance computing and GRID infrastructure development. There are other high performance computing centers in Georgia, but their hardware is not available for scientific use.

The team of IHEPI TSU has rich experience in networking, data management and analysis. Since 1999 IHEPI TSU has operated an Internet Service Provider server that provides internet connectivity for several Georgian organizations.

Current activities In the context of the ISTC (International Science and Technology Center) project “The Search for and Study of Rare Processes Within and Beyond the Standard Model at the ATLAS Experiment of the Large Hadron Collider at CERN”, it is planned to construct a computing cluster (24 CPUs) at the Institute of High Energy Physics and Informatization of Tbilisi State University.

The experimental cluster will be built on the basis of PBS software on the Linux platform, and Ganglia will be used for monitoring. All nodes will be interconnected via Gigabit Ethernet. The required ATLAS software will be installed on the worker nodes in an SLC5 environment. The cluster will be tested with a number of tasks studying various physical processes in top quark physics. Signal and background process generation, fast and full simulation, reconstruction and analysis will be done in the ATHENA framework.
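To illustrate how such a setup is used, a PBS batch job for an ATHENA analysis task might look like the following sketch. The queue name, resource limits, environment setup path, and job options file name are all hypothetical placeholders; the actual values depend on the local cluster configuration.

```shell
#!/bin/bash
#PBS -N athena-top-analysis    # job name (illustrative)
#PBS -l nodes=2:ppn=4          # request 2 worker nodes, 4 CPUs each
#PBS -l walltime=12:00:00      # maximum wall-clock time
#PBS -q atlas                  # queue name is a placeholder
#PBS -j oe                     # merge stdout and stderr into one log

# Set up the ATLAS software environment on the SLC5 worker node;
# the setup script path depends on the local installation.
source /opt/atlas/setup.sh

# PBS starts the job in the home directory; move to the submission directory.
cd "$PBS_O_WORKDIR"

# Run an Athena job; the job options file name is illustrative.
athena.py TopPhysJobOptions.py > athena.log 2>&1
```

The script would be submitted with `qsub job.sh`; PBS schedules it onto free worker nodes, and Ganglia can then be used to watch CPU and network load across the cluster while the job runs.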

FUTURE PLANS The work on Georgian national GRID infrastructure development is currently at a very early phase, and many aspects still need to be determined and clarified. We can list the priority steps that we consider most important for our future work:

Further development of the Georgian national network for science and education within different joint projects, as a base for national GRID segment construction; Training and education of specialists; Creation of a common computing and networking infrastructure on the basis of three Georgian research centers;

Installation of GRID software and other application packages and creation of GRID service centers; Integration of the Georgian national GRID segment into a Caucasian GRID segment and its inclusion in the worldwide GRID infrastructure.

It is also important for Caucasian, and especially Georgian, research centers and scientists to develop and implement hardware and software standards that would make it easier to scale our cluster systems administration effort. This helps focus the scope of the technical expertise we develop and allows the scientists of our region to concentrate on building deeper technical skills. Rather than re-invent the wheel, we leverage the experience of other experts in the HPC community.

Equally important is the use of open source software, such as Linux. In addition to being freely available, it allows us to modify the software to facilitate the integration of various hardware and software components. Moreover, if our changes improve the software, we can sometimes propagate them back into the open source code base so that everyone benefits from our efforts.

THANK YOU FOR YOUR ATTENTION