Easy Deployment of the WRF Model on Heterogeneous PC Systems Braden Ward and Shing Yoh Union, New Jersey.

Bootable Cluster CD (BCCD) Features
- 2.4 Linux kernel
- Available online
- Networked file system used for data storage
- Setup time of 1-2 minutes per PC
- Host OS and file system completely undisturbed upon restart

Using the BCCD
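A typical BCCD session looks roughly like this: boot each PC from the CD, note the address each node receives, list the nodes in a machine file, and launch WRF with mpirun. A minimal sketch follows; the addresses and the path to wrf.exe are placeholders, not values from the talk.

```shell
# List the cluster nodes in a machine file (addresses are hypothetical;
# use the ones your BCCD nodes actually receive at boot).
cat > machines <<'EOF'
192.168.1.10
192.168.1.11
192.168.1.12
192.168.1.13
EOF

# Launch WRF across all 4 nodes (path to wrf.exe is an assumption):
# mpirun -np 4 -machinefile machines ./wrf.exe
```

Because the BCCD runs entirely from the CD and memory, nothing here touches the hard disks of the borrowed PCs.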

3 Groups of PCs Used
- 448 MHz Pentium III desktops with 256 MB of memory (networked via 100 Mbps switches)
- 2.8 GHz Pentium 4 desktops with 2.0 GB of memory (networked via 100 Mbps switches)
- 16-node Linux cluster: 3.0 GHz Pentium 4 processors with 1.0 GB of memory each, linked by 1 Gbps switches

Supercell Ideal Case
- em_quarter_ss case from the WRF distribution
- 2-hour forecast
- Grid resolution of 2 km (100 x 100 x 40 grid points)

[Chart: Supercell Ideal Case runtime (minutes) vs. number of processors]

Real Case
- 12-hour forecast beginning 00 UTC 11 February 2006
- Grid resolution of 15 km (100 x 100 x 31 grid points)
- Initial and boundary conditions generated from the 00Z NAM run from NCEP using WRFSI
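The real-case domain and forecast length above map directly onto WRF's namelist.input. The fragment below is an illustrative sketch only; exact option names and required entries vary by WRF version, and anything not stated on the slide (e.g. time step) is omitted.

```
&time_control
 run_hours  = 12,
 start_year = 2006, start_month = 02,
 start_day  = 11,   start_hour  = 00,
/

&domains
 e_we   = 100,            ! west-east grid points
 e_sn   = 100,            ! south-north grid points
 e_vert = 31,             ! vertical levels
 dx     = 15000,          ! grid spacing in meters (15 km)
 dy     = 15000,
/
```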

[Chart: Real Case runtime (minutes) vs. number of processors]

[Chart: Supercell Ideal Case runtime (minutes) vs. number of processors]

[Chart: Real Case runtime (minutes) vs. number of processors]

Results
- A Linux cluster is easily created from ordinary PCs using the BCCD
- Real-case runtimes decreased by a factor of 2-4 on clusters of 4-8 PCs
- Useful for students and operational forecasters with limited or dated resources who want to run a locally configured WRF Model
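The reported improvement can be read as a parallel speedup: serial runtime divided by parallel runtime. The numbers below are hypothetical placeholders, not the measured values from the charts.

```shell
T1=40   # hypothetical runtime on 1 PC (minutes)
T8=10   # hypothetical runtime on 8 PCs (minutes)
echo "speedup on 8 PCs: $((T1 / T8))x"   # prints "speedup on 8 PCs: 4x"
```

A speedup of 4 on 8 PCs corresponds to a parallel efficiency of 50%, which is plausible for a 100 Mbps network where communication cost grows with processor count.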

Acknowledgments
- Dr. George Chang of the Computer Science Department and Computational Science Group at Kean University, for building the Linux cluster and providing technical support
- Dr. Paul Gray of the University of Northern Iowa, for developing the BCCD and making the software freely available
- The project is supported by the Kean University Office of Research and Sponsored Programs and the Department of Geology and Meteorology