NeSC Workshop, July 20, 2002
Simulation of Chemical Reactor Performance: A Grid-Enabled Application
Kenneth A. Bishop, Li Cheng, Karen D. Camarda, The University of Kansas

Presentation transcript:


Presentation Organization
  Application Background
    Chemical Reactor Performance Evaluation
  Grid Assets In Play
    Hardware Assets
    Software Assets
  Contemporary Research
    NCSA Chemical Engineering Portal Application
    Cactus Environment Application

Chemical Reactor Description
  V2O5 catalyst packed in tubes; coolant: molten salt
  Feed: o-xylene : air mixture; Product: phthalic anhydride
  Reaction Conditions: temperature 640-770 K; pressure 2 atm

Simulator Capabilities
  Reaction Mechanism: heterogeneous or pseudo-homogeneous
  Reaction Path: three-species or five-species paths
  Flow Phenomena: diffusive vs bulk, and radial vs axial
  Excitation: composition and/or temperature
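The diffusive-vs-bulk and radial-vs-axial terms listed above correspond to the standard pseudo-homogeneous two-dimensional fixed-bed reactor balances. The form below is the conventional textbook version, given as a sketch; the symbols are generic and not taken from the talk itself:

```latex
% Steady-state pseudo-homogeneous 2-D fixed-bed balances
% (standard textbook form; symbols conventional, not from the talk)
u_s \frac{\partial C}{\partial z}
  = D_{er}\!\left(\frac{\partial^2 C}{\partial r^2}
      + \frac{1}{r}\frac{\partial C}{\partial r}\right)
  - \rho_B\, r_A(C,T)

u_s \rho_g c_p \frac{\partial T}{\partial z}
  = \lambda_{er}\!\left(\frac{\partial^2 T}{\partial r^2}
      + \frac{1}{r}\frac{\partial T}{\partial r}\right)
  + (-\Delta H)\, \rho_B\, r_A(C,T)
```

Here the axial terms carry the bulk flow, the bracketed radial terms carry the diffusive transport, and r_A(C, T) is the rate expression selected by the mechanism and reaction-path options above.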

Chemical Reactor Start-up
  [Figure: temperature surface over tube axial position (entrance to exit) and radius (center to wall)]
  Initial Condition: nitrogen feed; feed temp. 640 K; coolant temp. 640 K
  Final Condition: 1% ortho-xylene feed; feed temp. 683 K; coolant temp. 683 K

Reactor Start-up: t = 60+
  [Figure: low-to-high contour maps of temperature, ortho-xylene, phthalic anhydride, tolualdehyde, phthalide, and COx]

Reactor Start-up: t =
  [Figure: low-to-high contour maps of temperature, ortho-xylene, phthalic anhydride, tolualdehyde, phthalide, and COx]

Grid Assets In Play - Hardware
  The University of Kansas
    JADE: O2K [6], 250 MHz R10000, 512 MB RAM
    PILTDOWN: Indy [1], 175 MHz R4400, 64 MB RAM
    Linux workstations; Windows workstations
  University of Illinois (NCSA)
    MODI4: O2K [48], 195 MHz R10000, 12 GB RAM
    Linux clusters: IA32 [968] and IA64 [256]
  Boston University
    LEGO: O2K [32], 195 MHz R10000, 8 GB RAM

Grid Assets In Play - Software
  The University of Kansas
    IRIX 6.5: Globus 2.0 (host); COG [Java] (client); Cactus
    Linux: Globus 2.0 (host); COG [Java] (client); Cactus
    Windows 2K: COG (client); Cactus
  University of Illinois (NCSA)
    IRIX 6.5: Globus 2.0 (host); COG (client); Cactus
    Linux: Cactus
  Boston University
    IRIX 6.5: Globus (host); COG (client); Cactus

Research Projects
  Problem Complexity: Initial (Target)
    Pseudo-homogeneous (heterogeneous) kinetics
    Temperature and feed composition excitation
    1,500 (70,000) grid nodes and 200 (1,000) time steps
  Alliance Chemical Engineering Portal; Li Cheng
    Thrust: distributed computation assets
    Infrastructure: method of lines, XCAT Portal, DDASSL
  Cactus Environment; Karen Camarda
    Thrust: parallel computation algorithms
    Infrastructure: Crank-Nicolson, Cactus, PETSc
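The method-of-lines infrastructure named for the portal project can be sketched as follows. This is a minimal stand-in, assuming a simplified 1-D convection-reaction model and SciPy's BDF integrator in place of DDASSL; all parameter values are illustrative, not the o-xylene kinetics used in the actual simulator:

```python
# Method-of-lines sketch (hypothetical stand-in for the DDASSL setup):
# discretise the axial coordinate of a simplified 1-D reactor model and
# hand the resulting ODE system to a stiff integrator.
import numpy as np
from scipy.integrate import solve_ivp

N = 50                 # axial grid nodes
L = 1.0                # reactor length (dimensionless)
dz = L / (N - 1)
u = 1.0                # superficial velocity (dimensionless)
k = 5.0                # first-order rate constant (illustrative)

def rhs(t, c):
    """dc/dt at each node: upwind convection plus first-order consumption."""
    dcdt = np.empty_like(c)
    dcdt[0] = 0.0                      # inlet node held at feed composition
    dcdt[1:] = -u * (c[1:] - c[:-1]) / dz - k * c[1:]
    return dcdt

c0 = np.zeros(N)
c0[0] = 1.0                            # feed concentration at the inlet
sol = solve_ivp(rhs, (0.0, 5.0), c0, method="BDF")  # stiff solver, as DDASSL is
outlet = sol.y[-1, -1]                 # exit concentration at the final time
```

The full application applies the same pattern in two dimensions, which is where distributing the derivative evaluations across grid assets becomes attractive.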

ChE Portal Project Plan
  Grid Asset Deployment
    Client: KU; Host: KU, NCSA, or BU
  Grid Services Used
    Globus Resource Allocation Manager
    GridFTP
  Computation Distribution (File Transfer Load)
    Direct-to-host job submission (null)
    Client: job submission; Host: simulation (negligible)
    Client: simulation; Host: ODE solver (light)
    Client: solver; Host: derivative evaluation (heavy)
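The distribution modes above map onto the standard Globus 2.0 command-line clients, globus-url-copy (GridFTP) and globus-job-run (GRAM). The sketch below only constructs the commands; the host name and program paths are hypothetical, not the talk's actual deployment:

```python
# Hedged sketch of the file-transfer-as-message-passing pattern benchmarked
# in this project: stage a file via GridFTP, then submit the remote step
# through GRAM.  Host and paths below are hypothetical examples.
def stage_file_cmd(local_path, host, remote_path):
    """GridFTP transfer from the client to the host."""
    return ["globus-url-copy",
            f"file://{local_path}",
            f"gsiftp://{host}{remote_path}"]

def submit_cmd(host, executable, *args):
    """Direct-to-host job submission via GRAM."""
    return ["globus-job-run", host, executable, *args]

# Hypothetical host and paths, for illustration only:
stage = stage_file_cmd("/tmp/state.dat", "modi4.ncsa.uiuc.edu", "/scratch/state.dat")
run = submit_cmd("modi4.ncsa.uiuc.edu", "/usr/local/bin/reactor-sim", "state.dat")
```

In the heavy-load mode, a pair of transfers like these brackets every derivative evaluation, which is what makes the file-transfer cost dominate the run times reported next.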

ChE Portal Project Results
  Run Times (Wall-Clock Minutes)

    Load \ Host   PILTDOWN     JADE    MODI4
    Null          Negligible   NA
    Light         NA
    Heavy         2540*        NA      15.00**

  * 211,121 derivative evaluations
  ** Exceeded interactive queue limit after 3 time steps (10,362 derivative evaluations)

ChE Portal Project Conclusions
  Conclusions
    The cost of the benefits gained from grid-enabled assets appears negligible.
    The portal provides robust mechanisms for managing grid-distributed computations.
    The cost of standard file-transfer procedures used as a message-passing mechanism is extremely high.
  Recommendation
    A high priority must be assigned to developing high-performance alternatives to standard file transfer protocols.

Cactus Project Plan
  Grid Asset Deployment
    Client: KU; Host: NCSA (O2K, IA32 cluster, IA64 cluster)
  Grid Services Used: MPICH-G
  Cactus Environment Evaluation
    Shared memory vs message passing
    Problem size: 5x10^5 to 1x10^8 algebraic equations
    Grid assets: 0.5-8.0 O2K processor minutes; 0.1-4.0 IA32 cluster processor minutes
    Application script use
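The Crank-Nicolson stepping used in this project can be illustrated on a toy 1-D diffusion problem; the real application applies it to the 2-D reactor equations through Cactus and PETSc. This is a serial sketch with illustrative parameters, not the talk's parallel implementation:

```python
import numpy as np

# Crank-Nicolson sketch for u_t = D u_xx on [0, 1] with u = 0 at both ends.
N, D, dt, dx = 21, 1.0, 1e-3, 1.0 / 20
r = D * dt / (2 * dx**2)

# Interior-point system: (I - r A) u^{n+1} = (I + r A) u^n,
# where A is the tridiagonal second-difference matrix.
main = -2 * np.ones(N - 2)
off = np.ones(N - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
I = np.eye(N - 2)
LHS = I - r * A
RHS = I + r * A

x = np.linspace(0.0, 1.0, N)
u = np.sin(np.pi * x)                  # initial profile, zero at both ends
for _ in range(100):                   # advance to t = 0.1
    u[1:-1] = np.linalg.solve(LHS, RHS @ u[1:-1])

# The exact solution decays as exp(-pi^2 * D * t)
expected = np.exp(-np.pi**2 * 0.1)
```

The scheme is implicit, so each time step solves a linear system; in the Cactus/PETSc setting that solve is what gets distributed across processors, and it is where the reported shared-memory vs message-passing comparison bites.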

NeSC Workshop July 20, 2002 Cactus Project Results

Cactus Project Conclusions
  Conclusions
    The IA32 cluster outperforms the O2K on the small problems run to date (the IA32 is faster than the O2K, and its speedup exceeds the O2K's).
    The cluster computations appear somewhat fragile (convergence problems above a 28-node configuration; similar (?) problems with the IA64 cluster).
    The grid service (MPICH-G) evaluation has only begun.
  Recommendations
    Continue the planned evaluation of grid services.
    Continue the planned IA64 cluster evaluation.

Overall Conclusions
  The University of Kansas is actively developing the grid-enabled computation culture appropriate to its research and teaching missions.
  Local computation assets suited to topical application development and use are necessary.
  Understanding of, and access to, grid-enabled assets are necessary.