CMRS Review, PPPL, 5 June 2003


CMRS Review, PPPL, 5 June 2003

PDEs are dense in the SciDAC portfolio:
- 4 projects in high energy and nuclear physics
- 5 projects in fusion energy science
- 14 projects in biological and environmental research
- 10 projects in basic energy sciences
plus the ISICs (Integrated Software Infrastructure Centers)

“The partial differential equation entered theoretical physics as a handmaid, but has gradually become mistress.” – A. Einstein

CMRS Review, PPPL, 5 June 2003
Scope for TOPS
- Design and implementation of “solvers” for PDE-derived systems:
  - Linear solvers
  - Eigensolvers
  - Nonlinear solvers
  - Time integrators
  - Optimizers
- Software integration
- Performance optimization

[Diagram: dependences among the solver components (optimizer, sensitivity analyzer, time integrator with sensitivity analysis, nonlinear solver, eigensolver, linear solver); arrows indicate dependence]

CMRS Review, PPPL, 5 June 2003
Status of the CMRS-TOPS collaboration
- The CMRS team has provided TOPS with a discretization of their model 2D multicomponent Hall magnetic reconnection evolution code in PETSc’s DA/DMMG format, using automatic differentiation for Jacobian objects
- TOPS has implemented a fully nonlinearly implicit Newton-GMRES-MG-SOR parallel solver (with deflation of the nullspace in CMRS’s doubly periodic formulation)
- Both first- and second-order implicit temporal integration are available
- CMRS and TOPS reproduce the same dynamics on the same grids with the same time-stepping, up to a finite-time singularity due to collapse of the current sheet (which falls below the presently uniform mesh resolution)
- The TOPS code, being implicit, can choose timesteps an order of magnitude larger, with potential for a higher ratio in more physically realistic parameter regimes, though it is slower in wall-clock time at small CFL
- Plan: tune the PETSc solver by profiling, blocking, reuse, etc.
- Plan: identify the numerical complexity benefits of implicitness (in suppressing fast timescales) and quantify them (explicit versus implicit)
- Plan (with the APDEC team): incorporate AMR
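The claim that an implicit code can take timesteps an order of magnitude beyond the explicit stability limit can be illustrated on a toy problem. The following is a minimal Python sketch, not the CMRS/PETSc code: for 1D diffusion, forward Euler is stable only for dt below dx²/2, while backward Euler remains stable at a dt ten times larger.

```python
# Toy illustration (not the CMRS/PETSc code): 1D heat equation u_t = u_xx
# on [0,1] with u=0 at the ends. Forward Euler is stable only for
# dt <= dx^2/2; backward Euler is unconditionally stable, so it can take
# timesteps an order of magnitude larger, as on the slide.
import math

def step_explicit(u, dt, dx):
    """One forward-Euler step of u_t = u_xx (interior points only)."""
    n = len(u)
    r = dt / dx**2
    return [0.0] + [u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
                    for i in range(1, n - 1)] + [0.0]

def step_implicit(u, dt, dx):
    """One backward-Euler step: solve (I - dt*L) u_new = u with the
    Thomas algorithm for the tridiagonal system."""
    n = len(u)
    r = dt / dx**2
    a = [-r] * (n - 2)           # sub-diagonal
    b = [1 + 2*r] * (n - 2)      # diagonal
    c = [-r] * (n - 2)           # super-diagonal
    d = u[1:n-1]                 # right-hand side (interior values)
    for i in range(1, n - 2):    # forward elimination
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        d[i] -= m * d[i-1]
    x = [0.0] * (n - 2)          # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 4, -1, -1):
        x[i] = (d[i] - c[i] * x[i+1]) / b[i]
    return [0.0] + x + [0.0]

if __name__ == "__main__":
    n = 21
    dx = 1.0 / (n - 1)
    u0 = [math.sin(math.pi * i * dx) for i in range(n)]
    u0[n // 2] += 1e-6           # tiny perturbation to seed the unstable mode
    dt_cfl = 0.5 * dx**2         # explicit stability limit
    dt_big = 10 * dt_cfl         # 10x beyond it, like the implicit code

    ue, ui = list(u0), list(u0)
    for _ in range(50):
        ue = step_explicit(ue, dt_big, dx)
        ui = step_implicit(ui, dt_big, dx)
    print("explicit max |u|:", max(abs(v) for v in ue))  # grows without bound
    print("implicit max |u|:", max(abs(v) for v in ui))  # decays smoothly
```

The tradeoff on the slide is visible here in miniature: the implicit step costs a linear solve, but buys freedom from the fast-timescale stability restriction.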

CMRS Review, PPPL, 5 June 2003
2D Hall MHD sawtooth instability (PETSc examples /snes/ex29.c and /sles/ex31.c)
[Figures: equilibrium and model equations (Porcelli et al., 1993, 1999); vorticity at early and later times, with zoom; figures c/o A. Bhattacharjee, CMRS]

CMRS Review, PPPL, 5 June 2003
PETSc’s DMMG in the Hall MR application
- Mesh and time refinement studies of the CMRS Hall magnetic reconnection model problem (4 mesh sizes; dt=0.1, near the CFL limit for the fastest wave, on the left; dt=0.8 on the right)
- Plotted: a functional measure inverse to the thickness of the current sheet versus time, for 0<t<200 (nondimensional); the singularity occurs around t=215
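A mesh refinement study of the kind shown on this slide checks that a computed quantity converges as the grid is refined. As a minimal Python sketch, unrelated to the reconnection code itself, the second-order centered-difference Laplacian of a known function shows its error shrinking by roughly a factor of 4 per mesh halving, which is the signature one looks for in such a study.

```python
# Minimal refinement-study sketch (not the CMRS problem): apply the
# second-order centered difference u'' ~ (u[i-1] - 2u[i] + u[i+1])/dx^2
# to u(x) = sin(x) and watch the max error drop ~4x per mesh doubling.
import math

def laplacian_error(n):
    """Max error of the centered second difference of sin on [0, pi]."""
    dx = math.pi / n
    err = 0.0
    for i in range(1, n):
        x = i * dx
        approx = (math.sin(x - dx) - 2*math.sin(x) + math.sin(x + dx)) / dx**2
        err = max(err, abs(approx - (-math.sin(x))))
    return err

if __name__ == "__main__":
    prev = None
    for n in (24, 48, 96, 192):   # successive doublings, as in the 192x192 study
        e = laplacian_error(n)
        ratio = (prev / e) if prev else float("nan")
        print(f"n={n:4d}  max error={e:.3e}  ratio={ratio:.2f}")
        prev = e
```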

CMRS Review, PPPL, 5 June 2003
PETSc’s DMMG in the Hall MR application, cont.
- Implicit timestep increase studies of the CMRS Hall magnetic reconnection model problem, on the finest (192 × 192) mesh of the previous slide, plotted in absolute magnitude rather than on a semi-log scale

CMRS Review, PPPL, 5 June 2003
Newton-Krylov-Schwarz: a parallel PDE “workhorse”
- Newton: nonlinear solver, asymptotically quadratic
- Krylov: accelerator, spectrally adaptive
- Schwarz: preconditioner, parallelizable
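“Asymptotically quadratic” means that, once close to the solution, Newton roughly squares the error each iteration, so the number of correct digits doubles per step. A minimal scalar Python sketch, standing in for the full Newton-Krylov-Schwarz machinery, makes this visible:

```python
# Scalar stand-in for the Newton piece of Newton-Krylov-Schwarz:
# solve f(x) = x^3 - 2 = 0. Near the root the error is squared each
# iteration (digits of accuracy roughly double): quadratic convergence.

def newton(f, fprime, x0, iters):
    """Plain Newton iteration, returning the full iterate history."""
    xs = [x0]
    for _ in range(iters):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

if __name__ == "__main__":
    root = 2 ** (1.0 / 3.0)
    xs = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.5, 6)
    for k, x in enumerate(xs):
        print(f"iter {k}: error = {abs(x - root):.2e}")
```

In the PDE setting, each Newton step requires a (large, sparse) linear solve; that is where the Krylov accelerator and Schwarz preconditioner of the slide enter.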

CMRS Review, PPPL, 5 June 2003
CMRS/PETSc library interactions
[Diagram: the PETSc main routine drives the timestepping (TS), nonlinear (SNES), and linear (SLES, with KSP and PC) solver layers; the CMRS application code supplies initialization, function evaluation, Jacobian evaluation (via ADIC-generated code), and post-processing]
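The division of labor in the diagram, where the library owns the solve loop while the application registers residual and Jacobian callbacks, can be sketched in a few lines of Python. The class and method names below are hypothetical; the real interface is PETSc's SNESSetFunction/SNESSetJacobian in C.

```python
# Sketch of the callback structure in the diagram (hypothetical names;
# the real API is PETSc's SNESSetFunction / SNESSetJacobian in C). The
# "library" owns the Newton loop; the "application" supplies residual
# and Jacobian, mirroring the CMRS code's function/Jacobian boxes.
import math

class TinySNES:
    """Toy nonlinear solver driving user-supplied callbacks, PETSc-style."""

    def __init__(self):
        self._residual = None
        self._jacobian = None

    def set_function(self, f):      # cf. SNESSetFunction
        self._residual = f

    def set_jacobian(self, j):      # cf. SNESSetJacobian
        self._jacobian = j

    def solve(self, x0, tol=1e-12, max_it=50):
        x = x0
        for _ in range(max_it):
            r = self._residual(x)
            if abs(r) < tol:
                break
            x -= r / self._jacobian(x)   # scalar stand-in for the linear solve
        return x

if __name__ == "__main__":
    # "Application code": find x with x*exp(x) = 1.
    snes = TinySNES()
    snes.set_function(lambda x: x * math.exp(x) - 1.0)
    snes.set_jacobian(lambda x: math.exp(x) * (1.0 + x))
    x = snes.solve(0.5)
    print(f"x = {x:.6f}, residual = {x * math.exp(x) - 1:.1e}")
```

This inversion of control is what lets the same solver stack serve many applications: the CMRS code changes only the registered callbacks, not the solver loop. In the actual collaboration the Jacobian callback is generated by ADIC rather than written by hand.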