The CorridorOne Project (Argonne, Berkeley, Illinois, Los Alamos, Princeton, Utah): Corridor One, An Integrated Distance Visualization Environment for SSI and ASCI Applications

Presentation transcript:

Corridor One: An Integrated Distance Visualization Environment for SSI and ASCI Applications
Startup Thoughts and Plans
Rick Stevens, Argonne/Chicago
Participants: Argonne, Berkeley, Illinois, Los Alamos, Princeton, Utah

CorridorOne: An Overview
– The Team
– Our Goals
– Applications Targets
– Visualization Technologies
– Middleware Technology
– Our Testbed
– Campaigns
– Timetable and First Year Milestones

The Team
– Rick Stevens, Argonne National Laboratory
– Maxine Brown, University of Illinois at Chicago
– Tom DeFanti, University of Illinois at Chicago
– Adam Finkelstein, Princeton University
– Thomas Funkhouser, Princeton University
– Chuck Hansen, University of Utah
– Andy Johnson, University of Illinois at Chicago
– Chris Johnson, University of Utah
– Jason Leigh, University of Illinois at Chicago
– Kai Li, Princeton University
– Dan Sandin, University of Illinois at Chicago
– Jim Ahrens, Los Alamos National Laboratory
– Deb Agarwal, Lawrence Berkeley National Laboratory
– Terrence Disz, Argonne National Laboratory
– Ian Foster, Argonne National Laboratory
– Nancy Johnston, Lawrence Berkeley National Laboratory
– Stephen Lau, Lawrence Berkeley National Laboratory
– Bob Lucas, Lawrence Berkeley National Laboratory
– Mike Papka, Argonne National Laboratory
– John Reynders, Los Alamos National Laboratory
– Bill Tang, Princeton Plasma Physics Laboratory

Our Goals
– Grid Middleware and Advanced Networking
– Distributed Visualization and Data Manipulation Techniques
– Distributed Collaboration and Display Technologies
– Systems Architecture, Software Frameworks and Tool Integration
– Application Liaison, Experimental Design and Evaluation

Distributed Data and Visualization Corridor
(Figure: possible WAN interconnection points)

Applications Targets
ASCI and SSI applications drivers:
– Climate Modeling (LANL)
– Combustion Simulation (LBNL and ANL)
– Plasma Science (Princeton)
– Neutron Transport Code (LANL)
– Center for Astrophysical Flashes (ANL)
– Center for Accidental Fires and Explosions (Utah)
– Accelerator Modeling (LANL)

Climate Modeling: Massive Data Sizes and Time Series
POP ocean model:
– 3000 x 4000 x 100 cells per timestep, thousands of timesteps
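
To put these sizes in perspective, a rough estimate (assuming one single-precision value per cell, a simplification since the actual POP output carries several variables per cell):

$$3000 \times 4000 \times 100 = 1.2\times10^{9}\ \text{cells per timestep}, \qquad 1.2\times10^{9} \times 4\ \text{bytes} \approx 4.8\ \text{GB per variable per timestep},$$

so a thousand-timestep series approaches 5 TB for even a single field.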

Combustion Modeling: Adaptive Mesh Refinement
– Data is irregular, not given on a simple lattice
– Data is inherently hierarchical

Problem Description: Particle-in-Cell Simulation of Plasma Turbulence (PPPL)
– A key issue for fusion is confinement of high-temperature plasmas by magnetic fields in 3D geometry (e.g., a donut-shaped torus)
– Pressure gradients drive instabilities, producing loss of confinement through turbulent transport
– Plasma turbulence is a nonlinear, chaotic, 5-D problem
– Particle-in-cell simulation: the distribution function is solved by the method of characteristics; the perturbed field is solved via a Poisson equation
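
As a schematic illustration of the particle-in-cell idea (the actual gyrokinetic, 5-D formulation used at PPPL is considerably more elaborate), particles are advanced along characteristics while the perturbed potential is obtained from a grid-based field solve:

$$\frac{d\mathbf{x}_j}{dt} = \mathbf{v}_j, \qquad \frac{d\mathbf{v}_j}{dt} = \frac{q}{m}\left(-\nabla\phi + \mathbf{v}_j \times \mathbf{B}\right), \qquad \nabla^{2}\phi = -\frac{\rho}{\varepsilon_0}.$$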

Gyrokinetic Turbulence Simulations on New MPPs (Science 281, 1835, 1998)
– Turbulence reduction via sheared plasma flow, compared to the case with flow suppressed
– Results obtained using the full MPP capability of the Cray T3E supercomputer at NERSC
(Figure: turbulence with flow vs. without flow)

MC++: Monte Carlo Neutronics
– Neutronics simulation of a multi-material shell
– Runs on all ASCI platforms
– Arbitrary number of particles

What Is the FLASH Problem?
To simulate matter accumulation on the surface of compact stars, nuclear ignition of the accumulated (and possibly stellar) material, and the subsequent evolution of the star's interior, surface, and exterior:
– X-ray bursts (on neutron star surfaces)
– Novae (on white dwarf surfaces)
– Type Ia supernovae (in white dwarf interiors)

The CorridorOne Project Argonne  Berkeley  Illinois  Los Alamos  Princeton  Utah Neutron star surface X-ray Burst

Paramesh Data Structures
– Iris Explorer
– Isosurfaces, volume visualization, animations
– Hundreds of timesteps
– Resolution moving to billion-zone computations

The CorridorOne Project Argonne  Berkeley  Illinois  Los Alamos  Princeton  Utah Center for Accidental Fires and Explosions

Uintah (diagram elements)
– Simulation runs, datasets, visualizations
– Software versions, configuration parameters, computing resources
– Hypotheses, interpretations, assumptions, insight
– Fire spread, container dynamics, HE materials

The CorridorOne Project Argonne  Berkeley  Illinois  Los Alamos  Princeton  Utah C-SAFE Uintah PSE

Distributed/Parallel Uintah PSE
– Computed on remote resources, viewed locally
– Main Uintah PSE window on the local machine

Accelerator Simulations
Accelerator model:
– 300 million to 2 billion particles per timestep, thousands of timesteps
– Phase space and electromagnetic fields
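
A rough size estimate, assuming six phase-space coordinates per particle stored in double precision (the slides do not specify the particle record layout):

$$2\times10^{9}\ \text{particles} \times 6 \times 8\ \text{bytes} \approx 96\ \text{GB per timestep},$$

so even a sparse sampling of the thousands of timesteps is a multi-terabyte dataset.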

Distributed Visualization Technologies
– Remote and Distributed Rendering
– Protocols for Remote Visualization
– Progressive Refinement
– Deep Images and Image-Based Rendering
– Compression for Visualization Streams
– Remote Immersive Visualization
– Data Organization for Fast Remote Navigation
– High-end Collaborative Visualization Environments
– Collaborative Dataset Exploration and Analysis
– User Interfaces and Computational Steering
– Distributed Network-Attached Framebuffers
– Integration with Existing Tools

CorridorOne
– Data Servers
– Analysis and Manipulation Engines
– Visualization Backend Servers
– Visualization Clients
– Display Device Interfaces
– Advanced Networking Services
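
To make the component chain concrete, here is a purely illustrative C++ sketch of how such a pipeline could be composed; the interface and function names are hypothetical and are not taken from the CorridorOne software.

```cpp
// Illustrative sketch only: hypothetical interfaces showing how the
// CorridorOne component chain might be composed.
#include <string>
#include <vector>

using Bytes = std::vector<unsigned char>;

struct DataServer       { virtual Bytes fetch(const std::string& query) = 0; virtual ~DataServer() = default; };
struct AnalysisEngine   { virtual Bytes process(const Bytes& raw) = 0;       virtual ~AnalysisEngine() = default; };
struct VisBackendServer { virtual Bytes render(const Bytes& derived) = 0;    virtual ~VisBackendServer() = default; };
struct DisplayInterface { virtual void present(const Bytes& image) = 0;      virtual ~DisplayInterface() = default; };

// A visualization client pulls data through the chain and pushes the
// resulting image (or geometry) to a display device interface.
void visualizationClient(DataServer& ds, AnalysisEngine& ae,
                         VisBackendServer& vs, DisplayInterface& di,
                         const std::string& query) {
    Bytes raw     = ds.fetch(query);     // remote dataset access
    Bytes derived = ae.process(raw);     // e.g. feature extraction, subsetting
    Bytes image   = vs.render(derived);  // remote or distributed rendering
    di.present(image);                   // tiled wall, CAVE, desktop, ...
}
```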

Protocols for Remote and Distributed Visualization
Pipeline stages: database retrieval → geometry processing → rasterization → display, with the cut points between stages carrying high-level primitives, 3-D primitives, 2-D primitives, or pixels
Distributed scientific visualization:
– Passing data via messaging: serialization of vtk data structures using C++ streams (structured points, grids, unstructured grids, graphics)
– Passing control via messaging: update protocol
– Model-based remote graphics
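
The "passing data via messaging" point can be illustrated with a minimal sketch of stream-based serialization for a structured-points dataset. The StructuredPoints struct and function names below are hypothetical, chosen for illustration; they are not the vtk API.

```cpp
// Hypothetical stream serialization of a structured-points dataset
// (illustrative only; not the actual vtk interface).
#include <cstdint>
#include <iostream>
#include <vector>

struct StructuredPoints {
    int32_t dims[3];            // grid dimensions
    double  origin[3];          // physical origin
    double  spacing[3];         // cell spacing
    std::vector<float> scalars; // one scalar value per grid point
};

// Write the dataset to any std::ostream (file, socket streambuf, ...).
void serialize(std::ostream& out, const StructuredPoints& sp) {
    out.write(reinterpret_cast<const char*>(sp.dims),    sizeof sp.dims);
    out.write(reinterpret_cast<const char*>(sp.origin),  sizeof sp.origin);
    out.write(reinterpret_cast<const char*>(sp.spacing), sizeof sp.spacing);
    const uint64_t n = sp.scalars.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(sp.scalars.data()),
              static_cast<std::streamsize>(n * sizeof(float)));
}

// Read it back on the receiving end of the message channel.
StructuredPoints deserialize(std::istream& in) {
    StructuredPoints sp;
    in.read(reinterpret_cast<char*>(sp.dims),    sizeof sp.dims);
    in.read(reinterpret_cast<char*>(sp.origin),  sizeof sp.origin);
    in.read(reinterpret_cast<char*>(sp.spacing), sizeof sp.spacing);
    uint64_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    sp.scalars.resize(n);
    in.read(reinterpret_cast<char*>(sp.scalars.data()),
            static_cast<std::streamsize>(n * sizeof(float)));
    return sp;
}
```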

Example: Parallel Isosurface Extraction and Serial Rendering on a Linux Cluster

Progressive Refinement and Multi-resolution Techniques: Example Application
Particle accelerator density fields:
– Wavelet-based representation of structured grids
– Isosurface visualization with vtk
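
The slides do not specify which wavelet basis the representation uses; as a sketch of the general idea, one level of a 1-D Haar decomposition (applied row by row along one grid axis) splits the data into a coarse approximation that can be shipped first and detail coefficients that refine it later:

```cpp
// Illustrative single-level 1-D Haar decomposition; the actual wavelet
// scheme used in CorridorOne is not specified in the slides.
#include <cstddef>
#include <vector>

// Split `row` (even length) into coarse averages and detail coefficients.
void haarStep(const std::vector<float>& row,
              std::vector<float>& coarse,
              std::vector<float>& detail) {
    const std::size_t half = row.size() / 2;
    coarse.resize(half);
    detail.resize(half);
    for (std::size_t i = 0; i < half; ++i) {
        coarse[i] = 0.5f * (row[2 * i] + row[2 * i + 1]); // low-pass average
        detail[i] = 0.5f * (row[2 * i] - row[2 * i + 1]); // high-pass detail
    }
}

// Client-side reconstruction: exact once the detail coefficients arrive,
// a usable low-resolution approximation before then (detail treated as zero).
void haarInverse(const std::vector<float>& coarse,
                 const std::vector<float>& detail,
                 std::vector<float>& row) {
    row.resize(2 * coarse.size());
    for (std::size_t i = 0; i < coarse.size(); ++i) {
        row[2 * i]     = coarse[i] + detail[i];
        row[2 * i + 1] = coarse[i] - detail[i];
    }
}
```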

Multiresolution Display Development
– High-resolution inset image over a background image
– Match the display to the human visual system: most cones lie in the roughly 5° foveal spot
– Optimal use of rendering power: resolution where you need it
– Match the display to the data: resolution where the data is

Remote Volume Rendering
– "True" 3D presentation of 3D data
– Blending of user-defined color and opacity
– Reveals subtle details and structure in the data that could be lost in isosurface rendering
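
The blending referred to here is, in its standard front-to-back form (a textbook volume-rendering relation, not something specific to CorridorOne), the accumulation along each viewing ray of the user-assigned sample colors $C_i$ and opacities $\alpha_i$:

$$C \leftarrow C + (1-\alpha)\,\alpha_i C_i, \qquad \alpha \leftarrow \alpha + (1-\alpha)\,\alpha_i .$$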

Remote Visualization Using Image-Based Rendering
(Figure: front view and side view)

ActiveMural, Giant Display Wall
An Argonne, Princeton, and UIUC collaboration
8' x 16' display wall:
– Jenmar Visual Systems BlackScreen™ technology, > lumens
– 8 LCD → 15 DLP → 24 DLP projectors
– 8-20 megapixels

Network-Attached Virtual Frame Buffer
– Tiled resolution: 3796 x 1436 pixels (4x2) → 5644 x 2772 pixels (6x4) ...
– VFB back-end servers (mapped one-to-one onto graphics outputs), each driving an accelerator, RAMDAC, and projector
– VFB front-end server: serial semantics, local framebuffer interface
– Functions: output partitioning, blending, serial → parallel, flexible transport, shadow buffer
– Client interface: X Windows? OpenGL? ggi?
– Transport: message passing, SM or DSM; VFB net command interface
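
As an illustration of the "output partitioning" function, the sketch below clips a virtual-framebuffer update rectangle against each back-end tile so that only the overlapping piece is forwarded to that tile's server; all names are hypothetical and not taken from the VFB implementation.

```cpp
// Illustrative output partitioning for a tiled virtual framebuffer.
#include <algorithm>
#include <utility>
#include <vector>

struct Rect { int x, y, w, h; };   // region in virtual-framebuffer pixels

struct BackendTile {
    Rect region;                   // portion of the wall this server drives
    int  serverId;                 // VFB back-end server that owns it
};

// Clip `update` against every tile; return the per-server pieces to forward.
std::vector<std::pair<int, Rect>> partitionUpdate(
        const Rect& update, const std::vector<BackendTile>& tiles) {
    std::vector<std::pair<int, Rect>> pieces;
    for (const BackendTile& t : tiles) {
        const int x0 = std::max(update.x, t.region.x);
        const int y0 = std::max(update.y, t.region.y);
        const int x1 = std::min(update.x + update.w, t.region.x + t.region.w);
        const int y1 = std::min(update.y + update.h, t.region.y + t.region.h);
        if (x1 > x0 && y1 > y0)    // non-empty overlap
            pieces.push_back({t.serverId, Rect{x0, y0, x1 - x0, y1 - y0}});
    }
    return pieces;
}
```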

The MicroMural: a portable tiled display for high-resolution visualization and the Access Grid

Access Grid Nodes
Capture equipment per node: ambient mic (tabletop), presenter mic, presenter camera, audience camera
Node designs under development:
– Library, workshop
– ActiveMural room
– Office
– Auditorium

Components of an AG Node
– Display computer
– Video capture computer
– Audio capture computer, mixer, echo canceller
– Shared app and control computer
– Interconnects: network, RGB video, NTSC video, analog audio, digital video, digital audio, RS-232 serial

Collaborative Dataset Exploration and Analysis: Collaboration- and Network-Aware Visualization Tools
– TIDE, being built in collaboration with NCDM, is a framework for navigating and viewing datasets in tele-immersion: low-res navigation, high-res visualization, set viewpoints and then raytrace
– Integrate annotation tools and multi-perspective techniques
– Support VTK and make it collaborative
– Interface with other commonly used ASCI/SSI visualization tools and formats such as HDF5
(Figure: TIDE showing compression of a lattice, ASCI data)

Collaborative Dataset Exploration and Analysis: Annotation and Recording
– How do you record discoveries in tele-immersion?
– V-mail and virtual Post-It notes attach to spaces, objects, or states
– Recording states and checkpoints
– Useful for documenting spatially located features and for asynchronous collaboration
– Querying in VR: people tend to treat recordings as if the recorded person were still there
(Figure: viewing V-mail in tele-immersion)

Collaborative Dataset Exploration and Analysis: Collaboration Techniques and Technology for Navigating Massive Datasets
– Explore human factors to motivate the design of collaborative tools
– Take advantage of having more than one expert to help with interpretation and/or manipulation
– Provide multiple cooperative representations, e.g. engineer and artist, multiple dimensions partitioned across viewers, or viewers with different security clearances
– CAVE6D implementation and pilot study
(Figure: CAVE6D, a tele-immersive tool for visualizing oceanographic data)

Middleware Technology
– Integrated Grid Architecture
– Grid Services Infrastructure
– Multicast Protocols for Rapid Image Transfer
– Analyzing the Use of Network Resources

The Grid from a Services View
– Grid fabric (resources): resource-specific implementations of basic services, e.g. transport protocols, name servers, differentiated services, CPU schedulers, public key infrastructure, site accounting, directory service, OS bypass
– Grid services (middleware): resource-independent and application-independent services, e.g. authentication, authorization, resource location, resource allocation, events, accounting, remote data access, information, policy, fault detection
– Application toolkits: distributed computing, data-intensive, collaborative, remote visualization, problem solving, and remote instrumentation toolkits
– Applications: chemistry, biology, cosmology, nanotechnology, environment

Monitoring: Globus I/O and NetLogger

Tele-immersion Networking Requirements
Traffic types: audio, video, tracking, database and event transactions, simulation data, haptic drivers, remote rendering, text, control
Immersive environment requirements:
– Sharing of objects and virtual space
– Coordinated navigation and discovery
– Interactive control and synchronization
– Interactive modification of the environment
– Scalable distribution of the environment

Corridor One Testbed

High Bandwidth Data Distribution
Achieved 35 MBytes/sec
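
For scale (the slide does not say which network link this ran over), 35 MBytes/sec converts to

$$35\ \text{MB/s} \times 8 = 280\ \text{Mb/s},$$

roughly half of an OC-12 (622 Mb/s) link and well above an OC-3 (155 Mb/s).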

The CorridorOne Project Argonne  Berkeley  Illinois  Los Alamos  Princeton  Utah Midwest Networked CAVE and ImmersaDesk Sites Enabled by EMERGE

CorridorOne Application Campaigns
– Approximately two weeks in duration; roughly three or four campaigns will be run each year
– Focused testing and evaluation of one application area during that time
– Involving the participation of external application scientists
– Part of the effort is qualitative: determining how users make use of the remote capabilities
– Part of the effort is a set of well-designed quantitative experiments to collect data

First Year Milestones
– Access Grid nodes up to support C1 collaboration (Oct 31)
– Integrate visualization tools, middleware, and display technologies
– Conduct Phase 1 applications experiments beginning Dec 1-10
For each application domain area we will:
– Collect relevant problem datasets and determine possible visualization modalities
– Develop remote scientific visualization and analysis scenarios with the end users
– Prototype a distributed collaborative visualization application/demonstration
– Test the application locally and remotely with variable numbers of participants and sites
– Document how the tools, middleware, and network were used and how they performed during the tests
– Evaluate the tests and provide feedback to Grid middleware developers, visualization tool builders, and network providers