SAN DIEGO SUPERCOMPUTER CENTER
CSE190 Honors Seminar in High Performance Computing, Spring 2000
Prof. Sid Karin, x45075

Outline: Definitions, History, SDSC/NPACI, Applications

Definitions of Supercomputers:
- The most powerful machines available.
- Machines that cost about $25M in year-2000 dollars.
- Machines powerful enough to model physical processes with accurate laws of nature and realistic geometry, incorporating large quantities of observational/experimental data.

Supercomputer Performance Metrics:
- Benchmarks: applications, kernels, selected algorithms
- Theoretical peak speed (the "guaranteed not to exceed" speed)
- TOP 500 list
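
As an aside on how the "theoretical peak speed" figure is normally obtained: it is just processors x clock rate x floating-point results per cycle. The small C sketch below is mine, with illustrative numbers rather than course material; sustained benchmark results are always lower.

    /* Hedged sketch: theoretical peak = CPUs x clock x flops per cycle. */
    /* The example numbers are illustrative only.                        */
    #include <stdio.h>

    static double peak_gflops(int cpus, double clock_ghz, int flops_per_cycle)
    {
        return cpus * clock_ghz * flops_per_cycle;   /* result in Gflop/s */
    }

    int main(void)
    {
        /* e.g. 1,152 CPUs at 222 MHz, 4 floating-point results per cycle */
        printf("peak = %.0f Gflop/s\n", peak_gflops(1152, 0.222, 4));
        return 0;
    }

Run as-is this prints roughly 1,023 Gflop/s, which is why a machine of that size is described as a "1 TFLOPS" system later in the deck.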

Misleading Performance Specifications in the Supercomputer Field. David H. Bailey, RNR Technical Report, December 1.

Outline: Definitions, History, SDSC/NPACI, Applications

Applications:
- Cryptography
- Nuclear weapons design
- Weather / climate
- Scientific simulation
- Petroleum exploration
- Aerospace design
- Automotive design
- Pharmaceutical design
- Data mining
- Data assimilation

Applications, cont'd:
- Processes too complex to instrument: automotive crash testing, air flow
- Processes too fast to observe: molecular interactions
- Processes too small to observe: molecular interactions
- Processes too slow to observe: astrophysics

Applications, cont'd: Performance; Price; Performance / Price

Theory, Experiment, Simulation (diagram): numerically intensive computing; data-intensive computing (mining); data-intensive computing (assimilation)

Supercomputer Architectures:
- Vector
- Parallel vector, shared memory
- Parallel: hypercubes, meshes, clusters
- SIMD vs. MIMD
- Shared vs. distributed memory
- Cache-coherent memory vs. message passing
- Clusters of shared-memory parallel systems
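
To make the shared-memory vs. message-passing distinction on this slide concrete, here is a minimal message-passing example in C with MPI (an illustration of the programming model used on distributed-memory machines and clusters, not code from the course): data lives in each process's own memory and must be copied explicitly over the interconnect, whereas on a shared-memory machine both processors could simply load and store the same address.

    /* Minimal MPI sketch of the distributed-memory, message-passing model. */
    /* Run with at least two ranks, e.g. "mpirun -np 2 ./a.out".            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                       /* exists only in rank 0's memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);  /* explicit copy over the network */
        }

        MPI_Finalize();
        return 0;
    }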

The Cray-1:
- A vector computer that worked
- A balanced computing system: CPU, memory, I/O
- A photogenic computer

Then, the supercomputing "island"; today, a continuum. (Chart: number of machines vs. performance.)

The Cray X-MP:
- Shared-memory parallel vector
- Followed by the Cray Y-MP, C90, J90, T90, ...

The Cray-2:
- Parallel vector, shared memory
- Very large memory: 256 MW nominal; actually 256 x 1,024 = 262,144 KW, about 262 million words (one word = 8 bytes, so roughly 2 GB)
- Liquid immersion cooling

Cray Companies: Control Data; Cray Research Inc.; Cray Computer Corporation; SRC Inc.

Thinking Machines: SIMD vs. MIMD; evolution from CM-1 to CM-2; ARPA involvement

1st Teraflops System for US Academia: "Blue Horizon"
- 1 TFLOPS IBM SP
- 144 8-processor compute nodes and 12 2-processor service nodes
- 1,176 Power3 processors at 222 MHz
- > 640 GB memory (4 GB/node), 10.6 GB/s bandwidth; upgrade to > 1 TB later
- 6.8 TB switch-attached disk storage
- Largest SP with 8-way nodes
- High-performance access to HPSS
- Trailblazer switch interconnect (current bandwidth ~115 MB/s), with a subsequent upgrade

Blue Horizon at UCSD:
- Currently #10 on Dongarra's Top 500 list
- Actual Linpack benchmark: 558 Gflops sustained on 120 nodes
- Projected Linpack benchmark: 650 Gflops on 144 nodes
- Theoretical peak: about 1 Tflops
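
A back-of-the-envelope check on those Linpack numbers, assuming the Power3's four floating-point results per cycle (my arithmetic, not the slide's):

$$
P_{\mathrm{peak}}(120\ \mathrm{nodes}) \approx 120 \times 8 \times 0.222\ \mathrm{GHz} \times 4 \approx 852\ \mathrm{Gflop/s},
\qquad
\frac{558\ \mathrm{Gflop/s}}{852\ \mathrm{Gflop/s}} \approx 65\%\ \mathrm{of\ peak}.
$$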

First Tera MTA is at SDSC

Tera MTA Architectural Characteristics:
- Multithreaded architecture
- Randomized, flat, shared memory
- 8 CPUs and 8 GB RAM now, going to 16 later this year
- High bandwidth to memory (one word per cycle per CPU)
Benefits:
- Reduced programming effort: a single parallel model for one or many processors
- Good scalability
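
As a sketch of what the "single parallel model" claim means in practice (my example; no actual Tera directives are shown): a loop of independent iterations like the one below needs no explicit decomposition on the MTA, because the parallelizing compiler spreads the iterations across hardware threads, and the flat shared memory plus rapid thread switching hide memory latency.

    /* Illustrative C loop with independent iterations.  On the MTA the    */
    /* compiler can multithread this directly; on a distributed-memory     */
    /* machine the arrays would first have to be decomposed and exchanged  */
    /* with explicit messages.                                             */
    void triad(int n, double *a, const double *b, const double *c, double s)
    {
        for (int i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    }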

SDSC's road to terascale computing

ASCI Blue Mountain Site Prep: 12,000 sq ft (120 ft x 100 ft)

ASCI Blue Mountain Facilities Accomplishments:
- 12,000 sq. ft. of floor space
- 1.6 MW of power
- 530 tons of cooling capability
- 384 cabinets to house the 6,144 CPUs
- 48 cabinets for metarouters
- 96 cabinets for disks
- 9 cabinets for 36 HIPPI switches
- About 348 miles of fiber cable

ASCI Blue Mountain SST System, Final Configuration:
- Cray Origin 2000 system; teraflops-scale peak (a rough estimate follows this list)
- 48 x 128-CPU Origin 2000 SMPs (250 MHz R10K)
- 6,144 CPUs total (48 SMPs x 128 CPUs)
- 1,536 GB memory total (32 GB per 128-CPU SMP)
- 76 TB of Fibre Channel RAID disk
- 36 x HIPPI-800 switch cluster interconnect
- To be deployed later this year: 9 additional HIPPI switches for the cluster interconnect
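
For reference, assuming the usual two floating-point results per cycle for a 250 MHz R10000 (my assumption, stated explicitly), the full-system peak works out to:

$$
6144\ \mathrm{CPUs} \times 0.250\ \mathrm{GHz} \times 2\ \mathrm{flops/cycle} = 3072\ \mathrm{Gflop/s} = 3.072\ \mathrm{Tflop/s}.
$$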

ASCI Blue Mountain Accomplishments:
- On-site integration of the 48 x 128 system completed (including upgrades)
- HIPPI-800 interconnect completed
- 18 GB Fibre Channel disk completed
- Integrated visualization (16 IR pipes)
- Most site prep completed
- System integrated into the LANL secure computing environment
- Web-based tool for tracking status

ASCI Blue Mountain Accomplishments, cont'd:
- Linpack: achieved 1.608 teraflops
  - Accelerated schedule: 2 weeks after install
  - System validation run on a 40 x 126 configuration
  - f90/MPI version, run of over 6 hours
- sPPM turbulence-modeling code validated full-system integration
  - Used all 12 HIPPI boards per SMP and all 36 switches
  - Used a special "MPI" HIPPI bypass library
- ASCI codes scaling
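
Using the same per-CPU peak as above (0.5 Gflop/s, my assumption), the 40 x 126 validation run implies:

$$
40 \times 126 = 5040\ \mathrm{CPUs},\qquad
5040 \times 0.5\ \mathrm{Gflop/s} = 2.52\ \mathrm{Tflop/s\ peak},\qquad
\frac{1.608}{2.52} \approx 64\%\ \mathrm{of\ peak\ on\ Linpack}.
$$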

Summary:
- Installed the ASCI Blue Mountain computer ahead of schedule and achieved a Linpack record two weeks after install.
- ASCI application codes are being developed and used.

(Photo slides: "Half", "Rack", "Trench")

S AN D IEGO S UPERCOMPUTER C ENTER CSE190 Connect any pair of the DSM computers through the crossbar switches Connect directly only computers to switches, optimizing latency and bandwidth (there are no direct links DSM DSM or switch switch) Support a 3-D toroidal 4x4x3 DSM configuration by establishing non-blocking simultaneous links across all sets of 6 faces of the computer grid Maintain full interconnect bandwidth for subsets of DSM computers (48 DSM computers divided into 2, 3, 4, 6, 8,12, 24, or 48 separate, non-interacting groups) Network Design Principles

6 groups of 8 computers each; 18 16x16 crossbar switches; 18 separate networks

sPPM Hydro on 6,144 CPUs (diagram):
- Problem subdomain: 8x4x4 process layout (128 CPUs per DSM)
- 12 HiPPI-800 NICs per DSM; router CPU on neighbor SMP; 1 HiPPI-800 NIC per router CPU
- Problem domain: 4x4x3 DSM layout (48 DSMs, 6,144 CPUs)
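
To keep the two-level decomposition straight, a tiny sketch (the linear rank numbering is an assumption made for illustration, not taken from the sPPM source):

    /* 48 DSMs in a 4 x 4 x 3 arrangement, 128 CPUs per DSM (8 x 4 x 4 */
    /* process layout), for 48 x 128 = 6144 processes in total.        */
    #include <stdio.h>

    enum { CPUS_PER_DSM = 8 * 4 * 4,   /* 128 */
           NUM_DSM      = 4 * 4 * 3 }; /* 48  */

    int main(void)
    {
        int total = NUM_DSM * CPUS_PER_DSM;   /* 6144 */
        int rank  = 5000;                     /* any global rank in 0..6143 */
        printf("%d CPUs total; rank %d -> DSM %d, local CPU %d\n",
               total, rank, rank / CPUS_PER_DSM, rank % CPUS_PER_DSM);
        return 0;
    }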

sPPM Scaling on Blue Mountain

Outline: Definitions, History, SDSC/NPACI, Applications

SDSC: A National Laboratory for Computational Science and Engineering

A Distributed National Laboratory for Computational Science and Engineering

Continuing Evolution (diagram): Individuals, SDSC, NPACI; Resources; Education, Outreach & Training; Partners; Technology & applications thrusts; Enabling technologies; Applications

NPACI is a Highly Leveraged National Partnership of Partnerships:
- 46 institutions
- 20 states
- 4 countries
- 5 national labs
- Many projects
- Vendors and industry
- Government agencies

Mission: Accelerate scientific discovery through the development and implementation of computational and computer science techniques.

Vision: Changing How Science Is Done
- Collect data from digital libraries, laboratories, and observation
- Analyze the data with models run on the grid
- Visualize and share data over the Web
- Publish results in a digital library

Goals: Fulfilling the Mission by Embracing the Scientific Community
- Capability Computing: provide compute and information resources of exceptional capability
- Discovery Environments: develop and deploy novel, integrated, easy-to-use computational environments
- Computational Literacy: extend the excitement, benefits, and opportunities of computational science

NPACI Objectives:
- Deploy teraflops-scale computers
- Create a national metacomputing infrastructure
- Enable data-intensive computing
- Create persistent intellectual infrastructure
- Conduct outreach to new communities
- Advance computing technology in support of science

Partnership Organizing Principle: "Thrusts"
- Applications (Discovery Environments): Molecular Science, Neuroscience, Earth Systems Science, Engineering
- Technologies (Discovery Environments): Metasystems, Programming Tools & Environments, Data-intensive Computing, Interaction Environments
- Resources: Capability Computing
- EOT: Computational Literacy

Technology Thrusts: Metasystems; Programming Tools and Environments; Data-Intensive Computing; Interaction Environments

Applications Thrusts: Neuroscience; Engineering; Molecular Science; Earth Systems Science

Projects Meld Applications and Technology:
- Data-Intensive Computing + Neuroscience: brain databases
- Metasystems and Parallel Tools + Engineering

Outline: Definitions, History, SDSC/NPACI, Applications

Modeling Galaxy Dynamics and Evolution
- Project leaders: Lars Hernquist (Harvard) and John Dubinski (U Toronto); Stuart Johnson and Bob Leary (SDSC); SAC Program
- First images from a Blue Horizon simulation
- 24M particles: 10M stars + 2M dark-matter halo particles in each galaxy
- Working on a 120M-particle run
- Run on all 1,152 processors during acceptance
- Animation: collision between the Milky Way and Andromeda
- "Due to SAC efforts, our simulations run two to three times faster, we can ask more precise questions and get better answers." (Hernquist)

Bioinformatics Infrastructure for Large-Scale Analyses
- Next-generation tools for accessing, manipulating, and analyzing biological data
- Russ Altman, Stanford University; Reagan Moore, SDSC
- Analysis of the Protein Data Bank, GenBank, and other databases
- Accelerate key discoveries for health and medicine

Protein Folding in a Distributed Computing Environment
- Simulating the protein movement governing reactions within cells
- Andrew Grimshaw, U Virginia; Charles Brooks, The Scripps Research Institute; Bernard Pailthorpe, UCSD/SDSC
- Computationally intensive
- Distributed computing power from Legion

Telescience for Advanced Tomography Applications
- Integrates remote instrumentation, distributed computing, federated databases, image archives, and visualization tools
- Mark Ellisman, UCSD; Fran Berman, UCSD; Carl Kesselman, USC
- 3-D tomographic reconstruction of biological specimens

MICE: Transparent Supercomputing
- Molecular Interactive Collaborative Environment
- The Gallery allows researchers and students to search for, visualize, and manipulate molecular structures
- Integrates key SDSC technological strengths: biological databases, transparent supercomputing, Web-based Virtual Reality Modeling Language (VRML)

The Protein Data Bank
- The world's single scientific resource for depositing and searching protein structures
- Protein structure data are growing exponentially: 10,500 structures in the PDB today, 20,000 expected by the year 2001
- Vital to the advancement of the biological sciences
- Working toward a digital continuum from primary data to final scientific publication
- Capture of primary data from high-energy synchrotrons (e.g., the Stanford Linear Accelerator Center) requires 50 Mbps of network bandwidth
- Image caption: 1CD3, the PDB's 10,000th structure

Biological-Scale Modeling
- Biodiversity Species Workshop: Web interface to modeling tools; provides geographic, climate, and other base data
- Species Analyst: compiles data from on-line museum collections
- Scientists can focus on the scientific questions
- NPACI partners: U Kansas, U New Mexico, and SDSC
- Image caption: predicted distribution of the mountain trogon; data points (pins) are from 14 museum collections

Digital Galaxy
- Collaboration with the Hayden Planetarium, American Museum of Natural History; support from NASA
- Linking SDSC's mass storage to the Hayden Planetarium requires 155 Mbps
- MPIRE Galaxy Renderer: scalable volume visualization, linked to a database of astronomical objects, produces translucent, filament-like objects
- Image caption: an artificial nebula, modeled after a planetary nebula

Looking Out for San Diego's Regional Ecology
- Unique partnership: 31 federal, state, regional, and local agencies
- John Helly et al., SDSC
- Combines technologies and multi-agency data: sensing, analysis, VRML; physical, chemical, and biological data
- Web-based tool for science and public policy

AMICO: The Art of Managing Art
- Art Museum Image Consortium (AMICO): 28 art museums working toward educational use of digital multimedia
- Launch of the AMICO Library includes more than 50,000 works of art
- AMICO, CDL, SDSC
- XML information mediation; SDSC SRB data management
- Links between images, scholarly research, and educational material

BNCT & The Treatment Planning Process
- MIT Nuclear Engineering Department; Beth Israel Deaconess Medical Center, Boston
- Boron Neutron Capture Therapy: epithermal neutrons (10^13/s) from a beam port irradiate a borated tumor
- Planning pipeline (diagram): CT image (512x512x250) -> MacNCTPlan treatment planning software -> MCNP particle transport simulation with beam characteristics (typically 21x21x25) -> voxel energy deposition

MCNP BNCT Simulation Results
- Typical MCNP BNCT simulation: 1 cm resolution (21x21x25), 1 million particles, 1 hour on a 200 MHz PC
- ASCI Blue Mountain MCNP simulation: 4 mm resolution (64x64x62), 100 million particles, 1/2 hour on 6,048 CPUs
- ASCI Blue Mountain MCNP simulation: 1 mm resolution (256x256x250), 100 million particles, 1-2 hours on 3,072 CPUs
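
For scale, simple arithmetic on the grid sizes above (my calculation):

$$
21 \times 21 \times 25 = 11{,}025\ \mathrm{voxels},\qquad
64 \times 64 \times 62 = 253{,}952\ \mathrm{voxels},\qquad
256 \times 256 \times 250 = 16{,}384{,}000\ \mathrm{voxels},
$$

so the Blue Mountain runs tallied roughly 23x and about 1,500x as many voxels as the PC case while tracking 100x as many particles.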