SM050701-1640 Advanced Optics & Energy Technology Center Advanced Mirror Technology Small Business Innovative Research Sandy Montgomery/SD71 Blue Line.

Presentation transcript:

Advanced Optics & Energy Technology Center
Advanced Mirror Technology Small Business Innovative Research
Sandy Montgomery/SD71

Blue Line Engineering SBIRs:
– NAS – Fully Active Subscale Telescope (FAST)
– NAS – AI Based, Self-Correcting, Self-Reporting Edge Sensors
MSFC CDDF:
– Marshall Optical Control Cluster Computer (MOC³)

Blue Line Engineering – NAS – Fully Active Subscale Telescope (FAST)
Phase II completion date: March 26, 2002
Objectives:
– 1/8-scale model of the NGST yardstick
– Highly versatile testbed for NASA researchers
– Demonstration events in lab and exhibit hall
Testbed components:
– Hinges, latches, actuators, and deployment mechanisms
– Seven 33 cm diameter primary mirror segments
– Electronics for static figure correction & maintenance
– Motorized stow/deploy
– Diffraction-limited performance (λ > 2 microns)

Xinetics – NAS – Large, Cryogenic Ultralightweight Mirror Technology
Optical design:
– Aperture: equivalent to 92.5 cm dia. filled circular (0.672 m²)
– Obscuration: <10%
– Stowed: cylinder, 50 cm diameter x 100 cm tall
– Prescription: parabolic, f/1.25, 2.5 m focal length
– FOV: >4 arc minutes
– Segments: hexagonal
– FTF diameter: 33.3 cm
– Thickness: 1.8 cm
– Mass: <1 kg/segment (35 kg total including electronics)
Performance:
– Diffraction limited at 2 µm (λ/14 = 143 nm, ~1/4 wave visible; see below)
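For reference, a quick check of the quoted performance number: the λ/14 RMS wavefront-error criterion (the usual Maréchal-style definition of diffraction-limited), evaluated at the 2 µm design wavelength, gives the 143 nm figure. The quarter-wave comparison assumes a visible reference wavelength of roughly 570 nm, which is an assumption here rather than a value from the slides.

\[ \frac{\lambda}{14} = \frac{2000\ \mathrm{nm}}{14} \approx 143\ \mathrm{nm} \approx \frac{570\ \mathrm{nm}}{4} \]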

NAS – AI Based, Self-Correcting, Self-Reporting Edge Sensors
Phase I completion date: August 17, 2001
Objective: feasibility of enhanced edge sensors to deploy, align, and phase-match the primary mirror segments of space-based telescopes
Design features:
– Operational environment: 30 K < T < 370 K
– Fuzzy logic health & status monitoring, self-reporting (illustrated below)
– Neural networks, self-correcting, self-tuning
– New error compensation methods, super accuracy
– Multi-mode measurements: phasing, gap
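The slides do not spell out how the fuzzy health and status monitoring works, so the following is only an illustrative sketch: a trapezoidal membership function that could grade a measured segment gap error as "in range". The function name, thresholds, and units are invented for the example, not taken from the SBIR.

/* Illustrative only: a trapezoidal fuzzy membership function that could
 * grade an edge-sensor gap reading as "in range".  The limits and units
 * below are invented for this sketch, not values from the SBIR. */
#include <stdio.h>

/* Returns 1.0 well inside [lo, hi], 0.0 outside [lo - ramp, hi + ramp],
 * and a linear ramp in between. */
static double fuzzy_in_range(double x, double lo, double hi, double ramp)
{
    if (x <= lo - ramp || x >= hi + ramp) return 0.0;
    if (x >= lo && x <= hi)               return 1.0;
    if (x < lo) return (x - (lo - ramp)) / ramp;
    return ((hi + ramp) - x) / ramp;
}

int main(void)
{
    double gap_nm = 120.0;  /* hypothetical measured segment gap error, nm */
    printf("health grade = %.2f\n", fuzzy_in_range(gap_nm, -100.0, 100.0, 50.0));
    return 0;
}

A self-reporting layer could publish grades like this for each sensor channel and flag any channel whose grade drifts toward zero.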

Phase I consisted of experimental testing, computer simulation, and modeling. In Phase II, two standard-model edge sensors will be developed, fully characterized, and documented.

MSFC CDDF – Marshall Optical Control Cluster Computer (MOC³)
Project schedule: FY01 & FY02
Investigators:
– PI: John Weir/ED19
– Co-I: Donald Larson/SD71
Objectives:
– 10³-fold increase in computing capability for managing active primary mirror segments
– Improved techniques for minimizing wavefront error (see the sketch below)
– Experience with parallel computing technologies and software, both for ground-based computer clusters and for embedded clusters in future spacecraft
[Figure: Beowulf Cluster Computer, after Ridge et al., 1997]
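To make the wavefront-error objective concrete, here is a minimal sketch of the kind of per-segment metric such software would have to evaluate repeatedly: the RMS of the residual wavefront error over sampled points on each segment. The segment count, sample count, and data layout are assumptions for illustration, not MOC³ design values.

/* Hypothetical sketch: RMS residual wavefront error over a segmented
 * primary mirror.  Segment and sample counts are illustrative only. */
#include <math.h>
#include <stdio.h>

#define NSEG   7     /* primary mirror segments (FAST-like count) */
#define NSAMP  1024  /* wavefront samples per segment */

/* RMS of the residual wavefront error, same units as the samples. */
double rms_wavefront_error(const double wfe[NSEG][NSAMP])
{
    double sum = 0.0;
    for (int s = 0; s < NSEG; s++)
        for (int i = 0; i < NSAMP; i++)
            sum += wfe[s][i] * wfe[s][i];
    return sqrt(sum / (NSEG * NSAMP));
}

int main(void)
{
    static double wfe[NSEG][NSAMP];  /* would be filled from wavefront sensor data */
    printf("RMS WFE = %g\n", rms_wavefront_error(wfe));
    return 0;
}

Repeatedly minimizing a metric like this over many actuator commands, at control-loop rates, is the sort of workload the quoted 10³-fold computing increase is aimed at.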

MSFC CDDF – Marshall Optical Control Cluster Computer (MOC³)
Plan:
– Purchase a Beowulf computer cluster and associated Linux software
– Utilize the Beowulf in conjunction with optical testbeds to develop the use of cluster computing for segmented mirror control
– Develop software for astronomy and wavefront control, and application programs for distributed computing (e.g. Fortran 99)
Beowulf background:
– A technology for clustering Linux computers to form a parallel, virtual supercomputer
– One server node with client nodes connected together via Ethernet or some other network (see the sketch below)
– No custom components; mass-market commodity hardware: PCs capable of running Linux, Ethernet adapters, switches
– Initiated in 1994 under the NASA High Performance Computing and Communications program, for an Earth and space sciences project at the Goddard Space Flight Center
– In October of 1996, gigaflops sustained performance was demonstrated on a space science application at a cost under $50K
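As a minimal sketch of the server/client pattern described above, the program below uses MPI, one of the communication libraries listed on the next slide. The per-node "work" is a placeholder, and the build/run note in the comment assumes a standard MPICH or LAM-MPI installation.

/* Minimal Beowulf-style sketch: rank 0 plays the server (head) node,
 * the remaining ranks play the client nodes.  Each client computes a
 * placeholder partial result and the server reduces them.
 * Typically built with mpicc and launched with mpirun across the nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Stand-in for each node's share of the mirror-control workload. */
    double local = (double)rank, total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("server node gathered results from %d processes, total = %g\n",
               nprocs, total);

    MPI_Finalize();
    return 0;
}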

MSFC CDDF – Marshall Optical Control Cluster Computer (MOC³)
7 slave nodes:
– 4U rackmount ATX case with 250 Watt UL power supply
– Dual-processor 1 GHz Intel Pentium III, 512 MB RAM, 20 GB HD
– Dolphin Interconnect’s Wulfkit
Head node:
– 4U rackmount ATX case with 250 Watt UL power supply
– Dual-processor 1 GHz Intel Pentium III, 512 MB RAM, 20 GB HD
– 32x CD-R/W, SVGA with 32 MB, tape back-up
– Dolphin Interconnect’s Wulfkit
Accessories:
– UPS, network switch, KVM switch, rackmount cabinet
Software:
– Enhanced Red Hat Linux v7.0 distribution
– Portland Group Workstation 3.1 compilers for C
– PVM, MPICH, LAM-MPI communication libraries
– ScaLAPACK with ATLAS libraries
– Portable Batch System (PBS)
– Parallel Virtual File System (PVFS)
– Dogsled administration and monitoring tool
– LessTif, Mesa (OpenGL), IBM Data Explorer
– SCA Linda (4 CPUs)
– MI/NASTRAN for the PC from Macro Industries
“Huinalu” at MHPCC: 260 dual PIII 933 MHz nodes, each