1 Fleet Numerical Update. CSABC Meeting, 15 April 2009. James A. Vermeulen, Data Ingest Team Supervisor, FNMOC, 7 Grace Hopper Ave., Stop 1, Monterey, CA 93943. James.Vermeulen@navy.mil, (831) 656-4393

2 Outline: Introduction, Overview of Operations, Overview of POPS and A2, Cross Domain Solutions, UKMO Engagement, MILCON Project, FY09 Objectives, Summary

3 Introduction

4 Mission: To provide the highest quality, most relevant and timely Meteorological and Oceanographic (METOC) support to U.S. and coalition forces.

5 Recent Accomplishments
–Achieved Initial Operating Capability (IOC) for the A2 supercomputer system, new host for the FNMOC models Ops Run
–Commenced operational support for the North American Ensemble Forecast System (NAEFS), with NOGAPS Ensemble fields delivered daily to NCEP
–Upgraded the NOGAPS Ensemble with Ensemble Transform (ET) initialization
–Implemented the coupled air-sea version of the GFDN tropical cyclone model
–Took on Submarine Enroute Weather Forecast (SUBWEAX) mission at SECRET and TS/SCI levels
–Hosted and achieved IOC for the Naval Oceanography Portal (NOP), the single unclassified and classified operational Web presence for all CNMOC activities
–Delivered the first spiral of the Automated Optimum Track Ship Routing System (AOTSR)
–Completed A76 source selection process for IT Services, resulting in selection of the government’s Most Efficient Organization (MEO) bid as the winning proposal

6 Overview of Operations

7 Operations
Operations Center
–Manned 24x7 by a team of military and civilian watch standers
–Focused on operational mission support, response to requests for special support and products, and customer liaison for DoD operations worldwide
–Joint Task Force capable
–The Navy’s Worldwide Meteorology/Oceanography Operations Watch
–Operates at UNCLAS, CONFIDENTIAL and SECRET levels
–SUBWEAX mission at SECRET level
Sensitive Compartmented Information Facility (SCIF)
–Extension of Ops Center
–Operational communications (including secure video teleconferencing), tasking and processing elevated to the TS/SCI level if needed
–Includes significant supercomputer capacity at TS/SCI level
–SUBWEAX mission at TS/SCI level
Ops Run
–Scheduled and on-demand 24x7 production
–6 million meteorological observations and 2 million products each day
–15 million lines of code and ~16,000 job executions per day (a throughput sketch follows this slide)
–Highly automated and very reliable
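For context, a minimal Python sketch of what those daily Ops Run volumes amount to as average rates. The daily totals are the figures quoted above; spreading them uniformly over 24 hours is my simplifying assumption, since real production is scheduled in bursts around synoptic times.

```python
# Illustrative conversion of the daily Ops Run volumes into average rates.
# Daily totals come from the slide; uniform spreading is an assumption.

SECONDS_PER_DAY = 24 * 60 * 60

daily_totals = {
    "meteorological observations": 6_000_000,
    "products": 2_000_000,
    "job executions": 16_000,
}

for name, per_day in daily_totals.items():
    per_hour = per_day / 24
    per_second = per_day / SECONDS_PER_DAY
    print(f"{name}: {per_day:,}/day ~ {per_hour:,.0f}/hour ~ {per_second:,.1f}/second")
```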

8 Models
Fleet Numerical operates a highly integrated and cohesive suite of global, regional and local state-of-the-art weather and ocean models:
–Navy Operational Global Atmospheric Prediction System (NOGAPS)
–Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS)
–Navy Atmospheric Variational Data Assimilation System (NAVDAS)
–Navy Aerosol Analysis and Prediction System (NAAPS)
–GFDN Tropical Cyclone Model
–WaveWatch 3 (WW3) Ocean Wave Model
–Navy Coupled Ocean Data Assimilation (NCODA) System
–Ensemble Forecast System (EFS)
[Figures: Cross section of temperature and wind speeds from COAMPS showing mountain waves over the Sierras (Owens Valley); Surface pressure and clouds predicted by NOGAPS]

9 Satellite Products
–SATFOCUS
–SSMI and SSMI/S
–Scatterometer
–Tropical Cyclone Web Page
–Target Area METOC (TAM)
–Tactically Enhanced Satellite Imagery (TESI)
[Figures: Example SATFOCUS products, including the SATFOCUS Dust Enhancement Product, SSM/I water vapor, and SSM/I wind speed]

10 Overview of POPS and A2

11 Primary Oceanographic Prediction System (POPS)
Program of Record (ACAT IAD Program in sustainment)
Provides HPC platforms to run models and applications at the UNCLAS, SECRET and TS/SCI levels
Has been traditionally composed of two subsystems:
–Analysis and Modeling Subsystem (AMS)
  Models (NOGAPS, COAMPS, WW3, etc.)
  Data Assimilation (NAVDAS, NAVDAS-AR)
–Applications Transactions and Observations Subsystem (ATOS)
  Model pre- and post-processing (e.g., observational data prep/QC, model output formatting, model visualization and verification, etc.)
  Full range of satellite processing (e.g., SATFOCUS products, TC Web Page, DMSP, TAM, etc.)
  Full range of METOC applications and services (e.g., OPARS, WebSAR, CAGIPS, AREPS, TAWS, APS, ATCF, AOTSR, etc.)
  Hosting of NOP Portal (single Web presence for Naval Oceanography)

12 Battlespace on Demand (BonD) and Current POPS Subsystems
[Diagram: Forecast Battlespace; Tier 3 – the Decision Layer; Tier 2 – the Performance Layer; Tier 1 – the Environment Layer; Observational Data (Satellites, Fleet Data); ATOS; AMS]

13 POPS Architecture Strategy Going Forward
Drivers:
–Knowledge Centric CONOPS, requiring highly efficient reach-back operations, including low-latency on-demand modeling
–Growing HPC dominance of Linux clusters based on commodity processors and commodity interconnects
–Resource ($) efficiencies
Solution:
–Combine AMS and ATOS functionality into a single system
–Make the system ‘vendor agnostic’
–Use Linux-cluster technology based on commodity hardware
Advantages:
–Efficiency for reach-back operations and on-demand modeling
  Shared file system and shared databases
  Lower latency for on-demand model response
–Capability to surge capacity back and forth between the AMS (BonD Tier 1) and ATOS (BonD Tiers 0, 2, 3) applications as needed
–Cost effectiveness of Linux and commodity hardware vice proprietary operating systems and hardware
–Cost savings by converging to a single operating system [Linux]
We call the combined AMS and ATOS functionality A2 (AMS + ATOS = A2)

14 BonD and Future POPS
[Diagram: Forecast Battlespace; Tier 3 – the Decision Layer; Tier 2 – the Performance Layer; Tier 1 – the Environment Layer; Observational Data (Satellites, Fleet Data); A2]

15 A2 Hardware Specifications
A2-0 (Opal) - UNCLAS Linux Cluster
–232 nodes with 2 Intel Xeon 5100-Series “Woodcrest” dual-core processors per node (928 processor cores total)
–1.86 TB memory (8 GB per node)
–10 TFLOPS peak speed
–35 TB disk space with 2 GB per second throughput
–Double Data Rate InfiniBand interconnect at 20 Gb per second
–40 Gb per second connection to external Cisco network
–4 ClearSpeed floating point accelerator cards
A2-1 (Ruby) - SECRET Linux Cluster
–146 nodes with 2 Intel Xeon 5300-Series “Clovertown” quad-core processors per node (1168 processor cores total)
–1.17 TB memory (8 GB per node)
–12 TFLOPS peak speed
–40 TB disk space with 2 GB per second throughput
–Double Data Rate InfiniBand interconnect at 20 Gb per second
–40 Gb per second connection to external Cisco network
–2 GRU floating point accelerator cards
(The core and memory totals follow directly from the per-node figures; see the sketch after this list.)
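A minimal sketch cross-checking the quoted totals from the per-node figures. Only numbers from this slide are used; the one assumption is that the memory totals are expressed in decimal terabytes (1 TB = 1000 GB), which matches the figures shown.

```python
# Sanity check of the A2 core and memory totals quoted above.
# Node counts, sockets per node, cores per socket, and GB per node
# are from the slide; the totals are derived.

clusters = {
    # name: (nodes, sockets_per_node, cores_per_socket, gb_per_node)
    "A2-0 (Opal)": (232, 2, 2, 8),   # dual-core "Woodcrest" Xeons
    "A2-1 (Ruby)": (146, 2, 4, 8),   # quad-core "Clovertown" Xeons
}

for name, (nodes, sockets, cores_per_socket, gb_per_node) in clusters.items():
    total_cores = nodes * sockets * cores_per_socket
    total_memory_tb = nodes * gb_per_node / 1000   # decimal TB
    print(f"{name}: {total_cores} cores, {total_memory_tb:.2f} TB memory")
    # Prints 928 cores / 1.86 TB and 1168 cores / 1.17 TB,
    # matching the slide figures.
```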

16 A2 Hardware Specifications
A2-S (Topaz) - TS/SCI Linux Cluster
–Add Topaz h/w specs here

17 A2 Middleware
Beyond the hardware components, A2 depends on a complicated infrastructure of supporting systems software (i.e., “middleware”):
–SUSE Linux Enterprise Server operating system
–GPFS: the General Parallel File System, commercial software that provides a file system across all of the nodes of the system
–ISIS: Fleet Numerical’s in-house database designed to rapidly ingest and assimilate NWP model output files and external observations
–PBSPro: job scheduling software (a hedged submission sketch follows this list)
–ClusterWorx: system management software
–Various MPI debugging tools, libraries, performance profiling tools, and utilities
–TotalView: FORTRAN/MPI debugging tool
–PGI’s Cluster Development Kit, which includes parallel FORTRAN, C, and C++ compilers
–Intel Compiler
–VMware ESX Server
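As an illustration of the batch-scheduling role PBSPro plays in this stack, the sketch below builds a hypothetical job script and hands it to qsub from Python. It is not a description of how FNMOC actually submits its Ops Run jobs: the job name, resource request, and executable are invented; only the qsub command and the #PBS directive syntax are standard PBS Professional usage.

```python
# Hypothetical PBSPro submission sketch. Job name, select statement,
# and executable are placeholders, not FNMOC production values.
import subprocess
import tempfile

job_script = """#!/bin/bash
#PBS -N nogaps_postproc
#PBS -l select=4:ncpus=8:mpiprocs=8
#PBS -l walltime=01:00:00
#PBS -j oe

cd $PBS_O_WORKDIR
mpirun -np 32 ./postproc.exe   # assumes an MPI launcher is on PATH
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# qsub prints the new job identifier on stdout
result = subprocess.run(["qsub", script_path],
                        capture_output=True, text=True, check=True)
print("submitted job:", result.stdout.strip())
```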

18 A2 High-Level Timeline
A2-0 (Opal) – UNCLAS System
–Procurement started Jun 06
–Full system on-site Jan 07
–Achieved IOC 15 Oct 08
A2-1 (Ruby) – SECRET System
–Procurement started May 07
–System on site Sep 07
–Expect to achieve IOC Feb 09
A2-S (Topaz) – TS/SCI System
–Procurement started ????
–System on site ????
–Expect to achieve IOC ????

19 POPS Computer Systems

NAME           TYPE                     #CPUs   MEMORY (GB)   PEAK SPEED (TFLOPS)   OS
AMS2 [1]       SGI ORIGIN 3800          128                   0.2                   TRIX
AMS3 [2]       SGI ORIGIN 3900          256                   0.7                   TRIX
AMS4 [2]       SGI ORIGIN 3900          256                   0.7                   TRIX
DC3            IBM p655+                256     512           1.9                   AIX
ATOS2          IBM 1350s/x440s/x345s    438     645           2.2                   Linux
CAAPS          IBM e1350s               148     272           0.7                   Linux
A2-0 (Opal)    Linux Cluster            928     1800          10.0                  Linux
A2-1 (Ruby)    Linux Cluster            1168    1170          12.0                  Linux
A2-S (Topaz)   Linux Cluster            ????                                        Linux
TOTAL                                   ????                  ~??

[1] Used for TS/SCI level modeling in the SCIF. To be replaced in FY09 by A2-S.
[2] No longer used for running models. Continue to serve as a legacy Cross Domain Solution (CDS), providing transfer of data between classified and unclassified systems.

20 Cross Domain Solutions

21 In order to perform its mission, FNMOC fundamentally requires a robust, high-bandwidth Cross Domain Solution (CDS) system to allow two-way flow of data between the UNCLAS and SECRET enclaves within its HPC environment.
FNMOC’s existing CDS system is nearing the end of its lifecycle:
–SGI computer hardware (AMS3, AMS4)
–Trusted IRIX (TRIX) multi-level secure operating system
The DoD Unified Cross Domain Management Office (UCDMO) has recently indicated that all legacy CDS systems must be replaced by one meeting their approval.
Existing UCDMO solutions do not meet FNMOC’s bandwidth requirements (currently ~1 GB/sec, increasing to ~64 GB/sec by 2013; see the growth sketch below).
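A quick sketch of what that bandwidth requirement implies, assuming (my assumption) steady exponential growth between the two quoted endpoints.

```python
# Growth implied by the CDS bandwidth requirement quoted above:
# ~1 GB/sec in 2009 rising to ~64 GB/sec by 2013. Endpoints are from
# the slide; the per-year factor is derived and assumes smooth growth.

start_year, start_rate_gbps = 2009, 1.0    # GB per second
end_year, end_rate_gbps = 2013, 64.0       # GB per second

years = end_year - start_year
overall_factor = end_rate_gbps / start_rate_gbps
annual_factor = overall_factor ** (1 / years)

print(f"required growth: {overall_factor:.0f}x over {years} years")
print(f"about {annual_factor:.2f}x per year (close to tripling annually)")
```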

22 CDS Way Ahead
FNMOC is currently partnered with the National Security Agency (NSA) to achieve an acceptable CDS that meets high-bandwidth requirements:
–Seeking interim solution based on modification of one or more of the existing 14 UCDMO approved solutions, allowing phased migration to long-term solution
–Seeking long-term solution based on the NSA High Assurance Platform (HAP) project
–Expect that resulting solution will also meet NAVO’s requirements
Joint NSA/FNMOC presentation at the Oct 2008 DISN Security Accreditation Working Group (DSAWG) resulted in approval of proposed CDS way ahead described above.

23 UKMO Engagement

24 Re-Baselining NOGAPS
CNMOC/FNMOC and NRL are exploring the possibility of a partnership agreement with UKMO to obtain the UKMO Unified Model (UM) as a new baseline for development of the Navy’s operational global NWP capability.
Intended to foster joint NRL/UKMO model development of a new operational global model for FNMOC.
NRL to retain its global NWP expertise to ensure that the resulting new model retains the attributes of NOGAPS that are important to the Navy, while building on the advanced dynamics, physics and data assimilation capabilities of the UKMO UM.
Exploratory discussions between CNMOC (RDML Titley, Steve Payne), NRL (Simon Chang) and UKMO senior leadership took place in Exeter last month.
NRL has obtained a research license for the UM and is beginning initial testing of the model on the FNMOC computer system.
Moving toward a go/no-go decision on this course of action in late January.
Navy remains firmly committed to NUOPC, and sees this as a way to improve the quality of the NUOPC Ensemble.

25 Current NWP Configuration
[Diagram components: Observational Data, Decoders, QC Programs, NAVDAS, NAVDAS/AR, NOGAPS Forecast Model, Applications/Users]

26 Possible Phase 1 Transition
[Diagram components: Observational Data, Decoders, QC Programs, NAVDAS, NAVDAS/AR, UM Forecast Model, Applications/Users]
Freeze NOGAPS forecast model development and replace with UM. This will require new output programs to and from the forecast model. A rigorous statistical evaluation for multiple seasons will be performed.

27 Possible Phase 2 Transition
[Diagram components: Observational Data, Decoders, QC Programs, NAVDAS, NAVDAS/AR, UM Forecast Model, Applications/Users]
Freeze QC development and replace with UM QC, and develop decoders for UM QC programs. A rigorous statistical evaluation for multiple seasons will be performed.

28 Possible Phase 3 Transition
[Diagram components: Observational Data, Decoders, UM QC Programs, UM 4D Variational Analysis, UM Forecast Model, Applications/Users]
Freeze NAVDAS/AR development and replace with UM 4-D analysis system. A rigorous statistical evaluation for multiple seasons will be performed. Conduct tests to determine impact on downstream products.

29 MILCON Project

30 $9.3M, 14,351 sq ft expansion/renovation of FNMOC Building 700 (Computer Center Building)
–Adds 2,500 sq ft to Computer Center for NPOESS IDPS
–Expands Ops Floor from 1,550 to 3,200 sq ft
–Expands SCIF Ops Floor from 1,947 to 3,012 sq ft
–Expands SCIF Computer Center from 875 to 2,240 sq ft
–Expands Auditorium from 47 seats to 100 seats
Schedule
–Design/Build contract awarded Sep 2008
–Groundbreaking Jan 2009
–Completion approximately Oct 2010

31 FNMOC MILCON Project FNMOC Building 700 Computer Center

32 FY09 Objectives

33 Top Five FY09 Objectives
1. Implement the A76 Most Efficient Organization (MEO) as the Information Technology Services Department (ITSD).
2. Achieve Full Operational Capability (FOC) for the A2-0 (Opal), A2-1 (Ruby) and A2-S (Topaz) computer systems.
3. Achieve and maintain required Information Assurance (IA) accreditations, and implement DoD-approved Cross Domain Solution (CDS) for transferring data between classification levels.
4. Design and install satellite processing systems in preparation for launch of NPOESS and NPP.
5. Increase skill of METOC model products through implementation of new models, upgrades to existing models, and the assimilation of additional observational data from new sources and sensors.

34 Additional FY09 Objectives
–Execute the Building 700 MILCON Project while maintaining at least 98.5% uptime for operations.
–Maintain excellence in Submarine Enroute Weather Forecast (SUBWEAX) support.
–Achieve FOC for the Automated Optimum Track Ship Routing (AOTSR) system.
–Field and support next generation version of the Naval Oceanography Portal (NOP).
–Meet all requirements for the Weather Reentry-Body Interaction Planner (WRIP) project.
–Meet all requirements for the North American Ensemble Forecast System (NAEFS) and Joint Ensemble Forecast System (JEFS) projects in preparation for full engagement in the National Unified Operational Prediction Capability (NUOPC) initiative.
–Complete all Cost of War (COW) projects.
–Maintain excellence in climatology support through FNMOD Asheville.
–Implement Resource Protection Component of the new Weather Delivery Model.
–Develop a robust Continuity of Operations (COOP) plan for key capabilities.
–Complete all Primary Oceanographic Prediction System (POPS) Program Decision Board (PDB) actions.
–Achieve Project Management culture aligned to the POPS program.

35 Summary

36 Several of our recent accomplishments will be of particular interest to COPC:
–Achieved IOC for the A2 supercomputer system, new host for the FNMOC models Ops Run
–Commenced operational support for NAEFS, with NOGAPS Ensemble fields delivered daily to NCEP
–Upgraded the NOGAPS Ensemble with ET initialization
–Implemented the coupled air-sea version of the GFDN TC model
We have embraced Linux-based HPC as we continue to build out our A2 cluster to accommodate the full range of our operational workload.
We are partnered with NSA to achieve a DoD approved Cross Domain Solution capable of meeting our operational throughput requirements.
Along with NRL and CNMOC, we are exploring the possibility of a partnership with UKMO to re-baseline the Navy’s global NWP model.
We will break ground on a $9.3M MILCON project in Jan to expand and renovate the Building 700 Computer Center.
We have a full plate of FY09 objectives, not the least of which is implementing the MEO resulting from our completed A76 Study.

