ASTER Data Transfer System - Kunjuro Omagari, Earth Remote Sensing Data Analysis Center (ERSDAC) - 23rd APAN Meeting


1 ASTER Data Transfer System - Kunjuro Omagari, Earth Remote Sensing Data Analysis Center (ERSDAC), 23rd APAN Meeting

2 Agenda
- Research Purposes
- Network Structure
- Data Transfer / Archive
- Data Transfer / Distribution
- Future Challenges
- Summary

3 Research Purposes - International cooperation -
Partners: NASA Goddard Space Flight Center (GSFC, US); Jet Propulsion Laboratory (JPL, US); USGS National Center for Earth Resources Observation & Science (EROS, US); ERSDAC (Japan)

4 Research Purposes - Domestic cooperation -
Transfer all ASTER Level 0 data to AIST (Japan) for research and development of the AIST GEO Grid supercomputer system.

5 Research Purposes - Reduced distribution time -
Carry out research and development of a system that reduces the time between observation and data acquisition by researchers, and provide the supporting infrastructure.
- Old system: about 14 days (tape shipment)
- Target: about 4 days (network transfer)

6 Research Purposes - Large-volume data transfer -
- Received data (NASA -> ERSDAC): 500 observed scenes/day, about 60 GB of Level 0 data per day
- Sent data (ERSDAC -> EROS): 500 scenes/day, about 60 GB of processed Level 1 data per day
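The 60 GB/day figure above corresponds to a fairly modest sustained rate. A quick back-of-the-envelope check (the arithmetic is illustrative; only the 60 GB/day value comes from the slides):

```python
# Convert the daily ASTER data volume into a sustained transfer rate.
GB = 10**9  # decimal gigabytes

daily_bytes = 60 * GB            # ~60 GB of Level 0 data per day (from the slide)
seconds_per_day = 24 * 60 * 60

sustained_mbps = daily_bytes * 8 / seconds_per_day / 1e6
print(f"Sustained rate: {sustained_mbps:.1f} Mbit/s")  # about 5.6 Mbit/s
```

The steady-state requirement is small next to the 100 Mbit/s to 1 Gbit/s access line; the harder problems (packet loss, TCP windows) appear on the later slides.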

7 Research Purposes - Large-volume data transfer -
Transfer data for research and development of supercomputer technology (the AIST GEO Grid system):
- 800 GB/day ASTER Level 0 data transfer in operation
- Transfer of newly arrived ASTER Level 0 data

8 Research Purposes - Support for mission management -
- Send and receive observation schedules etc. (FTP)
- Receive satellite telemetry data (GRE tunnel + multicast)
- Voice hotline (VoIP)

9 Network Structure - Network overview -
(Diagram: ERSDAC routers -> 1 Gbit/s Ethernet access line -> JGN2 / TransPAC2 -> Tokyo XP APAN -> Starlight Chicago -> MAX / vBNS+ -> NASA Goddard Space Flight Center, USGS EROS, and JPL)

10 Network Structure - Network overview -
- Using 1 Gbit/s Ethernet as the access line (upgraded from 100 Mbit/s in November 2006)
- Connecting from JGN2 to Starlight Chicago (Internet2) via Tokyo XP APAN, with TransPAC2 as the backup line
- Connecting from Starlight Chicago to NASA Goddard Space Flight Center, USGS EROS, and JPL via MAX / vBNS+ etc.

11 Network Structure - Observation system overview -
(Diagram: the ASTER sensor, one of five sensors aboard the TERRA satellite, observes on request; observation data and telemetry flow through the Tracking and Data Relay Satellite System to GSFC and JPL in the U.S.; NASA processes the data to Level 0 and transfers it to ERSDAC in Japan over JGN2 / TransPAC2 via Tokyo XP APAN and Starlight Chicago; ERSDAC processes it to Level 1, archives it, distributes it to users, and transfers it to EROS; observation schedules, control commands, telemetry, and VoIP traffic share the same network)

12 Network Structure - System architecture -
1. After observation by the ASTER sensor, receive the data from the TERRA satellite in the U.S.
2. Process the data to Level 0 at NASA and transfer it to ERSDAC via the network
3. Process the data to Level 1 at ERSDAC and transfer it to EROS
4. Archive the data at both EROS and ERSDAC

13 Network Structure - Communication protocol -
- IPv4 TCP/UDP network
- Installed a hotline between NASA and ERSDAC using two VoIP lines
- Receiving telemetry data via GRE tunnel + multicast packets
- Confirmed data transfer by multi-session FTP and by e-mail
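The "multi-session FTP" check above can be pictured as several concurrent FTP control connections, each transferring a share of the file list. A minimal Python sketch; the host, credentials, and file names would be site-specific placeholders, not values from the slides:

```python
# Sketch: split a transfer list across several concurrent FTP sessions.
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

def partition(files, sessions):
    """Round-robin the file list across the given number of sessions."""
    return [files[i::sessions] for i in range(sessions)]

def fetch_all(host, user, password, files, sessions=4):
    """One FTP control connection per worker; each downloads its share."""
    def worker(batch):
        ftp = FTP(host)
        ftp.login(user, password)
        for name in batch:
            with open(name, "wb") as out:
                ftp.retrbinary(f"RETR {name}", out.write)
        ftp.quit()

    batches = partition(files, sessions)
    with ThreadPoolExecutor(max_workers=len(batches)) as pool:
        pool.map(worker, batches)

# The partitioning itself is easy to check without a server:
print(partition(["a", "b", "c", "d", "e"], 2))  # [['a', 'c', 'e'], ['b', 'd']]
```

Running several sessions in parallel sidesteps the per-connection window limit on a long fat path, which is one common reason single-stream FTP underperforms on trans-Pacific links.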

14 Network Structure - VoIP -
(Diagram: redundant VoIP path between the GSFC DAS side - FDDI ring with Cisco 7513 / 7200 gateways GW-1a and GW-1b - and the ASTER Japan side - Cisco routers ADN-3 / ADN-13 and an HP ProCurve 2824 A/B switch - over JGN2)

15 Network Structure - VoIP -
- Using VoIP between NASA and ERSDAC for emergency contact during mission operation
- Redundant system with two Cisco 2600-series routers
- After discussions between Japan and the U.S., we installed VoIP to the U.S. specifications.
Note: global standard specifications had not been settled at the time of development. In addition, because NASA has several operation centers, we adopted the VoIP protocol used at NASA and let NASA develop it.

16 Network Structure - GRE tunnel -
- Using multicast packets to receive mission-operation telemetry data
- Encapsulation in a GRE tunnel ensures and protects the data transfer between the related parties
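As a small illustration of the encapsulation mentioned above: the base GRE header (RFC 2784) is just four bytes, a flags/version field followed by the EtherType of the encapsulated payload. A sketch of inspecting it in Python; the sample packet bytes are fabricated, not captured telemetry:

```python
# Sketch: read the payload protocol out of a base GRE header (RFC 2784).
import struct

def gre_payload_ethertype(packet: bytes) -> int:
    """Return the EtherType of the payload carried in a base GRE header."""
    flags_version, ethertype = struct.unpack("!HH", packet[:4])
    if flags_version & 0x0007:  # version bits must be 0 for RFC 2784 GRE
        raise ValueError("unsupported GRE version")
    return ethertype

sample = bytes([0x00, 0x00, 0x08, 0x00]) + b"payload"  # EtherType 0x0800 = IPv4
print(hex(gre_payload_ethertype(sample)))  # 0x800
```

Because the multicast telemetry travels as the inner payload of unicast GRE packets like this, intermediate networks only ever see ordinary unicast traffic between the two tunnel endpoints.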

17 Change of the GRE tunnel - Before change -
The GRE tunnel used for multicast packet transfer terminated at the WAN-connecting router. ERSDAC's existing firewall cannot pass multicast packets, so the firewall could not be used on this path.

18 Change of the GRE tunnel end - After change -
The end of the GRE tunnel was extended to the connecting router (Cisco 8201) of the required segment, and the tunnel was duplicated, making good use of the existing firewall.

19 Network Structure - GRE tunnel -
(Diagram: tunnel configuration before the change)

20 Change of the GRE tunnel - After change -
(Diagram: tunnel configuration after the change)

21 Network Structure - Packet loss -
- Target (100 Mbit/s full-duplex connection): packet loss below 1 per 1,000,000
- Initial packet loss: 46 per 1,000,000
- Replaced the ERSDAC-owned switch within APAN => 42 per 1,000,000 (effect unclear)
- Replaced the media converter within the carrier (the carrier's internal check had initially detected 25 per 1,000,000 loss) => reduced to 1 per 4,000,000
- Ensured 90 Mbit/s throughput between Otemachi and Kachidoki
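To see why such small loss rates were worth chasing: the Mathis model approximates steady-state TCP throughput as MSS / (RTT * sqrt(p)), so even parts-per-million loss caps a single long-haul stream. A sketch using an assumed ~150 ms Tokyo-Chicago round trip and a 1460-byte MSS (both assumptions, not values from the slides):

```python
# Estimate the TCP throughput ceiling implied by a given packet-loss rate,
# using the Mathis approximation: rate ~ MSS / (RTT * sqrt(p)).
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate)) / 1e6

MSS = 1460   # typical Ethernet MSS (assumption)
RTT = 0.15   # assumed ~150 ms trans-Pacific round trip

for p in (46e-6, 1 / 4_000_000):
    print(f"p={p:.2e}: ~{mathis_throughput_mbps(MSS, RTT, p):.0f} Mbit/s ceiling")
```

Under these assumptions the initial 46-per-million loss would cap a single stream near 11 Mbit/s, while 1 per 4,000,000 lifts the ceiling well past the 100 Mbit/s line rate, consistent with the 90 Mbit/s result reported above.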

22 Network Structure - Network configuration -
- Data transfer was delayed by a duplex mismatch in the network configuration: ERSDAC at 100 Mbit/s full duplex, NASA at 1 G auto-negotiation => NASA side fixed at 100 Mbit/s full duplex (as of January 2007, still at 100 Mbit/s full duplex due to the capacity of the connected machines)
- Changed the access line from 100 Mbit/s to 1 Gbit/s in November 2006 (February 2005 - October 2006: 100 Mbit/s)

23 Network Structure - TCP buffer configuration -
Adjusted the TCP buffer sizes on the transfer server:
- Maximum buffer size: 1,048,576 bytes (default) -> 536,870,912 bytes
- Send-window size: 16,384 bytes (default) -> 2,621,440 bytes
- Receive-window size: 24,576 bytes (default) -> 2,621,440 bytes
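These window sizes can be sanity-checked against the bandwidth-delay product: a path needs roughly bandwidth times RTT of in-flight data to stay full. A sketch assuming a ~150 ms round-trip time (an assumption, not a measured value from the slides), plus the per-socket equivalent of the buffer tuning:

```python
# Bandwidth-delay product: bytes in flight needed to fill a path.
import socket

def bdp_bytes(bandwidth_bps, rtt_s):
    return int(bandwidth_bps * rtt_s / 8)

print(bdp_bytes(100e6, 0.15))  # 1875000  -> the ~2.6 MB windows cover 100 Mbit/s
print(bdp_bytes(1e9, 0.15))    # 18750000 -> a single 1 Gbit/s stream needs ~19 MB

# Per-socket version of the buffer tuning (the slides describe OS-level
# settings on the transfer server; this shows the equivalent socket knobs):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 2_621_440)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 2_621_440)
sock.close()
```

Under the assumed RTT, the 2,621,440-byte windows comfortably fill the 100 Mbit/s line then in use, while the much larger 536 MB maximum leaves headroom for the 1 Gbit/s upgrade and parallel streams.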

24 Data Transfer / Archive - ERSDAC internal system -
1. Receive Level 0 data
2. Transfer the received Level 0 data to AIST
3. Put the data into storage on DTF-2 tapes
4. Generate primary data with the data processor
5. Archive the primary data

25 Data Transfer / Archive - ERSDAC internal system -
(Diagram: Level 0 data arrives from NASA via Tokyo XP APAN / JGN2 at the data transfer server, then flows through tape storage, the data processing server, the data distribution control server, and the distribution server on the Internet side)

26 Data Transfer / Distribution - ERSDAC internal system -
1. Send the primary data to EROS
2. Distribute the primary data to users
3. Retrieve products for high-level processing
4. Distribute the high-level data to users
5. Archive the high-level data

27 Data Transfer / Distribution - ERSDAC internal system -
(Diagram: the distribution flow through the data transfer server, tape storage, data processing server, data distribution control server, and distribution server, out to EROS via Tokyo XP APAN / JGN2 and to users via the Internet)

28 Future Challenges
- Apply the results of AIST's research and development of supercomputer technology (the GEO Grid system)
- Assess the transfer of PALSAR data in addition to ASTER data (TBD)

29 Summary - Interval between observation and L1 data generation -
(Chart: interval between observation and Level 1 data generation)

30 Summary
- The interval between observation and Level 1 data distribution has been reduced to about 4 days, from around 12 days previously. (The interval between observation and data generation has been reduced to about 2 days.)
- We plan to review and change the archive method so that data can be processed immediately after receipt. (Currently, we wait for two days' data to accumulate and then process it.) (TBD)
- We started the data transfer for development of the AIST GEO Grid system at 800 GB/day (November 2007), and plan to use the GEO Grid system.

