1 AlaskaView Proof-of-Concept Test
EDC Internet2 AmericaView, Oct 28, 2002
Tom Heinrichs, UAF/GINA/ION
Grant Mah, USGS/EDC
Mike Rechtenbaugh, USGS/EDC
Jeff Harrison, UA/ARSC

2 Proof-of-Concept Test Goals
Receive Landsat 7 data using the Alaska SAR Facility's (ASF) existing 11-meter antenna, receiver, and bitsync
Capture raw signal data to disk using an EDC-supplied data capture system
Transfer the raw data over Internet2 to the USGS EROS Data Center (EDC) for processing
Pick up finished image products for public distribution by Geographic Information Network of Alaska (GINA) servers
Turn an L7 scene around in less than 36 hours
–Testing done in August 2002

3 Overview
[Flattened system diagram. Recoverable flow: the NASA/USGS Landsat 7 ETM+ downlink is received by the Alaska SAR Facility antenna and receiver; the EDC/GINA data capture system sends the raw I & Q data over Internet2 (ftp/bbftp) to the EDC ingest server at the EROS Data Center, Sioux Falls, South Dakota; the EDC Landsat 7 processor generates Level 1G products, which GINA picks up by ftp from the EDC FTP server and serves to Alaska users over http.]

4 Example of an L7 Scene Acquired During the Test
Near Provideniya, Russian Far East
August 15, 2002, Path 85, Row 15
Time from reception to completed pickup: 26 hours
Success: turnaround within the 36-hour standard

5 bbftp Transfers, UAF to EDC
Average transfer rate:
–60 Mbits/second
–26.3 GBytes/hour
Average file size transferred:
–4.2 GB
–range 0.4 to 5.5 GB
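As a quick consistency check on the two averages above, a minimal Python sketch (assuming 1024-based gigabytes; only the 60 Mbit/s figure is from the slide) shows they agree to within the rounding of the 60 Mbit/s number:

# Convert the reported average bbftp rate from Mbit/s to GBytes/hour.
rate_mbps = 60                                   # average rate from the slide
bytes_per_hour = rate_mbps * 1e6 / 8 * 3600      # bits/s -> bytes/hour
print(bytes_per_hour / 1024**3)                  # ~25.1 GB/hour vs. 26.3 reported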

6 Comparisons
bbftp: 6 Mbps per stream
ftp pickup from EDC: 1.7 Mbps (Sun ftp client)
ftp on LAN: 82 Mbps (100 Mbps interface)
Host stacks (window sizes) are the limiting factor on the WAN

7 iperf Testing
85 Mbits/sec maximum rate
100 Mbps LAN interface was the bottleneck
4 to 6 Mbps per stream
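A short Python sketch of the arithmetic behind the per-stream figure (the 64 kB window and 110 ms RTT come from the host-stack slides that follow; the stream counts are illustrative, not from the test):

# Per-stream throughput is the smaller of the TCP window limit
# (window / RTT) and a fair share of the 100 Mbps LAN interface.
def per_stream_mbps(window_bytes, rtt_s, lan_mbps, streams):
    window_limit = window_bytes * 8 / rtt_s / 1e6
    return min(window_limit, lan_mbps / streams)

print(per_stream_mbps(64 * 1024, 0.110, 100, 1))    # ~4.8 Mbit/s, window-limited
print(per_stream_mbps(64 * 1024, 0.110, 100, 16))   # still window-limited at 16 streams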

8 Routes
RTT = 110 ms
Traverses multiple high-speed networks: VBNS+, Abilene, Gigapop

[uaftest@edclxw50 uaftest]$ /usr/sbin/traceroute ctsdev1.gina.alaska.edu
traceroute to ctsdev1.gina.alaska.edu (137.229.79.81), 30 hops max, 38 byte packets
 1 fe00-72a-edc.cr.usgs.gov (152.61.128.254) 0.700 ms 0.565 ms 0.514 ms
 2 edcnsbp1.cr.usgs.gov (152.61.213.1) 0.443 ms 0.386 ms 0.348 ms
 3 edcnsbp1.cr.usgs.gov (152.61.213.1) 0.362 ms 0.369 ms 0.354 ms
 4 vbp1-24a-edc.cr.usgs.gov (152.61.213.10) 0.495 ms 0.403 ms 0.388 ms
 5 ext-edc-100-1.cr.usgs.gov (152.61.100.1) 0.674 ms 0.629 ms 0.593 ms
 6 jn1-at1-1-0-2025.dng.vbns.net (166.61.9.13) 13.061 ms 13.105 ms 12.788 ms
 7 Abilene.dng.vbns.net (166.61.8.58) 17.452 ms 17.169 ms 17.287 ms
 8 kscy-ipls.abilene.ucaid.edu (198.32.8.5) 26.624 ms 26.651 ms 26.698 ms
 9 dnvr-kscy.abilene.ucaid.edu (198.32.8.13) 37.343 ms 37.333 ms 54.059 ms
10 sttl-dnvr.abilene.ucaid.edu (198.32.8.49) 65.565 ms 66.058 ms 65.581 ms
11 hnsp1-wes-so-5-0-0-0.pnw-gigapop.net (198.48.91.77) 66.115 ms 65.942 ms 65.653 ms
12 core1-wes-ge-0-0-0-0.pnw-gigapop.net (198.107.150.119) 66.066 ms 65.937 ms 66.084 ms
13 core1-ua-so-1-1-0-0.pnw-gigapop.net (198.107.144.86) 110.430 ms 110.587 ms 110.061 ms
14 198.32.40.132 (198.32.40.132) 121.736 ms 116.928 ms 110.173 ms
15 uafrr (137.229.2.3) 110.350 ms 110.559 ms 110.397 ms
16 ctsdev1.gina.alaska.edu (137.229.79.81) 110.062 ms 110.235 ms 110.298 ms

9 Host Stacks -- General
TCP send and receive window (buffer) sizes are critical on high-latency connections
Send and receive buffers have both a default value and a maximum value; the maximum limits the size an application such as bbftp or iperf can request
The default size affects memory usage; too large a default will consume excessive memory
–Total default usage = number of connections * default buffer size
Optimum: Window = Bandwidth * Round Trip Time
Buffers currently must be set by hand; self-tuning stacks on all OSes, including Sun Solaris and SGI IRIX, are eagerly awaited
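To make the hand-tuning concrete, a minimal Python sketch of requesting buffers sized to the bandwidth-delay product; the 110 ms RTT and 60 Mbit/s target are this test's numbers, and the OS will silently clamp the request to its configured maximum:

import socket

# Size TCP buffers to the bandwidth-delay product: Window = Bandwidth * RTT.
target_bps = 60e6                          # desired throughput, bits/sec
rtt_s = 0.110                              # round-trip time from traceroute
bdp_bytes = int(target_bps * rtt_s / 8)    # 825,000 bytes for this path

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))  # what the OS granted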

10 Host Stacks – For Test
On the SGI data capture system: send and receive buffers set to a default of 60 kB (max 512 kB)
On the Linux ingest server: send and receive buffers set to a default of 64 kB (max 64 kB)
On the Sun ftp client: default receive buffer set to 24 kB (max 1024 kB)
Round-trip time (RTT) measured by traceroute: 110 ms
Bandwidth = Window / RTT
–64 kB / 110 ms = 4.5 Mbits/sec
–24 kB / 110 ms = 1.7 Mbits/sec (as observed during product pickup)
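Both bandwidth figures fall straight out of Bandwidth = Window / RTT; a one-line check in Python (kB taken as 1000 bytes, matching the slide's rounding):

# Bandwidth ceiling imposed by a fixed TCP window.
def ceiling_mbps(window_kb, rtt_ms):
    return window_kb * 1000 * 8 / (rtt_ms / 1000) / 1e6

print(ceiling_mbps(64, 110))   # ~4.7 Mbit/s; the slide rounds down to 4.5
print(ceiling_mbps(24, 110))   # ~1.7 Mbit/s, matching the observed pickup rate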

11 Lessons Learned
Brute force (multiple streams) can overcome stack-tuning issues (see the sketch below)
Pay more attention to stack tuning at the outset
Examine window sizes with tcpdump to get the real story
The AlaskaView application was enabled by high-speed research networks including Internet2, the PNW Gigapop, and VBNS+
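Why brute force works: each stream is capped at Window/RTT, but the caps add up across streams. A sketch of the scaling (the slides do not state the stream count bbftp used; ten streams at the observed ~6 Mbit/s each would match the 60 Mbit/s average):

# Aggregate throughput of N window-limited TCP streams over one path,
# capped by the 100 Mbps LAN interface.
def aggregate_mbps(streams, per_stream_mbps=6.0, lan_mbps=100.0):
    return min(streams * per_stream_mbps, lan_mbps)

for n in (1, 4, 10, 20):
    print(n, aggregate_mbps(n))   # 10 streams -> 60 Mbit/s, as observed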

12 Credits
USGS EROS Data Center
–Grant Mah and Jason Williams on-site
–Tremendous support from EDC staff across the entire effort: Engineering, IS support, Network support, User services, Management
UAF Alaska SAR Facility
–Mike Stringer, primary on-site
–Brett Delana, Dayne Broderson, Carel Lane, and the ASF Operations staff
Alaska Ground Station
–Equipment support coordinated by Richard Franchek

