1 EXPReS FABRIC WP 2.2 Correlator Engine. EXPReS FABRIC meeting, 25-09-2006, Poznan, Poland. JIVE, Ruud Oerlemans.


2 WP2.2 Correlator Engine. Goal: develop a correlator engine that can run on standard workstations and be deployed on clusters and grid nodes.
1. Correlator algorithm design (m5)
2. Correlator computational core, single node (m14)
3. Scaled-up version for clusters (m23)
4. Distributed version, middleware (m33)
5. Interactive visualization (m4)
6. Output definition (m15)
7. Output merge (m24)

3 Current broadband software correlator (SFXC). Data path, repeated for each station (Station 1, Station 2, …, Station N):
- Raw data, BW = 16 MHz, Mk4 format, copied from Mk5 disk to Linux disk
- Channel extraction, yielding extracted data
- Delay corrections, using pre-calculated delay tables, yielding delay-corrected data
- Correlation, yielding the SFXC data product
EVN Mk4 hardware equivalents of these stages: DIM, TRM, CRM; DCM, DMM, FR; SU; correlator chip.
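The per-station chain above (delay correction followed by correlation) can be sketched in C++. This is a minimal illustration only, assuming integer-sample delays and a lag-domain correlation; the real SFXC also applies fractional-delay corrections and fringe rotation, and all function names here are invented for the sketch.

```cpp
#include <cstddef>
#include <vector>

// Shift a station's sample stream by an integer delay (in samples).
// A negative delay advances the stream, i.e. corrects a late arrival.
std::vector<double> apply_delay(const std::vector<double>& x, int delay) {
    std::vector<double> out(x.size(), 0.0);
    for (std::size_t i = 0; i < x.size(); ++i) {
        long j = static_cast<long>(i) - delay;
        if (j >= 0 && j < static_cast<long>(x.size()))
            out[i] = x[j];
    }
    return out;
}

// Cross-correlate two delay-corrected streams over lags -max_lag..+max_lag.
// Result index (lag + max_lag) holds sum over i of a[i] * b[i + lag].
std::vector<double> correlate(const std::vector<double>& a,
                              const std::vector<double>& b,
                              int max_lag) {
    std::vector<double> r(2 * max_lag + 1, 0.0);
    for (int lag = -max_lag; lag <= max_lag; ++lag) {
        double acc = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            long j = static_cast<long>(i) + lag;
            if (j >= 0 && j < static_cast<long>(b.size()))
                acc += a[i] * b[j];
        }
        r[lag + max_lag] = acc;
    }
    return r;
}
```

With a correct delay table, the correlation peak of a baseline moves to lag 0, which is the point of the delay-correction stage.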

4 High-level design: the distributed correlation process. Inputs: SCHED schedule (VEX), VEX files, CALC delay tables (delay CCF), EOP, WFM. Each telescope site runs a Field System and a Mark5 system; correlation runs on grid nodes; results go to the JIVE archive. Roles: Principal Investigator, central operator, telescope operator.

5 Grid considerations/aspects
- Why use grid processing power? It is available, no hardware investment is required, and it is upgraded regularly.
- The degree of distribution is a trade-off between processing power at the grid nodes and data transport capacity to the grid nodes.
- Data logistics and coordination become more complicated as processing is more distributed.
- Split of processing between telescope and grid: either station-related processing at the telescope site with correlation elsewhere, or all processing at grid nodes.

6 Data distribution over grid sites (1): baseline slicing
Pros: small nodes; simple implementation at each node.
Cons: multiplication of large data rates, especially when the number of baselines is large (each station's stream must reach every node that correlates one of its baselines, and N stations give N(N-1)/2 baselines); complex data logistics; complex scalability.

7 Data distribution over grid sites (2). Four options:
1. All data to one site. Pros: simple data logistics; central processing; live processing easy; slicing done at the grid site; dealing with only one site. Cons: a powerful central processing site is required.
2. All data to different sites. Pros: smaller nodes; live processing possible; data slicing at the nodes. Cons: multiplication of large data rates; all sites must be available simultaneously when processing live.
3. Channel slicing over sites. Pros: smaller nodes; live processing per channel; simple implementation; easily scalable. Cons: channel extraction at the telescope increases the data rate.
4. Time slicing over sites. Pros: smaller nodes; smaller data rates; simple implementation; easily scalable; no data multiplication. Cons: complex data logistics after correlation; live correlation is complex.
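The time-slicing option (no data multiplication, easy scaling) can be illustrated with a small scheduling sketch: the scan is cut into fixed-length slices and each slice is assigned to a grid node round-robin. The struct, names, and round-robin policy are assumptions for illustration, not taken from the FABRIC design.

```cpp
#include <vector>

// One time slice of the scan, assigned to one grid node.
struct Slice {
    double start;  // seconds from scan start
    double stop;
    int node;      // grid node that correlates this slice
};

// Cut a scan of length scan_len into slices of length slice_len,
// assigning slice k to node k % n_nodes (round-robin).
std::vector<Slice> time_slices(double scan_len, double slice_len, int n_nodes) {
    std::vector<Slice> out;
    int k = 0;
    for (double t = 0.0; t < scan_len; t += slice_len, ++k) {
        double stop = (t + slice_len < scan_len) ? t + slice_len : scan_len;
        out.push_back({t, stop, k % n_nodes});
    }
    return out;
}
```

Because every node receives only its own slices, no station stream is duplicated, which is exactly the "no data multiplication" advantage listed above.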

8 Correlator architecture for file input (offline processing). Stations SA, SB, SC, SD feed Core1/CP1 (time slice 1), Core2/CP2 (time slice 2), Core3/CP3 (time slice 3).
- Each core processes data from one channel.
- Easily scalable, because one application contains all the functionality.
- Can exploit multiple processors using MPI.
- Code reuse through OO and C++.
- This software architecture works for data distributions 1, 2 and 3.
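A minimal sketch of this layout: each Core handles one time slice of all stations independently, so slices can run in parallel, and the products come out in time order. The sketch uses std::thread rather than MPI for brevity, and every name in it is illustrative rather than taken from the actual code.

```cpp
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Stand-in for one Core's work on a time slice: channel extraction,
// delay correction and correlation, reduced here to a simple sum.
double process_slice(const std::vector<double>& samples) {
    return std::accumulate(samples.begin(), samples.end(), 0.0);
}

// Run one Core (thread) per time slice; the CP step then simply reads
// the products in index order, which is already time order.
std::vector<double> correlate_slices(
        const std::vector<std::vector<double>>& slices) {
    std::vector<double> products(slices.size());
    std::vector<std::thread> cores;
    for (std::size_t i = 0; i < slices.size(); ++i)
        cores.emplace_back([&, i] { products[i] = process_slice(slices[i]); });
    for (auto& t : cores) t.join();
    return products;
}
```

The key architectural point survives the simplification: because time slices are independent, adding nodes (threads here, MPI ranks in the real design) scales the throughput without changing the per-slice code.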

9 Correlator architecture for data streams (real-time processing). Station data (SA, SB, SC, SD), read from files on disk, are split into short time slices (1.1, 1.2, 1.3; 2.1, 2.2, 2.3; 3.1, 3.2, 3.3; 4.1, 4.2, 4.3) held in memory buffers. Buffer 1 feeds Core1, Buffer 2 feeds Core2, Buffer 3 feeds Core3, and so on through Core4; the CP collects the correlated slices in time order.
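The memory buffer in front of each Core could look like the following sketch. The fixed capacity and the drop-oldest policy when the Core falls behind are assumptions made for illustration; they are not taken from the actual design.

```cpp
#include <cstddef>
#include <deque>
#include <optional>
#include <utility>

// Bounded buffer of short time slices between a station stream and a Core.
// When full, the oldest unprocessed slice is dropped so a live stream
// never stalls the producer (assumed policy, for illustration only).
template <typename Slice>
class StreamBuffer {
public:
    explicit StreamBuffer(std::size_t capacity) : cap_(capacity) {}

    void push(Slice s) {
        if (q_.size() == cap_) q_.pop_front();  // overwrite oldest
        q_.push_back(std::move(s));
    }

    // The Core consumes slices oldest-first; empty buffer yields nullopt.
    std::optional<Slice> pop() {
        if (q_.empty()) return std::nullopt;
        Slice s = std::move(q_.front());
        q_.pop_front();
        return s;
    }

    std::size_t size() const { return q_.size(); }

private:
    std::size_t cap_;
    std::deque<Slice> q_;
};
```

In a real-time deployment the buffer depth sets how much network jitter the correlator can absorb before slices are lost.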

10 Other issues
- Swinburne University, Adam Deller: last summer, an exchange of expertise on their software correlator.
- New EXPReS employee: Yurii Pidopryhora. Astronomy background; will work on data analysis and testing.
- New SCARIe employee: Nico Kruithof. Computer science background. SCARIe is an NWO-funded project aimed at a software correlator on the Dutch Grid.

11 WP 2.2 status
Work package                        Month   Status
1. Correlator algorithm design        5     almost finished
2. Correlator computational core     14     active
3. Scaled-up version for clusters    23     active
4. Distributed version               33     pending
5. Interactive visualization          4     pending
6. Output definition                 15     designing
7. Output merge                      24     designing

