APOGEE-2 Data Infrastructure Jon Holtzman (NMSU) APOGEE team.


1 APOGEE-2 Data Infrastructure Jon Holtzman (NMSU) APOGEE team

2 Data infrastructure
– Data infrastructure for APOGEE-2 will be similar to that of APOGEE-1, generalized to multiple observatories and with improved tracking of processing
– APOGEE raw data and data products are stored on the Science Archive Server (SAS)
– Reduction and analysis software is (mostly) managed through the SDSS SVN repository
– Raw and reduced data are described (mostly) through the SDSS datamodel
– Data and processing are documented via SDSS web pages and technical papers

3 Raw data
– The APOGEE instrument reads continuously (every ~10 s) as data accumulate; 3 chips of 2048x2048 each
– Raw data are stored on the instrument control computer (current capacity: several weeks of data)
– Individual readouts are “annotated” with information from the telescope and stored on an “analysis” computer (current capacity: several months); these frames are archived to local disks that are “shelved” at APO (currently 20 x 3 TB disks)
– “Quick reduction” software at the observatory assembles data into data cubes and compresses them (lossless) for archiving on the SAS; maximum daily compressed data volume ~60 GB
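As a sanity check on the quoted rate, a quick back-of-envelope calculation lands near the stated daily maximum. The 16-bit pixel depth and ~8 hours of continuous reads per night are my assumptions, not figures from the slide:

```python
# Back-of-envelope check of the raw data rate described above.
# Assumed (not from the slide): 16-bit pixels, ~8 h of reads per night.
CHIPS = 3
PIX_PER_CHIP = 2048 * 2048
BYTES_PER_PIX = 2          # assumed 16-bit raw reads
READ_CADENCE_S = 10        # "reads continuously (every ~10 s)"
HOURS_PER_NIGHT = 8        # assumed observing time

reads = HOURS_PER_NIGHT * 3600 // READ_CADENCE_S
nightly_gb = reads * CHIPS * PIX_PER_CHIP * BYTES_PER_PIX / 1e9
print(f"{reads} reads/night -> ~{nightly_gb:.0f} GB/night uncompressed")
```

This gives roughly 70 GB per night uncompressed, consistent with the ~60 GB maximum after lossless compression.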

4 Total 2.5m raw data to date: ~11 TB
– Does not include NMSU 1m + APOGEE data
– LCO data will be concurrent

5 Initial processing
– “Quick reduction” software estimates S/N (at H=12.2), which is inserted into the plate database for use in autoscheduling decisions
– APOGEE-1: data transferred to the SAS the next day, then to NMSU later that day; processed with the full pipeline the following day; updated S/N loaded into platedb; initial QA inspection
– APOGEE-2 proposal: process data with the full pipeline the next day at the observatory and/or at the SAS location (Utah), and/or improve the “quick reduction” S/N estimate
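Scaling a measured S/N to the fiducial H=12.2 magnitude might be sketched as below. This assumes purely photon-noise-limited spectra (S/N proportional to the square root of source flux), which is a simplification of whatever the real quick-reduction estimator does; the function name is mine:

```python
import math

H_FID = 12.2  # fiducial magnitude used for the plate database (from the slide)

def snr_at_fiducial(snr_obs: float, h_mag: float) -> float:
    """Scale an observed S/N to the fiducial H=12.2 magnitude.

    Simplified assumption: photon-noise-limited, so S/N ~ sqrt(flux).
    """
    flux_ratio = 10 ** (-0.4 * (H_FID - h_mag))  # flux at H_FID / flux at h_mag
    return snr_obs * math.sqrt(flux_ratio)
```

For example, a star one magnitude brighter than the fiducial (H=11.2) with measured S/N of 100 would be quoted at about 63 at H=12.2 under this assumption.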

6 Pipeline processing
Three main stages (+1 post-processing):
– APRED: process individual visits (multiple exposures at different detector spectral dither positions) into visit-combined spectra, with initial RV estimates; can be done daily
– APSTAR: combine multiple visits into combined spectra, with final RV determination; for APOGEE-1, has been run annually (DR10: year 1, DR11: years 1+2)
– ASPCAP: process combined (or resampled visit) spectra through the stellar parameters and chemical abundances pipeline; for APOGEE-1, has been run 3 times
– ASPCAP/RESULTS: apply calibration relations to derived parameters and set flag values for these
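The stage ordering above can be sketched schematically. Every function name and return value here is illustrative only, not the actual pipeline API:

```python
# Schematic of the three-stage flow; names and structures are invented.
def apred(exposure_set):
    """Dithered exposures -> one visit spectrum + initial RV (run daily)."""
    return {"visit": exposure_set, "rv_init": 0.0}

def apstar(visits):
    """Multiple visits -> combined spectrum with final RV (run annually)."""
    return {"spectrum": [v["visit"] for v in visits], "rv": 0.0}

def aspcap(star):
    """Combined spectrum -> stellar parameters and abundances."""
    return {"teff": None, "logg": None, "m_h": None}

# Two visits, each built from a pair of dithered exposures:
visits = [apred(e) for e in (["exp1", "exp2"], ["exp3", "exp4"])]
star = apstar(visits)
params = aspcap(star)
```

The key structural point is the dependency chain: APRED runs per visit as data arrive, while APSTAR and ASPCAP need the accumulated visit set, which is why they are rerun in bulk.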

7 APOGEE data products
– Raw data: data cubes (apR)
– Processed exposures (maybe not of general interest?):
  – 2D images (ap2D)
  – Extracted spectra (ap1D)
  – Sky-subtracted and telluric-corrected spectra (apCframe)
– Visit spectra: multiple exposures at different dither positions combined
  – apVisit files: native wavelength scale, but with a wavelength array
– Combined spectra: multiple visits combined; requires relative RVs
  – apStar files: spectra resampled to a log(lambda) scale
– Derived products from spectra:
  – Radial velocities and scatter from multiple measurements (done during combination)
  – Stellar parameters/chemical abundances from the best-fitting template: Teff, log g, microturbulence (fixed), [M/H], [alpha/M], [C/M], [N/M]; abundances for 15 individual elements
  – aspcapStar and aspcapField files: stellar parameters of the best fit, pseudo-continuum-normalized spectra, and best-fitting templates
– Wrap-up catalog files (allStar, allVisit)
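The resampling onto a uniform log(lambda) grid used for the apStar files can be illustrated with a minimal sketch. The grid size and wavelength range below are placeholders, not the real apStar dispersion solution:

```python
import math

# Sketch: linearly interpolate a spectrum onto n_out wavelengths spaced
# uniformly in log10(lambda). Grid parameters are illustrative only.
def to_log_lambda(wave, flux, n_out=100):
    lw0, lw1 = math.log10(wave[0]), math.log10(wave[-1])
    out_wave = [10 ** (lw0 + (lw1 - lw0) * i / (n_out - 1))
                for i in range(n_out)]
    out_flux = []
    j = 0
    for w in out_wave:
        # advance to the input interval containing w
        while j < len(wave) - 2 and wave[j + 1] < w:
            j += 1
        frac = (w - wave[j]) / (wave[j + 1] - wave[j])
        out_flux.append(flux[j] + frac * (flux[j + 1] - flux[j]))
    return out_wave, out_flux

# APOGEE-like wavelength coverage (1.51-1.70 microns, in Angstroms)
wave = [15100.0 + i * 0.5 for i in range(3801)]  # uniform linear grid
flux = [1.0] * len(wave)                          # flat test spectrum
logw, resampled = to_log_lambda(wave, flux)
```

A uniform log-wavelength grid means a fixed velocity width per pixel, which is what makes cross-correlation RV shifts simple pixel shifts.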

8 APOGEE data volume
– Raw data:
  – 2.5m+APOGEE: ~4 TB/year (APOGEE-1); ~6 TB/year with MaNGA co-observing
  – 1m+APOGEE: ~2 TB/year
  – LCO+APOGEE: ~3 TB/year
  – TOTAL, APOGEE-1 + APOGEE-2: ~75 TB
– Processed visit files: ~3 TB/year (80% individual exposure reductions)
– Processed combined star files: ~500 GB/100,000 stars
– Processed ASPCAP files: raw FERRE files ~500 GB/100,000 stars; bundled output ~100 GB/100,000 stars
– TOTAL, APOGEE-1 + APOGEE-2 (one reduction!): ~40 TB
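The per-100,000-star figures above scale linearly with survey size; a small helper (names mine) makes that concrete:

```python
# Scale the per-100k-star volumes quoted above to an arbitrary star count.
GB_PER_100K_STARS = {
    "apStar": 500,      # processed combined star files
    "aspcap_raw": 500,  # raw FERRE files
    "bundled": 100,     # bundled ASPCAP output
}

def processed_volume_tb(n_stars: int) -> float:
    """Total processed (star-level) volume in TB for n_stars stars."""
    per_star_gb = sum(GB_PER_100K_STARS.values()) / 100_000
    return n_stars * per_star_gb / 1000

print(processed_volume_tb(400_000))  # e.g. a hypothetical 400k-star sample
```

So one full reduction of a 400,000-star sample would occupy roughly 4.4 TB of star-level products, before the much larger visit-level and raw volumes.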

9 APOGEE data access
– “Flat files” available via the SDSS SAS: all intermediate and final data product files, plus summary “wrap-up” (catalog) files
– “Catalog files” available via the SDSS CAS: apogeeVisit, apogeeStar, aspcapStar
– Spectrum files available via the SDSS API and web interface
– Planning 4 data releases in SDSS-IV:
  – DR14: July 2017 (data through July 2016)
  – DR15: July 2018 (data through July 2017; first APOGEE-S)
  – DR16: July 2019 (data through July 2018)
  – DR17: Dec 2020 (all data)

10 APOGEE software products
– apogeereduce: IDL reduction routines (apred and apstar)
– aspcap speclib: management of spectral libraries, but not all input software (no stellar atmospheres code, limited spectral synthesis code)
– ferre: F95 code to interpolate in libraries and find the best fit
– idlwrap: IDL code to manage ASPCAP processing
– apogeetarget: IDL code for targeting

11 APOGEE pipeline processing
– Software all installed and running on Utah servers
– Software already in pipeline form (a few lines per full reduction step to distribute and complete among multiple machines/processors)
– For some steps, need to improve distribution of knowledge and operation among the team
– Some external data/software required for ASPCAP operation:
  – Generation of stellar atmospheres (Kurucz and/or MARCS)
  – Generation of synthetic spectra (ASSET, but considering MOOG and TURBOSPECTRUM)

12 APOGEE software/personnel
– apogeereduce: developer Nidever, Holtzman, (Nguyen); operation Holtzman, (Hayden, Nidever, Nguyen)
– ASPCAP grids:
  – ASSET: Allende Prieto / Koesterke
  – Turbospec: Zamora, Garcia-Hernandez, Sobeck, Garcia-Perez, Holtzman
  – MOOG: Shetrone, Holtzman (pipeline), others
– speclib postprocessing: Allende Prieto, Holtzman
– ferre: Allende Prieto
– idlwrap: Holtzman, Garcia-Perez (Shane)
– Operation: Holtzman (Shane, Shetrone)

13 END

14 Star-level bitmasks
– Targeting flags APOGEE_TARGET1, APOGEE_TARGET2: main survey vs ancillary, telluric, etc.
– STARFLAG: bitmask flagging potential conditions, e.g. LOW_SNR, BAD_PIXELS, VERY_BRIGHT_NEIGHBOR, PERSIST_HIGH
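Decoding such a bitmask into condition names takes only a few lines. The bit positions below are illustrative placeholders; consult the SDSS bitmask documentation for the real STARFLAG assignments:

```python
# Decode a STARFLAG-style bitmask into the condition names listed above.
# Bit positions are placeholders, NOT the authoritative APOGEE values.
STARFLAG_BITS = {
    0: "BAD_PIXELS",            # placeholder position
    3: "VERY_BRIGHT_NEIGHBOR",  # placeholder position
    4: "LOW_SNR",               # placeholder position
    9: "PERSIST_HIGH",          # placeholder position
}

def decode_starflag(starflag: int) -> list[str]:
    """Return the names of all bits set in a STARFLAG value."""
    return [name for bit, name in STARFLAG_BITS.items()
            if starflag & (1 << bit)]
```

The same pattern applies to APOGEE_TARGET1/APOGEE_TARGET2 with their own bit tables.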

15 Data quality/issues: ASPCAP
– Current ASPCAP runs fit 6 parameters: Teff, log g, [M/H], [alpha/M], [C/M], [N/M]
– Teff, log g, [M/H], and [alpha/M] have been “calibrated” using observations of clusters: systematic corrections have been applied to these parameters, and are nonzero for Teff, log g, and [M/H]
– Results for [C/M] and [N/M] are more challenging to verify, and are more suspect
– In flat files: PARAM (calibrated parameters) vs FPARAM (fit parameters)
– In the CAS database: TEFF, LOGG, METALS, ALPHAFE (calibrated) vs FIT_TEFF, FIT_LOGG, FIT_METALS, FIT_ALPHAFE (fit)
– Key catalog bitmasks:
  – ASPCAP_FLAG: bitmask flagging potential conditions, e.g. STAR_BAD, STAR_WARN
  – PARAMFLAG: details about the nature of ASPCAP_FLAG bits
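A minimal quality filter on ASPCAP_FLAG might look like the following; the bit positions 23 (STAR_BAD) and 31 (NO_ASPCAP_RESULT) are the ones quoted in the IDL example elsewhere in this talk:

```python
# ASPCAP_FLAG quality cut: bits 23 and 31 as quoted in the talk's
# IDL example; other bit positions would need the SDSS documentation.
STAR_BAD = 1 << 23
NO_ASPCAP_RESULT = 1 << 31

def good_aspcap(aspcapflag: int) -> bool:
    """True if neither STAR_BAD nor NO_ASPCAP_RESULT is set."""
    return (aspcapflag & (STAR_BAD | NO_ASPCAP_RESULT)) == 0
```

STAR_WARN-level conditions could be tolerated or rejected depending on the science case; the pattern is the same mask-and-compare.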

16 Scope of Data
– DR10: data taken from April 2011 through July 2012 (first-year survey data)
  – All observed spectra, even if all visits are not complete: summed spectra of what is available
  – Releases spectra and ASPCAP results
  – Commissioning data (through June 2011): degraded LSF (especially the red chip); no ASPCAP
  – 170 fields (includes a few commissioning-only fields)
  – 710 plates (+ sky frames + calibration frames/monitors)
  – 40-50K stars
– Looking past DR10:
  – 250+ fields available as of May, currently being combined
  – Plan to have DR10-level reductions of all year 2 data around the time of the DR10 release

17 Data access: flat files
– SAS “flat” files; datamodel: http://data.sdss3.org/datamodel/
– APOGEE_TARGET: targeting files include all _possible_ targets as well as selected ones
– APOGEE_DATA: raw data cubes
– APOGEE_REDUX: reduced data; currently corresponds to http://data.sdss3.org/sas/bosswork/apogee/spectro/redux/
– Embedded web pages provide a guide and some static plots
– Versions / organization: identified via apred_version/apstar_version/aspcap_version/results_version
  – apred_version: contains visit files (apVisit), organized by plate/MJD
  – apstar_version: contains combined star files, organized by field location
  – aspcap_version: raw ASPCAP results, organized by field location
  – results_version: adds ASPCAP “calibrated” results and sets some additional data quality bits
– Current version is r3/s3/a3/v302; DR10 version likely to be v303?

18 Summary “wrap-up” files
– Main summary data files:
  – allStar-v302.fits: catalog data for all DR10 stars
  – allVisit-v302.fits: catalog data for all DR10 visits
– These files are not overly large (~60,000 star entries in allStar currently), so they are quite manageable
– Pay attention to bitmasks!

allstar=mrdfits('allStar-v302.fits',1)
; skip stars with STAR_BAD (bit 23) or NO_ASPCAP_RESULT (bit 31) set in aspcapflag
badbits=(2LL^23 or 2LL^31)   ; 64-bit literals to avoid overflow on bit 31
gd=where((allstar.aspcapflag and badbits) eq 0)
plot,allstar[gd].teff,allstar[gd].logg
; find giant binaries: high RV scatter, good flags, log g < 3.8
gd=where(allstar.vscatter gt 1 and (allstar.aspcapflag and badbits) eq 0 and allstar.logg lt 3.8)
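For readers working outside IDL, the selection described above (skip stars with STAR_BAD or NO_ASPCAP_RESULT set, then look for high-vscatter giants) can be sketched in Python. A tiny in-memory list stands in for the allStar table here; reading the actual FITS file (e.g. with astropy) is not shown, and the row values are invented:

```python
# STAR_BAD (bit 23) and NO_ASPCAP_RESULT (bit 31), as in the IDL example.
BADBITS = (1 << 23) | (1 << 31)

# Tiny stand-in for rows of the allStar table; values are invented.
stars = [
    {"teff": 4800.0, "logg": 2.5, "aspcapflag": 0,       "vscatter": 0.2},
    {"teff": 5200.0, "logg": 4.3, "aspcapflag": 1 << 23, "vscatter": 2.0},
    {"teff": 4500.0, "logg": 2.0, "aspcapflag": 0,       "vscatter": 1.5},
]

# Keep stars with neither bad bit set (the "eq 0" test in the IDL).
good = [s for s in stars if (s["aspcapflag"] & BADBITS) == 0]

# Candidate giant binaries: RV scatter > 1 km/s, log g < 3.8, good flags.
binaries = [s for s in good if s["vscatter"] > 1 and s["logg"] < 3.8]
```

Note the flag test must be "== 0" (no bad bits set) to *keep* good stars; a "> 0" comparison would select the bad ones instead.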

19 Data access: API
– Programmatic access to data via the APOGEE API (soon)
– One particularly useful application: downloading subsets of spectra
– Also the basis for the SAS web app, a visual interface to spectra
– The APOGEE API is currently under development, available in the next several months
– The database used by the API is loaded; graphical spectrum access is available via the web app: https://spectra.sdss3.org:8100/

20 Data access: CAS
– Data from the summary files (allStar, allVisit, allPlates) have been loaded into the CAS (TESTDR10, currently restricted access)
– Tables: apogeePlate, apogeeStar, apogeeVisit, aspcapStar
– Example:

SELECT top 10 p.star, p.ra, p.dec, p.glon, p.glat, p.vhelio_avg, p.vscatter,
  a.teff, a.logg, a.metals, v.vhelio
FROM apogeeStar p
JOIN aspcapStar a on a.apstar_id = p.apstar_id
JOIN apogeeVisit v on a.star = v.star
WHERE (a.aspcap_flag & dbo.fApogeeAspcapFlag('STAR_BAD')) = 0 and p.nvisits > 6
ORDER BY a.star

– Object search through the CAS implemented in SkyServer

21

22
– Abundances of cooler stars
– Second instrument or first-instrument relocation
– Surface gravity issues: red clump vs red giant
– Abundance analysis of faint bulge stars: RR Lyr and RC stars
– Achieving the distance distribution

