What bandwidth do I need for my image?

1 What bandwidth do I need for my image?
Adrian M. Price-Whelan (NYU), David W. Hogg (NYU). NYU SPS, 10/14/09

2 Overview
Quick introduction to astronomical imaging (Hogg & Lang, arXiv:)
Brief explanation of stochastic resonance (Gammaitoni et al. 1998)
Our experiment
Results
Conclusion

3 Crash Course in Astronomical Imaging
What do astronomers do? In general:
Digitally image the sky
Measure properties of galaxies, stars, GRBs, etc., such as: flux, spectral content, noise level of the image
1) Sadly, we don't look at the stars very often - mostly looking at data!

4 Crash Course in Astronomical Imaging
Information from the sky is automatically discretized by the instrumentation via an ADC
Information content is naturally limited by the noise in the image: cosmic rays, high-energy particles, internal radiation, dust, etc.
1) The CCDs that record photon events in telescopes already add a layer of discretization
2) Even when the shutter is closed on telescopes, there are still events recorded, and they can be very roughly modeled as Gaussian noise

5 [figure-only slide]

6 Crash Course in Astronomical Imaging
Information from the sky is automatically discretized by the instrumentation via an ADC
Information content is naturally limited by the noise in the image: cosmic rays, high-energy particles, internal radiation, dust, etc.
In a sense, too much information is preserved
3) Even given these constraints, telescopes can have a downlink of about 200 GB to 200 TB per day. Economically, it is not feasible to keep all of the data! There is a lot of useless data - namely the noisy parts of the image - so what astronomers do is measure the properties of the noise so they can characterize the noise distribution. We'll come back to this later...

7 [figure-only slide]

8 Stochastic Resonance
Noise is not always a bad thing!
Consider taking all the data in an image and putting each row one after another, to make one long line of data
2) We in a sense unravel the 2D array of data into a 1D array, and then plot it - we might get something like this:
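The unravelling step above is a one-liner in, e.g., NumPy (a sketch; the 4x4 noise array here is purely illustrative):

```python
import numpy as np

# Hypothetical 4x4 "image": rows laid end to end give a 1-D signal.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, size=(4, 4))   # pure Gaussian noise

signal = image.ravel()   # row-major: row 0, then row 1, ...

# Each original pixel (r, c) lands at 1-D index r * ncols + c.
print(signal.shape)      # (16,)
```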

9 Stochastic Resonance
When we discretize data, we have to choose a resolution at which to save it (which we can think of as the number of bits we assign to an image)
If the resolution is too fine, we save too much data
Too coarse, and we throw out too much
What if we add noise before discretizing?

10 Stochastic Resonance
- (I apologize for the lack of labels - I threw these images together pretty quickly)
- Say we are now looking at just part of the signal shown previously, in which there is a lot of small variation (noise) and one clear source in the middle (probably a star)
- The red lines represent the bounds of the 'bins' we're going to sort the data into - are you familiar with histograms? We're just creating a pixel histogram based on the values of the pixels
- The top left image is the raw signal, with 2 bins - on the left is the pixel histogram - what we can see from it is that there are more data points below zero than above zero
- The bottom left image is the same signal, but with some added Gaussian noise with known properties (variance = 1, mean = 0) - we got lucky this time in that the noise actually BOOSTED the star up, so that if we switch to a finer resolution of 4 bins...

11 Stochastic Resonance
- The bottom left image is the same signal, but with some added Gaussian noise with known properties (variance = 1, mean = 0) - we got lucky this time in that the noise actually BOOSTED the star up, so that if we switch to a finer resolution of 8 bins...
- We can now see the star in the signal that we added noise to, but still can't in the top image.
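The effect described above - noise pushing a sub-threshold signal across quantization boundaries so that averaging can recover it (dithering) - can be sketched numerically. The step size, flux, and sample count here are illustrative choices, not the values from the slides' figures:

```python
import numpy as np

rng = np.random.default_rng(42)
delta = 1.0     # coarse quantization step (minimum representable difference)
flux = 0.25     # "star" signal smaller than one quantization step
n = 100_000     # many independent measurements of the same pixel

# Quantizing the clean signal erases it: 0.25 always rounds to 0.
clean = delta * np.round(np.full(n, flux) / delta)
print(clean.mean())      # 0.0

# Adding unit Gaussian noise *before* quantizing lets the average
# across many measurements recover the sub-step signal.
noisy = delta * np.round((flux + rng.normal(0.0, 1.0, n)) / delta)
print(noisy.mean())      # close to 0.25
```

Without the noise, no amount of averaging helps: every quantized sample is identically zero.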

12 Stochastic Resonance
I think that's all I'm going to say about stochastic resonance - I just wanted to convey that noise isn't always bad. In fact, you'll see that it actually enables us to detect faint sources after co-adding multiple exposures of the same region of the sky!

13 Some Definitions
Minimum Representable Difference ∆
The smallest difference in pixel values that can be represented
Can be thought of as 'resolution'
∆ getting bigger means the minimum representable difference gets bigger, which means we can't represent small differences in pixel values - therefore the resolution gets coarser
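The definition can be made concrete in code (a NumPy sketch; the pixel values and step sizes are made up):

```python
import numpy as np

def quantize(x, delta):
    """Store pixel values at resolution delta: each value is rounded
    to the nearest multiple of the minimum representable difference."""
    return delta * np.round(np.asarray(x, dtype=float) / delta)

pixels = np.array([3.10, 3.20])       # two pixels differing by 0.10

print(quantize(pixels, delta=0.5))    # [3.0, 3.0] -- difference lost
print(quantize(pixels, delta=0.05))   # [3.1, 3.2] -- difference kept
```

A difference smaller than ∆ simply cannot survive quantization, which is exactly why larger ∆ means coarser resolution.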

14-18 [figure-only slides]

19 So what we do here is take a fake astronomical image that we KNOW has a variance of 1, and we vary the resolution. On the left part of the plot we're measuring the variance very poorly - the minimum representable difference is large there - and as ∆ gets smaller, our error gets smaller. So we vary the resolution and measure the variance: 1 representable value, 2 values, etc.
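The variance-versus-resolution experiment can be reproduced in miniature. This is a sketch with arbitrary step sizes, not the exact grid from the plot:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 100_000)   # fake image pixels, true variance = 1

results = {}
for delta in (4.0, 1.0, 0.25):
    quantized = delta * np.round(noise / delta)   # save at resolution delta
    results[delta] = quantized.var()
    print(delta, results[delta])
# Coarse delta gives a badly biased variance estimate;
# fine delta recovers a value close to the true variance of 1.
```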

20 Our Experiment
We ask: at what limit of the minimum representable difference is too much scientific information lost?
To answer this, we now consider an image with a star of known flux and point-spread function (and noise, of course)
We vary the resolution as shown before, and ask how well we can centroid the star's position
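The centroiding step can be sketched as follows. Everything here is an illustrative assumption - a 1-D image, a Gaussian PSF, and a simple flux-weighted window centroid - not necessarily the estimator used in the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.arange(64, dtype=float)
true_center, flux, psf_sigma, delta = 31.7, 256.0, 2.0, 1.0

# Bright star with a Gaussian PSF, unit-variance noise, quantized at delta.
profile = (flux / (np.sqrt(2 * np.pi) * psf_sigma)
           * np.exp(-0.5 * ((x - true_center) / psf_sigma) ** 2))
image = delta * np.round((profile + rng.normal(0.0, 1.0, x.size)) / delta)

# Flux-weighted centroid in a small window around the brightest pixel.
peak = int(np.argmax(image))
win = slice(peak - 5, peak + 6)
centroid = np.sum(x[win] * image[win]) / np.sum(image[win])
print(abs(centroid - true_center))   # a small fraction of a pixel
```

Repeating this over a grid of ∆ values and plotting the offset against 1/∆ is the experiment the next slides summarize.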

21 Results
For a star of flux 8, our offset (measured position of the star minus actual position of the star) saturates at around 2^-3, which is pretty good - but it also shows that it saturates around 1/∆ = 1, that is, when the minimum representable pixel-value difference is 1!

22 Results
A star of flux 256 is so bright that we can measure it very well even at very low resolution.

23 Our Experiment
In some cases, regions of images that look entirely dark or noise-dominated actually do contain scientific information
Often, if one has many exposures of the same region of the sky, they can be co-added to reveal a "hidden source": a star with flux less than the mean noise level
We perform the same experiment as before, except with a non-visible star, and co-add after varying the resolution
Previously, we could visibly see a star - the flux of the star was big enough to show up over the noise level in the image. In this case, we chose a flux that is below - actually a factor of 4 lower than - the noise level, add it to an image of pure Gaussian noise, vary the resolution over many different values, and then co-add 1024 images with different noise but the same star location.
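The co-adding experiment can be sketched in miniature. The 1-D image, Gaussian PSF, and seed are illustrative assumptions; rather than centroiding, this sketch just checks that the co-add converges to the star's profile while any single quantized exposure looks like pure noise:

```python
import numpy as np

rng = np.random.default_rng(3)
npix, nexp, delta = 64, 1024, 1.0
flux, psf_sigma, center = 0.25, 2.0, 31.0
x = np.arange(npix, dtype=float)

# A star whose flux (0.25) is a factor of 4 below the noise sigma (1).
profile = (flux / (np.sqrt(2 * np.pi) * psf_sigma)
           * np.exp(-0.5 * ((x - center) / psf_sigma) ** 2))

# One quantized exposure: the star is invisible under the noise.
single = delta * np.round((profile + rng.normal(0.0, 1.0, npix)) / delta)

# Quantize each exposure coarsely (delta = 1), then average all of them.
coadd = np.zeros(npix)
for _ in range(nexp):
    exposure = profile + rng.normal(0.0, 1.0, npix)
    coadd += delta * np.round(exposure / delta)
coadd /= nexp

print(np.abs(single - profile).max())   # order 1: noise dominates
print(np.abs(coadd - profile).max())    # order 0.1: the star emerges
```

The noise is doing real work here: with no noise, every exposure would quantize to exactly zero and the co-add would contain nothing.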

24 Results
This is perhaps the most surprising and exciting result, hidden in this plot. What it shows is that even for a star of flux 0.25 (much less than the variance of the noise, which is 1), we can have a resolution as coarse as about ∆ = 1 and STILL measure the centroid of the star very accurately, as long as we have many exposures of the same region of the sky. I'll emphasize this again: the minimum representable pixel-value difference is 1, and our star flux is 0.25 - intuitively you would never be able to recover this data, but thanks to the noise and co-adding, we can! Even though we threw away a ton of data at ∆ = 1, no science is lost.

25 Results For reference, flux = 8 co-added

26 [figure-only slide]

27 Conclusions
Our results hint at the possibility of applying a lossy image-compression technique to space telescope (ST) data to cut costs and save time
We suggest setting the minimum representable difference to ∆ = 0.5σ, where σ is the root variance of the noise distribution
We checked the Hubble calibration settings; it seems they preserve just a bit too much information, at 0.25σ < ∆ < 0.33σ
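The suggested setting amounts to a simple quantization recipe. This is a sketch, not the actual Hubble pipeline; for simplicity σ is estimated here from the whole (hypothetical) image rather than from blank-sky regions:

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical sky image: flat background of 5 with noise sigma = 2.
image = rng.normal(5.0, 2.0, size=(128, 128))

sigma = image.std()      # in practice, estimated from source-free regions
delta = 0.5 * sigma      # suggested minimum representable difference

compressed = delta * np.round(image / delta)

# Round-to-nearest bounds the per-pixel error by delta / 2 = 0.25 sigma,
# and the image now takes far fewer distinct values to encode.
print(np.unique(compressed).size, "distinct values instead of", image.size)
```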

