Parameterizing dark energy: a field space approach


Parameterizing dark energy: a field space approach
Robert Crittenden, University of Portsmouth
Work with E. Majerotto, F. Piazza and L. Pogosian

Phenomenology of dark energy. Key questions from the theory-observation divide: What are the variables we should use to describe dark energy? What do observations tell us? What are the theoretical priors on those variables? How do we design experiments to address the questions we want to answer?

Parameterizations of dark energy. Without a good theory, our choices for how we parameterize dark energy are fairly arbitrary. Candidates include a constant w, a linear evolution, a kink, the dark energy density, and the acceleration and jerk of the expansion. Any such parameterization amounts to a data compression and means potentially throwing away information. What if interesting DE evolution is orthogonal to these parameterizations? What does theory suggest might be interesting variables?
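For concreteness, the constant and linear candidates are usually written in the standard forms from the literature (not specific to this talk):

\[
w(a) = w_0, \qquad w(a) = w_0 + w_a (1 - a),
\]

where $a$ is the scale factor and $w_0$, $w_a$ are the parameters fit to data.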

What can observations tell us? Principal components: parameterize w(z) with enough bins in redshift to allow significant freedom (e.g. 30 bins), then find the eigenvectors of the Fisher matrix to see what could be measured with future data. By combining data we may eventually be able to learn about 4-5 parameters, starting with the low-frequency modes, but we'll eventually get higher-frequency ones. Very sensitive to assumptions about systematic errors! Uses a Gaussian approximation to the likelihood. (Crittenden & Pogosian; Huterer & Starkman; Huterer & Linder; Knox et al.)
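A minimal sketch of the principal-component step described above, using a toy Fisher matrix for binned w(z); the matrix here is invented for illustration, not taken from any survey forecast.

```python
import numpy as np

# Toy Fisher matrix for w(z) in n redshift bins: nearby bins are
# measured together, so F has a band structure (illustrative only).
n = 30
z = np.linspace(0.0, 2.0, n)
F = np.exp(-(z[:, None] - z[None, :])**2 / (2 * 0.3**2)) * 50.0

# Eigenvectors of F are the principal components e_i(z); the eigenvalues
# give the inverse variance with which each mode amplitude
# alpha_i = sum_j e_i(z_j) w(z_j) could be measured.
eigvals, eigvecs = np.linalg.eigh(F)
order = np.argsort(eigvals)[::-1]          # best-measured modes first
sigma = 1.0 / np.sqrt(eigvals[order])
modes = eigvecs[:, order]

for i in range(4):
    print(f"mode {i}: sigma = {sigma[i]:.3f}")
```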

Phenomenology of dark energy. Spectra of eigenvalues from future experiments. [Figure: eigenvalue spectrum, running from the most informative, best-determined modes (~1/2) down to the least informative.] Where do we draw the line? It depends on what we think we already know. In the absence of any prior information, they are all informative. But we always know something!

Parameterizing dark energy. Why not report our constraints in the same way? Choose a parameterization with plenty of degrees of freedom. Report the best-determined eigenmodes, their amplitudes and the eigenvalues of the likelihood. This would allow us to test any w(z) we wanted, not missing any potentially useful high-frequency information. We can always project to any particular parameterization later using this information! See also Albrecht & Bernstein (2007).
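One way this projection could work in practice (a sketch continuing the toy example above, not the authors' pipeline): given the reported modes e_i(z) and the errors on their amplitudes, project any chosen parameterization, e.g. the linear w0-wa form, onto those modes and recover its Fisher matrix.

```python
import numpy as np

def project_constraints(modes, sigma, design):
    """Fisher matrix for new parameters from reported eigenmodes.

    modes  : (n_bins, n_modes) eigenmode shapes e_i(z)
    sigma  : errors on the measured mode amplitudes
    design : (n_bins, n_params) derivative of w(z) w.r.t. the new parameters
    """
    # Amplitude of each new-parameter direction in each eigenmode
    A = modes.T @ design                           # (n_modes, n_params)
    F_new = A.T @ np.diag(1.0 / sigma**2) @ A
    return F_new

# Example (using modes, sigma and z from the previous sketch):
# project onto w(a) = w0 + wa (1 - a), with a = 1/(1+z)
# a = 1.0 / (1.0 + z)
# design = np.stack([np.ones_like(z), 1.0 - a], axis=1)
# F_w0wa = project_constraints(modes, sigma, design)
```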

The importance of priors: Bayesian evidence comparison. To compare models, we integrate the likelihood of the data over the possible model parameters,

\[
E = P(D|M) = \int P(D|\theta, M)\, P(\theta|M)\, d\theta ,
\]

where $P(\theta|M)$ is the prior parameter distribution and $P(D|\theta,M)$ peaks at the best-fit likelihood. Key questions: How much better does this model fit the data (the best-fit likelihood)? What fraction of the parameter volume improves the fit (Occam's razor)? The prior plays a key role in comparing models, particularly if the fit is not dramatically better. But it is generally unknown!
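A toy numerical illustration of the evidence integral (an invented one-parameter example, not from the talk): two models for the same data, one with a fixed parameter and one with a free parameter and a broad prior, showing how the prior volume penalizes the more flexible model.

```python
import numpy as np

# Toy data: a single measurement x_obs with Gaussian error sigma
x_obs, sigma = 0.1, 0.2

def likelihood(mu):
    return np.exp(-0.5 * ((x_obs - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Model A: mu fixed at 0 (no free parameters) -> evidence = likelihood
E_A = likelihood(0.0)

# Model B: mu free, flat prior on [-2, 2] -> evidence = integral of
# likelihood * prior over the parameter (simple Riemann sum)
mu = np.linspace(-2.0, 2.0, 2001)
prior = np.full_like(mu, 1.0 / 4.0)          # normalized flat prior
E_B = np.sum(likelihood(mu) * prior) * (mu[1] - mu[0])

print(f"Evidence ratio (Occam factor at work): E_A / E_B = {E_A / E_B:.2f}")
```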

Ruling out dynamical w(z). Whatever the data are, there's bound to be some dynamical DE model which fits better than a cosmological constant. Whether it's interesting or not depends on our priors. These data are quite consistent with a cosmological constant, but there could be a better fit.

Phenomenology of dark energy. Whatever the data are, there's bound to be some dynamical DE model which fits better than a cosmological constant. Whether it's interesting or not depends on our priors. An oscillating function might be a better fit, which would be missed if the chosen parameterization didn't allow that freedom. Because of the size of the errors in this case, we would likely prefer w = -1 unless we had a model that predicted this precise behavior. However, if the errors were smaller, the improvement in the fit might justify a more complex theory.

A phenomenological prior on w(z). Rather than implicitly putting hard priors in by the choice of parameters, we can put in soft priors explicitly. One way to do this is to treat w(z) as a random field described by a correlation function, $\xi(|z - z'|) = \langle \delta w(z)\, \delta w(z') \rangle$. This is independent of the binning choice and has the effect of preferring smooth w(z) histories over quickly changing ones. Strong long-range correlations will reproduce the constant or linear prior. But if the data are strong enough to overcome the priors, then higher-frequency modes could be seen.
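A sketch of how such a soft prior could be implemented for binned w(z); the specific correlation function (an exponential in the redshift separation with amplitude xi0 and correlation length zc) is chosen for illustration, not taken from the talk.

```python
import numpy as np

# Redshift bins and an illustrative correlation function for w(z):
# xi(dz) = xi0 * exp(-|dz| / zc); zc sets the smoothness scale.
z = np.linspace(0.0, 2.0, 30)
xi0, zc = 0.1, 0.3
dz = z[:, None] - z[None, :]
C_prior = xi0 * np.exp(-np.abs(dz) / zc)

# The prior enters as an extra inverse-covariance term:
# F_post = F_data + C_prior^{-1}.  Large zc (strong long-range
# correlations) pushes toward nearly constant w(z); small zc lets
# higher-frequency structure survive if the data demand it.
F_data = np.eye(len(z)) * 4.0                  # toy data Fisher matrix
F_post = F_data + np.linalg.inv(C_prior)

sigma_post = np.sqrt(np.diag(np.linalg.inv(F_post)))
print(sigma_post[:5])
```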

Quintessence priors. Quintessence uses a dynamical scalar field to produce acceleration. In principle it can reproduce any w(z), but that doesn't mean all histories are equally likely! Ideally we would like to know the probability distribution for the various DE histories based on theoretical prejudice, mapping priors on the potential V and the initial conditions into w(z). Unfortunately, we have yet to agree on which models should be included (or their relative weightings), much less how the parameters of a given model should be distributed. This makes them hard to falsify! (Weller & Albrecht)

Thawing and freezing. Two generic classes of quintessence models (Caldwell & Linder 05): Thawing models - the field is frozen at early times and rolls once the Hubble friction drops. These start out at w = -1, and w then increases as the field begins to roll. Freezing models - the field runs down a steep (divergent) potential and gradually stops when the potential flattens out and friction becomes important. These typically start with constant w > -1, and then naturally approach w = -1 as the field takes over driving the expansion.

Priors from quintessence. Can we use what we observe about dark energy to help us parameterize it? The equations for a minimally coupled scalar field are $\ddot\phi + 3H\dot\phi + V'(\phi) = 0$, with $H^2 = \frac{8\pi G}{3}\left(\rho_m + \tfrac{1}{2}\dot\phi^2 + V(\phi)\right)$. In inflation the slow-roll approximation is usually used, but when cold dark matter is present this isn't usually justified. (Work with E. Majerotto and F. Piazza.)
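A minimal numerical sketch of evolving a minimally coupled scalar field in a matter + quintessence universe (toy exponential potential and units chosen for illustration; this is not the authors' code), from which the w(a) histories of the thawing examples discussed below could be computed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units: 8*pi*G = 1, H0 = 1, so the critical density today is 3.
Om0 = 0.3
V0, lam = 2.1, 0.5                 # toy potential parameters (illustrative)

def V(phi):   return V0 * np.exp(-lam * phi)
def dV(phi):  return -lam * V0 * np.exp(-lam * phi)

def rhs(N, y):
    """Field equations in e-folds N = ln a, with x = dphi/dN."""
    phi, x = y
    rho_m = 3.0 * Om0 * np.exp(-3.0 * N)
    H2 = (rho_m + V(phi)) / (3.0 - 0.5 * x**2)   # Friedmann constraint
    dlnH = -0.5 * (rho_m / H2 + x**2)            # dlnH/dN
    dx = -(3.0 + dlnH) * x - dV(phi) / H2        # Klein-Gordon equation
    return [x, dx]

# Thawing initial conditions: field frozen (x = 0) deep in matter domination
N = np.linspace(np.log(1e-2), 0.0, 400)
sol = solve_ivp(rhs, (N[0], N[-1]), [0.0, 0.0], t_eval=N, rtol=1e-8)

phi, x = sol.y
H2 = (3.0 * Om0 * np.exp(-3.0 * N) + V(phi)) / (3.0 - 0.5 * x**2)
w = (0.5 * H2 * x**2 - V(phi)) / (0.5 * H2 * x**2 + V(phi))
print("w today:", w[-1])
```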

Smoothness of the potential. Constraints on the potential: its value today must match the observed dark energy density, and the field must still be evolving today. If we assume the potential is given by a dimensionless function f of the field over a smoothness scale, and f and its derivatives are of order one (the key assumption!), the constraints suggest a smoothness scale of order the Planck mass. What does this smoothness mean for w(z)?
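A rough back-of-the-envelope version of this estimate (my reconstruction under the stated assumptions, writing $V(\phi) = M^4 f(\phi/\Delta)$ with $f$ and its derivatives of order one, rather than the slide's own equations):

\[
V \simeq \rho_{\rm DE} \simeq 3\,\Omega_{\rm DE} H_0^2 M_p^2 ,
\qquad
|V''| \sim \frac{M^4}{\Delta^2} \lesssim H_0^2 \quad \text{(field light enough to still be evolving)},
\]

so the smoothness scale obeys $\Delta \gtrsim \sqrt{V/|V''|} \sim M_p$.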

Small field displacement. Observationally we know the equation of state is close to w = -1, which indicates the field hasn't moved much in recent times. The field displacement is thus small compared to the typical smoothness scale of the potential, so it should be reasonable to expand the potential around its present value.
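The estimate behind this statement can be sketched with standard scalar-field relations (my reconstruction, not the slide's own equation): since $\rho_\phi + p_\phi = \dot\phi^2 = (1+w)\,\rho_\phi$, the displacement over the last Hubble time is roughly

\[
\Delta\phi \sim \frac{\dot\phi}{H_0} \sim \sqrt{3\,\Omega_\phi (1+w)}\; M_p ,
\]

which for $1+w \ll 1$ is small compared with the Planck-scale smoothness of the potential estimated above, justifying a Taylor expansion of $V$ about the present field value.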

The field-space approximation. If we expand the potential about the present field value and keep the leading terms in (1+w), we can then analytically solve for the field evolution φ(a) and the dark energy density.

Comparison to the exact solution. This approximation gives an impressive fit to the full numerical evolution for thawing models. [Figure legend: blue - full field dynamics; red - the field-space approximation; black - linear parameterization.] These are fit to the same w and its derivative today, and may be improved by fitting in the middle of the range of interest. This gives another two-parameter description of DE and gives us a field-space measure on DE models.

Likelihoods. We can compare to observations using SN, CMB and BAO data. [Figure: top - linear parameterization; bottom - the field-space parameterization, matching w and w' today.] The similar likelihood curves show that differences in the evolution are not well constrained with present data. They also show a focusing of the models near w = -1; the excluded regions require a large change in the field (Scherrer 06).

Priors on w(z). The previous curves show only the likelihood, without accounting for the prior probability of the models. The analytic solution allows us to relate the probability of the potential to the probability of w(z) via the Jacobian of the mapping between them. This reflects the fact that if the potential is locally flat, the field doesn't move and the rest of the potential is not relevant. [Figure: models drawn from a uniform grid in the parameters, shown in w(z).]

Posterior. We can fold the prior with the likelihood from the data to find the final posterior distribution. A large volume of the models lies near the best fit to the data, which means the evidence for these models will be large. This, however, makes it hard to rule out a large fraction of the possible models without greatly improving the error on (1+w).

Conclusions Priors on DE are impossible to avoid; they are necessary to discriminate between models and to decide what we choose to measure. Priors are implicit in how we choose to parameterize DE, so we might be better off allowing a large degree of freedom and making the priors explicit. Thus far little has been done to relate the priors on dark energy parameters to more fundamental parameters. In quintessence we have made an attempt to do this, which shows a focusing of models near w=-1 and also provides a simple template for thawing models.

Post-doc at Portsmouth. Soon to be advertised: a post-doc in theoretical cosmology (dark energy, inflation, brane worlds, etc.) on the STFC rolling grant, with a proposed start date of 1 January 2008.

Figures of merit. We have to choose something to optimize to decide what experiments to build, which is usually called the figure of merit. Often the volume of the error ellipsoid is minimized, which is related to the determinant of the Fisher matrix. This can lead to squeezing in only one dimension at the expense of the others. An alternative is related to the trace of the inverse Fisher matrix, which is simply the mean squared error, $\sum_i \sigma_i^2 = \mathrm{Tr}(F^{-1})$. This is dominated by the modes which have the greatest errors, so using it will tend to spread what we learn over a large number of independent modes, giving a better balance between them. Another possibility is based on the projected chi-squared, which is the trace of the Fisher matrix.
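A small illustration of the three options just described, for an invented two-parameter Fisher matrix (the numbers are arbitrary):

```python
import numpy as np

# Toy Fisher matrix for two dark energy parameters (arbitrary numbers)
F = np.array([[40.0, 12.0],
              [12.0,  8.0]])
Finv = np.linalg.inv(F)

# 1) Volume of the error ellipsoid: proportional to 1/sqrt(det F),
#    so a determinant-based figure of merit maximizes det F.
fom_det = np.sqrt(np.linalg.det(F))

# 2) Mean squared error: Tr(F^-1) = sum of sigma_i^2 over the modes;
#    dominated by the worst-measured directions.
mse = np.trace(Finv)

# 3) Trace of the Fisher matrix itself (the projected chi-squared option).
fom_trace = np.trace(F)

print(f"det-based FoM = {fom_det:.2f}, Tr(F^-1) = {mse:.3f}, Tr(F) = {fom_trace:.1f}")
```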