# Introduction to Function Minimization



Motivation example: data on the height of a group of 10000 people, men and women. Gender was not recorded, so it is not known who was a man and who a woman. Can one estimate the number of men in the group? The histogram is asymmetric, non-Gaussian: two subgroups (men & women)? Two superposed Gaussians? (This is artificially simulated data, just a demo: two Gaussians with different means, men/women drawn randomly in ratio 7/3.)

See error bars!

Best Gaussian fit

Two-Gaussians best fit. (Again: artificially simulated data, just a demo; two Gaussians with different means were used for the simulation, men/women drawn randomly in ratio 7/3.)



The press-the-button user just asks: find the best fit by two Gaussians. But how is it done? Whether the fit function is one Gaussian or a sum of two, one must find the best values of its parameters. This needs a goodness-of-fit criterion, and the fit function is nonlinear in its parameters.

50 histogram bins; each bin represents "one data point". Goodness-of-fit is measured by a least-squares criterion.
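The least-squares formula itself was shown as an image on the slide. A common form for a binned histogram, sketched here under the assumptions of Poisson bin errors (sigma_i ≈ sqrt(n_i)) and a single-Gaussian fit function for brevity (function names are illustrative):

```python
import math

def gaussian(x, amplitude, mean, sigma):
    """A single Gaussian curve (illustrative fit function)."""
    return amplitude * math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def chi_square(bin_centers, bin_counts, params):
    """Least-squares goodness-of-fit for a binned histogram.

    Each bin is one "data point"; the error of a bin holding n_i
    entries is estimated as sqrt(n_i), so sigma_i^2 = n_i."""
    amplitude, mean, sigma = params
    total = 0.0
    for x, n in zip(bin_centers, bin_counts):
        if n > 0:  # skip empty bins, which have no error estimate
            residual = n - gaussian(x, amplitude, mean, sigma)
            total += residual ** 2 / n
    return total
```

Minimizing this sum over the parameters yields the best fit; for a two-Gaussian fit the model would simply be a sum of two such terms.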

Mathematically, the problem is the following: I have a function of n variables (the parameters of the goodness-of-fit criterion), and I want to find the point at which the function attains its minimum. The function is often nonlinear, so an analytical solution is usually hopeless; numerical methods are needed to attack the problem.

Numerically, the problem is the following: I have a function FCN (in the sense of a program subroutine) of n parameters, and I want to find the point at which it attains its minimum. Each call evaluating the function value is often time-consuming, having in mind a definition like the least-squares sum over all histogram bins. I need a numerical procedure which calls FCN repeatedly with different values of the parameters and finally finds the minimizing set of parameter values.

Stepping algorithms
- start at some point in the parameter space
- choose a direction and a step size
- walk according to some clever strategy, doing iterative steps and looking for small values of the minimized function

One-dimensional optimization problem. Stepping algorithm with adaptable step size:

```
fcurrent = FCN(xcurrent);
repeat forever {
    xtrial = xcurrent + step;
    ftrial = FCN(xtrial);
    if (ftrial < fcurrent) {   // success: accept the point, enlarge the step
        xcurrent = xtrial;
        fcurrent = ftrial;
        step = 3 * step;
    } else {                   // failure: reverse and shrink the step
        step = -0.5 * step;
    }
}
```

Success-failure method; a parabola can be fitted through the last three points.
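The success-failure scheme can be sketched as a runnable routine. The growth factor 3 and the reverse-and-halve factor -0.5 are the classic choices for this method; the stopping rule (step size below a tolerance) anticipates the end criteria discussed below:

```python
def success_failure_minimize(fcn, x0, step=0.1, tol=1e-8, max_calls=10000):
    """1-D success-failure stepping minimizer (a sketch).

    On success the step is enlarged; on failure it is reversed and
    shrunk, until its magnitude falls below `tol`."""
    x, f = x0, fcn(x0)
    calls = 1
    while abs(step) > tol and calls < max_calls:
        x_trial = x + step
        f_trial = fcn(x_trial)
        calls += 1
        if f_trial < f:      # success: accept the point, grow the step
            x, f = x_trial, f_trial
            step *= 3.0
        else:                # failure: reverse and shrink the step
            step *= -0.5
    return x, f
```

Note the adaptivity: far from the minimum the step grows geometrically, near the minimum it shrinks, so no single fixed step size has to be guessed in advance.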

Fitting a line through two points estimates the first derivative (gradient); fitting a parabola through three points estimates the first as well as the second derivative.

A stepping method can thus estimate the gradient as well as the second derivative. Around the minimum, all smooth functions look like a parabola. Newton's idea: go straight to the minimum; the inverse of the second derivative tells how far to jump in the direction of the negative gradient.
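Newton's idea can be illustrated in one dimension: estimate f' and f'' from three function values (equivalent to fitting a parabola through them), then jump by -f'/f''. A sketch with illustrative function names:

```python
def parabola_derivatives(fcn, x, h):
    """Estimate f'(x) and f''(x) from the three points x-h, x, x+h
    (equivalent to fitting a parabola through them)."""
    f_minus, f0, f_plus = fcn(x - h), fcn(x), fcn(x + h)
    first = (f_plus - f_minus) / (2 * h)            # central difference
    second = (f_plus - 2 * f0 + f_minus) / h ** 2   # curvature
    return first, second

def newton_minimize(fcn, x0, h=1e-4, steps=20):
    """1-D Newton iteration: x <- x - f'(x) / f''(x).

    For an exactly quadratic function a single step lands on the
    minimum; otherwise the step is repeated."""
    x = x0
    for _ in range(steps):
        g, G = parabola_derivatives(fcn, x, h)
        if G <= 0:   # no positive curvature: the Newton step is unreliable
            break
        x -= g / G
    return x
```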

Any stepping method needs
- a starting point
- an initial step size
- an end criterion (otherwise an infinite loop), e.g.
  - step size < δ
  - improvement in the function values < ε

Problem of local and/or boundary minima

Many-dimensional minimization
- start at point (p0, p1)
- fix the value of p1, perform minimization with respect to p0
- fix the value of p0, perform minimization with respect to p1
- repeat
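This axis-by-axis scheme can be sketched as follows; any 1-D minimizer can be plugged in, and the crude success-failure stepper included here is just for illustration:

```python
def line_min(g, x0, step=1.0, tol=1e-9, max_iter=10000):
    """Crude 1-D minimizer (success-failure stepping), used as the
    per-axis line minimizer below."""
    x, f = x0, g(x0)
    for _ in range(max_iter):
        if abs(step) < tol:
            break
        x_trial = x + step
        f_trial = g(x_trial)
        if f_trial < f:
            x, f = x_trial, f_trial
            step *= 3.0
        else:
            step *= -0.5
    return x

def coordinate_descent(fcn, p, line_minimizer, sweeps=50):
    """Minimize fcn(p) by alternately minimizing along each coordinate
    axis, holding the other coordinates fixed."""
    p = list(p)
    for _ in range(sweeps):
        for i in range(len(p)):
            def along_axis(x, i=i):
                q = p[:]        # vary only coordinate i
                q[i] = x
                return fcn(q)
            p[i] = line_minimizer(along_axis, p[i])
    return p
```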

The axes point in the wrong directions: not in the direction of the gradient. Cure: rotate the axes after each iteration so that the first axis points in the direction of the (estimated) gradient.

Simplex minimization: get rid of the worst point by stepping through the remaining points, along the estimated gradient direction, to a trial point.
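A minimal sketch of the idea, in the style of the original simplex method (reflect the worst vertex through the centroid of the others; shrink toward the best vertex when reflection fails). This is a simplification: the slide does not fix the exact rules, and full Nelder-Mead adds expansion and contraction moves:

```python
def simplex_minimize(fcn, simplex, iterations=200):
    """Minimal simplex minimizer (simplified sketch)."""
    pts = [list(p) for p in simplex]
    n = len(pts[0])
    for _ in range(iterations):
        pts.sort(key=fcn)                  # best first, worst last
        best, worst = pts[0], pts[-1]
        centroid = [sum(p[i] for p in pts[:-1]) / (len(pts) - 1)
                    for i in range(n)]
        trial = [2 * centroid[i] - worst[i] for i in range(n)]
        if fcn(trial) < fcn(worst):        # reflection succeeded
            pts[-1] = trial
        else:                              # shrink toward the best point
            pts = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                            for p in pts[1:]]
    pts.sort(key=fcn)
    return pts[0]
```

A strength of the simplex method is that it needs only function values, no derivatives, which makes it a robust first stage before a gradient method takes over.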

Gradient methods
- stepping methods which use, in addition to the function value at the current point, also the local gradient at that point
- the local gradient can be obtained
  - by calling a user-supplied procedure which returns the vector (n-dimensional array) of first-order derivatives
  - by estimating the gradient numerically, evaluating the function value at n points in a very small neighborhood of the current point
- performing a one-dimensional minimization in the direction of the negative gradient significantly improves the current point position
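Both ingredients can be sketched briefly: a forward-difference gradient estimate (one extra FCN call per dimension), and a crude line search along the negative gradient. The trial step sizes are an illustrative stand-in for a proper one-dimensional minimization:

```python
def numerical_gradient(fcn, p, h=1e-6):
    """Estimate the gradient by forward differences:
    one extra function call per dimension."""
    f0 = fcn(p)
    grad = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        grad.append((fcn(q) - f0) / h)
    return grad

def steepest_descent_step(fcn, p, alphas=(0.001, 0.01, 0.1, 0.3, 1.0)):
    """One gradient-method iteration (sketch): move along the negative
    gradient, picking the best of a few trial step sizes as a crude
    one-dimensional minimization."""
    g = numerical_gradient(fcn, p)
    best = min(([p[i] - a * g[i] for i in range(len(p))] for a in alphas),
               key=fcn)
    return best if fcn(best) < fcn(p) else p
```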

Quadratic approximation to the minimized function at the current point p0 in n dimensions:

F(p) ≈ F(p0) + gᵀ(p − p0) + ½ (p − p0)ᵀ G (p − p0)

where g is the gradient and G the matrix of second derivatives at p0. Knowing g, one can perform a one-dimensional optimization with respect to the step size α, searching for the best point along p = p0 − αg. If G were known, the minimum of the quadratic model would be at p = p0 − G⁻¹g. It would be useful if the gradient method could also estimate the matrix G⁻¹.
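As a numeric check of p = p0 − G⁻¹g: for an exactly quadratic function, one full Newton step lands on the minimum. A minimal 2-D sketch (the helper name and the explicit 2×2 inverse are just for illustration):

```python
def newton_step_2d(p, grad, hessian):
    """Full Newton step p - G^{-1} g in two dimensions, with the 2x2
    matrix inverse written out explicitly."""
    (a, b), (c, d) = hessian
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    gx, gy = grad
    return (p[0] - (inv[0][0] * gx + inv[0][1] * gy),
            p[1] - (inv[1][0] * gx + inv[1][1] * gy))
```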

If the minimized function is exactly quadratic, then G is constant over the whole parameter space. If it is not exactly quadratic, we still expect G to vary slowly in the region not far from the minimum. A local numerical estimate of G is costly: it needs many calls to F, and then a matrix inversion is needed. Idea: can one estimate G⁻¹ iteratively?

Example of a variable metric method: for quadratic functions the iteration converges to the minimum and to the true G⁻¹.
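The slide's exact update formula was shown as an image; the classic Davidon-Fletcher-Powell (DFP) formula serves here as an illustration of the idea. After each step it corrects the inverse-Hessian estimate H using only the observed parameter change dx and gradient change dy, so that H·dy = dx holds exactly (the secant condition); for a quadratic F with exact line searches, H converges to the true G⁻¹:

```python
import numpy as np

def dfp_update(H, dx, dy):
    """Davidon-Fletcher-Powell update of the inverse-Hessian estimate H
    (assumed symmetric). dx = parameter change, dy = gradient change
    over the step; both are 1-D arrays."""
    dx_col = dx.reshape(-1, 1)
    Hdy = H @ dy.reshape(-1, 1)
    return (H
            + (dx_col @ dx_col.T) / float(dx @ dy)      # rank-1 correction
            - (Hdy @ Hdy.T) / float(dy @ (H @ dy)))     # removes stale info
```

No extra FCN calls and no matrix inversion are needed: the estimate of G⁻¹ is a by-product of the steps the minimizer takes anyway.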

Variable metric methods: fast convergence in the almost-quadratic region around the minimum. Added value: good knowledge of G⁻¹ at the minimum, which means that the shape of the optimized function around the minimum is known.

MINUIT
- Author: F. James (CERN)
- a complex minimization program (package) comprising various minimization procedures as well as other useful data-fitting tools
- among them:
  - SIMPLEX
  - MIGRAD (a variable metric method)

MINUIT
- originally a FORTRAN program in the CERN library
- now available in C++, stand-alone (SEAL project): http://seal.web.cern.ch/seal/MathLibs/Minuit2/html/index.html
- contained in the CERN ROOT package: http://root.cern.ch
- available in Java (FreeHEP JAIDA project): http://java.freehep.org/freehep-jminuit/index.html

References
- F. James and M. Winkler, C++ MINUIT User's Guide: http://seal.cern.ch/documents/minuit/mnusersguide.pdf
- F. James, Minuit Tutorial on Function Minimization: http://seal.cern.ch/documents/minuit/mntutorial.pdf
- F. James, The Interpretation of Errors in Minuit: http://seal.cern.ch/documents/minuit/mnerror.pdf
- Microsoft Visual C++ Express, a free C++ for Windows: http://www.microsoft.com/express/vc/

