1 Theory of Differentiation in Statistics Mohammed Nasser Department of Statistics.



2 Relation between Statistics and Differentiation. Statistical concepts/techniques and the corresponding use of differentiation theory: study of the shapes of univariate pdfs — an easy application of first- and second-order derivatives; calculation/stabilization of the variance of a random variable — an application of Taylor's theorem; calculation of moments from the MGF/CF — differentiating the MGF/CF.

3 Relation between Statistics and Differentiation. Description of a density/model: dy/dx = k, dy/dx = kx. Optimizing a risk functional / regularized functional / empirical risk functional, with or without constraints, needs heavy tools of nonlinear optimization — techniques that depend on multivariate differential calculus and functional differential calculus. The influence function used to assess robustness of a statistical functional is an easy application of the directional derivative in function space.

4 Relation between Statistics and Differentiation. The classical delta theorem, used to find an asymptotic distribution — an application of the ordinary Taylor's theorem. Von Mises calculus — an extensive application of functional differential calculus. The relation between probability measures and probability density functions — the Radon-Nikodym theorem.

5 Monotone Function f(x). Monotone increasing: strictly increasing or non-decreasing. Monotone decreasing: strictly decreasing or non-increasing.

6 Increasing/Decreasing Test. If f'(x) > 0 for all x in an interval, then f is increasing on that interval; if f'(x) < 0 for all x in the interval, then f is decreasing there.
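The test above can be illustrated with a quick numeric check; the function f(x) = x^3 + x, the grid, and the finite-difference step are illustrative choices, not from the slides.

```python
# Numeric illustration of the increasing/decreasing test: if f'(x) > 0 on an
# interval, f is increasing there. We approximate f' by central differences.
def d1(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3 + x          # f'(x) = 3x^2 + 1 > 0 everywhere
xs = [-2 + 0.5 * i for i in range(9)]  # grid on [-2, 2]

print(all(d1(f, x) > 0 for x in xs))                 # derivative positive
print(all(f(a) < f(b) for a, b in zip(xs, xs[1:])))  # so f is increasing
```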

7 Example of a Monotone Increasing Function. (Graph omitted in this transcript.)

8 Maximum/Minimum. (Figure of a function over an interval [a, b] omitted.) Is there any sufficient condition that guarantees the existence of a global max, a global min, or both?

9 Some Results to Mention. If a function is continuous and its domain is compact, the function attains its extrema. This is a very general result: it holds for any compact space, not only compact subsets of R^n. Any convex (concave) function attains its global min (max). Some functions may have a global min (max) without satisfying any of the above conditions. Firstly, prove the existence of an extremum; then calculate it.

10 What Does f' Say about f? Fermat's theorem: if f has a local maximum or minimum at c, and if f'(c) exists, then f'(c) = 0. The converse is not true: for example, f(x) = x^3 has f'(0) = 0 but no extremum at 0.

11 Concavity. If f''(x) < 0 for all x in (a, b), then the graph of f is concave on (a, b); if f''(x) > 0 there, it is convex. If f'' changes sign at c, then f has a point of inflection at c.

12 Maximum/Minimum. Let f(x) be a differentiable function on an interval I. f is maximum at an interior point c if f'(c) = 0 and f' changes sign from positive to negative at c. If f'(x) < 0 for all x in an interval, then f is maximum at the first endpoint of the interval (if the left side is closed) and minimum at the last endpoint (if the right side is closed). If f'(x) > 0 for all x in an interval, then f is minimum at the first endpoint (if the left side is closed) and maximum at the last endpoint (if the right side is closed).

13 Normal Distribution. The probability density function is given as f(x) = (1/(sigma*sqrt(2*pi))) * exp(-(x - mu)^2 / (2*sigma^2)), x in R. Properties: f is continuous on R; f(x) >= 0; f is differentiable on R. (Figure labels: convex, concave, points of inflection.)

14 Normal Distribution. Take logs on both sides: log f(x) = -log(sigma*sqrt(2*pi)) - (x - mu)^2 / (2*sigma^2). Now differentiate and set the first derivative equal to zero: f'(x) = -f(x)*(x - mu)/sigma^2 = 0, which gives x = mu.

15 Normal Distribution. Since f'(x) > 0 for x < mu and f'(x) < 0 for x > mu, f is maximum at x = mu.

16 Normal Distribution. Set the second derivative equal to zero: f''(x) = f(x)*((x - mu)^2/sigma^4 - 1/sigma^2) = 0 gives (x - mu)^2 = sigma^2. Therefore f has points of inflection at x = mu - sigma and x = mu + sigma.
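The two conclusions above (maximum at mu, inflections at mu ± sigma) can be checked numerically with finite differences; the particular values mu = 2, sigma = 1.5 are illustrative.

```python
import math

# Normal pdf and central-difference approximations of f' and f''.
def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

mu, sigma = 2.0, 1.5
f = lambda x: normal_pdf(x, mu, sigma)
print(abs(d1(f, mu)))          # ~0: f' vanishes at the maximum x = mu
print(abs(d2(f, mu + sigma)))  # ~0: inflection at x = mu + sigma
print(abs(d2(f, mu - sigma)))  # ~0: inflection at x = mu - sigma
```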

17 Logistic Distribution. The distribution function is defined as F(x) = 1/(1 + e^(-x)), x in R. (Figure labels: convex, concave.)

18 Logistic Distribution. Take the first derivative with respect to x: F'(x) = e^(-x)/(1 + e^(-x))^2 > 0, therefore F is strictly increasing. Take the second derivative and set it equal to zero: F''(x) = e^(-x)*(e^(-x) - 1)/(1 + e^(-x))^3 = 0 at x = 0, therefore F has a point of inflection at x = 0.

19 Logistic Distribution. Since F'(x) > 0 for all x, F has no maximum and no minimum. Since F''(x) > 0 for x < 0 and F''(x) < 0 for x > 0, F is convex on (-inf, 0) and concave on (0, inf).
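These claims about F can also be verified with the same finite-difference approach; the sample points -3, -1, 0, 1, 3 are arbitrary illustrative choices.

```python
import math

def F(x):  # logistic distribution function
    return 1.0 / (1.0 + math.exp(-x))

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

print(d1(F, -3) > 0 and d1(F, 0) > 0 and d1(F, 3) > 0)  # strictly increasing
print(abs(d2(F, 0.0)))  # ~0: point of inflection at x = 0
print(d2(F, -1.0) > 0)  # convex for x < 0
print(d2(F, 1.0) < 0)   # concave for x > 0
```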

20 Variance of a Function of a Poisson Variate Using Taylor's Theorem. We know that if X ~ Poisson(lambda), then E(X) = Var(X) = lambda. We are interested in finding the variance of a smooth function g(X). (The particular g on the slide is not preserved in this transcript; the classic example is the variance-stabilizing transformation g(X) = sqrt(X).)

21 Variance of a Function of a Poisson Variate Using Taylor's Theorem. The first-order Taylor expansion of g about lambda is g(X) ≈ g(lambda) + (X - lambda) g'(lambda). Therefore the variance of g(X) is Var(g(X)) ≈ [g'(lambda)]^2 Var(X) = lambda [g'(lambda)]^2. For g(X) = sqrt(X) this gives Var(sqrt(X)) ≈ lambda * (1/(2*sqrt(lambda)))^2 = 1/4, free of lambda, which is why sqrt(X) stabilizes the variance.
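A Monte Carlo check of the delta-method value Var(sqrt(X)) ≈ 1/4; the Poisson sampler below is a simple illustrative implementation (Knuth's multiplication method), and lambda = 50 and the sample size are arbitrary choices.

```python
import math
import random

def poisson(lam, rng):
    """Draw one Poisson(lam) variate by Knuth's multiplication method
    (illustrative, not optimized)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
lam = 50.0
samples = [math.sqrt(poisson(lam, rng)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(round(var, 3))  # close to 0.25 for large lambda, as the expansion predicts
```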

22 Risk Functional. Risk functional: R_{L,P}(g) = E_P[L(X, Y, g(X))]. Population regression functional/classifier: g* = argmin_g R_{L,P}(g). From a sample D, we select g_D by a learning method. P is chosen by nature; L is chosen by the scientist. Both R_{L,P}(g*) and g* are unknown.

23 Empirical Risk Minimization. Empirical risk functional: R_emp(g) = (1/n) * sum_{i=1}^{n} L(x_i, y_i, g(x_i)). Problems of empirical risk minimization: minimizing R_emp over all functions can simply interpolate the sample and overfit.

24 What Can We Do? We can restrict the set of functions over which we minimize the empirical risk functional (structural risk minimization). We can modify the criterion to be minimized, e.g. by adding a penalty for 'complicated' functions (regularization). We can also combine the two.

25 Regularized Error Function. In linear regression, we minimize the error function sum_n (y_n - w^T x_n - b)^2 plus a penalty (lambda/2)||w||^2. For SVR, replace the quadratic error function by the epsilon-insensitive error function. An example of an epsilon-insensitive error function: E_eps(z) = 0 if |z| < eps, and |z| - eps otherwise.
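The epsilon-insensitive error function just defined is a one-liner; the sample residuals and eps = 0.1 are illustrative values.

```python
def eps_insensitive(residual, eps=0.1):
    """Epsilon-insensitive loss: zero inside the tube |r| <= eps,
    linear (|r| - eps) outside it."""
    return max(0.0, abs(residual) - eps)

print(eps_insensitive(0.05))           # 0.0 (inside the tube, no penalty)
print(round(eps_insensitive(0.30), 2)) # 0.2 (penalized only beyond the tube)
```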

26 Linear SVR: Derivation. Meaning of equation 3. (Equations shown as images are omitted in this transcript.)

27 Linear SVR: Derivation. Trade-off between complexity and the sum of errors: Case I and Case II compare the width of the epsilon-'tube' against complexity. (Scatter-plot figures omitted in this transcript.)

28 Linear SVR: Derivation — the role of C. The objective trades the 'tube' errors (Case I, Case II) against complexity. When C is small, errors outside the tube are penalized lightly and the solution stays flat; when C is big, the errors dominate and the fit follows the data more closely. (Scatter-plot figures omitted.)
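A minimal 1-D sketch of the role of C: minimize the primal objective 0.5*w^2 + C * sum of epsilon-insensitive errors by plain subgradient descent. The dataset, learning rate, and step count are made up for illustration; this is not a production solver.

```python
def svr_fit(xs, ys, C, eps=0.1, lr=0.002, steps=20000):
    """Subgradient descent on 0.5*w^2 + C*sum(E_eps(y - w*x - b)), 1-D inputs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw, gb = w, 0.0  # subgradient of the 0.5*w^2 complexity term
        for x, y in zip(xs, ys):
            r = y - (w * x + b)   # residual
            if r > eps:           # point above the tube: push prediction up
                gw -= C * x
                gb -= C
            elif r < -eps:        # point below the tube: push prediction down
                gw += C * x
                gb += C
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.9]   # roughly y = x
w_small, _ = svr_fit(xs, ys, C=0.001)
w_big, _ = svr_fit(xs, ys, C=10.0)
print(w_small < w_big)  # True: small C favors flatness, big C follows the data
```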

29 Linear SVR: Derivation. Primal problem: minimize (1/2)||w||^2 + C * sum_n (xi_n + xi_n*), subject to: y_n - w^T x_n - b <= eps + xi_n, w^T x_n + b - y_n <= eps + xi_n*, and xi_n, xi_n* >= 0.

30 Lagrangian. Minimize: L = (1/2)||w||^2 + C * sum_n (xi_n + xi_n*) - sum_n alpha_n (eps + xi_n - y_n + w^T x_n + b) - sum_n alpha_n* (eps + xi_n* + y_n - w^T x_n - b) - sum_n (mu_n xi_n + mu_n* xi_n*), with dual variables alpha_n, alpha_n*, mu_n, mu_n* >= 0.

31 Dual Form of the Lagrangian. Maximize: L~(alpha, alpha*) = -(1/2) sum_n sum_m (alpha_n - alpha_n*)(alpha_m - alpha_m*) x_n^T x_m - eps * sum_n (alpha_n + alpha_n*) + sum_n (alpha_n - alpha_n*) y_n, subject to 0 <= alpha_n, alpha_n* <= C and sum_n (alpha_n - alpha_n*) = 0. Prediction can be made using: y(x) = sum_n (alpha_n - alpha_n*) x_n^T x + b.

32 How to determine b? The Karush-Kuhn-Tucker (KKT) conditions imply, at the optimal solution: alpha_n (eps + xi_n - y_n + w^T x_n + b) = 0, alpha_n* (eps + xi_n* + y_n - w^T x_n - b) = 0, (C - alpha_n) xi_n = 0, and (C - alpha_n*) xi_n* = 0. Support vectors are points that lie on the boundary of the tube or outside it. These equations imply many important things: in particular, for any point with 0 < alpha_n < C we have xi_n = 0, so b = y_n - eps - w^T x_n.

33 Important Interpretations. (Equations omitted in this transcript.)

34 Support Vectors: The Sparsity of the SV Expansion. Only points with alpha_n != 0 or alpha_n* != 0 — those on or outside the tube — contribute to the expansion of w, so the solution is sparse.

35 Dual Form of the Lagrangian (Nonlinear Case). Maximize the same dual with the inner product x_n^T x_m replaced by a kernel k(x_n, x_m). Prediction can be made using: y(x) = sum_n (alpha_n - alpha_n*) k(x_n, x) + b.

36 Non-linear SVR: Derivation. Same primal as the linear case with x_n replaced by a feature map phi(x_n), subject to the corresponding epsilon-tube constraints.

37 Non-linear SVR: Derivation. A saddle point of L has to be found: minimize with respect to the primal variables (w, b, xi, xi*), maximize with respect to the dual variables (alpha, alpha*, mu, mu*).

38 Non-linear SVR: Derivation (continued). (Equations omitted in this transcript.)

39 What is Differentiation? Let U be a Banach space, V another B-space, and f: U -> V a nonlinear function. Differentiation is nothing but local linearization: in differentiation we approximate a nonlinear function locally by a (continuous) linear function.

40 Fréchet Derivative. Definition 1: f: R^n -> R^m is Fréchet differentiable at x if there is a linear map df(x) with ||f(x + h) - f(x) - df(x)(h)|| / ||h|| -> 0 as ||h|| -> 0. It can easily be generalized to Banach-space-valued functions, where df(x) is a linear map. It can be shown, however, that not every linear map between infinite-dimensional spaces is continuous.

41 Fréchet Derivative. We have just mentioned that Fréchet recognized that Definition 1 could easily be generalized to normed spaces in the following way: df(x) is required to lie in L(B_1, B_2), the set of all continuous linear functions between B_1 and B_2. If we write Rem(x + h) for the remainder of f at x + h: Rem(x + h) = f(x + h) - f(x) - df(x)(h).

42 S-Derivative. Then Definition 1 becomes ||Rem(x + h)|| / ||h|| -> 0 as ||h|| -> 0. Soon the definition was generalized (S-differentiation) to general topological vector spaces in such a way that: i) a particular case of the definition becomes equivalent to the previous definition when the domain of f is a normed space; ii) the Gâteaux derivative remains the weakest derivative among all types of S-differentiation.

43 S-Derivatives. Definition 2: Let S be a collection of subsets of B_1, and let t be in R. Then f is S-differentiable at x with derivative df(x) if, for each A in S, [f(x + th) - f(x)]/t -> df(x)(h) as t -> 0, uniformly in h in A. Definition 3: When S = all singletons of B_1, f is called Gâteaux differentiable with Gâteaux derivative. When S = all compact subsets of B_1, f is called Hadamard or compactly differentiable with Hadamard or compact derivative. When S = all bounded subsets of B_1, f is called Fréchet or boundedly differentiable with Fréchet or bounded derivative.

44 Equivalent Definitions of the Fréchet Derivative. (a) For each bounded set A in B_1, [f(x + th) - f(x)]/t -> df(x)(h) as t -> 0 in R, uniformly in h in A. (b) For each bounded sequence (h_n) and each sequence t_n -> 0, [f(x + t_n h_n) - f(x)]/t_n - df(x)(h_n) -> 0.

45 (c) [f(x + th) - f(x)]/t -> df(x)(h) uniformly in h on bounded sets; (d) Rem(x + h)/||h|| -> 0 as ||h|| -> 0; (e) ||f(x + h) - f(x) - df(x)(h)|| = o(||h||), uniformly in h. Statisticians generally use this last form or some slight modification of it.

46 Relations among the Usual Forms of the Definitions. The set of Gâteaux differentiable functions at x contains the set of Hadamard differentiable functions at x, which contains the set of Fréchet differentiable functions at x. In applications, to find a Fréchet or Hadamard derivative one should generally first determine the form of the derivative by deducing the Gâteaux derivative acting on h, df(x)(h), for a collection of directions h that span B_1. This reduces to computing the ordinary derivative (with respect to t in R) of the mapping t -> f(x + th), which is closely related to the influence function, one of the central concepts in robust statistics. It can easily be shown that: (i) when B_1 = R with the usual norm, all three coincide; (ii) when B_1 is a finite-dimensional Banach space, the Fréchet and Hadamard derivatives are equal, and the two coincide with the familiar total derivative.
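The recipe above — compute the ordinary derivative of t -> f(x + th) at t = 0 — can be sketched numerically. Here f(w) = ||w||^2 on R^2 is an illustrative choice, whose Gâteaux derivative is known in closed form: df(x)(h) = 2 x . h.

```python
# Gateaux derivative as an ordinary derivative along a direction:
# df(x)(h) = d/dt f(x + t*h) at t = 0, approximated by a central difference.
def f(w):
    return sum(c * c for c in w)  # f(w) = ||w||^2

def gateaux(f, x, h, t=1e-6):
    fp = f([xi + t * hi for xi, hi in zip(x, h)])
    fm = f([xi - t * hi for xi, hi in zip(x, h)])
    return (fp - fm) / (2 * t)

x, h = [1.0, 2.0], [3.0, -1.0]
print(gateaux(f, x, h))                       # ~ 2*(1*3 + 2*(-1)) = 2
print(2 * sum(a * b for a, b in zip(x, h)))   # exact closed form: 2.0
```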

47 Properties of the Fréchet Derivative. Hadamard differentiability implies continuity, but Gâteaux differentiability does not. Hadamard differentiability satisfies the chain rule, but Gâteaux differentiability does not. Meaningful Mean Value, Inverse Function, Taylor's, and Implicit Function theorems have been proved for the Fréchet derivative.


49 Lebesgue and Counting Measures. (Figure omitted in this transcript.)

50 Mathematical Foundations of Robust Statistics. Qualitative robustness: d_1(F, G) < delta implies d_2(T(F), T(G)) < eps. The von Mises expansion: T(G) ≈ T(F) + T'_F(G - F), so T(G) - T(F) ≈ integral of IF(x; T, F) d(G - F)(x), where IF is the influence function.
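The influence function in the expansion above is a Gâteaux derivative in the direction of a point mass: IF(x; T, F) = d/dt T((1-t)F + t*delta_x) at t = 0. A numeric sketch for the mean functional T(F) = E_F[X], whose known influence function is IF(x) = x - mean(F); the data below are made up for illustration.

```python
# Influence function via a finite-difference Gateaux derivative in the
# direction of a point mass delta_x, for the mean functional.
def mean_functional(values, weights):
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)

def influence(values, x, t=1e-6):
    n = len(values)
    vals = values + [x]
    w_clean = [1.0 / n] * n + [0.0]        # empirical distribution F_n
    w_cont = [(1 - t) / n] * n + [t]       # contaminated (1-t)F_n + t*delta_x
    T0 = mean_functional(vals, w_clean)
    T1 = mean_functional(vals, w_cont)
    return (T1 - T0) / t

data = [1.0, 2.0, 3.0, 4.0]   # mean = 2.5
print(influence(data, 10.0))  # ~ 10 - 2.5 = 7.5: unbounded IF, mean not robust
print(influence(data, 0.0))   # ~ 0 - 2.5 = -2.5
```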

51-53 Mathematical Foundations of Robust Statistics (continued). (Equations omitted in this transcript.)

54 Given a Measurable Space (Omega, F). There exist many measures on F. If Omega is the real line, the standard measure is "length": that is, the measure of each interval is its length. This is known as "Lebesgue measure". The sigma-algebra must contain intervals; the smallest sigma-algebra that contains all open sets (and hence intervals) is called the "Borel" sigma-algebra and is denoted B. A course in real analysis will deal extensively with the measurable space.

55 Given a Measurable Space (Omega, F). A measurable space combined with a measure is called a measure space. If we denote the measure by mu, we write the triple (Omega, F, mu). Given a measure space (Omega, F, mu), if we decide instead to use a different measure, say nu, then we call this a "change of measure". (We should just call this using another measure!) Let mu and nu be two measures on (Omega, F). Then nu is "absolutely continuous" with respect to mu (notation nu << mu) if mu(A) = 0 implies nu(A) = 0; mu and nu are "equivalent" if nu << mu and mu << nu.

56 The Radon-Nikodym Theorem. If nu << mu, then nu is actually the integral of a function with respect to mu: nu(A) = integral over A of g d(mu) for all A in F. g is known as the Radon-Nikodym derivative and is denoted g = d(nu)/d(mu).
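On a finite space the theorem is elementary and easy to check: if mu({w}) = 0 implies nu({w}) = 0, then g(w) = nu({w})/mu({w}) works. The two measures below are made up for illustration.

```python
# Discrete sketch of the Radon-Nikodym theorem: nu(A) equals the integral of
# g = dnu/dmu over A with respect to mu, for any event A.
omega = ["a", "b", "c", "d"]
mu = {"a": 0.1, "b": 0.4, "c": 0.2, "d": 0.3}
nu = {"a": 0.2, "b": 0.2, "c": 0.5, "d": 0.1}

g = {w: nu[w] / mu[w] for w in omega}  # the Radon-Nikodym derivative dnu/dmu

A = ["a", "c"]
lhs = sum(nu[w] for w in A)           # nu(A)
rhs = sum(g[w] * mu[w] for w in A)    # integral of g over A wrt mu
print(abs(lhs - rhs) < 1e-12)         # True: the two agree
```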

57 The Radon-Nikodym Theorem. If nu << mu, then nu is actually the integral of a function wrt mu. Idea of proof: create the function g through its superlevel sets. Consider the set function nu - c*mu (this is actually a signed measure). Choose c >= 0 and let A_c be the largest set such that (nu - c*mu)(E) >= 0 for all E contained in A_c (you must prove such an A_c exists); then A_c is the c-superlevel set of g. Now, given the superlevel sets, we can construct the function by g(w) = sup{c : w in A_c}.

58 The Riesz Representation Theorem. All continuous linear functionals on L^p (1 <= p < infinity) are given by integration against a function g in L^q, with 1/p + 1/q = 1. That is, let L be a continuous linear functional on L^p; then L(f) = integral of f*g d(mu). Note that in L^2 this becomes the inner product: L(f) = <f, g>.

59 The Riesz Representation Theorem. What is the idea behind the proof? Linearity allows you to break things into building blocks, operate on them, then add them all together. What are the building blocks of measurable functions? Indicator functions! Of course! So let's define a set function from indicator functions: nu(A) = L(1_A).
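The building-block idea can be sketched in the discrete L^2 case, where measurable functions are just vectors and indicators of singletons are the standard basis. The particular functional L below is a made-up example; applying it to the indicator "building blocks" recovers the representing function g.

```python
# Discrete Riesz representation: recover g by applying the continuous linear
# functional L to indicator functions, then check L(f) = sum(f * g)
# (integration against g with respect to counting measure).
n = 4

def L(f):
    # a fixed (made-up) linear functional on R^4
    return f[0] - 2.0 * f[1] + 0.5 * f[3]

# g(i) = L(indicator of {i}) -- the set-function construction on the slide
g = [L([1.0 if j == i else 0.0 for j in range(n)]) for i in range(n)]

f = [2.0, -1.0, 3.0, 4.0]
print(L(f))                                  # 6.0
print(sum(fi * gi for fi, gi in zip(f, g)))  # 6.0: same value via g
```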

60 The Riesz Representation Theorem. How does L operate on simple functions? L(sum_i a_i 1_{A_i}) = sum_i a_i nu(A_i). This looks like an integral with nu the measure! It is not too hard to show that nu is indeed a (signed) measure (countable additivity follows from continuity), and furthermore nu << mu. Radon-Nikodym then says d(nu) = g d(mu).

61 The Riesz Representation Theorem. For general measurable functions the representation L(f) = integral of f*g d(mu) follows from limits and continuity. The details are left as an "easy" exercise for the reader...

62 A probability measure P is a measure that satisfies P(Omega) = 1; that is, the measure of the whole space is 1. A random variable is a measurable function. The expectation of a random variable is its integral: E[X] = integral of X dP. A density function is the Radon-Nikodym derivative wrt Lebesgue measure: f = dP/d(lambda).

63 In finance we will talk about expectations with respect to different measures: P and Q, where Q << P with g = dQ/dP (or dQ = g dP). And we write expectations in terms of the different measures: E_Q[X] = integral of X dQ = integral of X*g dP = E_P[X*g].
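The change-of-measure identity E_Q[X] = E_P[X * dQ/dP] is easy to verify on a finite space; the measures P and Q and the random variable X below are made up for illustration.

```python
# Discrete check of E_Q[X] = E_P[X * g], where g = dQ/dP is the likelihood ratio.
P = [0.25, 0.25, 0.25, 0.25]   # probability measure P on a 4-point space
Q = [0.10, 0.20, 0.30, 0.40]   # another probability measure Q, Q << P
X = [1.0, -2.0, 0.5, 3.0]      # a random variable (value on each outcome)

g = [q / p for p, q in zip(P, Q)]  # Radon-Nikodym derivative dQ/dP

e_q = sum(x * q for x, q in zip(X, Q))                        # E_Q[X]
e_p_weighted = sum(x * gi * p for x, gi, p in zip(X, g, P))   # E_P[X*g]
print(abs(e_q - e_p_weighted) < 1e-12)  # True: the two expectations agree
```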

