Estimating ETAS

1. Straightforward aspects.
2. Main obstacles.
3. A trick to simplify things enormously.
4. Simulations and examples.

1. Point process estimation (or inversion) is easy.

a) Point processes can generally be very well estimated by maximum likelihood estimation (MLE). MLEs are generally consistent, asymptotically normal, and efficient. Fisher (1925), Cramér (1946), Ogata (1978), Le Cam (1990).

b) Maximizing a function is fast and easy: 400 years of work on this since Newton (1669). Every computing package has a strong quasi-Newton optimization routine, such as optim() in R. For instance, suppose the function you want to minimize is f(a,b) = (a − 4.19)^4 + (b − 7.23)^4:

f = function(p) (p[1]-4.19)^4 + (p[2]-7.23)^4
p = c(1,1)   ## initial guess at (a,b)
est = optim(p, f)
est$par      ## fitted values of (a,b)

c) The likelihood function is fairly simple. In practice it is a little easier to compute the log-likelihood and maximize that (equivalently, minimize its negative):

log(likelihood) = Σ_i log λ(t_i, x_i, y_i) − ∫∫∫ λ(t,x,y) dt dx dy.

ETAS (Ogata '98): λ(t,x,y) = µ ρ(x,y) + Σ_j g(t − t_j, x − x_j, y − y_j; M_j), where ρ is a spatial density and

g(t,x,y; M) = K (t+c)^(-p) e^(a(M−M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = K (t+c)^(-p) {(x^2 + y^2) e^(-a(M−M0)) + d}^(-q).
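The toy minimization in b) can be sketched outside R as well. Below is a hypothetical Python analogue (not part of the talk) using plain Newton steps rather than quasi-Newton: f is separable here, and for g(e) = e^4 the Newton update is x ← x − g'(e)/g''(e) = x − e/3.

```python
# Minimize f(a, b) = (a - 4.19)^4 + (b - 7.23)^4 by coordinatewise Newton steps.
a, b = 1.0, 1.0                 # initial guess, as in the R snippet
for _ in range(60):
    # Newton update for a quartic: the error shrinks by a factor 2/3 each step.
    a -= (a - 4.19) / 3
    b -= (b - 7.23) / 3
print(round(a, 2), round(b, 2)) # -> 4.19 7.23
```

Sixty steps reduce the initial error by (2/3)^60 ≈ 3e-11, so the printed values match the minimizer to displayed precision.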

1. Point process estimation (or inversion) is easy.

ETAS (Ogata '98): λ(t,x,y) = µ ρ(x,y) + Σ_j g(t − t_j, x − x_j, y − y_j; M_j), where

g(t,x,y; M) = K (t+c)^(-p) e^(a(M−M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = K (t+c)^(-p) {(x^2 + y^2) e^(-a(M−M0)) + d}^(-q).

To make the spatial and temporal parts of g densities, I suggest writing g as

g(t,x,y; M) = {K (p−1) c^(p−1) (q−1) d^(q−1) / π} (t+c)^(-p) e^(a(M−M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = {K (p−1) c^(p−1) (q−1) d^(q−1) / π} (t+c)^(-p) {(x^2 + y^2) e^(-a(M−M0)) + d}^(-q).

Two reasons for this:
1) It is easy to see how to replace g by another density.
2) For each earthquake (t_j, x_j, y_j, M_j), ∫∫∫ g(t − t_j, x − x_j, y − y_j; M_j) dt dx dy = K e^(a(Mj − M0)).
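The claim in 2) can be checked numerically. The sketch below (Python, made-up parameter values, first form of g) uses the fact that the normalized g factors into a temporal density times a spatial density, so the triple integral is the product of two one-dimensional integrals, the spatial one computed in polar coordinates:

```python
import math

# Hypothetical parameter values, chosen only so the truncated integrals converge fast.
K, c, p, d, q, a = 0.3, 0.01, 2.0, 0.015, 2.0, 0.8
M, M0 = 5.0, 3.0

# Temporal density (p-1) c^(p-1) (t+c)^(-p), midpoint rule on t in [0, 200].
dt = 0.001
t_mass = sum((p - 1) * c**(p - 1) * ((i + 0.5) * dt + c)**(-p) * dt
             for i in range(200_000))

# Spatial density ((q-1) d^(q-1) / pi) (x^2 + y^2 + d)^(-q), integrated with the
# polar element 2*pi*r dr, midpoint rule on r in [0, 50].
dr = 0.001
r_mass = sum((q - 1) * d**(q - 1) * 2 * ((i + 0.5) * dr)
             * (((i + 0.5) * dr)**2 + d)**(-q) * dr
             for i in range(50_000))

integral = K * math.exp(a * (M - M0)) * t_mass * r_mass
print(round(t_mass, 3), round(r_mass, 3))   # each close to 1
print(round(integral, 3))                   # close to K * exp(a*(M - M0))
```

Both one-dimensional masses come out near 1, so the triple integral matches K e^(a(M − M0)) up to truncation and discretization error.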

2. Point process estimation is hard! Main obstacles:

a) Small problems with optimization routines: extreme flatness, local maxima, choosing a starting value.

b) The integral term in the log-likelihood:

log(likelihood) = Σ_i log λ(t_i, x_i, y_i) − ∫∫∫ λ(t,x,y) dt dx dy.

The sum is easy, but the integral term is extremely difficult to compute, and is by far the hardest part of estimation (Ogata 1998, Harte 2010).

Ogata '98: divide the space around each point into quadrants and integrate over them.
Harte '10: PtProcess uses optim() or nlm(); the user must calculate the integral somehow.

Numerical approximation is slow, and is a poor approximation for some values of p, q, etc. 1,000 grid computations × 1,000 events × 1,000 function calls in the optimization ≈ 1 billion computations.

Problem b) contributes to a reluctance to repeat the estimation and check on a).
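The "easy" sum term really is straightforward. A Python sketch, with a tiny hypothetical catalog and made-up parameter values (and, for simplicity, a homogeneous background rate µ in place of µ ρ(x,y)):

```python
import math

# Made-up parameters and a toy catalog -- for illustration only.
mu = 0.05
K, c, p, d, q, a, M0 = 0.3, 0.01, 1.5, 0.015, 1.8, 0.8, 3.0
events = [(1.0, 0.0, 0.0, 4.0),    # (t_i, x_i, y_i, M_i)
          (2.0, 0.1, -0.1, 3.5),
          (5.0, 0.5, 0.2, 4.5)]

def g(t, x, y, M):
    """Normalized ETAS triggering kernel (first form from the slides)."""
    norm = K * (p - 1) * c**(p - 1) * (q - 1) * d**(q - 1) / math.pi
    return norm * (t + c)**(-p) * math.exp(a * (M - M0)) * (x*x + y*y + d)**(-q)

def lam(t, x, y):
    """Conditional intensity: background plus triggering by strictly earlier events."""
    return mu + sum(g(t - tj, x - xj, y - yj, Mj)
                    for tj, xj, yj, Mj in events if tj < t)

log_sum = sum(math.log(lam(ti, xi, yi)) for ti, xi, yi, _ in events)
print(log_sum)
```

This sum costs O(n^2) intensity evaluations for n events, with no numerical integration at all; it is the remaining integral term that causes the trouble.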

3. A suggested trick. Recall that

log(likelihood) = Σ_i log λ(t_i, x_i, y_i) − ∫∫∫ λ(t,x,y) dt dx dy,

where λ(t,x,y) = µ ρ(x,y) + Σ_j g(t − t_j, x − x_j, y − y_j; M_j).

After normalizing g, ∫∫∫ g(t − t_j, x − x_j, y − y_j; M_j) dt dx dy = K e^(a(Mj − M0)) if the integral is taken over infinite time and space. Technically, in evaluating ∫∫∫ λ(t,x,y) dt dx dy, the integral is only over the time window [0,T] and some spatial region S. I suggest ignoring this and approximating

∫∫∫ g(t − t_j, x − x_j, y − y_j; M_j) dt dx dy ≈ K e^(a(Mj − M0)).

Thus

∫∫∫ λ(t,x,y) dt dx dy = µT + ∫∫∫ Σ_j g(t − t_j, x − x_j, y − y_j; M_j) dt dx dy
= µT + Σ_j ∫∫∫ g(t − t_j, x − x_j, y − y_j; M_j) dt dx dy
≈ µT + K Σ_j e^(a(Mj − M0)),

which does not depend on c, p, q, or d, and is extremely easy to compute.
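To get a feel for the quality of the approximation, the sketch below (Python, hypothetical parameter values and a toy catalog) compares the approximated integral term with a version keeping the exact temporal truncation at T, using the closed form ∫_0^(T−tj) (p−1) c^(p−1) (t+c)^(-p) dt = 1 − (c/(T − t_j + c))^(p−1), with space still treated as infinite:

```python
import math

# Hypothetical parameters and a toy catalog of (t_j, M_j) pairs -- illustration only.
K, c, p, a, M0 = 0.3, 0.01, 1.5, 0.8, 3.0
mu, T = 0.05, 1000.0
events = [(10.0, 4.1), (250.0, 5.3), (600.0, 3.6)]

# Suggested approximation: each event's triggering integral taken as K e^(a(Mj - M0)).
approx = mu * T + K * sum(math.exp(a * (Mj - M0)) for _, Mj in events)

# Same term with the exact temporal truncation at T kept.
exact = mu * T + K * sum(math.exp(a * (Mj - M0)) *
                         (1 - (c / (T - tj + c))**(p - 1))
                         for tj, Mj in events)

print(round(approx, 3), round(exact, 3))  # close: the truncation correction is tiny here
```

The two agree closely because each (c/(T − t_j + c))^(p−1) is small when the events sit well inside [0, T] and p is not too close to 1; events very near time T (or the boundary of S) are where the approximation is roughest.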

4. Simulation and example. 100 simulated ETAS processes, estimated my way.

4. Simulation and example. % error in µ, from 100 ETAS simulations.

4. Simulation and example. California earthquakes from the 14 months following Hector Mine, M ≥ 3, from SCSN/SCEDC, as discussed in Ogata, Y., Jones, L. M., and Toda, S. (2003).

4. Simulation and example. Hector Mine data. Convergence of the ETAS parameter estimates is evident after roughly 200 days. Note that this repeated estimation over different time windows is easy if one uses the integral trick. It also seems to lead to more stable estimates, since a small local maximum is less likely to appear repeatedly across all of these estimations.

4. Simulation and example. Hector Mine data. ETAS parameter estimates as ratios of the final estimates.

Suggestions.
1. Write the triggering function g as a density in time and space.
2. Approximate the integral term as µT + K Σ_j e^(a(Mj − M0)).
3. Given data from time 0 to time T, estimate ETAS using data up to time T/100, then 2T/100, etc., and assess convergence.

Conclusions. It's fast, easy, and works great: a huge reduction in run time, and an enormous reduction in programming.
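Suggestion 3 is cheap under suggestion 2, since the approximated integral term over each growing window [0, kT/100] is just a running sum. A sketch (Python, hypothetical parameters and a toy catalog):

```python
import math

# Made-up parameter values and a toy (t_j, M_j) catalog -- for illustration only.
mu, K, a, M0, T = 0.05, 0.3, 0.8, 3.0, 1000.0
catalog = sorted([(50.0, 4.0), (300.0, 5.1), (320.0, 3.4), (700.0, 4.6)])

# Approximated integral term over [0, k*T/100], maintained incrementally:
# mu*(k*T/100) + K * sum over events with t_j < k*T/100 of exp(a*(M_j - M0)).
integral_terms = []
running, j = 0.0, 0
for k in range(1, 101):
    Tk = k * T / 100
    while j < len(catalog) and catalog[j][0] < Tk:
        running += K * math.exp(a * (catalog[j][1] - M0))
        j += 1
    integral_terms.append(mu * Tk + running)
```

Each event enters the running sum exactly once across all 100 windows, so refitting over the growing windows adds essentially no integral-related cost.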
