
1 Advanced Methods of Prediction Motti Sorani Boaz Cohen Supervisor: Gady Zohar Technion - Israel Institute of Technology Department of Electrical Engineering The Image and Computer Vision Laboratory

2 Project Goals Enhanced prediction schemes based on the former project. Better approximation of the system behavior using Kalman filtering. Implementation of a competitive prediction tool based on a neural network. Implementation of an LZ predictor, and adaptation of its prediction scheme to continuous signals.

3 Enhanced Prediction Schemes The following points of weakness were diagnosed in the former project: • A need for an “optimal” criterion when searching for the optimal evaluation environment • Prediction limited to a fixed dimension (the fractal dimension calculated using the GP algorithm) • Symmetrical environments when searching for an optimal evaluation environment - poor results near sharp areas of the system’s behavior.

4 Enhanced Prediction Schemes The following points of weakness were diagnosed in the former project: • A need for an “optimal” criterion when searching for the optimal evaluation environment • Prediction limited to a fixed dimension (the fractal dimension calculated using the GP algorithm) • Symmetrical environments when searching for an optimal evaluation environment - poor results near sharp areas of the system’s behavior.

5 Confidence Interval Criterion The former criteria: the Neighbor criterion (illustrated on an Xn vs. Xn+1 plot showing Xnew, the optimal neighbor Xnei opt, and Xnew+1).

6 Confidence Interval Criterion (cont) The former criteria: the NMSE criterion (illustrated on an Xn vs. Xn+1 plot showing Xnew, the optimal environment, and Xnew+1).

7 Confidence Interval Criterion (cont) The new criterion: the Confidence Interval criterion. Choose the environment in which the regression has the best (smallest) confidence interval. Motivation: the confidence interval gives the interval around the predicted value in which the true value lies with 90% probability. Small confidence interval → better evaluation environment.
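A minimal sketch of how such a criterion could be computed, assuming the local model over each candidate evaluation environment is a least-squares regression line and using the standard 90% prediction interval (the function names and the simple linear fit are illustrative assumptions, not the project's code):

```python
# A hedged sketch of the Confidence Interval criterion: fit a least-squares
# line over each candidate evaluation environment and pick the environment
# whose 90% prediction interval at the query point is narrowest.
import numpy as np
from scipy import stats

def prediction_interval_width(x, y, x_new, level=0.90):
    """Half-width of the regression prediction interval at x_new (x, y: numpy arrays)."""
    n = len(x)
    a, b = np.polyfit(x, y, 1)                   # local line y = a*x + b
    resid = y - (a * x + b)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))    # residual standard error
    t = stats.t.ppf(0.5 + level / 2, df=n - 2)
    sxx = np.sum((x - x.mean()) ** 2)
    return t * s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2 / sxx)

def best_environment(environments, x_new):
    """environments: list of (x, y) arrays; return the one with the smallest interval."""
    return min(environments, key=lambda e: prediction_interval_width(e[0], e[1], x_new))
```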

8 Confidence Interval Criterion (cont) Criteria comparison.

9 Confidence Interval Criterion (cont) Criteria comparison: Confidence Interval vs. Neighbor vs. NMSE (plots).

10 Confidence Interval Criterion - Conclusions The Confidence Interval criterion proved its superiority over the NMSE criterion, and in most cases it was better than the Neighbor criterion as well. Thus, the Confidence Interval criterion was selected as the main criterion in our experiments.

11 Enhanced Prediction Schemes The following points of weakness were diagnosed in the former project: • A need for an “optimal” criterion when searching for the optimal evaluation environment • Prediction limited to a fixed dimension (the fractal dimension calculated using the GP algorithm) • Symmetrical environments when searching for an optimal evaluation environment - poor results near sharp areas of the system’s behavior.

12 Multi Dimensional Prediction In the former project: prediction is done on a fixed-dimensional state vector (the dimension is the fractal dimension of the set). The reason: a smaller dimension → the attractor is not embedded correctly in the embedding space; a bigger dimension → the points move far from each other → a large number of samples is required.

13 Multi Dimensional Prediction (cont) Fixed-dimensional prediction. Advantage: speed, speed, speed. Disadvantage: the fractal dimension calculated is an averaged one; we know that certain areas of the attractor have a bigger dimension than the average value. We want to allow the prediction to increase/decrease the dimension as needed.

14 Multi Dimensional Prediction (cont) The solution (block diagram): Xn (samples) → Embedding at Dim = 1, 2, 3, …, 10 → Prediction at Dim = 1, 2, 3, …, 10 → Pick the best (in terms of Confidence Interval) → Xn+1 (samples).
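A rough sketch of this multi-dimensional scheme, assuming a simple local least-squares model over the nearest past states and a residual-based proxy for the confidence interval (the neighbor count, dimension range, and local model are illustrative assumptions):

```python
# Embed the series at each candidate dimension, predict with a local
# least-squares model over the nearest past states, and keep the dimension
# whose residual spread (a crude stand-in for the confidence interval) is
# smallest.
import numpy as np

def embed(x, dim):
    """Delay embedding of a 1-D numpy array: rows are [x_i, ..., x_{i+dim-1}]."""
    return np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])

def predict_multidim(x, dims=range(1, 11), k=25):
    best = None
    for dim in dims:
        states = embed(x, dim)
        query, history, targets = states[-1], states[:-1], x[dim:]
        idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]  # k nearest states
        X = np.c_[history[idx], np.ones(len(idx))]
        coef, res, *_ = np.linalg.lstsq(X, targets[idx], rcond=None)
        pred = np.r_[query, 1.0] @ coef
        width = np.sqrt(res[0] / len(idx)) if res.size else np.inf     # interval proxy
        if best is None or width < best[0]:
            best = (width, dim, pred)
    return best   # (criterion value, chosen dimension, predicted next sample)
```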

15 Multi Dimensional Prediction (cont) Example (Set: AA, N: 2180, LookAhead: 200): multi-dimensional prediction vs. fixed Dim = 5 (plots).

16 Multi Dimensional Prediction - Conclusions As we expected, using multi-dimensional prediction improved the quality of the prediction, at the cost of run-time.

17 Enhanced Prediction Schemes The following points of weakness were diagnosed in the former project: • A need for an “optimal” criterion when searching for the optimal evaluation environment • Prediction limited to a fixed dimension (the fractal dimension calculated using the GP algorithm) • Symmetrical environments when searching for an optimal evaluation environment - poor results near sharp areas of the system’s behavior.

18 Asymmetrical Evaluation Environment In the former project: searching for environments that are symmetrical around Xnew. Poor results near sharp areas (illustrated on an Xn vs. Xn+1 plot around Xnew).

19 Asymmetrical Evaluation Environment (cont) The algorithm (by example): Step 1: Partition of the range

20 Asymmetrical Evaluation Environment (cont) The algorithm (by example): Step 2: Try all possibilities

21 Asymmetrical Evaluation Environment (cont) The algorithm (by example): Step 3: Find the optimal

22 Asymmetrical Evaluation Environment (cont) The algorithm (by example): Step 4: go to step 1 (repartition)
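An illustrative sketch of the asymmetric-environment search outlined in steps 1-4, assuming the quality score is the confidence-interval width from the earlier slides (the cell count, number of refinement rounds, and all names are hypothetical):

```python
# Partition the x-range into cells, score every contiguous group of cells that
# contains x_new, keep the best group, then repartition that group more finely.
import numpy as np

def search_asymmetric_env(x, y, x_new, score, n_cells=8, n_rounds=3):
    lo, hi = float(np.min(x)), float(np.max(x))
    best_env = (lo, hi)
    for _ in range(n_rounds):
        edges = np.linspace(lo, hi, n_cells + 1)            # step 1: partition
        best = None
        for i in range(n_cells):                            # step 2: try all
            for j in range(i + 1, n_cells + 1):             #         possibilities
                a, b = edges[i], edges[j]
                if not (a <= x_new <= b):
                    continue                                 # must contain x_new
                mask = (x >= a) & (x <= b)
                if mask.sum() < 3:
                    continue                                 # too few points to fit
                s = score(x[mask], y[mask], x_new)           # e.g. interval width
                if best is None or s < best[0]:
                    best = (s, a, b)                         # step 3: keep optimum
        if best is None:
            break
        _, lo, hi = best                                     # step 4: repartition
        best_env = (lo, hi)
    return best_env
```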

23 Asymmetrical Evaluation Environment (cont) Example (Set: AA, N: 2180, LookAhead: 100, Dim: 2): symmetric environment vs. asymmetric environment (plots).

24 Asymmetrical Evaluation Environment - Conclusions The algorithm succeeds in finding environments with a minimal value of the quality criterion. Thus the confidence interval is reduced, but in some cases the hit ratio is not improved. Possible reason: noise contribution.

25 System approximation using Kalman Filtering The model: a one-dimensional Kalman filter. The noises are Gaussian, independent in time, and independent of each other.
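A standard scalar state-space model consistent with this description (an assumption; the slide's own equations are not shown in the transcript) is:

\[ x(k+1) = a\,x(k) + w(k), \qquad y(k) = x(k) + v(k), \]
\[ w(k) \sim \mathcal{N}(0, q), \quad v(k) \sim \mathcal{N}(0, r), \quad \text{white and mutually independent.} \]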

26 Kalman Filter The filter: a recursive filter. Optimization problem: finding a(k) and b(k) that minimize the error.
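A minimal one-dimensional Kalman filter sketch for the scalar model assumed above, in the usual predictor-corrector form (a generic textbook filter, not the project's implementation):

```python
# x(k+1) = a*x(k) + w(k), y(k) = x(k) + v(k), with noise variances q and r.
import numpy as np

def kalman_1d(y, a, q, r, x0=0.0, p0=1.0):
    x_est, p = x0, p0
    estimates = []
    for yk in y:
        # predict
        x_pred = a * x_est
        p_pred = a * a * p + q
        # update with the measurement yk
        gain = p_pred / (p_pred + r)
        x_est = x_pred + gain * (yk - x_pred)
        p = (1.0 - gain) * p_pred
        estimates.append(x_est)
    return np.array(estimates)

# Example: smooth a noisy observation of a slowly drifting state.
y = np.cumsum(0.01 * np.random.randn(1000)) + 0.5 * np.random.randn(1000)
x_hat = kalman_1d(y, a=1.0, q=0.01 ** 2, r=0.5 ** 2)
```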

27 Kalman Filter The Extended Kalman Filter (EKF) The model: the state-transition function Φ is non-linear; x, w can be multi-dimensional.

28 Kalman Filter The Extended Kalman Filter (EKF) The model: A, B are local linear approximations of Φ. The EKF does not guarantee the optimal solution!

29 Kalman Filter The Extended Kalman Filter (EKF) The filter:
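A scalar EKF sketch consistent with the description above, where the non-linear map Φ is linearized by its derivative at the current estimate (phi, dphi, q, and r are user-supplied; this is a generic EKF, not the project's code):

```python
# x(k+1) = phi(x(k)) + w(k), y(k) = x(k) + v(k); phi is linearized locally.
import math

def ekf_1d(y, phi, dphi, q, r, x0=0.0, p0=1.0):
    x_est, p = x0, p0
    estimates = []
    for yk in y:
        a = dphi(x_est)                 # local linear approximation of phi
        x_pred = phi(x_est)
        p_pred = a * a * p + q
        gain = p_pred / (p_pred + r)    # not guaranteed optimal for non-linear phi
        x_est = x_pred + gain * (yk - x_pred)
        p = (1.0 - gain) * p_pred
        estimates.append(x_est)
    return estimates

# Hypothetical usage with a non-linear map:
# x_hat = ekf_1d(y, phi=lambda x: 0.9 * math.sin(x),
#                dphi=lambda x: 0.9 * math.cos(x), q=1e-4, r=0.25)
```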

30 System approximation using Kalman Filtering Our goal: To eliminate the measurement noise from the state vectors

31 Kalman Filtering examples Linear Transform N=1000

32 Prediction using Kalman Filtering NMSE of prediction vs. number of Kalman filtering iterations, per transform and number of training points:

Transform | Training points | 5 iterations | 2 iterations | 1 iteration | No filtering
Linear    | 50              | 1.091395     | 1.093282     | 1.093854    | 1.635057
Linear    | 50              | 0.23266      | 0.23266      | 0.23266     | 0.224859
Linear    | 50              | 0.745915     | 0.862583     | 0.911736    | 3.23301
Linear    | 50              | 1.090163     | 1.092035     | 1.092688    | 1.626887
Linear    | 50              | 0.647221     | 0.676751     | 0.893782    | 4.465825
Triangle  | 150             | 0.000489     | 0.000489     | 0.000489    | 0.000534
Triangle  | 450             | 1.291086     | 1.176407     | 1.346896    | 0.436515

33 Prediction using Kalman Filtering Example (linear transform, N=50): ITR=5 vs. ITR=1 vs. without Kalman (plots).

34 Prediction using Kalman Filtering - Conclusions The EKF demands accurate knowledge of the system's behavior, yet the lack of such knowledge is precisely why we use the Kalman filter… We checked the iterative process: filter → improved transform → filter. Signals with fast changes in their behavior are not improved by this scheme (the fast changes are treated as noise, and the filter smooths the behavior). Rule of thumb: prediction will be efficient if the measurement noise is at least one order of magnitude greater than the system noise. In most cases the first iteration is enough.

35 Competitive tool - neural network We implemented a competitive prediction tool based on a neural network, to be used as a comparison to our prediction scheme. We used the backpropagation algorithm to train the network. The tool was written in MATLAB.
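The original tool was written in MATLAB; the following numpy sketch only illustrates the idea of a one-step-ahead predictor trained with backpropagation (the embedding dimension, hidden size, learning rate, and epoch count are arbitrary assumptions):

```python
import numpy as np

def train_predictor(x, dim=5, hidden=10, lr=0.01, epochs=200):
    # training pairs: delay-embedded state vector -> next sample
    X = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    t = np.asarray(x[dim:], dtype=float)
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.1, (dim, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 0.1, hidden), 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        y = h @ W2 + b2
        err = y - t
        # backpropagation of the mean squared error
        gW2 = h.T @ err / len(t)
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(t)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    # return a function that predicts x_{n+1} from the last dim samples
    return lambda state: float(np.tanh(np.asarray(state) @ W1 + b1) @ W2 + b2)
```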

36 Competitive tool - neural network Comparison: our predictor uses the Confidence Interval criterion.

37 Competitive tool - neural network Comparison (Set: AA, N: 1000, LookAhead: 100): neural network vs. our predictor (plots).

38 Competitive tool - neural network Comparison (Set: D, N: 1100, LookAhead: 100): neural network vs. our predictor (plots).

39 Competitive tool - neural network - Conclusions The comparison between the prediction results of our tool and the neural network shows our tool’s superiority for the signals that were tested.

40 Sequential Prediction Common usage: signals with finite accuracy. The idea: the predictor is FS (a finite-state predictor). It keeps in memory only part of the past knowledge, and thus can be used for sequential prediction of an infinite sequence.

41 Sequential Prediction Some terms before we start... Alphabet: the set of all possible measurement values. For example, digital information has an alphabet of {0,1}. We deal with the case of a finite alphabet.

42 Sequential Prediction FS Predictor The predictor keeps all the information needed for the prediction internally. In other words, the FS predictor keeps an approximation of the system’s state, which it updates sequentially.

43 Sequential Prediction FS Predictor For example: The alpha-bet: {-2, -1, 0, 1, 2} The Classes: Negative {-2, -1} Non-negative {0, 1, 2}

44 Sequential Prediction The sequential FS prediction scheme: f - stochastic, g - deterministic. The problem: find the optimal f and g that minimize the fraction of errors.

45 Sequential Prediction Markovian Predictor A Markovian predictor of order k is an FS predictor with the following properties: the state is composed of a k-order embedding of the last samples; the f-function is the empirical probability. The problem: k must increase as n increases.
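A simple realization of such a k-th order Markovian predictor over a finite alphabet, with empirical next-symbol counts as the f-function (an assumed illustration, not the project's code):

```python
# The state is the tuple of the last k symbols; prediction is the most
# frequently observed continuation of that state so far.
from collections import defaultdict, Counter

class MarkovPredictor:
    def __init__(self, k):
        self.k = k
        self.counts = defaultdict(Counter)   # state (k-tuple) -> next-symbol counts
        self.state = ()

    def predict(self):
        c = self.counts[self.state]
        return c.most_common(1)[0][0] if c else None

    def update(self, symbol):
        if len(self.state) == self.k:
            self.counts[self.state][symbol] += 1
        self.state = (self.state + (symbol,))[-self.k:]

# Sequential use: predict, observe the true symbol, then update.
p = MarkovPredictor(k=2)
for s in [0, 0, 1, 0, 1, 0, 1]:
    guess = p.predict()
    p.update(s)
```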

46 LZ Predictor FS predictor that increases its order automatically. Based on LZ parsing.

47 LZ Parsing The result of parsing 00101010100 is: 0, 01, 010, 1, 0100. The dictionary tree is actually the g-function; the probabilities at the nodes generate the f-function. The tree grows by itself as parsing proceeds.
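A short sketch of this incremental (LZ78-style) parsing, which reproduces the phrase sequence on the slide:

```python
# Each phrase is the shortest prefix of the remaining input not yet in the dictionary.
def lz_parse(s):
    phrases, dictionary, current = [], set(), ""
    for ch in s:
        current += ch
        if current not in dictionary:
            dictionary.add(current)
            phrases.append(current)
            current = ""
    if current:
        phrases.append(current)        # possibly incomplete final phrase
    return phrases

print(lz_parse("00101010100"))         # ['0', '01', '010', '1', '0100']
```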

48 Applying the LZ Predictor to continuous signals Block diagram: Xn → C (maps continuous to discrete) → LZ Predictor. For example: predicting the aim (direction) of the signal. NOTE: the partitioning of the continuous space into cells is very important for the quality of the prediction.
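A rough end-to-end sketch: the mapping C partitions the signal's range into equal-width cells (only one possible partition, and as noted the choice matters), and an LZ-style dictionary tree over the cell sequence supplies the prediction (this stands in for the project's LZ predictor; all names are hypothetical):

```python
import numpy as np
from collections import defaultdict

def to_cells(x, n_cells=8):
    """Map each continuous sample to the index of an equal-width cell."""
    edges = np.linspace(min(x), max(x), n_cells + 1)[1:-1]
    return [int(c) for c in np.digitize(x, edges)]

def lz_predict_next(cells):
    """Grow a dictionary tree over the cell sequence (LZ parsing), counting at
    every node which cell followed it; predict from the current node's counts,
    falling back to the root when the node is a leaf ('guessing at the leaves')."""
    root = {"count": defaultdict(int), "children": {}}
    node = root
    for c in cells:
        node["count"][c] += 1
        if c in node["children"]:
            node = node["children"][c]       # extend the current phrase
        else:
            node["children"][c] = {"count": defaultdict(int), "children": {}}
            node = root                      # phrase completed, restart at root
    counts = node["count"] or root["count"]
    return max(counts, key=counts.get) if counts else None

# Hypothetical usage: predict the cell of the next sample of a signal x.
# next_cell = lz_predict_next(to_cells(x, n_cells=8))
```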

49 Applying the LZ Predictor to continuous signals Results: predicting the sequence 000100010001… with salt & pepper noise. Fraction of errors vs. sequence length N (columns) and noise probability Np (rows):

Np \ N  |  64  | 128  | 256  | 512  | 1024 | 2048 | 4096 | 8192
0.1     | 0.41 | 0.34 | 0.30 | 0.30 | 0.26 | 0.25 | 0.23 | 0.21
0.01    | 0.41 | 0.27 | 0.21 | 0.17 | 0.11 | 0.09 | 0.07 | 0.05
0.001   | 0.38 | 0.26 | 0.22 | 0.15 | 0.11 | 0.08 | 0.06 | 0.04
0.0001  | 0.36 | 0.23 | 0.20 | 0.16 | 0.10 | 0.08 | 0.06 | 0.04
0.00001 | 0.33 | 0.26 | 0.20 | 0.16 | 0.10 | 0.07 | 0.05 | 0.04

50 Applying the LZ Predictor to continuous signals Example - stocks: prediction of the aim (direction) of the signal. Fraction of errors vs. number of quantization levels (LEV):

LEV | Fraction of errors
2   | 0.475465
4   | 0.489002
8   | 0.495770
16  | 0.467005
32  | 0.458545

51 Applying the LZ Predictor to continuous signals - Conclusions The fraction of errors is lower-bounded, as can be seen in the case of the binary sequence (decreasing the noise probability does not decrease the error below a certain level). The reason: guessing at the leaves of the dictionary tree. Discretization of continuous signals shows good results, especially for the STOCKS signal. Partitioning the space into cells proved to be very effective.

