Forecasting Financial Time Series using Neural Networks, Genetic Programming and AutoRegressive Models.


Mohammad Ali Farid, Faiza Ather, M. Omer Sheikh, Umair H. Siddiqui

- Financial time series data: non-linear, non-trivial
- Stochastic data makes prediction difficult
- Efficient Market Hypothesis
- AIM of the project: to test the predictability of financial time series data using both parametric and non-parametric models

EFFICIENT MARKET HYPOTHESIS
- The possibility of arbitrage makes it impossible to predict future values
- A prediction realized by everyone in the market will not come true
- Assumes that every investor has perfect and equal information
- If the EMH is true, then investing in speculative trade is no better than gambling
- If the EMH is NOT true, then the market can be predicted

AGENDA
- Understanding the data: data analysis
- Modeling and prediction using:
  - Neural Networks
  - Econometric Methods
  - Genetic Programming
- Conclusion

DATA SET
- FINANCIAL TIME SERIES: daily exchange rate
- DATA: exchange rate between the US Dollar and the Pakistani Rupee
- SERIES: 371 points, from 31 Jan 2002 to 4 Feb 2003 (period selected for its stability and lack of external shocks)
- DATA PROVIDER: OANDA Currency Exchange

DATA ANALYSIS
- PROBLEM: non-stationarity; a 2% difference in mean between the first half and the second half of the series
- SOLUTION: preprocessing
(Summary statistics table: Mean, Min, Max values not recovered; Range = 0.001)

DATA PREPROCESSING
- Unpreprocessed data exhibits non-stationarity
- The trend does not allow modeling
- First-order differencing is required

STATISTICAL ANALYSIS
- Mean: on the order of E-06 (mantissa not recovered)
- Standard deviation: (value not recovered)
- Skewness: negative (negatively skewed)
- Kurtosis: positive (leptokurtic)
- Min / Max / Range: (values not recovered)
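A minimal Python sketch of this preprocessing and analysis step; the file name and the `rate` variable are hypothetical stand-ins for the 371-point USD/PKR series:

```python
import numpy as np
from scipy import stats

rate = np.loadtxt("usd_pkr.csv")         # hypothetical file of 371 daily rates

# First-order differencing removes the trend behind the non-stationarity.
delta = np.diff(rate)                    # delta[t] = rate[t+1] - rate[t]

print("mean     :", delta.mean())
print("std dev  :", delta.std(ddof=1))
print("skewness :", stats.skew(delta))      # negative here: left-skewed
print("kurtosis :", stats.kurtosis(delta))  # positive excess: leptokurtic
print("min/max  :", delta.min(), delta.max())
```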

MODELING EXCHANGE RATE
- Numerous factors affect exchange rates
- Impossible to factor in all the variables
- Solution: use correlation; use past values to predict the future

DATA SETS
- Windowing: selecting the input and output sets
- The project used five different types of data sets for training; the variations are based on different window sizes and different levels of data preprocessing

DATA SETS
Data sets used for modeling (a windowing sketch in Python follows below):
- Data Set A: primary series (change in exchange rate, daily values) with a 7-1 window
- Data Set B: primary series with a 14-1 window
- Data Set C: moving averages (average of three days) with a 7-1 window
- Data Set D: primary series with a 7-3 window
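A possible windowing helper, reusing `delta` from the preprocessing sketch; the function name `make_windows` is illustrative:

```python
import numpy as np

def make_windows(series, n_in, n_out):
    """Slide over the series: n_in lagged values as inputs, the next
    n_out values as targets (the 7-1, 14-1 and 7-3 windows above)."""
    X, y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t : t + n_in])
        y.append(series[t + n_in : t + n_in + n_out])
    return np.array(X), np.array(y)

# Data Set A: differenced series with a 7-1 window.
X_a, y_a = make_windows(delta, n_in=7, n_out=1)
# Data Set B: 14-1 window.  Data Set D: 7-3 window.
X_b, y_b = make_windows(delta, n_in=14, n_out=1)
X_d, y_d = make_windows(delta, n_in=7, n_out=3)
# Data Set C: three-day moving average, then a 7-1 window.
ma3 = np.convolve(delta, np.ones(3) / 3, mode="valid")
X_c, y_c = make_windows(ma3, n_in=7, n_out=1)
```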

MODELING AND PREDICTION USING NEURAL NETWORKS
- Feed-Forward Networks
- Radial Basis Networks
- Recurrent Elman Networks

COMPARISON OF DATA SET A AND DATA SET B

FEED FORWARD NETWORKS
- Universal approximators: capable of representing non-linear functional mappings between inputs and outputs
- Can be trained with a powerful and computationally efficient training algorithm called error back-propagation
- Architecture: (diagram not reproduced)

COMPARISON OF ALL THE FFNs
FFNs with varied activation functions, training algorithms, and numbers of hidden layers

FEED FORWARD NETWORKS BEST:
- Single hidden layer
- Activation functions: logsig (hidden) and linear (output)
- Training algorithm: gradient descent with momentum
(a Keras sketch of this configuration follows below)
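The terms logsig and gradient descent with momentum suggest the experiments were run in the MATLAB Neural Network Toolbox. As a rough modern equivalent, a Keras sketch of the best configuration; the hidden-layer size (10) and training settings are assumptions:

```python
import tensorflow as tf

# One logsig hidden layer, a linear output, SGD with momentum.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7,)),                       # one 7-1 window per sample
    tf.keras.layers.Dense(10, activation="sigmoid"),  # logsig hidden layer
    tf.keras.layers.Dense(1, activation="linear"),    # linear output layer
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9),
              loss="mse")
model.fit(X_a, y_a, epochs=200, verbose=0)        # X_a, y_a from the windowing sketch
next_delta = model.predict(X_a[-1:], verbose=0)   # one-step-ahead forecast
```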

RADIAL BASIS NETWORK
What are radial basis networks?
- Based on the viewpoint that learning is similar to finding a surface in a multi-dimensional space that provides a best fit to the training data
- Hidden units provide a set of "functions" that constitute an arbitrary "basis" for the input patterns when they are expanded into the hidden-unit space; these functions are called radial basis functions
- Architecture:
  - Input layer: source nodes
  - Hidden layer: of sufficiently high dimension
  - Output layer: supplies the response of the network to the activation patterns applied to the input layer

RADIAL BASIS NETWORK
- In the radial basis network, a Gaussian function was used as the basis function in the hidden layer:
  φ(r) = exp(-r² / 2σ²),  for σ > 0 and r ≥ 0
- The spread σ of the radial basis is a significant factor in the design of the network

COMPARISON OF RADIAL BASIS NETWORKS
- RB4: spread = 0.3
- RB5: spread = 1.5

RADIAL BASIS NETWORK BEST:
- Spread σ = 0.3, i.e. 1/3 of the range (a numpy sketch follows below)
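A minimal numpy sketch of such an RBF fit, reusing X_a and y_a from the windowing sketch; taking every 10th training pattern as a basis center is an assumption, and only the spread value (0.3) comes from the slides:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian activations phi(r) = exp(-r^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

sigma = 0.3                                      # best spread reported above
centers = X_a[::10]                              # assumed choice of centers
Phi = rbf_design(X_a, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y_a, rcond=None)    # linear output weights
next_delta = rbf_design(X_a[-1:], centers, sigma) @ w   # one-step forecast
```

Solving the output weights by least squares reflects the usual two-stage RBF design: fix the centers and spread, then fit the linear layer in closed form.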

RECURRENT ELMAN NETWORKS
- A modification of the feed-forward architecture
- A "context" layer is added, which retains information between observations
- As new inputs are fed into the RNN, the previous contents of the hidden layer are passed into the context layer; these are then fed back into the hidden layer at the next time step

COMPARISON OF ELMAN NETWORKS

RESULTS WITH ELMAN NETWORKS BEST: Elman 7
- Hidden layers: 2 (10 neurons in the first, 5 in the second)
- Activation function for both layers: logsig
- Training algorithm: gradient descent with momentum
(a Keras sketch follows below)
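A hedged Keras equivalent using SimpleRNN, an Elman-style layer whose fed-back hidden state plays the context-layer role. Layer sizes follow Elman 7 above; treating each 7-value window as a length-7 sequence of single observations is an assumption:

```python
import tensorflow as tf

X_seq = X_a.reshape(-1, 7, 1)                 # (samples, timesteps, features)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7, 1)),
    tf.keras.layers.SimpleRNN(10, activation="sigmoid", return_sequences=True),
    tf.keras.layers.SimpleRNN(5, activation="sigmoid"),
    tf.keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05, momentum=0.9),
              loss="mse")
model.fit(X_seq, y_a, epochs=200, verbose=0)
```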

A COMPRISON OF ALL THE NEURAL NETS

AUTO REGRESSIVE MODEL WITH 16 LAGS

(Eight regression output tables: coefficients, standard errors, t-statistics and p-values for the intercept and the deltaE lag terms, showing insignificant lags being dropped one at a time from the 16-lag model down to 9 lags; the numeric values were not recovered.)

AUTO REGRESSIVE MODEL WITH 9 LAGS
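A sketch of how such AR models can be fit by ordinary least squares (the slides' spreadsheet-style regression output is not reproduced); `delta` is the differenced series from the preprocessing sketch:

```python
import numpy as np

def fit_ar(delta, p):
    """OLS fit of delta[t] = c + a1*delta[t-1] + ... + ap*delta[t-p];
    returns the coefficients [c, a1, ..., ap] and the residuals."""
    X = np.column_stack([np.ones(len(delta) - p)] +
                        [delta[p - k - 1 : len(delta) - k - 1]  # lag k+1 column
                         for k in range(p)])
    y = delta[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef

coef16, _ = fit_ar(delta, 16)     # full model; insignificant lags then dropped
coef9, resid9 = fit_ar(delta, 9)  # reduced model retained above
```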

SIMULATION TESTS FOR NORMALITY
(Table: Skewness, Kurtosis and K-S L1 statistics for the Original series against simulated Upper and Lower Bounds; numeric values not recovered.)

SIMULATION TESTS FOR AUTOCORRELATION
(Table: autocorrelations at Lags 1-4 for the Original series against simulated Upper and Lower Bounds; numeric values not recovered.)

SIMULATION TEST FOR HETEROSKEDASTICITY
(Table: Goldfeld-Quandt statistic for the Original series against simulated Upper and Lower Bounds; numeric values not recovered.)

SIMULATION TEST FOR STRUCTURAL STABILITY
(Table: Chow statistic for the Original series against simulated Upper and Lower Bounds; numeric values not recovered.)
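Hedged one-liners for some of these diagnostics on the AR(9) residuals; the simulated upper/lower bounds themselves are not reproduced, and the variance-ratio check is only a simplified stand-in for the full Goldfeld-Quandt test:

```python
import numpy as np
from scipy import stats

# Normality: K-S test of the standardized residuals against N(0, 1).
z = (resid9 - resid9.mean()) / resid9.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# Autocorrelation at lags 1-4 of the residuals.
acf = [np.corrcoef(resid9[:-k], resid9[k:])[0, 1] for k in range(1, 5)]

# Simplified heteroskedasticity check: variance ratio of the two halves
# (the full Goldfeld-Quandt test re-estimates the model on each half).
half = len(resid9) // 2
gq = resid9[half:].var(ddof=1) / resid9[:half].var(ddof=1)

print(f"K-S {ks_stat:.4f} (p={ks_p:.3f}), acf(1-4) {acf}, GQ ratio {gq:.3f}")
```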

GENETIC ALGORITHMS
- Inspired by Darwin's theory of evolution
- Inventor John Holland: 'Adaptation in Natural and Artificial Systems' (1975)
- Solution: an evolved solution
- Applications: music, military, optimization techniques, etc.

GENETIC PROGRAMMING (GP)
- Genetic Programming (GP): a branch of GAs
- John Koza (1992): used GAs to evolve programs to perform certain tasks
- LISP programs used
- Prefix notation

STEPS IN GP (a toy sketch follows below)
- Generate an initial population of random functions
- Execute each program
- Assign a fitness value to the program
- Create a new population of computer programs by:
  - Crossover (sexual reproduction)
  - Mutation
- The solution is the best computer program in any generation
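A toy, self-contained GP loop illustrating these steps; it is not the TSGP implementation used in the project. Trees are nested tuples rather than LISP programs, and the operator set, parameters, and target (y = 3 * x1) are all arbitrary choices:

```python
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x1', 'x2', 'x3']          # illustrative lagged inputs

def random_tree(depth=3):
    """Generate a random expression tree (nested tuples) of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS + [round(random.uniform(-1, 1), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Execute a program: look up variables in env, recurse on subtrees."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(tree, tree)          # variable name or numeric constant

def sse(tree, cases):
    """Fitness: sum of squared errors over the training cases."""
    return sum((evaluate(tree, env) - y) ** 2 for env, y in cases)

def mutate(tree):
    """Randomly replace subtrees with freshly generated ones."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth=2)
    op, left, right = tree
    return (op, mutate(left), mutate(right))

def crossover(a, b):
    """Naive subtree crossover: splice parts of b into a."""
    if isinstance(a, tuple) and isinstance(b, tuple) and random.random() < 0.7:
        return (a[0], crossover(a[1], b[1]), a[2])
    return b

# Toy target: y = 3 * x1, so the run has something evolvable to find.
cases = [({'x1': x, 'x2': x + 1.0, 'x3': x - 1.0}, 3.0 * x) for x in range(10)]

pop = [random_tree() for _ in range(200)]
for generation in range(30):
    pop.sort(key=lambda t: sse(t, cases))       # evaluate and rank programs
    elite = pop[:20]                            # keep the best programs
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(180)]         # refill via crossover + mutation

print('best SSE:', sse(pop[0], cases))
```

TSGP, described on the next slide, applies the same loop to lagged values of the exchange-rate series with SSE as the fitness criterion.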

GP & FINANCIAL TIME SERIES FORECASTING
- TSGP by Mahmoud Kaboudan (School of Business, University of Redlands)
- Fitness criterion: minimize the sum of squared errors (SSE)
- Variables in the program:
  - data points in the historical (training) set: T
  - number of data points to forecast: k
  - data points for ex post forecast: f
  - population size: p
  - number of generations: g
  - number of explanatory variables: n
  - number of searches desired (to avoid getting stuck in local minima): s

RESULTS & OBSERVATIONS
- The variables we manipulated:
  - data in the historical set (T): increasing T makes search time grow exponentially
  - total points to forecast (k)
  - population size (p): interesting observations
  - number of generations (g)
- Upon completion, files are generated containing the results of each search
- A results file holds forecasts based on the best evolved model found

RESULTS FROM A SAMPLE RUN
T = 80, k = 15, p = 1000, g = 150, n = 7, s = 50

AN IMPORTANT OBSERVATION
T = 80, k = 15, p = 2000, g = 200, n = 14, s = 100
Reason for the anomaly: research shows that beyond a certain limit, very large populations are no longer useful; this is exactly what we observed in our results.

CONCLUSION
- Overview of the project: Neural Networks, AutoRegressive Models, Genetic Programming
- Results: prediction with radial basis and feed-forward networks can give profitable returns
- Milestones
- Significance
- Further research

COMPARISON OF ALL MODELS

MILESTONES
- More than 100 neural networks trained and tested
- First documented study on the Pakistani currency market
- One of the broadest studies on the subject
- Results can be used for policy making, investment decisions, and financial speculative trading
- Developed a system that can make PROFITS

FURTHER RESEARCH
- Developing hybrid models to improve the predictability of the system
- Developing trading rules for investing in the currency market
- Making the system resilient to external non-market shocks

Q & A