
Neuroeconomics at URI: The CBA Student Directed Hedge Fund with Big Data Informatics
By Gordon H. Dash, Jr.¹ and Nina Kajiji²
¹ College of Business, University of Rhode Island
² The NKD Group, Inc. and Computer Science and Statistics, University of Rhode Island
www.GHDash.net | www.ninakajiji.net
ghdash@uri.edu | nina@nkd-group.com
URI Neuroscience Colloquium, February 25, 2011

Credits
CBA – College of Business Administration
Hedge Fund – BUS 423-II, sec. 02, Student Directed Investment Fund
Big Data Informatics – the URI Computer Science Department is heading a drive for a new track
RANN – Radial basis function artificial neural network
URI-Neuroscience Colloquium 25-Feb-2013 – Dash and Kajiji

Dual Purpose
To present the theory and application of a concentric RANN real-time automated trading (AT) algorithm and its ability to produce profitable trades.
Using high-frequency dimensions to represent low-frequency Fama-French-Carhart (FFC) firm fundamental variables, we estimate scale elasticity metrics to explain the profitable trades produced by the AT algorithm.

AT System Development for Strategic Effects
The AT algorithm is predicated on a four-phase concentric strategic decision cycle (OODA) that is capable of responding to various forecasts of future events:
1. Observation (of markets)
2. Orientation by neuroeconomic informatics (align forecasts with reality)
3. Decision on asset position (open / close / hold?)
4. Action (execute the trade and store the record structure)
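The four phases above can be sketched as a single loop. This is a minimal illustration, not the WINKS implementation: the function names, the `MarketSnapshot` fields, and the toy forecast-gap signal rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class MarketSnapshot:
    ticker: str
    price: float
    forecast: float  # model's price forecast for the next interval (hypothetical)

def observe(feed):
    """Phase 1: pull the latest market snapshot from a data feed."""
    return next(feed)

def orient(snap):
    """Phase 2: align forecast with reality -- here, the relative forecast/price gap."""
    return (snap.forecast - snap.price) / snap.price

def decide(edge, threshold=0.01):
    """Phase 3: open, close, or hold, based on the oriented signal."""
    if edge > threshold:
        return "open"
    if edge < -threshold:
        return "close"
    return "hold"

def act(ticker, decision, log):
    """Phase 4: execute the trade (stubbed here) and store the record."""
    log.append((ticker, decision))
    return decision

def ooda_step(feed, log):
    """One pass of the concentric Observe-Orient-Decide-Act cycle."""
    snap = observe(feed)
    return act(snap.ticker, decide(orient(snap)), log)
```

A snapshot whose forecast exceeds the price by more than the threshold produces an "open" decision and an appended trade record.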

A Neural Network (AI) Primer
Applied problems: image, sound, and pattern recognition; financial time-series mapping; decision making; knowledge discovery; context-dependent analysis; and more.
Artificial intellect: who is stronger, and why?
NEURO-INFORMATICS – the modern theory of principles and new mathematical models of information processing, based on biological prototypes and the mechanisms of human brain activity.

Concentric OODA (diagram)

Justification
BIG DATA: "…high frequency trading firms can generate more than a million trades in a single day … more than 50 percent of equity market volume. Stated another way: a firm that trades one million times per day may submit 90 million or more orders that are cancelled." – Mary L. Schapiro, Chairman, U.S. Securities and Exchange Commission, 07-Sep-2010.
INFORMATICS: "The need for valid statistical tools is greater than ever; data sets are massive, often measuring hundreds of thousands of measurements for a single subject… With high-dimensional data, … the correct specification of the parametric model [is] an impossible challenge… The new generation of statisticians cannot be afraid to go against standard practice… The science of learning from data (i.e. statistics) is arguably… one in which we try to understand the very essence of human beings." – Mark van der Laan, Jiann-Ping Hsu, Sherri Rose, AMSTATNEWS, September 2010.

An Intelligent Data Analysis Experiment
Pipeline (diagram): Data Acquisition (signals & parameters) → Data Informatics (characteristics & estimations) → Interpretation and Decision Making (rules & knowledge productions), supported by a Big Data knowledge base and adaptive machine learning via a neural network.

Principles of Neurocomputing
Learning and adaptation: NNs are capable of adapting themselves (the synaptic connections between units) to special environmental conditions by changing their structure or connection strengths.
Non-linear functionality: every new state of a neuron is a nonlinear function of the input pattern created by the nonlinear firing activity of the other neurons.
Robustness of associability: NN states are characterized by high robustness, or insensitivity, to noisy and fuzzy input data owing to the use of a highly redundant distributed structure.

Artificial Neural Networks: Single Layer (e.g., the RBF)
An artificial neural network is composed of many artificial neurons linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.

Neural Network Mathematics (network equations shown as figures: inputs → output)

Learning with RBF Neural Networks
(The RBF network, data, error, and learning equations are shown as figures.) Only the synaptic weights of the output neuron are modified. An RBF neural network learns a nonlinear function.

Bayesian Network Training
Conventional training can be interpreted in statistical terms as a variation on maximum likelihood estimation: find a single set of weights for the network that maximizes the fit to the training data, perhaps modified by some sort of weight penalty to prevent overfitting.
Ideal Bayesian training:
– Before the start of data analysis, obtain prior opinions about what the true relationship might be, expressed as a probability distribution over the network weights that define this relationship.
– After training the network, collect revised opinions as a posterior distribution over network weights.
– Exact analytical methods for models as complex as neural networks are out of the question.
Practical Bayesian training:
– Find the weights that are most probable, using methods similar to conventional training with regularization, and then approximate the distribution over weights using information available at this maximum.
– This approximation is preferred for computational efficiency over sampling the weight distribution with a Monte Carlo method.

K4-RBF ANN (RANN) Extensions
The Kajiji (2001) extension to the traditional RANN specification introduced multiple objectives within a Bayesian RANN framework.
Efficient mapping: adding a weight penalty (Tikhonov's regularization parameter) to the SSE optimization objective restates the modified SSE as a cost function.
Optimal weight decay: additionally, Kajiji proposed a closed-form solution for estimating the weight-decay parameter, based on Hemmerle's extensions to the traditional ridge-regression parameters and Crouse's incorporation of priors.
(The cost function and the function to be minimized under the Kajiji specification are shown as equation figures.)
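The weight-penalty idea can be sketched as ridge (Tikhonov-regularized) estimation of the output weights: the cost ||Φw − y||² + k||w||² has the closed-form minimiser w = (Φ'Φ + kI)⁻¹Φ'y. This sketch uses a fixed, hypothetical k; it does not reproduce the Hemmerle/Crouse closed-form choice of the decay parameter that K4 employs.

```python
import numpy as np

def ridge_weights(Phi, y, k):
    """Closed-form minimiser of ||Phi w - y||^2 + k ||w||^2:
    w = (Phi'Phi + k I)^{-1} Phi' y."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + k * np.eye(n), Phi.T @ y)

# Demonstration on synthetic data: a heavier penalty shrinks the weights.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(30, 5))
y = rng.normal(size=30)
w_plain = ridge_weights(Phi, y, 0.0)    # ordinary least squares
w_decay = ridge_weights(Phi, y, 10.0)   # weight decay applied
```

The stationarity condition Φ'(Φw − y) + kw = 0 holds at the solution, which is a quick way to verify any implementation of the penalized objective.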

K4 RANN Contributions
The excellent mapping capabilities of the K4-RANN topology allow better financial forecasts for a system exhibiting Brownian motion with drift and volatility – such as stock price movements.
The algorithmic speed of the generalized RBF is greatly enhanced by the K4 extensions, making it suitable for HF valuation in N-dimensional space.

Developing an Auto Trader Using the K4-RANN

The AT Algorithmic System
– Domain objective
– Continuous-time stochastic processes
– Real-world implementation with multiple RANNs for speed and mapping accuracy – WINKS: end-of-day forecasts and 20-minute HF forecasts
– Evaluation of trading performance
– FFC firm fundamentals and AT production scale economies
– Summary

Trading System Evolution
– Assign stocks to wealth-building groups based on the historical trading performance of WINKS.
– Define the operational characteristics of the HF trading system – WINKS.
– Estimate the relative quasi-elasticity of firm-fundamental metrics to explain the production of WINKS trading profitability.
The automated trading system (WINKS) draws on mathematical finance, business decision theory, and MCDA.

Investor Trading Behavior
We assume the investor's probability space supports a continuous-time stochastic process (Kajiji and Forman, 2013):
Let (Ω, θ, P) be the measure space with P(Ω) = 1. Then the probability space is (Ω, θ, P) with a filtration {Γ_t : t ∈ [0, ∞)}. This is an increasing sequence of σ-algebras of Γ that determines the relevant timing of information. That is, Γ_t is loosely viewed as the set of events whose outcomes are certain to be revealed to investors as true or false by, or at, time t.

Model for Equity Trading Profits
Let X_t represent an individual stock's market price at time t. We assume that the price process X follows a geometric Brownian motion with constant drift and volatility.
Let θ represent a trading strategy, and let θ_t(ω) be the quantity of each security held in each state ω ∈ Ω at each time t. We assume the trading strategy can make use only of the information available at any time t – i.e., θ is adapted. This prevents the possibility of unlimited gains through uncontrolled high-frequency trading or flash-crash trading.
We can thus define the total financial gain between any times s and r as Itô's stochastic integral of θ against X (equation shown as a figure).
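The two ingredients above can be sketched numerically: a geometric Brownian motion path X_t with constant drift μ and volatility σ, and the gain integral ∫θ_t dX_t approximated by the discrete sum Σ θ_{t_i}(X_{t_{i+1}} − X_{t_i}), where each position θ_{t_i} uses only information available at t_i (adaptedness). Parameter values are illustrative.

```python
import numpy as np

def simulate_gbm(x0, mu, sigma, n_steps, dt, rng):
    """One GBM path: X_{t+dt} = X_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    z = rng.normal(size=n_steps)
    log_incr = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return x0 * np.exp(np.concatenate([[0.0], np.cumsum(log_incr)]))

def trading_gain(theta, X):
    """Discrete Ito-style gain: position theta[i] applied to increment X[i+1] - X[i]."""
    return float(np.sum(theta * np.diff(X)))

rng = np.random.default_rng(1)
X = simulate_gbm(x0=100.0, mu=0.05, sigma=0.2, n_steps=250, dt=1 / 250, rng=rng)
```

A sanity check: holding one share throughout (θ ≡ 1) makes the sum telescope, so the gain equals X_end − X_start.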

Simple Buy-and-Hold Strategy (BH)
A position is initiated after some stopping time T and closed at a later stopping time U. For a position of size θ_t(ω), the trading strategy θ is defined by the indicator θ_t = 1 for T < t ≤ U, and θ_t = 0 otherwise.
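With the indicator-style position above, the gain sum telescopes to the price change between entry and exit. A minimal discrete sketch (the price values are made up):

```python
import numpy as np

def buy_hold_theta(n_steps, entry, exit):
    """Indicator position: hold one unit over steps [entry, exit), zero elsewhere."""
    theta = np.zeros(n_steps)
    theta[entry:exit] = 1.0
    return theta

X = np.array([100.0, 101.0, 99.0, 103.0, 102.0])  # hypothetical price path
theta = buy_hold_theta(n_steps=4, entry=1, exit=3)
gain = float(np.sum(theta * np.diff(X)))  # telescopes to X[3] - X[1]
```

Here the gain is exactly the exit price minus the entry price, 103.0 − 101.0 = 2.0, regardless of the path in between.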

The N-Dimensional Trading Strategy
Suppose we have n different securities with price processes X¹, …, Xⁿ and a trading strategy θ = (θ¹, …, θⁿ); then the total gain is the sum over securities of each component's stochastic-integral gain (equation shown as a figure).
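The n-security gain can be sketched as the sum of per-security discrete gains; the arrays below are illustrative placeholders.

```python
import numpy as np

def total_gain(thetas, prices):
    """thetas: (n, T) positions; prices: (n, T+1) paths.
    Returns sum over securities k of sum_i theta^k_i (X^k_{i+1} - X^k_i)."""
    return float(sum(np.sum(th * np.diff(p)) for th, p in zip(thetas, prices)))

prices = [np.array([1.0, 2.0, 3.0]), np.array([10.0, 9.0, 11.0])]
thetas = [np.array([1.0, 1.0]), np.array([0.0, 1.0])]
gain = total_gain(thetas, prices)  # (1 + 1) + (0 + 2) = 4.0
```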

The WINKS OODA Algorithm (diagram)

The WINKS Trading Model
The high-frequency model implemented in the automated trader is:
y_i = f(x_1, x_2)
where:
y_i = Ln(1 + r_i), with r_i the return on security i
x_1 = Lag(Ln(1 + r_VXX))
x_2 = Lag(Ln(1 + r_PLW))
Note:
– VXX = iPath S&P 500 VIX Short-Term Futures (ETN)
– PLW = PowerShares 1-30 Laddered Treasury (ETF)
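The feature construction stated above (the target is a log return, the inputs are one-period-lagged log returns of VXX and PLW) can be sketched from raw price series. The helper names and the toy prices are assumptions, not the WINKS code.

```python
import numpy as np

def log_returns(prices):
    """Ln(1 + r_t) computed as the log of consecutive price ratios."""
    prices = np.asarray(prices, dtype=float)
    return np.log(prices[1:] / prices[:-1])

def winks_dataset(p_i, p_vxx, p_plw):
    """y_t = log return of security i; x_t = lagged log returns of VXX and PLW."""
    y = log_returns(p_i)[1:]       # drop the first observation to align with lags
    x1 = log_returns(p_vxx)[:-1]   # Lag(Ln(1 + r_VXX))
    x2 = log_returns(p_plw)[:-1]   # Lag(Ln(1 + r_PLW))
    return np.column_stack([x1, x2]), y

# Toy check with a doubling price series: every log return is ln 2.
p = [1.0, 2.0, 4.0, 8.0]
X_feat, y_tgt = winks_dataset(p, p, p)
```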

Results: The BH Strategy vs. the HF K4-RANN AT
A Sample of Trading Profitability, June 01, 2009 to March 19, 2010
The database of 2,225 securities was established by random selection from the 8,000 tickers followed by Yahoo! Finance. These five were chosen for demonstration because they have some of the highest trading profits.
Trading Efficiency = ((difference between the BH and trading profits) / Initial Investment) × 100.
Note: all equity positions were created with an investment of $1,000.
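The trading-efficiency metric can be sketched as below, with hypothetical dollar figures (all positions opened with a $1,000 investment). The slide's wording leaves the sign convention ambiguous; this sketch assumes trading profit minus BH profit.

```python
def trading_efficiency(trading_profit, bh_profit, initial_investment=1000.0):
    """Percent by which AT profit exceeds buy-and-hold, per dollar invested
    (direction of the difference is an assumption)."""
    return (trading_profit - bh_profit) / initial_investment * 100.0

# Hypothetical example: $150 AT profit vs. $50 BH profit on $1,000.
eff = trading_efficiency(150.0, 50.0)
```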

Trading Report, Jul 1st, 2010

Summary Report: Page 1, Jul 1st, 2010

Summary Report: Page 2, Jul 1st, 2010

Trading History – Ticker = ADS, Jul 1st, 2010

Trading Summary – Ticker = ADS, Jul 1st, 2010

Do Firm Fundamentals Explain Profitable Trades?
Goal: to use the K4 mapping efficiency to estimate non-parametric quasi-elasticity metrics of individual firm fundamental variables in the production of K4 trading profitability (see Dash and Kajiji, 2008 for univariate detail and Kajiji and Dash, 2013 for multivariate extensions).
Which firm fundamental variables to use?
– Fama & French (1993) found that a three-factor model efficiently modeled the excess returns of individual firms.
– Subsequently, Carhart (1997) extended the FF model by including a fourth factor.
– We extrapolate from the low-frequency principle of the Carhart four-factor model by incorporating a real-time HF moving average:
  Vasicek's Beta (Bayesian-corrected beta) – standard CAPM
  Market Capitalization – FFM added factor
  Book to Last Trade Price – FFM added factor
  Percent change from 50-day Moving Average – Carhart extension

K4 Estimation of Quasi-Scale Economies
Establish the historical period: 01-Jun-2009 through 19-Mar-2010, inclusive.
Create the research sample (SAM):
– Obtain 20-minute price observations for 2,225 securities.
– Eliminate all non-equity tickers and stocks that lack fundamental information on Yahoo!; the sample size is reduced to 1,765.
– For efficient cross-sectional modeling we sample from within the full content population; the data sampling is guided by the study's target variable, percent positive trades (PPT).
– 793 securities form the training set; 972 securities form the validation set.
For SAM, obtain stock fundamentals (source: Yahoo! Finance):
– Vasicek's Beta, created from the reported Yahoo! beta (P_1)
– Market Capitalization (P_2)
– Book to Last Trade Price (P_3)
– Percent change from the 50-day Moving Average (P_4)

Results
Number of positive trades by security for the SAM of 1,765 securities (figure). Notice that the percentages of positive trades form a band between 30% and 70%.

The Production Model
Use K4 to estimate the double-log production-theoretic model for positive trades:
p_i = f(P_1, P_2, P_3, P_4)
where:
p_i = Ln(PPT)
P_1, P_2, P_3, P_4 = Ln transformations of the indicator variables as previously defined
The K4-RBF is implemented with a softmax transfer function. The K4-RBF weights are interpreted as quasi-production elasticity estimates; summing them captures system returns to scale for profitable trading.
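The returns-to-scale reading of the double-log model above can be sketched directly: in ln(PPT) = Σ_k w_k · ln(P_k), each weight w_k acts as a quasi-elasticity, and scaling every input by λ multiplies output by λ^Σw_k. The elasticity values below are hypothetical, not the paper's estimates.

```python
import numpy as np

def returns_to_scale(elasticities):
    """Sum of quasi-elasticities: <1 decreasing, =1 constant, >1 increasing."""
    return float(np.sum(elasticities))

def scaled_output_ratio(elasticities, scale):
    """In a double-log model, scaling all inputs by `scale` multiplies
    the output by scale ** sum(w_k)."""
    return scale ** returns_to_scale(elasticities)

# Hypothetical weights for the four factors P_1..P_4:
w = [0.2, -0.1, 0.3, 0.08]
rts = returns_to_scale(w)               # 0.48: decreasing returns to scale
ratio = scaled_output_ratio(w, 2.0)     # doubling all inputs scales output by 2**0.48
```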

A Sample of Alternative Modeling Results
K4-RBF analysis using the softmax transfer function (table). Dependent variable: Ln(% Positive Trades); indicator variables: Ln(fundamental variables).

Results: FFC Mapping of % Positive Trades – Actual and Predicted (figure)

Results: FFC Quasi-Elasticity Metrics
Comparative K4-RBF weights from the alternative models (table). Dependent variable: Ln(% Positive Trades). Model selected: Norm2.
Except for Market Capitalization, all variables show an inverse relationship to PPT; that is, as firm market capitalization increases, so does the expected performance of the WINKS K4-AT. Interestingly, any increase in Book Value to Price tends to lessen the probability that WINKS will trade the stock profitably.
The WINKS K4-AT exhibits decreasing returns to scale of 0.484 units. This implies a less-than-proportionate increase in PPT given a simultaneous and proportionate change in all indicator variables (factors of production).

Summary
This research provides a synthesis of stochastic equity price behavior and the cognitive science of trading through the implementation of the dual-objective K4-RBF ANN as incorporated in WINKS.
– WINKS is an MCDA trading algorithm that integrates the AI properties of two unique, but coordinated, high-frequency RBF ANNs.
– Test results produced transaction-cost-adjusted trading profits that exceeded those generated by the simple buy-and-hold strategy.
WINKS performance was modeled using a four-factor firm pricing model to estimate the system-wide returns to scale of PPT. The results suggest that portfolio selection of stocks based on the estimated quasi-elasticity coefficients would greatly enhance trading profits. A test of this conclusion is left for future research.

Future Research
– Development of intelligent interfaces between the investor and AT parameterization
– Use of psychology-based theories to explain human responses to high-frequency trade signals associated with capital market events
– Interrogation of reasoning under uncertainty or imprecision

References
1. Dash Jr., G.H. and N. Kajiji, Operations Research Software, Vol. I and II. 1988, Homewood, Illinois: IRWIN.
2. Dash Jr., Gordon H., and Kajiji, Nina, "Engineering a Generalized Neural Network Mapping of Volatility Spillovers in European Government Bond Markets," in Handbook of Financial Engineering, Springer Optimization and Its Applications, Vol. 18, edited by C. Zopounidis, M. Doumpos, and P. Pardalos. Springer, 2008.
3. Kajiji, N., and Forman, J., "Production of Efficient Wealth Maximization Using Neuroeconomic Behavioral Drivers and Continuous Automated Trading," in Recent Advances in Computational Finance, edited by N. Thomaidis and G. Dash. Nova Science Publishers, Inc., 2013.
4. Kajiji, Nina and Dash Jr., Gordon H., "Computational Practice: Multivariate Parametric or Nonparametric Modeling of European Bond Volatility Spillover?" in Recent Advances in Computational Finance, edited by N. Thomaidis and G. Dash. Nova Science Publishers, Inc., 2013.
5. Forman, J., Essentials of Trading: From the Basics to Building a Winning Strategy. Wiley Trading. 2006, Hoboken, New Jersey: John Wiley & Sons, Inc.
6. Brock, W., J. Lakonishok, and B. LeBaron, "Simple Technical Trading Rules and the Stochastic Properties of Stock Returns." The Journal of Finance, 1992. 47(5): p. 1731-1764.
7. Refenes, A.-P.N., et al., eds., Neural Networks in Financial Engineering. 1996, World Scientific: Singapore.
8. Kajiji, N., Adaptation of Alternative Closed Form Regularization Parameters with Prior Information to the Radial Basis Function Neural Network for High Frequency Financial Time Series, in Applied Mathematics. 2001, University of Rhode Island: Kingston.
9. Broomhead, D.S. and D. Lowe, "Multivariate Functional Interpolation and Adaptive Networks." Complex Systems, 1988. 2: p. 321-355.
10. Lohninger, H., "Evaluation of Neural Networks Based on Radial Basis Functions and Their Application to the Prediction of Boiling Points from Structural Parameters." Journal of Chemical Information and Computer Sciences, 1993. 33: p. 736-744.
11. Tikhonov, A. and V. Arsenin, Solutions of Ill-Posed Problems. 1977, New York: Wiley.
12. Hoerl, A.E. and R.W. Kennard, "Ridge Regression: Biased Estimation for Nonorthogonal Problems." Technometrics, 1970. 12(3): p. 55-67.
13. Hemmerle, W.J., "An Explicit Solution for Generalized Ridge Regression." Technometrics, 1975. 17(3): p. 309-314.
14. Crouse, R.H., C. Jin, and R.C. Hanumara, "Unbiased Ridge Estimation with Prior Information and Ridge Trace." Communication in Statistics, 1995. 24(9): p. 2341-2354.

Questions
