David Lubo-Robles*, Thang Ha, S. Lakshmivarahan, and Kurt J. Marfurt


USING PROBABILISTIC NEURAL NETWORKS FOR ATTRIBUTE ASSESSMENT AND GENERAL REGRESSION
David Lubo-Robles*, Thang Ha, S. Lakshmivarahan, and Kurt J. Marfurt
The University of Oklahoma

Slide 1: Introduction

Outline: Introduction | MLFN vs. PNN | GRNN | Optimize σ | Exhaustive GRNN Workflow | Case study | Conclusions

The Probabilistic Neural Network (PNN) is an interpolation method in which different mathematical operations are organized into layers, forming a neural network structure (Specht, 1990; Hampson et al., 2001). Whereas Multi-layer Feedforward Networks (MLFN) commonly use a sigmoid activation function, the PNN uses a Gaussian activation function (Specht, 1990). The PNN was designed fundamentally for classification between two or more classes, but it can be modified into the General Regression Neural Network (GRNN) for mapping/regression tasks (Masters, 1995).

Objectives:
- Predict missing logs in well data.
- Assess seismic attributes.
- Predict reservoir properties using seismic attributes.

Slide 2: MLFN vs. PNN

Multi-layer Feedforward Network (MLFN): inputs i1, i2, i3, i4 pass through hidden layers to an output layer along weighted connections. Each neuron forms a linear weighted sum of its inputs, net = W1·i1 + W2·i2, and applies a nonlinear sigmoid activation function, 1 / (1 + e^(−net)).
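The MLFN neuron described above can be sketched in a few lines of Python. This is a minimal illustration of one neuron, not the authors' implementation; the weights and inputs in the usage note are made-up values.

```python
import math

def sigmoid(net):
    # Logistic activation: 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(inputs, weights):
    # Linear step: net = W1*i1 + W2*i2 + ... ; nonlinear step: sigmoid(net)
    net = sum(w * i for w, i in zip(weights, inputs))
    return sigmoid(net)
```

For example, inputs [1.0, 2.0] with weights [0.5, -0.25] give net = 0, so the neuron outputs 0.5, the midpoint of the sigmoid.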

Slide 3: Probabilistic Neural Network (PNN)

The input layer (i1, i2, i3, i4) feeds a pattern layer containing one Gaussian unit per training pattern, e.g. exp[−(i_1 − p_{11})² / (2σ²)], exp[−(i_1 − p_{12})² / (2σ²)], etc. The summation layer adds the pattern-layer outputs belonging to each class (Class 1, Class 2), giving a score per class:

g(x) = (1/N) Σ_{n=1}^{N} exp[ −Σ_{m=1}^{M} (i_m − p_{nm})² / (2σ²) ]

where i_m are the M input attributes, p_{nm} the N stored training patterns of the class, and σ the Gaussian width. The output layer assigns the class with the largest score (g_1 vs. g_2).
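The pattern-layer and summation-layer computation above can be written directly. This is a minimal sketch of Specht-style PNN scoring with a single shared σ; the two-class example data in the usage note are hypothetical.

```python
import math

def gaussian_kernel(x, p, sigma):
    # One pattern-layer unit: exp(-sum_m (x_m - p_m)^2 / (2*sigma^2))
    d2 = sum((xm - pm) ** 2 for xm, pm in zip(x, p))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def pnn_classify(x, patterns_by_class, sigma):
    # Summation layer: average the pattern-layer outputs per class,
    # g_k(x) = (1/N_k) * sum_n kernel(x, p_n); output the highest-scoring class.
    scores = {label: sum(gaussian_kernel(x, p, sigma) for p in pats) / len(pats)
              for label, pats in patterns_by_class.items()}
    return max(scores, key=scores.get)
```

With hypothetical clusters near (0, 0) for Class 1 and (5, 5) for Class 2, an input near either cluster is assigned to that class.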

Slide 4: General Regression Neural Network (GRNN)

For regression, the summation layer accumulates two quantities: a numerator weighted by the training values v_n and a denominator of raw kernel weights. Their ratio is the predicted value:

v̂ = Σ_{n=1}^{N} v_n exp[ −Σ_{m=1}^{M} (i_m − p_{nm})² / (2σ²) ] / Σ_{n=1}^{N} exp[ −Σ_{m=1}^{M} (i_m − p_{nm})² / (2σ²) ]

Slide 5: GRNN with training and validation data

The training data (patterns) consist of attribute #1, attribute #2, and the measured property v along depth/time. The validation data (input) consist of the same attributes at new depths/times; applying

v̂ = Σ_{n=1}^{N} v_n exp[ −Σ_{m=1}^{M} (i_m − p_{nm})² / (2σ²) ] / Σ_{n=1}^{N} exp[ −Σ_{m=1}^{M} (i_m − p_{nm})² / (2σ²) ]

yields the predicted property v̂ along the validation depth/time axis.
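The GRNN estimate maps onto code almost term for term. This is a minimal sketch of the kernel-weighted average; the one-attribute training data in the usage note are hypothetical.

```python
import math

def grnn_predict(x, patterns, values, sigma):
    # v_hat = sum_n v_n * w_n / sum_n w_n,
    # where w_n = exp(-sum_m (x_m - p_nm)^2 / (2*sigma^2))
    num = den = 0.0
    for p, v in zip(patterns, values):
        d2 = sum((xm - pm) ** 2 for xm, pm in zip(x, p))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += v * w
        den += w
    return num / den
```

Two limiting behaviors are worth noting: at an input coinciding with a training pattern and a small σ, the prediction collapses to that pattern's value; for very large σ the weights equalize and the prediction approaches the mean of the training values.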

Slide 6: Optimizing σ: exhaustive search

Cross-validation is used to train the neural network (Hampson et al., 2001). One well is left out as validation data while the remaining wells serve as training data. Candidate widths in the range 0.6 ≤ σ ≤ 7 are tested, and for each σ the error between the measured property v and the predicted property v̂ is computed:

e = Σ (v − v̂)² / N

Leaving out | Sigma // Error
1 | σ1 // e1
2 | σ2 // e2
3 | σ3 // e3
4 | σ4 // e4
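The exhaustive σ search can be sketched as below. The candidate grid and toy data in the test are illustrative, and grnn_predict is restated so the snippet stands alone.

```python
import math

def grnn_predict(x, patterns, values, sigma):
    # GRNN kernel-weighted average (as on the GRNN slide)
    num = den = 0.0
    for p, v in zip(patterns, values):
        d2 = sum((xm - pm) ** 2 for xm, pm in zip(x, p))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += v * w
        den += w
    return num / den

def best_sigma(train_x, train_v, val_x, val_v, sigmas):
    # For each candidate sigma, compute the validation error
    # e = sum (v - v_hat)^2 / N and keep the minimizer.
    best_s, best_e = None, float("inf")
    for s in sigmas:
        preds = [grnn_predict(x, train_x, train_v, s) for x in val_x]
        e = sum((v - p) ** 2 for v, p in zip(val_v, preds)) / len(val_v)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e
```

In practice the candidate list would sample the slide's 0.6 ≤ σ ≤ 7 range, and the loop cost grows linearly with the number of candidates, which is why a wider σ range increases computation time.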

Slide 7: Attribute assessment: exhaustive GRNN

Every combination of input attributes is evaluated against the training and validation data; for each combination, the best σ and its validation error are recorded (example: leaving out Well #2):

Attribute(s) | Sigma // Error
1 | σ1 // e1
2 | σ2 // e2
3 | σ3 // e3
1,2 | σ1,2 // e1,2
1,3 | σ1,3 // e1,3
2,3 | σ2,3 // e2,3
1,2,3 | σ1,2,3 // e1,2,3
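The combination list above is just an enumeration of attribute subsets, each scored with its own best σ. A minimal sketch, again restating grnn_predict so the snippet is self-contained; the toy data in the test are hypothetical.

```python
import itertools
import math

def grnn_predict(x, patterns, values, sigma):
    # GRNN kernel-weighted average (as on the GRNN slide)
    num = den = 0.0
    for p, v in zip(patterns, values):
        d2 = sum((xm - pm) ** 2 for xm, pm in zip(x, p))
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += v * w
        den += w
    return num / den

def exhaustive_grnn(train_x, train_v, val_x, val_v, sigmas):
    # Enumerate every non-empty attribute subset; for each, keep the sigma
    # with the lowest validation error, then rank subsets by that error.
    n_attrs = len(train_x[0])
    results = []
    for r in range(1, n_attrs + 1):
        for combo in itertools.combinations(range(n_attrs), r):
            tx = [[row[j] for j in combo] for row in train_x]
            vx = [[row[j] for j in combo] for row in val_x]
            best_s, best_e = None, float("inf")
            for s in sigmas:
                preds = [grnn_predict(x, tx, train_v, s) for x in vx]
                e = sum((v - p) ** 2 for v, p in zip(val_v, preds)) / len(val_v)
                if e < best_e:
                    best_s, best_e = s, e
            results.append((combo, best_s, best_e))
    return sorted(results, key=lambda t: t[2])
```

With M attributes this evaluates 2^M − 1 subsets (7 for the three-attribute table above), which is why the slide can list every combination explicitly.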

Slide 8: MLFN vs. PNN (comparison figure; no recoverable text).

Slide 9: Case study: predicting porosity using well logs in Diamond M Field, TX

Slide 10: Geologic background

Wells used: Garnet, Emerald, Topaz, and Jade. Diamond M Field is located in Scurry County, approximately 80 mi northeast of Midland, Texas. The trend is part of the Horseshoe Atoll reef complex, an arcuate chain of reef mounds built of mixed bioclastic debris that accumulated during the late Paleozoic in the Midland Basin (Vest, 1970). The massive carbonates present in the atoll are separated by correlative shale beds (Davogustto, 2013). (Top-reef map modified from Walker et al., 1991.)

Slide 11: Logs used for prediction

Logs used for prediction: Gamma Ray (GR, API), Resistivity (RD, ohm-ft), Density (ρ, g/cm³), and Photoelectric Factor (PEFZ). Cross-plots of each log against porosity give the Pearson correlation coefficient,

corr_{x,y} = cov(x, y) / (σ_x σ_y)

Input attribute | Correlation with porosity
Gamma Ray (GR) | 0.5202
Density (ρ) | −0.2926
Resistivity (RD) | −0.2649
Photoelectric Factor (PEFZ) | −0.2424

Slide 12: Combination lists (validation error e = Σ (v − v̂)² / N)

Leaving out well Garnet: error = 0.00125. Leaving out well Emerald: error = 0.00122.

Slide 13: Combination lists (continued)

Leaving out well Jade: error = 0.00256. Leaving out well Topaz: error = 0.000977.

Slide 14: Summary of leave-one-out validation errors (figure): Emerald 0.00122, Garnet 0.00125, Jade 0.00256, Topaz 0.000977.

Slide 15: Cross-plots of measured vs. predicted porosity

Pearson correlation coefficient, corr_{x,y} = cov(x, y) / (σ_x σ_y), between measured and predicted porosity for each left-out well: Garnet 0.7004, Jade 0.6998, Emerald −0.0881, Topaz 0.7020.

Slide 16: Future work (transition slide; no recoverable text).

Slide 17: Conclusions

- PNN provides faster learning than MLFN.
- PNN is insensitive to outliers and can therefore be more accurate than MLFN.
- GRNN is a powerful technique for predicting missing data using known properties as input.
- The novel exhaustive GRNN technique allows us to determine the best group of attributes for a regression task.
- Computation time increases as a larger range of σ is searched.

Future work:
- Apply the technique to seismic attributes to obtain a predicted rock-property volume.
- Extend the exhaustive GRNN technique to classification tasks (exhaustive PNN).
- Implement simulated annealing and the gradient descent method to obtain a different σ for each attribute and class.

Slide 18: References

Davogustto, O., 2013, Quantitative geophysical investigations at the Diamond M Field, Scurry County, Texas: Ph.D. dissertation, The University of Oklahoma.
Hampson, D. P., J. S. Schuelke, and J. A. Quirein, 2001, Use of multiattribute transforms to predict log properties from seismic data: Geophysics, 66, 220-236.
Hughes, D., and N. Correll, 2016, Distributed machine learning in materials that couple sensing, actuation, computation and communication: arXiv:1606.03508v1.
Masters, T., 1995, Advanced algorithms for neural networks.
Specht, D., 1990, Probabilistic neural networks: Neural Networks, 3, 109-118.
Walker, D. A., J. Golonka, A. M. Reid, and S. T. Reid, 1991, The effects of late Paleozoic paleolatitude and paleogeography on carbonate sedimentation in the Midland Basin, Texas, in Permian Basin plays, Tomorrow's technology today: Society of Economic Paleontologists and Mineralogists, Permian Basin Chapter, 141-162.