
1 Lithofacies Classification in the Barnett Shale Using Proximal Support Vector Machines
Tao Zhao*, Vikram Jayaram, Bo Zhang, and Kurt J. Marfurt, University of Oklahoma; Huailai Zhou, Chengdu University of Technology

2 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

3 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

4 Introduction What is the problem? Huge amounts of data, high dimensionality, nonlinear relations
In unconventional reservoirs, reliable interpretation requires solving problems that involve huge amounts of data, high dimensionality, and nonlinear relations. By huge amounts of data, think of several hundred wells, a large volume of engineering data, and tens of seismic attributes we want to correlate. These are also different types of data, which puts us in a high-dimensional space. We need to analyze all possible inputs, but data measured with different emphases require special experience and knowledge to correlate. They are not related by mathematics (a simple equation); they are related by geology, which is certainly a nonlinear relation. That is where nonlinear machine learning and pattern recognition methods come into play.

5 Introduction What is a proximal support vector machine (PSVM)?
Proposed by Fung and Mangasarian (2001, 2005) A recent variant of the support vector machine (SVM) (Cortes and Vapnik, 1995) A supervised machine learning technique that can recover the latent relation between existing properties and measurements Classification between male and female The proximal support vector machine (PSVM) is a recent variant of the SVM, a supervised machine learning technique for solving the kinds of problems just mentioned. It can recover the latent relation between existing properties and measurements. Let's look at a simple example of what PSVM can do. We want to classify male and female from two measurements: height and hair length. Say we give these measurements, along with the gender, of 10 people (which we call "true data") to the classifier, and the classifier learns the pattern between the two measurements and gender. Then if we give person 1's data to the classifier, it will probably tell us this person is male, and surely the second is female. But a machine is not always that smart... P1: height 6'2'', hair length 1 in. P2: height 5'7'', hair length 20 in.

6 Introduction What is a proximal support vector machine (PSVM)?
Proposed by Fung and Mangasarian (2001, 2005) A recent variant of the support vector machine (SVM) (Cortes and Vapnik, 1995) A supervised machine learning technique that can recover the latent relation between existing properties and measurements Classification between male and female Now look at this guy, who happens to be our IT specialist Sean. He is an inch or two shorter than me and has roughly 15 inches of hair (height 5'8'', hair length 15 in.). What will the classifier say about his gender? Probably an answer he doesn't like. This is when we need more dimensions. If we add a third dimension, say with or without a beard, then the classifier will certainly make Sean happy. Need more dimensions!

7 Introduction Why we use PSVM? Explicit geologic meaning for each class
Faster than traditional SVM Superior to ANNs We use it for the following reasons. 1. Being a supervised learning technique, it provides an explicit geologic meaning for each class: unsupervised classification (e.g., SOM, or generative topographic mapping, GTM (Roy et al., 2013; Roy et al., 2014)) can only be given meaning from a posteriori geologic knowledge, whereas supervised classification assigns geologic meanings to each class explicitly. 2. It runs faster than classic SVM: PSVM provides classification performance comparable to standard SVM at considerable computational savings (Fung and Mangasarian, 2001, 2005; Mangasarian and Wild, 2006). 3. Several researchers have found SVM superior to ANNs; Torres and Reveron (2013) tested binary PSVM classifiers on lithofacies classification between sand and shale from elastic properties with satisfactory results.

8 Introduction How we use PSVM?
We applied PSVM to delineate shale and limestone in the Barnett Shale from both seismic and well log data. On seismic data we performed waveform classification, based only on seismic amplitudes. On well log data we classified using three basic well logs. General stratigraphy of the Ordovician to Pennsylvanian section in the Fort Worth Basin (FWB) through a well in the study area (after Loucks and Ruppel, 2007).

9 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

10 Theory and Formulations
Why we use PSVM? Unsupervised learning. Cross-plot of ten fruit samples in a color versus sphericity feature space, with an attribute table listing each sample's color and sphericity. In this part, I want to show a quick example of how supervised and unsupervised learning handle the same task differently. We want to classify the vitamin C content of fruits using two measurements: color and sphericity. Look at unsupervised learning first. With an unsupervised technique, all we can do is plot the samples in a feature space and cluster them based on some distance. If no other data are available, this is the final result we can get, and it tells us nothing about vitamin C content. However, we can correlate such clusters with a posteriori knowledge, and luckily that is the case in most unconventional resource plays. In this example, the a posteriori knowledge is that we know the vitamin C content of four fruits. We plot these four fruits in the same space, and based on where they lie, we can probably tell the vitamin C content of our samples. But there may always be insufficient a posteriori knowledge to correlate with the clusters.

11 Theory and Formulations
Why we use PSVM? Supervised learning. The same ten fruit samples in the color versus sphericity feature space, now partitioned into low, medium, and high vitamin C regions. Example of fruit classification using a supervised learning technique (e.g., SVM): the classifier is built from training data (a priori knowledge), so our samples fall into classes with explicit meanings, even if we have fewer classes than natural classes.

12 Theory and Formulations
Fundamentals for PSVM Cartoon illustration for a 2D PSVM classifier
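The linear two-class PSVM behind these cartoons can be sketched compactly. Unlike the classic SVM, which solves a quadratic program, PSVM fits the separating plane with a single regularized linear solve (Fung and Mangasarian, 2001). A minimal NumPy sketch on invented toy data (the function names and the data are ours, not from the paper):

```python
import numpy as np

def psvm_train(A, d, nu=1.0):
    """Linear PSVM (Fung and Mangasarian, 2001): instead of the SVM
    quadratic program, training reduces to one regularized linear solve,
    (I/nu + E'E) [w; gamma] = E'd  with  E = [A, -e]."""
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])
    sol = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, E.T @ d.reshape(-1, 1))
    return sol[:n, 0], sol[n, 0]          # w, gamma

def psvm_predict(X, w, gamma):
    # classify by which side of the plane x.w = gamma a point falls on
    return np.sign(X @ w - gamma)

# toy two-class problem standing in for shale vs. limestone features
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
d = np.hstack([-np.ones(50), np.ones(50)])
w, gamma = psvm_train(A, d)
train_acc = float(np.mean(psvm_predict(A, w, gamma) == d))
```

For well-separated clouds like these, the fitted plane classifies essentially all training points correctly; nu trades off the fit against the regularization on (w, gamma).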

13 Theory and Formulations
Fundamentals for PSVM Cartoon illustration for a 3D PSVM classifier

14 Theory and Formulations
Mapping into a higher-dimensional space. Class A: x^2 + y^2 = 1; class B: x^2 + y^2 = 2. The mapping is (x, y) -> (x, y, x^2 + y^2). Cartoon illustration of a linearly inseparable problem.

20 Theory and Formulations
Mapping into a higher-dimensional space. Class A: x^2 + y^2 = 1; class B: x^2 + y^2 = 2. After the mapping (x, y) -> (x, y, x^2 + y^2), a decision boundary appears. This is an explicit mapping; in real applications, kernel functions implement the mapping implicitly. The two classes are now separable by a plane in 3D. Cartoon illustration of a linearly inseparable problem.

22 Theory and Formulations
Mapping into a higher-dimensional space. Class A: x^2 + y^2 = 1; class B: x^2 + y^2 = 2. Projecting onto (x, x^2 + y^2) also gives a decision boundary: the classes happen to be separable in this new 2D space, but that is not always the case. Cartoon illustration of a linearly inseparable problem.
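The ring example above can be checked numerically: points on x^2 + y^2 = 1 and x^2 + y^2 = 2 are not linearly separable in 2D, but after the explicit lift (x, y) -> (x, y, x^2 + y^2) the plane z = 1.5 splits them exactly. A small sketch with synthetic data of our own:

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 100)
# class A (label -1) on the ring x^2 + y^2 = 1; class B (+1) on x^2 + y^2 = 2
radius = np.where(np.arange(100) < 50, 1.0, np.sqrt(2.0))
labels = np.where(np.arange(100) < 50, -1, 1)
X = np.column_stack([radius * np.cos(t), radius * np.sin(t)])

# explicit lift into 3D: (x, y) -> (x, y, x^2 + y^2)
Z = np.column_stack([X, (X ** 2).sum(axis=1)])

# in the lifted space the horizontal plane z = 1.5 separates the rings exactly
pred = np.where(Z[:, 2] > 1.5, 1, -1)
accuracy = float(np.mean(pred == labels))  # 1.0 by construction
```

The third coordinate is exactly the squared radius, so the two rings land at z = 1 and z = 2 and any plane between them separates the classes.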

23 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

24 Testing and Classification
Seismic waveform classification: binary classification between shale and limestone in a Barnett Shale play. Eight samples per trace are extracted to represent a waveform, making an eight-dimensional input space for the PSVM classifier. Diagram: two traces (t.1 labeled shale, t.2 labeled limestone) with samples 1 through 8 feeding the classifier.
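The eight-sample windowing step can be sketched as follows; the array layout, the flat horizon pick, and the random volume are our own stand-ins for the real survey:

```python
import numpy as np

def extract_waveforms(volume, horizon, n_samples=8):
    """Cut an n_samples window from each trace starting at its horizon pick,
    yielding one n_samples-dimensional feature vector per trace.
    volume: (n_traces, n_time) amplitudes; horizon: pick index per trace."""
    feats = np.empty((volume.shape[0], n_samples))
    for i, start in enumerate(horizon):
        feats[i] = volume[i, start:start + n_samples]
    return feats

# synthetic stand-in: 10 traces of 50 time samples, flat horizon at sample 20
vol = np.random.default_rng(2).normal(size=(10, 50))
picks = np.full(10, 20)
F = extract_waveforms(vol, picks)   # F rows are the 8-D classifier inputs
```

Each row of F is one trace's waveform snippet, so the classifier sees the data as points in an eight-dimensional space, exactly as the slide describes.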

25 Testing and Classification
Seismic waveform classification: sample traces are selected by interpreters across the survey. This time slice (at 1376 ms) is at about the average depth of the Forestburg formation. The average time thickness of the Forestburg Limestone is about 15 ms in this survey, and the 14-ms analysis window is on par with this thickness. 161 sample traces (true data) are picked by the interpreter and labeled as "limestone" or "shale".

26 Testing and Classification
Seismic waveform classification: testing the robustness.

Traces used in training   Training traces   Testing traces   Correctness (%)
10%                       16                145              83.45
20%                       32                129              87.60
30%                       48                113              84.10
40%                       64                97               80.41
50%                       80                81               90.12
60%-90%                   (correctness values of 93.75 and 90.63 appear in this range)
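The robustness test amounts to sweeping the training fraction over the 161 picked traces and scoring the held-out remainder; note that int(0.1 * 161) = 16 and int(0.5 * 161) = 80 reproduce the counts in the table. A self-contained sketch with synthetic 8-D "waveforms" (the data, class separation, and nu value are invented for illustration):

```python
import numpy as np

def fit_psvm(A, d, nu=1.0):
    # linear PSVM (Fung and Mangasarian, 2001): training is one linear solve
    E = np.hstack([A, -np.ones((len(A), 1))])
    z = np.linalg.solve(np.eye(E.shape[1]) / nu + E.T @ E, E.T @ d.reshape(-1, 1))
    return z[:-1, 0], z[-1, 0]          # w, gamma

rng = np.random.default_rng(3)
# 161 synthetic 8-sample waveforms: two classes with shifted means
X = np.vstack([rng.normal(-1.0, 1.0, (80, 8)), rng.normal(1.0, 1.0, (81, 8))])
y = np.hstack([-np.ones(80), np.ones(81)])
order = rng.permutation(161)
X, y = X[order], y[order]

accs = {}
for frac in (0.1, 0.2, 0.3, 0.4, 0.5):
    n_tr = int(frac * 161)              # e.g. 16 training traces at 10%
    w, gamma = fit_psvm(X[:n_tr], y[:n_tr])
    accs[frac] = float(np.mean(np.sign(X[n_tr:] @ w - gamma) == y[n_tr:]))
```

With well-separated synthetic classes, hold-out correctness stays high even for the smallest training set, mirroring the table's message that a small fraction of picked traces suffices.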

27 Testing and Classification
Seismic waveform classification: classification result. Vertical section (inline/crossline, 1370-1384 ms) showing the PSVM shale and limestone classes across the Marble Falls Limestone, Upper Barnett Shale, Forestburg Limestone, and Lower Barnett Shale; 0.5-mile scale, north arrow shown.

28 Testing and Classification
Well log classification: well base map (inline 25-200, crossline grid, 0.5-mile scale) showing training wells and a testing well (wells A through D). Automatic lithofacies classification (top picking) is promising in highly developed assets where hundreds of wells are available. Here we only show an example using three wells for training. The accuracy will improve when more wells are used for training.

29 Testing and Classification
Well log classification: correlating with lithologic interpretation. Left: lithology from well log interpretation; right: lithology from PSVM (blue: limestone; green: shale), over the Marble Falls Limestone, Upper Barnett Limestone, Upper Barnett Shale, Forestburg Limestone, and Lower Barnett Shale, 7800-8600 ft depth. Inputs: P-wave (ft/s), gamma ray (API), and density (g/cc) logs. Training correctness: 89%; testing correctness: 88%.

30 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

31 Discussions Seismic waveform classification
The boundary between the two PSVM classes matches the interpreted formation boundary nicely. A reliable classification rate can be achieved by training with as little as 0.2% of the data. It can provide a reliable reference when human interpretation is tedious. A zoomed-in view of the previous PSVM classification map (Upper Barnett Shale, Forestburg Limestone, Lower Barnett Shale; 0.3-mile scale).

32 Discussions Well log classification
The blind well testing correctness (88%) is close to the training correctness (89%), which indicates the PSVM classifier is capable of generalizing to a distant well. Three fundamental well logs are used as inputs instead of more advanced elastic properties, yet they still yield a reliable classification. It can provide a fast and reliable reference when human interpretation is tedious. A segment from the previous PSVM well log classification result.

33 Discussions One step further?
SVMs were originally built to solve binary classification problems. Multiclass PSVM has been proposed by researchers, and we improved its classification robustness. We then applied multiclass PSVM to brittleness index estimation in the Barnett Shale, with promising results.

34 Discussions Brittleness index estimation
Log tracks: BI_N, BI_C, σ, versus depth (ft). 30% of the normalized BI samples are randomly selected for training and 70% for testing. We ran cross-validation 100 times; the correlation between normalized BI and predicted BI is about 90%, and still 88.5% if we only look at the predicted 70% of the data. Brittleness index (BI) estimation using PSVM on well logs from four rock properties.

35 Discussions Brittleness index estimation
Normalized brittleness index: the continuous brittleness index log is mapped to integer classes BI_N = 1 through BI_N = 10, plotted against depth (ft).
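One plausible way to produce the ten BI_N classes is equal-width binning of a continuous BI log; both the binning scheme and the synthetic log below are our assumptions, not the authors' stated procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
bi = rng.uniform(20.0, 80.0, 200)   # hypothetical continuous BI log (not real data)

# assumed scheme: equal-width bins mapping BI onto integer classes BI_N = 1..10
edges = np.linspace(bi.min(), bi.max(), 11)
bi_n = np.clip(np.digitize(bi, edges), 1, 10)
```

Discretizing the target this way turns a regression-like BI prediction into the multiclass classification problem the slides describe.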

36 Discussions Brittleness index estimation
Estimated brittleness index (BI_C) using PSVM on seismic prestack inversion: section from t0 = 1.2-1.4 s, CDPs 30-180, spanning the Marble Falls, Upper Barnett, Forestburg, Lower Barnett, and Viola formations; 0.2-mile scale.

37 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

38 Conclusions PSVM lithofacies classification showed promising results on both seismic and well log data. Multiclass PSVM classifiers are also available and ready for more complicated applications. Brittleness index estimation demonstrates the capability of PSVM in 3D multi-attribute classification using a vector of seismic attributes. We also anticipate comparisons between PSVM and other supervised (e.g., artificial neural networks, ANN) and unsupervised (e.g., SOM, generative topographic mapping, GTM) classification algorithms.

39 Outline Introduction Theory and Formulations
Testing and Classification Discussions Conclusions Acknowledgements

40 Acknowledgement Thanks to Devon Energy for providing the data, all sponsors of Attribute Assisted Seismic Processing and Interpretation (AASPI) consortium group for their generous sponsorship, and colleagues for their valuable suggestions.

41 THANKS Questions and suggestions?

42 References
Cortes, C., and V. Vapnik, 1995, Support-vector networks: Machine Learning, 20.
Fung, G., and O. L. Mangasarian, 2001, Proximal support vector machine classifiers: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM.
Fung, G. M., and O. L. Mangasarian, 2005, Multicategory proximal support vector machine classifiers: Machine Learning, 59.
Loucks, R. G., and S. C. Ruppel, 2007, Mississippian Barnett Shale: Lithofacies and depositional setting of a deep-water shale-gas succession in the Fort Worth Basin, Texas: AAPG Bulletin, 91.
Mangasarian, O. L., and E. W. Wild, 2006, Multisurface proximal support vector machine classification via generalized eigenvalues: IEEE Transactions on Pattern Analysis and Machine Intelligence, 28.
Platt, J. C., N. Cristianini, and J. Shawe-Taylor, 1999, Large margin DAGs for multiclass classification: Advances in Neural Information Processing Systems, 12.
Roy, A., B. J. Dowdell, and K. J. Marfurt, 2013, Characterizing a Mississippian tripolitic chert reservoir using 3D unsupervised and supervised multiattribute seismic facies analysis: An example from Osage County, Oklahoma: Interpretation, 1, SB109-SB124.
Roy, A., A. S. Romero-Peláez, T. J. Kwaitkowski, and K. J. Marfurt, 2014, Generative topographic mapping for seismic facies estimation of a carbonate wash, Veracruz Basin, southern Mexico: Interpretation, 2, SA31-SA47.
Torres, A., and J. Reveron, 2013, Lithofacies discrimination using support vector machines, rock physics and simultaneous seismic inversion in clastic reservoirs in the Orinoco Oil Belt, Venezuela: SEG Technical Program Expanded Abstracts 2013.

43 Appendix Multiclass classification?
How we assign a class to an unknown sample:
1. Set class "A" as the pilot class and mark all classes active.
2. Examine the binary PSVM classification factor (CF) of the current pilot class against every other active class.
3. If all CFs are positive, assign the current pilot class to this sample and exit.
4. Otherwise, find the class corresponding to the most negative CF, assign that class as the new pilot class, mark the old pilot class inactive, and return to step 2.

Example of a classification factor table (CF of row class versus column class):
      A      B      C      D
A     -      0.3   -1.2    2.3
B    -0.3    -      0.8   -1.1
C     1.2   -0.8    -     -1.9
D    -2.3    1.1    1.9    -
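The pilot-class elimination above can be coded directly. We read the slide's example table as an antisymmetric matrix of binary classification factors, CF(j, i) = -CF(i, j), which is consistent with the values shown; the dictionary layout and function name are our own:

```python
def assign_class(cf, classes):
    """Multiclass PSVM assignment by pilot-class elimination:
    cf[(i, j)] is the binary classification factor of class i vs class j."""
    active = set(classes)
    pilot = classes[0]                      # step 1: class "A" starts as pilot
    while True:
        others = [c for c in active if c != pilot]
        vals = {c: cf[(pilot, c)] for c in others}
        if all(v > 0 for v in vals.values()):
            return pilot                    # all CFs positive: assign pilot
        # most negative CF becomes the new pilot; old pilot goes inactive
        new_pilot = min(vals, key=vals.get)
        active.discard(pilot)
        pilot = new_pilot

# the classification factor table from the slide (upper triangle)
cf = {('A', 'B'): 0.3, ('A', 'C'): -1.2, ('A', 'D'): 2.3,
      ('B', 'C'): 0.8, ('B', 'D'): -1.1, ('C', 'D'): -1.9}
for (i, j), v in list(cf.items()):          # antisymmetric half: CF(j,i) = -CF(i,j)
    cf[(j, i)] = -v

winner = assign_class(cf, ['A', 'B', 'C', 'D'])   # pilot chain A -> C -> D
```

Tracing the table: A loses to C (CF = -1.2), C loses to D (CF = -1.9), and D beats the remaining active class B (CF = 1.1), so the sample is assigned class D.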

44 Appendix Multiclass classification?
Testing results for multiclass classification:

Pendigits (7494 samples, 3498 testing, 16 dimensions, 10 classes, nu = 2000, delta = 0.0001):
Sample reduced to (%)   Training correctness   Testing correctness
10                      97.72%                 97.11%
20                      99.25%                 97.20%
30                      99.56%                 98.20%
40                      99.64%                 97.71%
50                      99.73%                 97.94%

letter_scale (15000 samples, 5000 testing, 26 classes, nu = 20000, delta = 0.1):
Training correctness   Testing correctness
82.69%                 82.06%
89.70%                 89.42%
93.23%                 91.86%
94.83%                 93.44%

