Presentation transcript: "Time series limitations - Frequency domain methods"
I was there when it all happened, a long time ago. Of course I was on the opposite side commercially. Click on an icon to enter a topic:

- Time series limitations - Frequency domain methods are currently accepted as seismic gospel. I point out some conceptual problems when it comes to inversion.
- The ADAPS approach - There are vast differences in nonlinear inversion approaches. ADAPS uses general optimization logic, analyzing large chunks of raw data in an iterative manner.
- Linear vs. non-linear inversion - Why linear systems stop short of true spiking, how information is distributed in the frequency domain, and how non-linear processes can redistribute error.
- The glory of sonic log simulation - The need for intelligent detuning, the purpose of integration, and the fallacy of power spectra after inversion.
- Matching to well logs verifies the inversion process - Sonic log images consist of integrated reflection coefficients, and that is all we are interested in.
- The birth of the frequency domain - Intelligent inversion opens exciting new interpretation possibilities.
- The ADAPS offer - If you can get me the pre-stack gathers, I will give you a PowerPoint report. Untangling seismic tuning is the challenge we should focus on; just collapsing the wavelet is not enough!
Back before the digital era, seismic filtering was analog and clumsy. Then an industry consortium came up with the time series mathematical concept that modeled time data by describing it in terms of frequencies, calling the required code a "transform". Once there, they found it possible to return to the original form with manageable error. This allowed filter design to be done in the frequency domain, where it was much easier. The industry fell in love with this beautiful concept, and most researchers today seem to assume a seamless mathematical connection between the two extremes.

When all this was happening, I had hired on with Western Geo. as manager of digital operations, with the responsibility of helping evaluate a time series package bought from the MIT team that was the heart of the mentioned consortium. For the life of me I could not see it doing any good. When the Western R&D boss proudly showed me a section where the deep half had been drastically changed, I was able to point out that the lower part was an exact duplicate of the top. Later, when I examined filters the package had designed, I found all the action concentrated at the very front, which did not fit my idea of a system that could effectively remove side lobes. So I wrote the first predictive deconvolution in the time domain. It worked well enough to put Western first in digital processing, and they used it for years. I spent the rest of my Western time defending it against the time series researchers. It was a non-linear pioneer, iterating to improve the down-wave guess, and the ancestor of my current ADAPS system.

After two successful years I left Western as an employee, and they hired one of the MIT experts to pick my brain (in terms of my time domain iterative program). I had been giving talks to geophysicists around the country with no communication problem, yet the two of us did not seem to connect.
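For readers who have not met it, the idea behind gapped predictive deconvolution in the time domain can be sketched as follows. This is a minimal modern illustration in NumPy, not the original Western code; the function name and the filter length, gap, and prewhitening values are my own arbitrary choices.

```python
import numpy as np

def predictive_decon(trace, n_filter=20, gap=8, white=0.001):
    """Gapped predictive deconvolution, time domain.

    Designs a filter that predicts the trace `gap` samples ahead from
    its recent past, then subtracts the predictable (reverberatory)
    part, leaving the unpredictable reflection series."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    # autocorrelation at non-negative lags
    r = np.correlate(x, x, mode="full")[n - 1:]
    # Toeplitz normal equations for the prediction filter
    R = np.array([[r[abs(i - j)] for j in range(n_filter)]
                  for i in range(n_filter)])
    R += white * r[0] * np.eye(n_filter)   # prewhitening for stability
    g = r[gap:gap + n_filter]              # correlation at the prediction gap
    f = np.linalg.solve(R, g)
    # predicted (reverberatory) part: pred[t] = sum_k f[k] * x[t - gap - k]
    pred = np.zeros(n)
    for k, fk in enumerate(f):
        pred[gap + k:] += fk * x[:n - gap - k]
    return x - pred
```

On a synthetic water-bottom reverberation (a spike repeating every `gap` samples with decaying amplitude), the filter removes the repeats and leaves essentially the primary spike.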
It was not until the mathematical guru was able to put a few formulae on the board that progress was made. It was then I realized how different our thinking processes were. It was not just that I did not fully understand him, but that he could not think in my logical terms either. This is our real communication problem.
The challenge for most readers will be to back off the "top down" mathematical certainty inherent in other approaches. A few years back I made a presentation to a group of BP researchers. They seemed impressed with the results but wanted mathematical proof of the procedures I had used. That stopped the show, and it illustrates the chasm between the mathematical and the logical lines of thought. Obviously, at least to me, iterative optimization cannot be proven by closed formulae. Taking this thought further, if current time series systems really produced the promised results, there would be no need for creative, non-linear thinking. In other words, I start with the visible truth in the form of remarkable well matches. So, the question becomes one of finding where the time series thinking falls short.

Transforms are mathematical modeling devices: Weather forecasters are proud of their models, but we all know how flaky they can be. At the same time, we understand there are variables beyond the predictive capabilities of their modeling that can dominate the statistics. The same thing is true moving back and forth between time and frequency. Transforms assume only one wave shape.

For years the industry has swept the noise problem under the rug. About the only type they paid attention to was multiples. When I started serious pre-stack development at Ikon, I had to get even more obnoxious in my demands for gathers. Of the dozens I worked with, almost all had serious noise that needed attention. When you consider how most systems blindly work with stacked data, you wonder how many results you can trust. I have often said I would die of a broken neck because I shake my head so much.

The picture to the left is from my PowerPoint. It shows a 200+ gather set. The conflagration at our immediate left is caused by the emergence of what appears to be a standing wave coming from the chalk formation.
It is probably happening all over the North Sea, yet the entire industry seems to ignore it. The energy permeates the entire vertical section. I named the show AVOanyone for a good reason. When we came on the Nexen scene, all we heard had to do with "angle stacks". The troubling thing is, that is where the conversation still is. But I digress from the point. There may be a higher set of mathematical logic that can tie all of this together, but we are not there yet. For now, being able to drive individual components using non-linear means makes sense to me. To continue, click on the bomb.
Before I describe how ADAPS works, I show some proof: a comparison of an input stack with the ADAPS simulated sonic log output. For starters, open your mind to this display protocol. It keeps the well match in perspective and shows the true amplitude relationships where they are meaningful. This is a before and after that illustrates the absolute need for intelligent detuning. Once we know the probable stratigraphy, it is easy to go back and see hints in the stacked input.

ADAPS has completely re-arranged the input energy, making sense out of the tangle. Now we can see an obvious onlap, and we probably also know where the well should have been drilled. There probably is faulting involved, and we have not quite clarified that on this one simulated sonic log section (a 3-D study gave us a pretty good idea, however). Of course the well match tells us that ADAPS knew what it was doing when it removed all those extra lobes. This is seismic de-tuning at its max. The emphasis of ADAPS is on exploration, including new reservoir detection and reservoir extension.

Please spend some time on the well matches, before and after. Note that leftward nodes should equate to red, and rightward ones to blue. On the "before" match you will see this is almost never the case, whereas below we have a fine level of agreement. If you do not understand the importance of this, we have failed to communicate. The well images used are integrated reflection coefficients, since relative bed velocity is the only attribute of interest to us in this all-important match.

So: ADAPS is an optimizing tool. It works in the time domain, and its "advanced pattern recognition" is the primary driving mechanism. It uses no well input. It makes process error manageable by shifting it into the statistical world, enabling true "spiking" and subsequent sonic log simulation. Resolution takes on a new face. Optimization is a recognized means of solving complex problems.
After pre-stack cleanup, ADAPS collects a defined matrix of input traces and runs a preliminary lobe analysis to get a first wavelet guess. It then uses this guess to establish a set of reflection coefficient guesses, or spikes. Each time a spike is established, the system lifts off the "energy" used, the goal being to explain the entire trace. At the end of the trace loop, these spikes are used to improve the wavelet guess. A complex set of overlapping iterations continues until the job is done on this trace. The system then continues with the next trace until the defined set is finished.

ADAPS does not trust industry techniques of establishing synthetic traces. Further, because it is tied to what it sees statistically, it has no way of using well log input. Of course, results prove the wisdom here. Frequency domain approaches have to limit their spiking goals. As I try to show next, they become unstable if pushed too far. The same error faces ADAPS, but it can make the best guess statistically possible without going wild. This enables it to integrate those guesses, producing a simulated log. Error is now spread in a manageable way. ADAPS seismic resolution depends on pattern recognition accuracy, not on artificial frequency assumptions. We are after a spread of thicknesses, of course, so evaluating on the basis of power spectra is somewhat specious. Again, the beauty of our well matches is our justification.
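The spike/wavelet loop described above can be sketched in miniature. This is a toy illustration of the general idea, not the ADAPS code; all function names, the greedy matching strategy, and the stopping threshold are my own assumptions.

```python
import numpy as np

def iterative_spike_decon(trace, wavelet, n_spikes=50, min_amp=0.05):
    """Greedy time-domain spiking: repeatedly find the lag where the
    current wavelet guess best explains the residual trace, record a
    spike there, and subtract ("lift off") the explained energy."""
    residual = trace.copy()
    spikes = np.zeros_like(trace)
    nw = len(wavelet)
    norm = np.dot(wavelet, wavelet)
    for _ in range(n_spikes):
        # correlate residual with the wavelet to find the best-matching lag
        corr = np.correlate(residual, wavelet, mode="valid")
        lag = int(np.argmax(np.abs(corr)))
        amp = corr[lag] / norm            # least-squares amplitude at that lag
        if abs(amp) < min_amp:
            break                          # remaining energy is noise-level
        spikes[lag] += amp
        residual[lag:lag + nw] -= amp * wavelet
    return spikes, residual

def refine_wavelet(trace, spikes, nw):
    """Improve the wavelet guess by least squares, given the current
    spike train (one outer iteration of the wavelet/spike loop)."""
    cols = [np.convolve(spikes, np.eye(nw)[k])[:len(trace)]
            for k in range(nw)]
    A = np.stack(cols, axis=1)             # convolution matrix of the spikes
    w, *_ = np.linalg.lstsq(A, trace, rcond=None)
    return w
```

On clean synthetic data the loop recovers both spike positions and amplitudes exactly; on real data the `min_amp` threshold is where the statistical judgment lives.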
"Shape" is the thing. The goal of any inversion is to replace the individual reflected events with approximations of the contributing reflection coefficients. To do this we must start with the shape of the down wave as it passed through this complex subsurface combination. ADAPS determines this shape by statistical averaging, all in the time domain. "What we can see is what we get" could be our motto. See the previous slide.

Most other current approaches model the shape by measuring frequency attributes. Their assumption is that wave shape can be accurately determined using the averaged content of the target trace group. The idea is that the final computed frequency spectrum can then be compared to a theoretical ideal, and filters designed to transform the time data to where it would have that ideal spectrum. The problem lies in how vital information is distributed. If our wavelet consisted of a single-frequency continuous train, describing it as a single spike in the frequency domain would be simple. As soon as we cut it off at some finite time, we have to introduce a ton of frequencies to achieve the damping. Taking that to the real world, when significant side lobes are involved the vital information gets spread out. The same is true as the wavelet gets more complex. This spreading of the vital information makes the process more susceptible to the influence of noise. Asking linear algorithms to come up with single-spike answers drives them to instability, so they back off. That is the best they can do. ADAPS predicts single spikes based on the best pattern recognition matches it can make under the noise that exists. Error becomes eminently manageable.

Collapsing wavelets does not finish detuning, as the following before and after shows. The point is that the extensive whitening applied to the data before we got it did collapse the wavelet. However, it was left to ADAPS to complete the detuning process by integrating the individual spikes.
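The truncation argument is easy to demonstrate numerically: a continuous 60 Hz train keeps its spectral energy in one place, while the same sinusoid cut off after 50 ms spreads a large fraction of its energy away from 60 Hz. The numbers below (sample rate, burst length, band width) are arbitrary choices of mine for the demonstration.

```python
import numpy as np

fs = 1000                                   # samples per second
n = 4000                                    # 4 s trace
f0 = 60.0                                   # single-frequency "wavelet"
t = np.arange(n) / fs

long_train = np.sin(2 * np.pi * f0 * t)     # continuous train
short_burst = np.zeros(n)                   # same sinusoid, cut off at 50 ms
short_burst[:50] = np.sin(2 * np.pi * f0 * t[:50])

def energy_outside_band(x, fs, f0, half_width=5.0):
    """Fraction of spectral energy lying more than half_width Hz from f0."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    in_band = np.abs(freqs - f0) < half_width
    return 1.0 - power[in_band].sum() / power.sum()
```

The continuous train keeps essentially all its energy within 5 Hz of 60 Hz; the truncated burst loses roughly half of it to the damping frequencies, which is exactly the information spreading that makes noise so dangerous.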
Of course, to do this the spikes had to reasonably equate to reflecting interfaces. When inversion is not perfect, low frequency corrections must be made. ADAPS keeps track of this error and builds it into the driving logic. Again I rest on the proof shown in examples like this, and ask the reader to let me know if someone else can do better. Technical closure becomes more and more important with time. I ask you to once again study the well matches. I do it all the time. They are not perfect here, but the simulated sonic log at the bottom obviously comes much closer to what we want. Getting there is not simple, and pre-stack optimization is always part of the struggle. One never knows when an improvement might make all the difference in an interpretation, but the justification is obvious.

The mythology of frequency limits and analysis: Time after time, people have suggested that ADAPS results did not show the high frequency improvement in their power spectra they had expected. My somewhat impatient reply has been that they obviously did not understand that outlining thick beds was a major goal, and that sonic log data is not sinusoidal. If we see sharp, thin beds on the logs, we expect our system to find them (and we have seen many great cases, of course). High frequency (that is, thin bed) limits are imposed by how well we correlate, using our own pattern recognition tools. Obviously we are limited by sampling rate (we interpolate to one ms) and general data quality, but we have seen many examples of thin bed accuracy. Our ideal is to see a broad match, of course, but we have nature's filtering to contend with.
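The integration step, with its low-frequency bookkeeping, can be sketched as follows. This is a minimal illustration under my own assumptions, not the ADAPS logic: a running mean stands in for whatever the real driving logic does about the unrecoverable low frequencies, and the function name and window length are mine.

```python
import numpy as np

def simulate_log(spikes, drift_window=201):
    """Integrate reflection-coefficient spikes into a relative
    log-impedance trace (a simulated sonic log image).

    Seismic data carry no usable low frequencies, so straight
    integration accumulates drift; a long running mean is subtracted
    here as a crude stand-in for low-frequency error correction."""
    # small-contrast approximation: the change in ln(impedance)
    # across an interface is about 2 * reflection coefficient
    rel_log = 2.0 * np.cumsum(spikes)
    kernel = np.ones(drift_window) / drift_window
    drift = np.convolve(rel_log, kernel, mode="same")
    return rel_log - drift
```

A single positive spike then shows up as a local step up in the simulated log, which is the behavior the well matches are testing.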
The glory of sonic log simulation: The first example, from http://adaps.com/a/nseafaults.pps, illustrates how bed thickness helped clarify a complex strike slip fault pattern. The stars mark a known shale that lies just above the target sands. In the show you can see the fault breaks develop. The fact that the obvious cross-fault correlation did not seem to continue to the right of the green fault helped identify it as the main lateral movement feature in the prospect. While the existing interpretation saw it, the ancillary faulting (which I consider trapping) was not identified before ADAPS.

The second, from http://adaps.com/a/examples.pps, shows an extremely probable reservoir at the upper left. The presentation illustrates how this lead was not visible on the input stack because of wave trap noise emanating from the bed. In any case, the ability to make sense of the actual bed velocity enables this capability. Of course, the need for intelligent detuning is the subject of the discussion.
I'm not crazy about industry synthetics. We can appreciate that rock physicists want to extract all sorts of attributes from seismic, but we are only interested in what actually creates reflections. While ADAPS does not want any help in the actual inversion process, we base its reputation on matching its output to well log information. I have been singularly unimpressed with the initial comparisons I have been given, all of which involved the generation of synthetic seismograms. In the first place, I have little faith in the ability of current software to model the down wave that they use in the process. In today's super-whitened data these wavelets seem way too leggy. So, I ask either for finished sonic log images or for velocity files. ADAPS has a module that computes reflection coefficients and then interpolates them, applying a low frequency averaging based on the observed wavelengths.
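The core of such a module is small. Below is a sketch under my own assumptions (function names are mine, density is deliberately ignored since only relative bed velocity matters for the match, and the real module additionally interpolates to the seismic sample rate and applies the low-frequency averaging):

```python
import numpy as np

def reflection_coefficients(velocity):
    """Interface reflection coefficients from a velocity log.
    Density is ignored: R = (v2 - v1) / (v2 + v1) at each interface."""
    v = np.asarray(velocity, dtype=float)
    return (v[1:] - v[:-1]) / (v[1:] + v[:-1])

def integrated_well_image(velocity):
    """Integrated reflection coefficients: the well-log image form that
    the simulated sonic log output is compared against."""
    return np.cumsum(reflection_coefficients(velocity))
```

For a simple two-bed step from 2000 m/s to 3000 m/s this yields a single 0.2 coefficient at the interface, and the integrated image is the step-shaped curve the well match is judged on.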
Does your data pass the smell test? Before you put too much faith in AVO or inversion results, you should know. This type of wave trap generated noise is actually quite common, but one has to look. Before you start comparing inside-middle-outside trace stacks, a pre-stack analysis is helpful.

Ikon Science has bought the proprietary rights to my software, so I have none to sell. However, I retain the right to use my improved version on a service basis, and I have a computer bank and an addiction to seismic exploration. The need to prove the merits of my creation is paramount, and I am willing to take all the risk looking at your data. If you can supply me with an adequate volume of gathers, I will process enough to prepare a report. If you are happy with the results, I will bill you $5000, which you can opt to pay or not.

Click here for the noise discussion. Or here to see examples. Or here for a report example. Or here for the site base. Or here to start over. Click on the image to contact me.