We often only look at reflection data in a stack. But along the way, a lot of “noise” has been processed out of the seismic data to chisel out the “signal”. It’s all about signal against noise. However, one person’s noise may just be another person’s signal. Let me elaborate.
If you were around during the dotcom bubble, you saw the rise of eBay, back then an auction platform where you could buy another person’s trash and hope for a treasure. The saying “one person’s trash is another person’s treasure” was popularized again during this era.
Seismic technology has become more and more sophisticated. We get a pretty high-fidelity dataset right off the boat, with many amazing algorithms at our fingertips. But we may actually be losing some valuable information.
This part may not be as cutting edge as the opener makes it out to be. Yet, it’s a good account of how the perception of noise in the seismic sector has changed.
Diffractions are seismic events caused by subsurface features at or below the resolution limit. Instead of reflecting the wave like a mirror, such a feature acts as a point scatterer, like a crack in a mirror that casts light all over the place.
Back in the day this was a serious noise issue. Nowadays, migration algorithms harness diffractions to sharpen the image: in a migration, we backpropagate each diffraction pattern to the point scatterer it originated from. We have changed from editing diffraction hyperbolae out of our pre-stack data to using them to enhance the information in the data.
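As a toy illustration of how migration collapses a diffraction, here is a minimal sketch (all geometry, velocity, and sampling values are made up for illustration): a single point scatterer produces a hyperbola in a zero-offset section, and a naive Kirchhoff-style summation along candidate hyperbolae focuses that energy back at the scatterer.

```python
import numpy as np

# Toy zero-offset section with a single point diffractor.
# All values below are illustrative, not from any real survey.
v = 2000.0          # constant velocity, m/s
dt = 0.004          # sample interval, s
nx, nt = 101, 251   # number of traces and time samples
dx = 10.0           # trace spacing, m
x = np.arange(nx) * dx
x0, z0 = 500.0, 300.0   # diffractor position (m)

# Zero-offset diffraction traveltime: t(x) = (2/v) * sqrt(z0^2 + (x - x0)^2)
t_diff = 2.0 / v * np.sqrt(z0**2 + (x - x0)**2)

# Build the section: a spike on each trace at the diffraction time.
section = np.zeros((nx, nt))
section[np.arange(nx), np.round(t_diff / dt).astype(int)] = 1.0

# Naive Kirchhoff-style migration: for each image point, sum the data
# along the diffraction hyperbola that point would have produced.
image = np.zeros((nx, nt))
for ix_img in range(nx):
    for it_img in range(nt):
        z = v * it_img * dt / 2.0                       # image depth
        t_sum = 2.0 / v * np.sqrt(z**2 + (x - x[ix_img])**2)
        idx = np.round(t_sum / dt).astype(int)
        valid = idx < nt
        image[ix_img, it_img] = section[np.arange(nx)[valid], idx[valid]].sum()

# The summed energy focuses back at the diffractor location.
ix_max, it_max = np.unravel_index(np.argmax(image), image.shape)
print(x[ix_max], v * it_max * dt / 2.0)  # ≈ (500.0, 300.0)
```

Real Kirchhoff migration adds amplitude weighting, anti-aliasing, and aperture control, but the core idea is exactly this summation along diffraction traveltime curves.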
In Hamburg we were working on diffraction–reflection separation using the attributes created by the Common Reflection Surface (CRS) method. Sergius Dell from CGG was at the forefront of this development, working with partial migrations.
Now we will get a little more obscure, but as we do, we also venture into more interesting fields. Massimo from Schlumberger is one of the big proponents in this area. If we could separate our multiple wavefield from our primary wavefield, we would effectively double the data we obtain.
Multiples are a nuisance. A multiple reflection may cut right through our area of interest, which makes multiple attenuation incredibly important and expensive. Additionally, there are the source and receiver ghosts: multiples that arrive extremely close behind the primary wavefield. They interfere with the primary wavefield, creating the “ghost notch”, a notch in the carefully crafted frequency spectrum of the data, often right in the most interesting frequencies.
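For vertical incidence, the ghost notch frequencies follow directly from the extra two-way path to the sea surface and the polarity flip there: f_n = n·c/(2d) for a sensor at depth d. A quick sketch with illustrative tow depths:

```python
# Receiver-ghost notch frequencies, assuming vertical incidence and a
# water velocity of roughly 1500 m/s. The ghost travels an extra 2*d
# and flips polarity at the sea surface, putting spectral notches at
# f_n = n * c / (2 * d), n = 0, 1, 2, ...
c = 1500.0  # water velocity, m/s

for depth in (6.0, 15.0):  # illustrative shallow vs. deep streamer tow
    notches = [n * c / (2.0 * depth) for n in range(3)]
    print(depth, notches)
# 6 m tow  -> notches at 0, 125, 250 Hz
# 15 m tow -> notches at 0,  50, 100 Hz
```

This is why tow depth is a trade-off: a deep tow pulls the first non-zero notch down into the usable bandwidth, while a shallow tow pushes it up but suffers more weather noise.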
On the hardware side we can now use developments like GeoStreamer and IsoMetrix, which are marine multi-component acquisition systems. They record particle velocity in addition to the pressure data we are used to. This enables us to separate the downgoing receiver ghost, reflected off the water surface, from the directly arriving upgoing wavefield. Multiple-modelling tools such as surface-related multiple elimination (SRME), combined with adaptive subtraction, can help us estimate multiples from strong reflectors. This is data we can use to constrain inversion workflows. Compared to the primary wavefield, a multiple has spent proportionally more time between a given surface and the reflector. Since velocity is a rock property, a wave spending twice the time in a certain layer is delayed by exactly the extra time it spends in that layer.
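A minimal sketch of the PZ-summation idea behind such multi-component systems. Sign and calibration conventions differ between systems, so here I simply assume the pressure and scaled vertical-velocity records are calibrated such that P = U + D and Z = U − D, where U is the upgoing (primary) and D the downgoing (ghost) wavefield:

```python
import numpy as np

# Toy PZ summation: separate up- and downgoing wavefields from a
# hydrophone (pressure P) and a scaled vertical geophone (Z).
# Assumed convention (varies in practice): P = U + D, Z = U - D.
n = 200
U = np.zeros(n); U[50] = 1.0    # primary: upgoing spike at sample 50
D = np.zeros(n); D[65] = -1.0   # ghost: later, polarity-flipped, downgoing

P = U + D                       # hydrophone record (contains the ghost)
Z = U - D                       # scaled geophone record

U_est = 0.5 * (P + Z)           # PZ summation: the ghost cancels
D_est = 0.5 * (P - Z)           # and can be kept as a second dataset

print(np.argmax(np.abs(U_est)), np.argmax(np.abs(D_est)))  # 50 65
```

The point of the last line is the thesis of this section: the separated downgoing field is not thrown away; it is a second, usable recording of the subsurface.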
Converted waves are another field of active development, and the reason we know the Earth’s inner core is solid. Yet for decades, and often still, converted waves have been extremely hard to distinguish in a dataset. As shear waves do not “see” fluids, they can give us important information about the rock matrix in a reservoir. Shear-wave and converted-wave processing is something I have not had a lot of contact with, yet it is something to look out for when we edit noise.
Shear waves are slower than P-waves, so at the same frequency they have a shorter wavelength and can achieve better resolution of the subsurface, adding value to the processing workflow. Shear-wave systems have added immensely to reservoir characterization. To this day, a PS-constrained AVO inversion has always outperformed a simple P-data AVO in my experience. Harnessing converted waves bears huge potential in seismic.
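The resolution argument is just λ = v/f. A quick sketch with illustrative round-number velocities (not measured values):

```python
# Resolution scales with wavelength: lambda = v / f. At the same
# frequency, the slower shear wave has the shorter wavelength.
# Velocities are illustrative round numbers.
f = 30.0                   # dominant frequency, Hz
vp, vs = 3000.0, 1500.0    # example P- and S-wave velocities, m/s

lam_p, lam_s = vp / f, vs / f
print(lam_p, lam_s)            # 100.0 m vs 50.0 m
# A common rule of thumb puts tuning thickness near lambda / 4:
print(lam_p / 4, lam_s / 4)    # 25.0 m vs 12.5 m
```

In this made-up case the S-wave can resolve beds roughly half as thick as the P-wave can, which is the sense in which "slower means better resolution".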
So far we have stayed in the familiar fields of active-source seismic, usually with a twist. Microseismic is a little different: it uses the seismic acquisition equipment without an active source. One test case is the Valhall field in the North Sea, which uses a permanent life-of-field seismic (LoFS) installation. With long-term OBS or optic-fibre installations, microseismic has become a feasible field of applied research.
Microseismic is, quite literally, listening to the signal in the noise. Explaining this without getting too esoterically hand-wavy puts me in an interesting spot. Microseismic is listening to the noise of the Earth. This usually only works in seismically active zones such as faults or volcanoes, and in producing reservoirs. The cracking and moving of the rock creates tiny seismic sources that get recorded in the noise. With methods such as seismic interferometry, we can cross-correlate the time series recorded at the receivers to create an image of the subsurface.
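A toy sketch of the interferometry idea: the same (unknown) noise source is recorded at two receivers, and cross-correlating the two records recovers the differential traveltime between them without ever knowing when the source fired. All delays here are made-up sample counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# One continuous random noise source, recorded at two receivers with
# different (unknown to the processor) traveltimes d1 and d2.
n = 4000
noise = rng.standard_normal(n)
d1, d2 = 100, 160                   # traveltimes in samples (illustrative)
rec1 = np.roll(noise, d1)
rec2 = np.roll(noise, d2)

# Cross-correlation of the two records: its peak lag is the
# differential traveltime d2 - d1 between the receivers.
xcorr = np.correlate(rec2, rec1, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print(lag)                          # 60 = d2 - d1
```

Stack enough of these correlations over long recordings and many noise sources, and the result approximates the response you would have measured with an active source at one of the receivers; that is what turns passive noise into an image.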
When I was at university, this field gained traction. Geophysical problems tend to be ill-posed, and microseismic leads the list of ill-posed problems. Some companies have sent their data to all (six) microseismic contractors and gotten six completely different answers. The field has had several years to mature since, and Microseismic Ltd and NorSar have been doing important work in this area.
Over the last 70 years, noise has undergone an interesting change in the seismic industry. Multiple and diffraction imaging seem to be on the verge of becoming viable tools in the seismic processing chain. (Depending on what you consider diffraction imaging; pretty sure everyone is migrating their data these days.)
Do you have a favorite change from noise to signal? Do you think this is purely academic, or can it be used in the industry?