I have worked with the seismic processing algorithm Common Reflection Surface (CRS) for the past few years. During this time I have come to the following conclusion: CRS has a marketing problem. I have worked with CRS at university, at Fugro Seismic Imaging, and at WesternGeco, which is part of Schlumberger.
Its development is closely tied to advanced concepts like NIP-wave tomography, and it can automatically extract curvature information from the data. Yet it is one of the concepts that is met with a vast amount of skepticism, and it is immensely undervalued in the seismic community.
The marketing pitch
CRS is a data-driven algorithm that can build a velocity model without prior knowledge of the subsurface. This process additionally extracts wavefield attributes from the data that are utilized further along the processing flow.
These attributes include features clients would love to get their hands on: the reflector curvature (from the normal wave), the diffraction curvature (from the normal-incidence-point, or NIP, wave), and the emergence angle.
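For context, the standard 2D zero-offset CRS stacking operator ties these three attributes together. In hyperbolic form, with midpoint displacement Δx, half-offset h, and near-surface velocity v0:

```latex
t^2(\Delta x, h) = \left( t_0 + \frac{2 \sin\alpha}{v_0}\,\Delta x \right)^2
  + \frac{2\, t_0 \cos^2\alpha}{v_0}
    \left( \frac{\Delta x^2}{R_\mathrm{N}} + \frac{h^2}{R_\mathrm{NIP}} \right)
```

Here α is the emergence angle, R_N is the radius of the normal wave (reflector curvature), and R_NIP is the radius of the NIP wave (diffraction curvature).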
What clients and processors hear
It’s a black box: you put in your data and hope it will give a pleasant outcome, while having close to no control over the calculation.
Word of mouth
In addition to the fear of a black box, early errors in the delivered software have introduced some of the following connotations with CRS:
CRS does not work with diffractions
When CRS was developed, diffractions were seen as noise in the seismic data. This is very similar to the recent development with multiples, where companies no longer try to eliminate them but instead separate them from the primary data and use them as additional information.
This notion is mirrored in the default parameters of the CRS code, which used a cutoff at an emergence angle of 60 degrees to save computation cost. However, these steep angles are exactly the ones that matter most for diffractions and steep reflection events. A simple adjustment of the parameters fixes this.
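To see why the cutoff matters, note that the emergence angle caps the steepest zero-offset slope the search can reach, since the first-order moveout term is 2·sin(α)/v0. A minimal sketch (the function name and the velocity value are illustrative, not from any specific CRS package):

```python
import math

def max_slope(v0_m_per_s: float, max_angle_deg: float) -> float:
    """Largest zero-offset traveltime slope |dt/dx| = 2*sin(alpha)/v0
    that an emergence-angle cutoff allows the search to reach (s/m)."""
    return 2.0 * math.sin(math.radians(max_angle_deg)) / v0_m_per_s

v0 = 2000.0  # illustrative near-surface velocity, m/s
# The old 60-degree default versus a widened cutoff:
print(round(max_slope(v0, 60.0), 6))  # 0.000866
print(round(max_slope(v0, 85.0), 6))  # 0.000996
```

A diffraction tail asymptotically approaches the slope of an event emerging at 90 degrees, so a 60-degree cutoff truncates exactly those flanks; widening the search window recovers them at the cost of a larger parameter space.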
CRS does not work in geologically complex situations
This is in part true. When left to its own devices, the CRS algorithm assumes coherence within the geology and may therefore smear over faults.
The velocity model will be a smooth function, and sometimes it may be too smooth. In my opinion this is exactly where CRS is mis-marketed; more on that later.
You cannot trust the CRS output
This is a rather odd one, but I have heard it repeated several times. As with any processing step, we of course have to be careful with the outputs. However, CRS is only a stacking process, very similar to our standard CMP stack, so being extremely careful with CRS in particular seems odd in my eyes.
CRS does not fit into our work flow
Oftentimes, I have experienced that the standard CRS stack did not fit into the workflow of a person or even a company. A stack belongs at the end of the flow, while the CRS stack and its various side processes work flexibly in between, despite being called a stack (and often delivering a stack).
All these claims are in part true.
But only if we do CRS the old way.
How to CRS
First of all, some of the default parameters have to be revised to reflect the changes in seismic data processing. But the most important part is as follows.
All of these claims vanish once we work with a seed or reference velocity file. A velocity function picked by hand accounts for faults and complex geology. As we work with a trustworthy velocity file, the output wavefield attributes can also be trusted, since they are derived with the help of that velocity file.
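One way such a seed can guide the attribute search: a hand-picked NMO velocity implies a starting value for the NIP-wave radius through the standard relation v_nmo² = 2·v0·R_NIP / (t0·cos²α). A hedged sketch, assuming a single pick (the function name and the sample values are illustrative):

```python
import math

def rnip_seed(v_nmo: float, t0: float, v0: float, alpha_rad: float = 0.0) -> float:
    """Initial NIP-wave radius (m) implied by a picked NMO velocity,
    from v_nmo^2 = 2 * v0 * R_NIP / (t0 * cos^2(alpha))."""
    return v_nmo**2 * t0 * math.cos(alpha_rad) ** 2 / (2.0 * v0)

# A pick of 2500 m/s at t0 = 1.2 s with near-surface velocity 1800 m/s:
print(rnip_seed(2500.0, 1.2, 1800.0))  # ≈ 2083.3 m
```

Seeding the search this way shrinks the parameter space around values that already honor the interpreter's picks, instead of starting the search blind.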
CRS has to be adjusted to work properly with varying surface velocities; otherwise, any parameter sections for land seismic data from CRS will be worthless. I have not looked at professional code, but I am guessing this has been implemented by TEEC and other professionals.
Once CRS is provided with a velocity file, the solutions in the curvature section will be much more reliable. Additionally, the derivative processing modules will improve significantly. The CRS seismic interpolation, for example, is very stable once we supply a velocity function to guide the process.
The possible future
The CRS search is still based on Nelder-Mead optimization. This search process is less than ideal for the complex objective function of CRS. Jan Walda at the University of Hamburg has made some great steps forward concerning a global search with genetic algorithms. This method does not yet support a velocity-model input, but the preliminary results and my own tests with the software showed amazingly stable solutions in the stack and the wavefield attributes. This is something to look out for. However, it is a step back toward a more black-box approach.
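To illustrate the kind of search involved, here is a toy version: fitting the three CRS attributes to synthetic traveltimes with SciPy's Nelder-Mead. Real CRS codes maximize a semblance/coherence measure on prestack data rather than a least-squares traveltime misfit, so this is only a sketch under those simplified assumptions (all numbers are made up for the demo):

```python
import numpy as np
from scipy.optimize import minimize

T0, V0 = 1.0, 2000.0  # zero-offset time (s) and near-surface velocity (m/s)

def crs_traveltime(params, xm, h):
    """2D zero-offset CRS operator for attributes (alpha, R_N, R_NIP)."""
    alpha, r_n, r_nip = params
    t2 = (T0 + 2.0 * np.sin(alpha) / V0 * xm) ** 2 \
        + 2.0 * T0 * np.cos(alpha) ** 2 / V0 * (xm**2 / r_n + h**2 / r_nip)
    return np.sqrt(t2)

# Synthetic "observed" traveltimes from known attributes
rng = np.random.default_rng(42)
xm = rng.uniform(-500.0, 500.0, 200)   # midpoint displacements (m)
h = rng.uniform(0.0, 400.0, 200)       # half-offsets (m)
t_obs = crs_traveltime((0.1, 5000.0, 1500.0), xm, h)

def misfit(params):
    """Sum of squared traveltime errors; stands in for (negative) semblance."""
    return float(np.sum((crs_traveltime(params, xm, h) - t_obs) ** 2))

x0 = (0.0, 4000.0, 1000.0)  # crude starting guess
res = minimize(misfit, x0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-12})
print(res.x)  # local search moves the attributes toward the true values
```

A downhill simplex like this only refines a local optimum, which is why a genetic algorithm, as a global population-based search, is an appealing replacement for the rugged coherence landscapes of real data.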
What the Common Reflection Surface is really about
The search for wavefield attributes, and the stack that incorporates them, can improve images significantly. It is, however, best to skip the velocity search and provide an actual velocity model.
I do not believe the main advantage of CRS is the model-independent, data-driven search. My tests have shown that the main benefits of CRS appear when a model is provided.