Been handed a load of data, collected from numerous wells over a long period by different analysts, that appears to be inconsistent?
For mature fields, there is often a wealth of legacy petrographical and reservoir quality information lost within volume after volume of single-well characterisation studies and interim reports. These data may or may not have been previously integrated and properly incorporated into reservoir models.
There are several strands to taking these data and bringing them back to life:
- the logistics of mining the data from paper or scanned documents and compiling them into digital formats for deeper interrogation;
- understanding, from a geological and petrographical perspective, what the data actually mean, and assimilating them into an internally consistent database;
- interrogating and interpreting the database properly once it is internally consistent;
- if legacy samples (e.g. thin-sections) are still available, using them to further refine the observations with more modern instrumentation.
Data Extraction
Libraries full of paper reports have commonly been scanned into PDF (or TIFF) documents. These documents often include tabulations of significant amounts of petrographical and reservoir quality data, but the data and important findings may be lost within a sea of less relevant information.
In rare cases, data can simply be copy-and-pasted into spreadsheet formats. More often, careful image optimisation, followed by optical character recognition and extraction using specialist software, is needed to digitise poor-quality scans of original tables with complex formatting. In the worst case, manual data input is required.
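As an illustration of that middle route, the minimal Python sketch below uses the open-source Tesseract engine (via pytesseract) and Pillow. The file name, binarisation threshold and page-segmentation mode are illustrative assumptions that would need tuning to any real scan, not a description of a specific project's tooling.

```python
# A minimal sketch of one possible OCR workflow; the file name, the fixed
# binarisation threshold and the Tesseract settings are illustrative.
from PIL import Image
import pytesseract

# Load a scanned report page and convert it to greyscale.
page = Image.open("legacy_report_p042.tif").convert("L")

# Simple contrast enhancement: binarise around a fixed threshold.
# Real scans usually need page-specific tuning (deskew, despeckle, etc.).
page = page.point(lambda px: 255 if px > 160 else 0)

# Treat the page as a single uniform block of text; --psm 6 often suits
# simple tabulated layouts.
raw_text = pytesseract.image_to_string(page, config="--psm 6")

# Split each printed line into fields for loading into a spreadsheet or
# database, ready for manual checking against the original scan.
rows = [line.split() for line in raw_text.splitlines() if line.strip()]
```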
Once the data have been collated, careful quality control ensures the integrity of the imported dataset.
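For point-count data, one basic check is that the counted components for each sample sum to approximately 100%. The sketch below shows such a closure check in Python with pandas; the file name, column names and tolerance are hypothetical.

```python
# A hedged sketch of one basic quality-control check: point-count
# percentages for each sample should sum to ~100%. The column names
# ("quartz_pct" etc.) and the tolerance are hypothetical.
import pandas as pd

df = pd.read_csv("digitised_point_counts.csv")

component_cols = ["quartz_pct", "feldspar_pct", "lithics_pct",
                  "clay_pct", "cement_pct", "porosity_pct"]

totals = df[component_cols].sum(axis=1)
bad = df[(totals < 98.0) | (totals > 102.0)]  # allow some rounding slack

print(f"{len(bad)} of {len(df)} samples fail the closure check")
```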
Data Assimilation
The petrographical and reservoir quality data may have been collected over many years, often by different contractors and analysts, and there will be variable degrees of consistency in how different mineralogical components are reported from one study to the next.
Using our expertise in petrography and reservoir quality analysis, we are able to review legacy reports and understand how the features described by different workers relate to one another, for example:
- one analyst might describe poorly-resolved clay in pore systems as “detrital clay”, whereas another might call the same material “authigenic illite” (one way to record such reconciliations is sketched after this list).
- different analysts may have different biases in how they differentiate primary from secondary pores.
- inconsistencies in thin-section staining (for carbonate and K-feldspar) may lead different workers to identify different combinations of carbonate and feldspar minerals.
- different analysts use different protocols to relate thin-section-derived porosity data to porosities measured by routine core analysis (we have seen just about every conceivable methodology for this over the years).
- historical datasets may not differentiate between diagenetic minerals in primary pore space and those replacing detrital grains (see modal analysis), making quantitative compaction analysis difficult.
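One way such reconciliations can be captured is sketched below: an explicit mapping from each report's vocabulary onto a controlled set of terms, so that every translation decision is recorded and auditable. The report names, legacy terms and controlled vocabulary here are hypothetical examples, not taken from any real dataset.

```python
# A minimal sketch of vocabulary harmonisation. The report names, legacy
# terms and controlled vocabulary are hypothetical.
import pandas as pd

# Controlled vocabulary decided during the petrographic review, with one
# mapping table per source report so every decision remains auditable.
TERM_MAP = {
    "report_1987_wellA": {
        "detrital clay": "pore-filling clay (undifferentiated)",
        "secondary porosity": "dissolution porosity",
    },
    "report_2002_wellA": {
        "authigenic illite": "pore-filling clay (undifferentiated)",
        "oversize pores": "dissolution porosity",
    },
}

def harmonise(df: pd.DataFrame) -> pd.DataFrame:
    """Replace report-specific terms with the controlled vocabulary."""
    out = df.copy()
    for source, mapping in TERM_MAP.items():
        mask = out["source_report"] == source
        out.loc[mask, "component"] = out.loc[mask, "component"].replace(mapping)
    return out
```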
In addition to assimilating the actual data, metadata about the source(s) of the data, and their quality and limitations, are also compiled.
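A simple per-sample record along the following lines is one way to hold that metadata alongside the data themselves; the field names here are illustrative assumptions.

```python
# A sketch of a per-sample metadata record; all field names are
# illustrative assumptions, not a fixed schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SampleMetadata:
    well: str
    depth_m: float
    source_report: str           # document the data were extracted from
    analyst: str                 # original point-counter, where known
    year: int
    stained_for_carbonate: bool  # staining affects mineral identification
    stained_for_kfeldspar: bool
    points_counted: Optional[int] = None  # None where the report is silent
    quality_notes: list = field(default_factory=list)
```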
Data Interrogation
Once data are assimilated, they are interrogated and interpreted to understand aspects of detrital characteristics, diagenetic overprinting, pore systems and reservoir quality, in much the same way as the data from a “new” project would be treated. However, there is the additional task of identifying, and accounting for, places where operator biases complicate the interpretation.
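As a hedged illustration of the kind of screening involved, the sketch below compares the distribution of one harmonised component across analysts within the same stratigraphic unit; a systematic offset for one analyst flags a possible reporting bias rather than a real geological difference. The file and column names are hypothetical.

```python
# A sketch of one way to screen for operator bias: compare the
# distribution of a harmonised component across analysts working on the
# same stratigraphic unit. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("harmonised_point_counts.csv")

# If one analyst's median "pore-filling clay" is systematically offset
# from the others for the same unit, that warrants a closer look before
# the data are interpreted together.
summary = (df[df["component"] == "pore-filling clay (undifferentiated)"]
           .groupby(["strat_unit", "analyst"])["volume_pct"]
           .describe()[["count", "mean", "50%"]])
print(summary)
```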