Workshop Overview

High-performance computing (HPC) plays a leading role in our current energy business and will be of critical importance to a successful energy transition. Across industries, our business undoubtedly deploys the largest HPC capacity. HPC helps raise productivity, lower costs and extract more value from vast amounts of data through high-performance simulation and data analytics. Algorithms that run as fast as possible on the best available hardware have a direct impact on many of the decisions shaping our business, particularly in the post-COVID world. Achieving this goal through the various forms of HPC, from on-premises supercomputers to elastic and versatile cloud solutions, is the underlying theme of this fifth edition of the EAGE HPC for Upstream workshop: “Heterogeneous HPC: Challenges, Current and Future Trends”.

In upstream, simulation and modelling are our principal means of accurately locating hydrocarbons and producing them optimally. Reliance on data for making better business decisions at lower cost is becoming critical. Seismic data are explored with established algorithms such as Reverse Time Migration (RTM), Full Waveform Inversion (FWI) and Electromagnetic (EM) modelling to illuminate the hidden subsurface of the earth, while reservoir simulation is used to produce fields optimally and predict the time evolution of assets. Both are highly compute-intensive activities that push the leading edge of HPC storage, interconnect and computation. The industry is evolving on several fronts. Changes in the underlying hardware, with the advent of accelerator and coprocessor technologies and many-core CPUs, are challenging practitioners to develop new algorithms and port old ones to extract the most performance from modern hardware. The explosion of data and the recent rapid development of machine learning (ML) are leading to non-traditional ways of interpreting seismic and reservoir data. The emergence of significantly faster reservoir simulation technology is breathing new life into multi-resolution and uncertainty quantification workflows.

The ability to create and mine these data relies on the optimal utilisation of supercomputers, which is the result of synergies between industries, companies, departments and, most importantly, people. HPC IT departments (and increasingly HPC cloud solution providers) focus on minimising turnaround times for diverse workloads while also deploying compute architectures cost-competitively and adapting to fast-paced innovation in the semiconductor industry. Research groups and software application teams in both academia and industry develop new algorithms and keep abreast of the latest advances while adapting and optimising existing and new production frameworks to the latest parallel programming models, languages and architectures. The workshop brings together experts to examine the state-of-the-art applications employed in the upstream industry and to anticipate the ambitions that increased computational power will enable.

The three-day workshop will feature oral presentations and quick lightning talks, panel sessions and keynotes from leading industry experts, as well as ample discussion sessions embedded in the programme.