High-performance computing (HPC) plays a leading role in our current energy business and will be of critical importance for a successful energy transition. Looking across multiple industries, our business undoubtedly exploits the largest HPC capacity. HPC supports higher productivity, lower costs and better use of huge volumes of data through high-performance simulation and data analytics. Algorithms that run as fast as possible on the best available hardware have a direct impact on many of the decisions shaping our business, and this is particularly true in the post-COVID world.
Simulation and modelling are our principal mechanisms for accurately locating hydrocarbons, producing them optimally and, soon, decarbonizing them. Reliance on data for making better business decisions at lower cost is becoming critical. Seismic data are explored using traditional imaging algorithms such as Reverse Time Migration (RTM), Full Waveform Inversion (FWI) and Electromagnetic (EM) modelling to illuminate the hidden subsurface of the earth, while reservoir simulation is used to produce fields optimally and predict the time evolution of assets. Both are highly compute-intensive activities that push the leading edge of HPC storage, interconnect and computation. The industry is evolving on several fronts. Changes in the underlying hardware, with the advent of co-processor or accelerator technologies and many-core CPUs, are challenging practitioners to develop new algorithms and port old ones to extract the most performance from modern hardware. The explosion of data and the recent rapid development of machine learning (ML) are leading to non-traditional ways of interpreting seismic and reservoir data. The emergence of significantly faster reservoir simulation technology is breathing new life into multi-resolution and uncertainty quantification workflows.
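To give a flavour of the kernels behind these workloads, the sketch below shows a minimal 2D acoustic wave-propagation loop of the kind that sits at the core of RTM and FWI forward modelling. It is an illustrative example only; the grid size, velocity model and source parameters are arbitrary assumptions, not values from any production code, and real implementations add absorbing boundaries, higher-order stencils and accelerator-optimized kernels.

```python
# Minimal sketch (illustrative only): 2D acoustic wave propagation with an
# explicit finite-difference scheme, the kind of stencil kernel at the heart
# of RTM/FWI forward modelling. Grid, velocity and source values are arbitrary.
import numpy as np

nx, nz = 201, 201                 # grid points
dx = 10.0                         # grid spacing [m]
c = np.full((nz, nx), 2000.0)     # constant velocity model [m/s] (assumption)
dt = 0.4 * dx / c.max()           # time step satisfying a CFL-style bound
nt = 500                          # number of time steps

# Ricker wavelet source injected at the grid centre
f0 = 15.0                         # dominant frequency [Hz]
t = np.arange(nt) * dt
arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)
sx, sz = nx // 2, nz // 2

p_prev = np.zeros((nz, nx))       # pressure at time step n-1
p_curr = np.zeros((nz, nx))       # pressure at time step n

for it in range(nt):
    # 5-point Laplacian (2nd-order in space), interior points only
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (p_curr[1:-1, 2:] + p_curr[1:-1, :-2] +
                       p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] -
                       4.0 * p_curr[1:-1, 1:-1]) / dx**2
    # Leapfrog update of the acoustic wave equation
    p_next = 2.0 * p_curr - p_prev + (c * dt) ** 2 * lap
    p_next[sz, sx] += src[it] * dt**2   # inject the source term
    p_prev, p_curr = p_curr, p_next

print("max |p| after %d steps: %.3e" % (nt, np.abs(p_curr).max()))
```

In production settings this stencil loop is exactly the part that is ported to GPUs or many-core CPUs, which is why hardware changes drive so much algorithmic rework.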
The ability to create and mine these data relies on the optimal utilization of supercomputers. This is the result of various synergies between industries, companies, departments and, most importantly, people. HPC IT departments (or HPC cloud solution providers) focus on minimizing turnaround times for diverse workloads while deploying compute architectures in a cost-competitive fashion and adapting to the fast pace of innovation in the semiconductor industry. Research groups and software application teams in both academia and industry develop new algorithms and keep abreast of the latest developments while adapting and optimizing existing and new production frameworks to the latest parallel programming models, languages and architectures. The workshop brings together experts to examine the state of the art in key applications employed in the upstream industry and to anticipate the ambitions that increased computational power will enable.
The 3-day workshop will feature oral presentations, quick lightning talks, panel sessions and keynotes from leading industry experts, as well as plenty of discussion sessions embedded in the program.
NB: Submissions on the topic of HPC for the Energy Transition are encouraged.
The Call for Abstracts will open on 1 April, and the deadline for submitting abstracts is 31 May 2022.