Panel Discussion: AI Autonomy, Trust, and Adoption in Subsurface Industries

Tracks: Track 1, Track 2, Track 3
Monday, March 9, 2026
9:45 AM - 10:45 AM
Plenary room

Details

Session Leads: Nicholas Satur (Aker BP), George Ghon (Capgemini), and Gareth O'Brien (Microsoft)

AI is rapidly advancing in areas such as automation, data analysis, and even decision-making. Collaboration between technology providers and user organizations will accelerate development and adoption, but it may also require workforce adaptation and new skill sets. As AI becomes embedded in workflows, strong ethical principles must guide its use to ensure fairness, transparency, and accountability. AI systems should be robust, reliable, and safe under diverse conditions, which means implementing rigorous testing, conducting bias audits, and maintaining human oversight. Organizations should establish monitoring systems, risk management protocols, and clear accountability structures to ensure compliance and mitigate ethical, legal, and operational risks.

The key challenge is scaling AI responsibly without stifling innovation, while remaining aligned with internal and external governance frameworks. Critical questions include:

- How do we ensure governance is adhered to by end users, and what are the consequences of breaches?
- Where should AI be allowed to make decisions, and where must humans remain in the loop?
- Can we track how AI reached a decision and, where relevant, identify alternatives or uncertainties?

Responsible AI adoption requires balancing innovation with trust, transparency, and accountability.
