Hackathon - Explainable A.I.

When:
Sunday 5 June from 9:00 to 18:00 CEST
Monday 6 June from 9:00 to 17:00 CEST

Where: Room E & Online

Programme

Sunday 5 June

Time | Activity
9:00 - 9:30 CEST | Welcome and coffee
9:30 - 10:30 CEST | Introduction to the theme
10:30 - 11:00 CEST | Project and team discovery
11:00 - 18:00 CEST | Hacking all day


Monday 6 June

Time | Activity
9:00 - 9:30 CEST | Welcome and coffee
9:30 - 13:00 CEST | Hacking and preparation of presentations
13:00 - 13:15 CEST | CODE FREEZE: Participants receive final presentation criteria
13:15 - 15:00 CEST | Time for working on and practicing presentations
15:00 - 16:30 CEST | Presentations to the Judging Panel
16:30 - 17:00 CEST | Announcement of the winners and prizes*

*In addition to exciting prizes, the presentations of the winning projects will be included in the programme of the Dedicated Session organized by the EAGE A.I. Committee.


Registration

A separate registration is required to participate in this activity. Spaces are limited, so hurry up and register now!

Registration type | Fee
Student (in-person or online) | 25 EUR
Regular (in-person or online) | 50 EUR


Theme: Explainable Artificial Intelligence (XAI)

XAI is the theme of this year's EAGE Annual Hackathon, organized by the EAGE A.I. Committee. Teams will explore ways to build more interpretable machine learning tools, with the goal of more understandable and trustworthy subsurface predictions.
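As one concrete illustration of what "interpretable" can mean in practice, here is a minimal sketch of permutation feature importance using scikit-learn on a synthetic toy regression task. The feature names and data below are hypothetical placeholders, not part of the hackathon materials; the technique itself (shuffle a feature, measure how much the score drops) is one of the simplest ways to check what a model actually relies on.

```python
# Minimal permutation-importance sketch on a synthetic "subsurface" toy task.
# Feature names and data are placeholders, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))  # e.g. [gamma_ray, density, sonic] (hypothetical logs)
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)  # porosity proxy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model genuinely relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in zip(["gamma_ray", "density", "sonic"], result.importances_mean):
    print(f"{name}: {mean:.3f}")
```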


The Wolf-or-Husky classifier

This deep neural network can tell the difference between wolves and huskies, with 90% accuracy. More than 30% of surveyed ML researchers said they trusted it. 

Picture: Various correctly classified images, with one misclassification

Source: Ribeiro et al. https://arxiv.org/abs/1602.04938


LIME shows that the model pays attention only to the background of a sample image. It's a snow detector.


Picture: 
(Left) Husky-that-is-a-wolf
(Right) LIME's explanation


This story is often incorrectly cited as an example of 'AI gone wrong'. The classifier was actually built intentionally to test humans' ability to spot bad models: it was trained on top of Google's Inception neural network and achieves ~90% accuracy.


The researchers asked:

  1. Would you trust this classifier to work in the real world?
  2. Why?
  3. How do you think it is making decisions?

In fact, it was trained on only 20 images. ALL the wolves had snow in the picture; none of the huskies did.

More than a third of the surveyed ML researchers trusted this model... until Ribeiro et al. showed them a LIME analysis, which 'explains' the model.
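For teams new to XAI, here is a minimal sketch of how such an explanation can be produced with the open-source `lime` Python package. The classifier below is a random stand-in and the image is synthetic (neither is the model or data from Ribeiro et al.); only the LIME calls follow the real package API. The idea is the same as in the husky/wolf example: LIME perturbs superpixels of the image, queries the model on each perturbed copy, and fits a local linear model to show which regions drive the prediction.

```python
# Minimal LIME image-explanation sketch (assumes the `lime` and scikit-image
# packages). Classifier and image are random stand-ins, NOT the wolf/husky
# model from the paper; only the LIME calls reflect the real lime API.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)

def classifier_fn(images):
    # Stand-in prediction function: takes a batch of RGB images (N, H, W, 3)
    # and returns class probabilities (N, 2), e.g. [p(wolf), p(husky)].
    p = rng.random((len(images), 1))
    return np.hstack([p, 1.0 - p])

# Synthetic image in place of a real husky/wolf photo.
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()

# LIME segments the image into superpixels, hides random subsets of them,
# queries the classifier on each perturbed copy, and fits a local linear
# model to estimate which regions push the prediction up or down.
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=2,      # explain the two highest-scoring classes
    hide_color=0,      # hidden superpixels are blacked out
    num_samples=1000,  # number of perturbed samples
)

# Keep only the superpixels that support the top predicted class. For the
# "snow detector", those regions lie in the snowy background, not on the dog.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=True,
)
highlighted = mark_boundaries(img / 255.0, mask)  # explained regions outlined
```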

This activity is organized with the support of


Interested?

Register

You can sign up to participate either in-person or online.

Sponsor

Find out how you can support this activity, or contact sponsoring@eage.org.


About the EAGE A.I. Committee

The EAGE A.I. Committee is a team of EAGE members and volunteers who endeavour to share knowledge and create new connections around the digital transformation that are relevant to geoscientists. In addition to regular contributions to EAGE conferences and workshops, they curate a periodic newsletter on all things A.I., machine learning and digitalization, as well as interviews with experts and other initiatives for the community. You are welcome to join the EAGE Artificial Intelligence group on LinkedIn to receive updates on all future opportunities to get involved.

Main Sponsors