
Ideas for Interpreting Machine Learning

Patrick Hall, H2O.ai

Audience level: Intermediate
Topic area: Modeling

Description

Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. Practitioners, researchers, and consumers who use these technologies in their work and day-to-day lives have the right to trust and understand AI. This talk is an overview of techniques for interpreting deep learning and machine learning models and telling stories from their results.

Abstract

While understanding and trusting models and their results is a hallmark of good (data) science, model interpretability is also a serious legal mandate in regulated industries such as banking and insurance. Moreover, scientists, physicians, researchers, analysts, and humans in general have the right to understand and trust the models and modeling results that affect their work and their lives. Today, many organizations and individuals are embracing deep learning and machine learning algorithms, but what happens when people want to explain these impactful, complex technologies to one another, or when these technologies inevitably make mistakes?

This talk presents several approaches beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results. The talk will include:

  • Data visualization techniques for representing high-degree interactions and nuanced data structures. 

  • Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industries (a brief, illustrative sketch follows this list). 

  • Cutting-edge approaches for explaining extremely complex deep learning and machine learning models (also illustrated with a brief sketch below).  
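
As a rough, hypothetical illustration of the second bullet (not material from the talk itself): a penalized regression such as the elastic net keeps the additive, readable form of a linear model while borrowing regularization ideas from machine learning. The sketch below assumes scikit-learn and synthetic data.

    # Hypothetical sketch: an elastic net as an interpretable, penalized linear model.
    # Assumes scikit-learn; the data and settings are illustrative only.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import ElasticNetCV

    X, y = make_regression(n_samples=500, n_features=8, n_informative=3,
                           noise=10.0, random_state=0)

    # Cross-validated L1/L2 penalties shrink unhelpful coefficients toward zero,
    # so the surviving coefficients can still be read like an ordinary linear model.
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, random_state=0).fit(X, y)

    for name, coef in zip([f"x{i}" for i in range(X.shape[1])], model.coef_):
        print(f"{name}: {coef:.3f}")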
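
Likewise, one common way to explain a very complex model (again a hypothetical sketch, not necessarily the specific technique presented in the talk) is a global surrogate: fit a small, human-readable model to the complex model's predictions and read the surrogate's rules as an approximate description of the original.

    # Hypothetical sketch: a shallow decision tree as a global surrogate for a
    # more complex model. Assumes scikit-learn; data and models are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeRegressor, export_text

    X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                               random_state=0)

    # The "black box" whose behavior we want to describe.
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
    scores = black_box.predict_proba(X)[:, 1]

    # The surrogate mimics the black box's scores with a small set of rules
    # that a person can read and retell.
    surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, scores)
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(X.shape[1])]))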

Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.