Machine Learning Interpretability with Driverless AI
In this webinar, Andy Steinbach, Head of AI in Financial Services at NVIDIA, moderates a discussion with Patrick Hall, Senior Director of Product at H2O.ai, on machine learning interpretability with Driverless AI.
In addition to the discussion, Patrick will showcase several approaches that go beyond the error measures and assessment plots typically used to interpret deep learning and machine learning models and results.
This will include:
- Data visualization techniques for representing high-degree interactions and nuanced data structures.
- Contemporary linear model variants that incorporate machine learning and are appropriate for use in regulated industries.
- Cutting-edge approaches for explaining extremely complex deep learning and machine learning models.
Wherever possible, interpretability approaches are deconstructed into more basic components suitable for human storytelling: complexity, scope, understanding, and trust.
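One widely used model-agnostic technique in this family is the partial dependence plot, which summarizes how a single feature affects a complex model's average prediction. The sketch below is illustrative only, not Driverless AI's implementation: the `black_box` function stands in for a trained model, and the dataset and grid are made up for the example.

```python
def black_box(x1, x2):
    # Hypothetical stand-in for a complex model:
    # nonlinear in x1, with an x1*x2 interaction term.
    return x1 ** 2 + 0.5 * x1 * x2

# Small illustrative dataset of (x1, x2) observations.
data = [(0.0, 1.0), (1.0, -1.0), (2.0, 0.5), (3.0, 2.0)]

def partial_dependence(feature_grid, data):
    """For each grid value of the first feature, average the model's
    prediction over the observed values of the remaining features."""
    pd_values = []
    for v in feature_grid:
        avg = sum(black_box(v, x2) for _, x2 in data) / len(data)
        pd_values.append(avg)
    return pd_values

grid = [0.0, 1.0, 2.0]
print(partial_dependence(grid, data))
```

Plotting the returned averages against the grid yields a one-dimensional curve that a human can read and reason about, even when the underlying model is highly nonlinear.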
Patrick Hall is a Senior Director of Product at H2O.ai and works with H2O.ai’s customers to derive substantive business value from machine learning technologies. His product work at H2O.ai focuses on model interpretability and deployment.
Patrick is also an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and R&D roles at SAS Institute.
Andy leads the global effort to develop the NVIDIA deep learning platform for the financial services industry. He specializes in developing applications of revolutionary technologies in new markets. In his previous role, he built a new group that employed machine learning technology for the first time in a $1B imaging technology division of the global technology company ZEISS.
Andy earned a PhD in physics from the University of Colorado, Boulder, where he studied quantum electric circuits.