Understanding Deep Neural Networks Through Attribution and Interactive Experimentation

Frederick Hohman
Georgia Institute of Technology

My proposed research focuses on designing and developing interactive tools that enable people to make sense of deep neural networks (DNNs), turning these powerful, complex models into more interpretable ones. I will synergistically combine two novel ideas that guide users to visualize and discover model substructures that impact performance: (1) revealing neuron activation relationships and attributing model performance and vulnerability to model substructures; and (2) fortifying user understanding of models via interactive experimentation. This proposed work leverages my hybrid expertise in scalable graph mining and human-computer interaction, combining them to create tools that are scalable, interactive, and usable. These techniques align with nearly all facets of NASA's TA11: Modeling, Simulation, Information Technology, and Processing, and can help researchers and scientists conduct large-scale, data-driven research that produces state-of-the-art results with confidence.
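To make idea (1) concrete, below is a minimal sketch of one possible ingredient: recording neuron activations with forward hooks and ranking channels by how strongly they respond to a batch of inputs, surfacing candidate substructures worth inspecting. The model, layer names, and ranking heuristic here are illustrative assumptions for the sketch, not the proposal's actual method.

```python
# Minimal sketch: record per-channel activations via forward hooks and
# rank channels as crude candidates for influential model substructures.
# The model and layers are hypothetical stand-ins, not the proposed tool.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # hypothetical example model
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Average each channel's activation map over the batch and the
        # spatial dimensions, leaving one scalar per channel.
        activations[name] = output.detach().mean(dim=(0, 2, 3))
    return hook

# Attach hooks to a few convolutional stages of interest.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(make_hook(name))

with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))  # stand-in for real class examples

# Surface the most strongly responding channels per layer.
for name, act in activations.items():
    top = torch.topk(act, k=5)
    print(name, top.indices.tolist())
```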

Expected Results and Innovations:

  1. Novel Ideas. My proposed research produces and synergistically combines (1) a novel, principled framework that guides users to visualize and discover model substructures that impact performance; and (2) a new interactive model-interpretation approach that fortifies user understanding. These tools and techniques can be used in large-scale, data-driven scientific research.
  2. New Open-source Practical Tools. My interactive tools will work with real data and DNN models to help researchers and practitioners answer important research questions. To amplify my impact, I will broadly disseminate my research results by open-sourcing all code, tools, and datasets developed.
  3. Improved Trust in AI for Scientific Applications. Broadly, my findings could lead to new breeds of AI techniques that are more interpretable and able to incorporate human knowledge. This would also improve trust in AI among users who interact with NASA-deployed AI-powered systems and research. This research could further improve AI safety by helping people better understand AI systems, gain trust in them, understand why and when they may not work well, and protect them from harm by combating adversarial attacks. Finally, through this strengthened trust, these new AI techniques could be applied to multiple NASA spacecraft missions, leading to spaceflight technology breakthroughs.
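Idea (2), interactive experimentation, can be illustrated in the same spirit: a user perturbs an input and observes how the model's prediction shifts. The occlusion scheme below is a hypothetical stand-in for the richer interactions the proposed tools would support.

```python
# Minimal sketch: occlude regions of an input and watch the prediction
# change, a simple "what-if" experiment. Model and occlusion scheme are
# illustrative assumptions only.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # hypothetical example model
image = torch.randn(1, 3, 224, 224)           # stand-in for a real image

with torch.no_grad():
    baseline = model(image).softmax(dim=1)
    target = baseline.argmax(dim=1).item()

# Slide a blanked-out square over the image; the confidence drop at each
# position hints at which regions the model relies on.
patch = 56
for y in range(0, 224, patch):
    for x in range(0, 224, patch):
        occluded = image.clone()
        occluded[:, :, y:y + patch, x:x + patch] = 0.0
        with torch.no_grad():
            conf = model(occluded).softmax(dim=1)[0, target].item()
        drop = baseline[0, target].item() - conf
        print(f"occlude ({y:3d},{x:3d}): confidence drop {drop:+.3f}")
```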
