Topic area: Misc
We describe BEEF, a computational framework that explains, in easily understood plain English, the evidence for and against a forecast made by a binary classifier, irrespective of the underlying classification engine. We will also briefly demonstrate BEEF on diverse classification tasks using a variety of classifiers.
Classifiers today are very complex. Even relatively simple classifiers such as SVMs involve numerous parameters and kernel choices. More complex classifiers based on "deep learning" invent new features that are unintelligible to domain experts, even when they generate highly accurate forecasts. This talk focuses on providing, for the first time, not just explanations but "balanced" explanations of forecasts. A balanced explanation of a forecast explains not only why the forecast might be true, but also why it might be false. A decision maker who is presented with a computer-generated forecast can therefore see what the evidence for and against the forecast might be before acting on it.
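To make the idea concrete, here is a minimal illustrative sketch of a balanced explanation for a linear scorer: per-feature contributions are partitioned into evidence supporting the positive forecast and evidence opposing it. The features, weights, and the linear form are all assumptions for illustration; this is not the BE algorithm presented in the talk.

```python
# Hypothetical sketch: for a linear classifier, the signed contribution
# w_i * x_i of each feature either supports or opposes the positive-class
# forecast. Splitting them gives a crude "balanced" view of the evidence.

def balanced_explanation(weights, x):
    """Partition per-feature contributions into evidence for (positive
    contributions) and evidence against (negative contributions)."""
    contributions = {f: weights[f] * x[f] for f in weights}
    pro = {f: c for f, c in contributions.items() if c > 0}
    con = {f: c for f, c in contributions.items() if c < 0}
    return pro, con

# Toy loan-approval example with made-up features and weights.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}
applicant = {"income": 1.5, "debt": 0.9, "years_employed": 2.0}

pro, con = balanced_explanation(weights, applicant)
print("evidence for:", pro)      # income and years_employed raise the score
print("evidence against:", con)  # debt lowers the score
```

A decision maker sees both dictionaries at once, rather than only the features that happened to win out in the final forecast.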
In this talk, we define a balanced explanation of a forecast - though the definition is mathematical, it is easy to grasp through visual means. We then present our BE algorithm for generating balanced explanations. After this, a simple technique transforms the computational notion of a balanced explanation into one that can be easily rendered in plain English - this is our final BEEF algorithm. BEEF can generate balanced explanations for forecasts made by any binary classifier (we are currently extending it to the multi-class case). We will show how BEEF works with a couple of brief demonstrations on different data sets and different classifiers during the talk. Interested parties can view more demos later during the conference.
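The final rendering step might look something like the following sketch, which turns evidence-for and evidence-against sets into English sentences via simple templates. The template text and the input format (feature-to-contribution dictionaries) are assumptions chosen for this illustration; the actual BEEF rendering technique is presented in the talk.

```python
# Hypothetical sketch: render a balanced explanation as plain English,
# listing features in decreasing order of contribution magnitude.
# Not the actual BEEF renderer - an illustration of the transformation.

def render_explanation(forecast, pro, con):
    """Render evidence for/against a binary forecast as English text."""
    def listing(evidence):
        ranked = sorted(evidence, key=lambda f: -abs(evidence[f]))
        return ", ".join(f.replace("_", " ") for f in ranked)
    lines = [f"The classifier forecasts: {forecast}."]
    if pro:
        lines.append(f"Evidence for this forecast: {listing(pro)}.")
    if con:
        lines.append(f"Evidence against this forecast: {listing(con)}.")
    return " ".join(lines)

# Evidence sets as might be produced by a balanced-explanation step.
pro = {"income": 1.2, "years_employed": 1.0}
con = {"debt": -1.08}
print(render_explanation("loan approved", pro, con))
```

The point of the transformation is that the computational object (two sets of signed contributions) maps directly onto sentence templates, so no extra modeling is needed to reach plain English.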