
Figure 4 Using LIME to Analyze Utterances
import requests
import json
import numpy as np
from lime.lime_text import LimeTextExplainer

def call_with_utterance_list(utterance_list):
    # LIME passes in a list of perturbed utterances; score each one with LUIS
    scores = np.array([call_with_utterance(utterance)
                       for utterance in utterance_list])
    return scores

def call_with_utterance(utterance):
    if utterance is None:
        return np.array([0, 1])
    app_url = 'your_url_here&q='
    r = requests.get(app_url + utterance)
    json_payload = json.loads(r.text)
    intents = json_payload['intents']
    personal_accounts_intent_score = [intent['score'] for intent in intents
        if intent['intent'] == 'PersonalAccountsIntent']
    other_services_intent_score = [intent['score'] for intent in intents
        if intent['intent'] == 'OtherServicesIntent']
    none_intent_score = [intent['score'] for intent in intents
        if intent['intent'] == 'None']
    if len(personal_accounts_intent_score) == 0:
        return np.array([0, 1])
    # Normalize the three LUIS confidence scores so they behave like probabilities
    normalized_score_denom = (personal_accounts_intent_score[0] +
                              other_services_intent_score[0] +
                              none_intent_score[0])
    score = personal_accounts_intent_score[0] / normalized_score_denom
    complement = 1 - score
    return np.array([score, complement])

if __name__ == "__main__":
    explainer = LimeTextExplainer(class_names=['PersonalAcctIntent', 'Others'])
    utterance_to_explain = 'What are annual rates for my savings accounts'
    exp = explainer.explain_instance(utterance_to_explain,
                                     call_with_utterance_list, num_samples=500)
    exp.save_to_file('lime_output.html')
It’s also worth noting that when you have more than two intents, you can use Scattertext in a similar fashion by comparing two intents at a time.
Explaining Intent Classifications Using LIME
Now let’s look at an open source tool called LIME, or Local Interpretable Model-Agnostic Explanations, which allows you to explain intent classifications. You’ll find the source code and a tutorial at bit.ly/2I4Mp9z, and an academic research paper entitled “Why Should I Trust You?: Explaining the Predictions of Any Classifier” (bit.ly/2ocHXKv).
LIME is written in Python and you can follow the installation instructions in the tutorial before running the code in Figure 4.
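If you just want to get up and running, the package is also published on PyPI, so in most Python environments a plain pip install lime (alongside requests and numpy, which Figure 4 also imports) should be all the setup you need; treat the tutorial as the authoritative reference if your environment differs.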
LIME allows you to explain classifiers for different modalities, including images and text. I’m going to use the text version of LIME, which outputs word-level insights about the utterance. While I’m using LUIS as my classifier of choice, a wide range of classifiers can be fed into LIME; they’re essentially treated as black boxes.
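To make that black-box contract concrete, here’s a minimal sketch of what the text version of LIME expects from a classifier: a callable that accepts a list of strings and returns a NumPy array of class probabilities, one row per input. The dummy_classifier below is a made-up stand-in used only to show the shape; in Figure 4, call_with_utterance_list plays this role by calling LUIS.

import numpy as np

def dummy_classifier(utterance_list):
    # One row per utterance, one column per class; each row should sum to 1
    return np.array([[0.5, 0.5] for _ in utterance_list])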
The text version of LIME works roughly as follows: It randomly creates multiple modifications or samples of the input utterance
by removing any number of words, then calls LUIS on each one of them. The number of samples is controlled by the parameter num_samples, which in Figure 4 is set to 500. For the example utterance, modified utterances can include variations such as “are annual for accounts” and “what annual rates for my savings.”
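As a rough illustration of the kind of sampling involved (this sketch is my own approximation, not LIME’s actual implementation; the function name and the 50 percent drop rate are assumptions), you can picture each sample as the original utterance with a random subset of words removed:

import numpy as np

def sample_utterances(utterance, num_samples=5):
    words = utterance.split()
    samples = []
    for _ in range(num_samples):
        # Drop each word with 50 percent probability
        keep = np.random.rand(len(words)) < 0.5
        samples.append(' '.join(w for w, k in zip(words, keep) if k))
    return samples

print(sample_utterances('What are annual rates for my savings accounts'))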
LIME uses the confidence scores returned from LUIS to fit a linear model that then estimates the effects of single words on classification confidence scores. This estimation helps you identify how the confidence score is likely to change if you were to remove words from the utterance and run the classifier again (as I show later).
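If you’d rather inspect those per-word effects programmatically instead of (or in addition to) the HTML report that Figure 4 writes out, the explanation object returned by explain_instance exposes them as (word, weight) pairs; this snippet assumes the exp variable from Figure 4 is in scope.

# Each pair is (word, weight); the sign indicates which of the two
# classes the word pushes the prediction toward
for word, weight in exp.as_list():
    print('{}: {:+.3f}'.format(word, weight))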
The only major requirement for the classifier is to output confidence scores for the classified labels. Confidence scores over the
Figure 5 LIME Output for the “What Are Annual Rates for My Savings Accounts?” Utterance