Hi there,
I would like to ask how we can handle the bot's response when the user input doesn't belong to any of the defined intents. For example, in the restaurant bot, if the user asks "I would like to book a flight", the bot could respond with something like "Sorry, I don't understand, can you say that again?".
I think we can handle it in code, but is there an easier way to solve this problem?
Thank you!
There is no default solution for this yet. We are working on a generally applicable solution to out-of-scope handling.
I just want to mention that a Default/Fallback entity is needed(?)/useful for both NLU and Core.
The issues should be considered together, as they affect each other.
In RasaHQ/rasa_nlu#565, @amn41 stated it won't be added soon, but I'm not sure how easy it currently is to implement a fallback.
I would start by implementing a rasa_core.RasaNluInterpreter and attaching my logic to the rasa_nlu.Interpreter; does this sound reasonable?
And where would be the best place to insert the (similar) logic for a Core action fallback?
Please share some of your references if any are publicly available; I'd like to read them.
So to get a fallback for Core and NLU, here is what I'd do at the moment:

Handle `out_of_scope` in the policy ensemble, e.g. by extending `SimplePolicyEnsemble` and overriding `probabilities_using_best_policy` with something along the lines of:

```python
import numpy as np

from rasa_core import utils


def probabilities_using_best_policy(self, tracker, domain):
    result = None
    max_confidence = -1
    for p in self.policies:
        probabilities = p.predict_action_probabilities(tracker, domain)
        confidence = np.max(probabilities)
        if confidence > max_confidence:
            max_confidence = confidence
            result = probabilities
    if (tracker.latest_message.intent["name"] == "out_of_scope"
            or (result is not None and np.max(result) < 0.3)):
        # 0.3 is the probability cutoff for rasa core in this case,
        # so if the trained policy returned the most likely action with
        # a probability of less than 0.3, we will run the fallback action
        # instead
        fallback_idx = domain.index_for_action("fallback")
        return utils.one_hot(fallback_idx, domain.num_actions)
    return result
```

Then define the fallback action:

```python
from rasa_core.actions.action import Action


class ActionFallback(Action):
    def name(self):
        return "fallback"

    def run(self, dispatcher, tracker, domain):
        from rasa_core.events import UserUtteranceReverted
        dispatcher.utter_message("Sorry, didn't get that. Try again.")
        return [UserUtteranceReverted()]
```
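For reference, `utils.one_hot` just builds a probability vector that puts all mass on a single action index. A minimal standalone sketch (my own re-implementation for illustration, not rasa_core's exact code):

```python
import numpy as np


def one_hot(hot_idx, length, dtype=np.float32):
    """Return a vector of `length` zeros with a 1.0 at `hot_idx`."""
    if hot_idx >= length:
        raise ValueError("hot_idx must be smaller than length")
    arr = np.zeros(length, dtype=dtype)
    arr[hot_idx] = 1.0
    return arr
```

Returning such a vector from the ensemble forces Core to execute the fallback action, since it is predicted with probability 1.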
Is there any documentation thoroughly explaining these steps, i.e. creating the custom policy ensemble, etc.? I attempted the above, but I'm probably missing some details. A sample would help immensely. Thanks for all the hard work, guys.
Yes :) it's already on our roadmap to create a worked example for this! Thanks for the feedback
How do I call the above-mentioned fallback function using the Keras policy? I have created a policy like the restaurant policy example, which is based on KerasPolicy.
I tried the above-mentioned method but ran into the error below:
```
train_dialogue()
  File "serve1.py", line 105, in train_dialogue
    validation_split=0.2
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/agent.py", line 152, in train
    trainer.train(filename, remove_duplicates=remove_duplicates, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/trainer.py", line 53, in train
    self.ensemble.train(training_data, self.domain, self.featurizer, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/ensemble.py", line 54, in train
    policy.train(training_data, domain, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/policy.py", line 58, in train
    raise NotImplementedError
```
The above error is resolved. After adding the PolicyEnsemble, I got the error below in my file:
```
File "serve1.py", line 163, in <module>
    train_dialogue()
  File "serve1.py", line 107, in train_dialogue
    validation_split=0.2
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/agent.py", line 152, in train
    trainer.train(filename, remove_duplicates=remove_duplicates, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/trainer.py", line 53, in train
    self.ensemble.train(training_data, self.domain, self.featurizer, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/ensemble.py", line 53, in train
    policy.prepare(featurizer,
AttributeError: 'SimplePolicy' object has no attribute 'prepare'
```
@Selvaganapathi06 How did you fix this error?

```
File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/trainer.py", line 53, in train
    self.ensemble.train(training_data, self.domain, self.featurizer, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/ensemble.py", line 54, in train
    policy.train(training_data, domain, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/rasa_core/policies/policy.py", line 58, in train
    raise NotImplementedError
```
I get the same error as well!
I currently use SimplePolicy(PolicyEnsemble). Previously I used SimplePolicy(Policy), but then I got AttributeError: 'SimplePolicy' object has no attribute 'prepare'. If you find a solution, please share it with me.
@Selvaganapathi06 same here. If I manage to fix it, I'll get back to you 👍
@Selvaganapathi06 have you included this in the custom Keras policy (restaurant example) or in the policy ensemble? I included it in the custom Keras policy, added a 'Fallback' action class in bot.py, and included 'fallback' in the domain's action list. The intent confidence was below 0.3, but the fallback action was not triggered.
@tmbo How do we handle out of scope in NLU itself?
You can add a bunch of out_of_scope training data examples in NLU, so NLU recognises the out_of_scope intent when the user is talking about something irrelevant.
@akelad we don't know what the user might enter, so how can we give examples? Even a properly framed sentence can be out of scope.
Basically, just a bunch of sentences on random topics that have nothing to do with your bot's domain. This makes it more likely that new out-of-scope sentences entered by users will be classified as the out_of_scope intent rather than as any of your other intents.
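For illustration, in rasa_nlu's markdown training-data format such an intent could look like this (the example sentences are made up for a restaurant bot):

```md
## intent:out_of_scope
- I would like to book a flight
- what's the weather like tomorrow
- tell me a joke
- who won the game last night
```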
@akelad thanks, but I have created my own fallback like this:
```python
import numpy as np

from rasa_core import utils
from rasa_core.policies.ensemble import PolicyEnsemble


class SimplePolicyEnsemble(PolicyEnsemble):
    def __init__(self, policies, known_slot_events=None):
        super(SimplePolicyEnsemble, self).__init__(policies, known_slot_events)

    def probabilities_using_best_policy(self, tracker, domain):
        result = None
        max_confidence = -1
        for p in self.policies:
            probabilities = p.predict_action_probabilities(tracker, domain)
            confidence = np.max(probabilities)
            if confidence > max_confidence:
                max_confidence = confidence
                result = probabilities
        nlu_confidence = tracker.latest_message.parse_data["intent"]["confidence"]
        if nlu_confidence < 0.3:
            fallback_idx = domain.index_for_action("action_fallback")
            return utils.one_hot(fallback_idx, domain.num_actions)
        elif result is not None and np.max(result) < 0.3:
            # 0.3 is the probability cutoff for rasa core in this case,
            # so if the trained policy returned the most likely action with
            # a probability of less than 0.3, we will run the fallback action
            # instead
            fallback_idx = domain.index_for_action("action_fallback")
            return utils.one_hot(fallback_idx, domain.num_actions)
        return result
```
and created a custom action:

```python
from rasa_core.actions.action import Action
from rasa_core.events import AllSlotsReset


class ActionFallback(Action):
    def name(self):
        return "action_fallback"

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Sorry, didn't get that. Try again.")
        return [AllSlotsReset()]
```
This is working perfectly for now, and I hope we will get official support for this in the future.
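The two threshold checks in the ensemble above reduce to a pure decision function, which can be sketched and tested in isolation (the function name and defaults here are illustrative, not part of rasa_core):

```python
def should_fall_back(nlu_confidence, core_confidence,
                     nlu_threshold=0.3, core_threshold=0.3):
    """Fall back when either NLU's intent confidence or Core's best
    action probability is below its threshold."""
    return (nlu_confidence < nlu_threshold
            or core_confidence < core_threshold)
```

Keeping the decision separate from the ensemble loop makes it easy to tune the two thresholds independently.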
@Harish0596 This looks great! Where did you put this custom policy, i.e. in which file?
@Harish0596 yeah, that looks good; I thought you wanted a different alternative. Are you sure you want to return AllSlotsReset(), though, and not just UserUtteranceReverted()?
This functionality should be part of the next release: https://github.com/RasaHQ/rasa_core/pull/390. Could you please try that branch out and see if it serves your purpose?
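Once that built-in FallbackPolicy landed, it could be configured declaratively instead of hand-rolling an ensemble. A sketch, assuming the config-file style of later rasa_core releases (threshold values are illustrative):

```yaml
policies:
  - name: FallbackPolicy
    nlu_threshold: 0.3
    core_threshold: 0.3
    fallback_action_name: "action_default_fallback"
```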
@akelad yeah, I don't want UserUtteranceReverted() because it removes the latest message, which results in an error with the key 'confidence'. The reason for using AllSlotsReset() is that some entities unrelated to my context are being extracted unnecessarily, which is affecting the flow.
@amn41 I have already checked that, and for some reason it didn't work. One of the reasons is that it should be self.core_threshold instead of core_threshold, which I noticed and changed, but it still didn't work. Anyhow, I will try once again. Moreover, I got the idea of implementing this in SimplePolicyEnsemble and controlling it with a custom action after looking at that fallback.py code.
@Harish0596 AllSlotReset is imported from where ? I cant reference it properly. For now, I use UserUtternaceReverted()
@geojolly12 from: from rasa_core.events import AllSlotsReset
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import logging

import numpy as np

from rasa_core import utils
from rasa_core.policies.keras_policy import KerasPolicy

logger = logging.getLogger(__name__)


class RestaurantPolicy(KerasPolicy):
    def probabilities_using_best_policy(self, tracker, domain):
        result = None
        max_confidence = -1
        print('result value')
        print(np.max(result))
        for p in self.policies:
            probabilities = p.predict_action_probabilities(tracker, domain)
            confidence = np.max(probabilities)
            if confidence > max_confidence:
                max_confidence = confidence
                result = probabilities
        if (tracker.latest_message.intent["name"] == "out_of_scope"
                or (result is not None and np.max(result) < 0.7)):
            print(np.max(result))
            # 0.7 is the probability cutoff for rasa core in this case,
            # so if the trained policy returned the most likely action with
            # a probability of less than 0.7, we will run the fallback action
            # instead
            fallback_idx = domain.index_for_action("fallback")
            print('result value')
            print(result)
            return utils.one_hot(fallback_idx, domain.num_actions)
        return result

    def model_architecture(self, num_features, num_actions, max_history_len):
        """Build a Keras model and return a compiled model."""
        from keras.layers import LSTM, Activation, Masking, Dense
        from keras.models import Sequential

        n_hidden = 32  # size of hidden layer in LSTM

        # Build Model
        batch_shape = (None, max_history_len, num_features)
        model = Sequential()
        model.add(Masking(-1, batch_input_shape=batch_shape))
        model.add(LSTM(n_hidden, batch_input_shape=batch_shape))
        model.add(Dense(input_dim=n_hidden, output_dim=num_actions))
        model.add(Activation('softmax'))
        model.compile(loss='categorical_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])
        logger.debug(model.summary())
        return model
```

I just put this in the Keras policy, and the fallback action in bot.py:

```python
from rasa_core.actions.action import Action


class ActionFallback(Action):
    def name(self):
        return 'fallback'

    def run(self, dispatcher, tracker, domain):
        from rasa_core.events import UserUtteranceReverted
        dispatcher.utter_message("Sorry, didn't get that. Try again.")
        return [UserUtteranceReverted()]
```

But it's not working. What am I doing wrong? @Walter-Ullon
@amn41 is the fallback action action_listen? I mean, which one are you using in fallback.py?
@Harish0596, as a newbie still learning: where should I put those code snippets?
What would be the best approach to create a dynamic form? The questions come from an API.
My use case is something like a "psychologist bot" with dynamic questions for interviews (provided by my backend API via an Action).
This means that during a patient interview, the bot will receive answers to dynamic questions from the database (I don't know the questions beforehand; a web app handles these inputs).
My strategy at this point is to use a slot to store the conversation state, something like is_waiting_an_answer (boolean). If this is true (it will be set after sending the question to the user), then a custom PolicyEnsemble (an extension of SimplePolicyEnsemble) will ignore all predictions from the other policies and store the answer in the database (via an action).
I tried to create a PolicyEnsemble extension as suggested by @tmbo in this issue; however, I am getting AttributeError: 'MyPolicyEnsemble' object has no attribute 'featurizer'.
I couldn't find an example of how to extend PolicyEnsemble.
Two questions:
1) Regarding the approach: is this the best way to deal with a dynamic form? Or should I use a Policy extension instead (maybe returning 100% confidence while waiting for answers)? Suggestions?
2) Is there any example of a PolicyEnsemble extension?
I really appreciate your support, guys.
MyPolicyEnsemble:

```python
class MyPolicyEnsemble(SimplePolicyEnsemble):
    def probabilities_using_best_policy(self, tracker, domain):
        # My logic based on the is_waiting_an_answer slot
        ...
```

My agent:

```python
agent = Agent(domain_file,
              policies=[MemoizationPolicy(max_history=3),
                        KerasPolicy(max_history=3, epochs=3, batch_size=50),
                        MyPolicyEnsemble(None)])
```

My environment:

```
rasa-core==0.12.3
rasa-core-sdk==0.12.1
rasa-nlu==0.13.8
```
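To sanity-check the slot-gating idea independently of rasa_core, the core of such an ensemble override can be sketched with minimal stand-in objects (FakeTracker, FakeDomain, and the slot/action names here are all hypothetical, not rasa_core APIs):

```python
import numpy as np


class FakeTracker:
    """Minimal stand-in for rasa_core's DialogueStateTracker."""
    def __init__(self, slots):
        self._slots = slots

    def get_slot(self, name):
        return self._slots.get(name)


class FakeDomain:
    """Minimal stand-in for rasa_core's Domain."""
    def __init__(self, action_names):
        self.action_names = action_names
        self.num_actions = len(action_names)

    def index_for_action(self, name):
        return self.action_names.index(name)


def gated_probabilities(tracker, domain, policy_result):
    """If the bot is waiting for a free-form answer, force the action
    that stores it; otherwise keep the normal policies' prediction."""
    if tracker.get_slot("is_waiting_an_answer"):
        probs = np.zeros(domain.num_actions)
        probs[domain.index_for_action("action_store_answer")] = 1.0
        return probs
    return policy_result
```

Inside a real MyPolicyEnsemble.probabilities_using_best_policy, policy_result would be the best prediction collected from the wrapped policies.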
We have handled this using the fallback mechanism, but I have a slightly different issue.
My intent is: "I want to book a table for tomorrow evening's dinner."
The model is trained and worked absolutely like a gem.
But what if the user says: "I don't want to book a table for tomorrow evening's dinner"?
Except for the word "don't", everything matches, so the confidence is high and the above intent is picked. How should one handle such cases?
Quick help would be appreciated.
I have added a custom SimplePolicyEnsemble as @Harish0596 did, but for some reason it's not working. Please help me if anything is missing.
Policy:

```python
import numpy as np

from rasa_core import utils
from rasa_core.policies.ensemble import PolicyEnsemble


class SimplePolicyEnsemble(PolicyEnsemble):
    def __init__(self, policies, known_slot_events=None):
        super(SimplePolicyEnsemble, self).__init__(policies, known_slot_events)

    def probabilities_using_best_policy(self, tracker, domain):
        result = None
        max_confidence = -1
        for p in self.policies:
            probabilities = p.predict_action_probabilities(tracker, domain)
            confidence = np.max(probabilities)
            if confidence > max_confidence:
                max_confidence = confidence
                result = probabilities
        nlu_confidence = tracker.latest_message.parse_data["intent"]["confidence"]
        if nlu_confidence < 0.5:
            fallback_idx = domain.index_for_action("action_fallback")
            return utils.one_hot(fallback_idx, domain.num_actions)
        elif result is not None and np.max(result) < 0.5:
            # 0.5 is the probability cutoff for rasa core in this case,
            # so if the trained policy returned the most likely action with
            # a probability of less than 0.5, we will run the fallback action
            # instead
            fallback_idx = domain.index_for_action("action_fallback")
            return utils.one_hot(fallback_idx, domain.num_actions)
        return result
```
Action:

```python
from rasa_core.events import UserUtteranceReverted


class ActionFallback(Action):
    def name(self):
        return "action_fallback"

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Sorry, didn't get that. Try again.")
        return [UserUtteranceReverted()]
```
config.yml:

```yaml
policies:
  - name: MemoizationPolicy
  - name: TEDPolicy
    max_history: 5
    epochs: 100
  - name: MappingPolicy
  - name: FormPolicy
```