We found some issues regarding the `rasa` command line interface:
`rasa interactive` currently trains a stacked model and then executes the interactive learning process. Divide the command into `rasa interactive` (same as before) and `rasa interactive core`, which trains only a Core model, with no need to define a pipeline.
If it fails to load an NLU model, it should fall back to a regex interpreter (which is what `rasa interactive core` should do).
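A minimal sketch of that fallback, assuming the interpreter classes in `rasa.core.interpreter` (the exact constructor arguments may differ):

    import logging

    from rasa.core.interpreter import RasaNLUInterpreter, RegexInterpreter

    logger = logging.getLogger(__name__)

    def load_interpreter(nlu_model_path):
        """Load the trained NLU model, or fall back to a RegexInterpreter."""
        try:
            return RasaNLUInterpreter(model_directory=nlu_model_path)
        except Exception as e:
            logger.warning(
                "Could not load an NLU model from '%s' (%s). "
                "Falling back to a regex interpreter.", nlu_model_path, e
            )
            return RegexInterpreter()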
`-c` just used the default_config.yml before. Now, if you don't pass a config file to `rasa train core`:
- if config.yml doesn't exist, you just get an error about the config not existing
- if config.yml exists but doesn't define policies, you get an error about no policies being defined.
We should add a warning and use the default config again.
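A rough sketch of that fallback behaviour (the default-config path below is hypothetical; rasa ships its defaults elsewhere):

    import logging
    import os

    import yaml

    logger = logging.getLogger(__name__)

    DEFAULT_CONFIG_PATH = "default_config.yml"  # hypothetical bundled default

    def get_policy_config(config_path):
        if not os.path.exists(config_path):
            logger.warning(
                "Config file '%s' does not exist. Falling back to the "
                "default policy configuration.", config_path
            )
            return DEFAULT_CONFIG_PATH
        with open(config_path) as f:
            content = yaml.safe_load(f) or {}
        if not content.get("policies"):
            logger.warning(
                "Config file '%s' does not define any policies. Falling "
                "back to the default policy configuration.", config_path
            )
            return DEFAULT_CONFIG_PATH
        return config_path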
`rasa train core --augmentation 0` doesn't work (see https://github.com/RasaHQ/rasa/issues/3274).
`rasa train` "succeeds" in training an NLU model with no NLU data (just a stories.md file), while `rasa train nlu` fails in this case:
2019-04-17 14:00:23 INFO rasa.nlu.training_data.training_data - Training data stats:
- intent examples: 0 (0 distinct intents)
- Found intents:
- entity examples: 0 (0 distinct entities)
- found entities:
2019-04-17 14:00:23 INFO rasa.nlu.model - Starting to train component WhitespaceTokenizer
2019-04-17 14:00:23 INFO rasa.nlu.model - Finished training component.
...
2019-04-17 14:00:23 INFO rasa.nlu.model - Starting to train component EmbeddingIntentClassifier
2019-04-17 14:00:23 ERROR rasa.nlu.classifiers.embedding_intent_classifier - Can not train an intent classifier. Need at least 2 different classes. Skipping training of intent classifier.
2019-04-17 14:00:23 INFO rasa.nlu.model - Finished training component.
2019-04-17 14:00:23 INFO rasa.nlu.model - Successfully saved model into '/var/folders/6z/wy9h1cbd4sx74l82v5zk73b80000gn/T/tmp8he4qwf1/nlu'
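A sketch of a guard `rasa train` could apply before training the NLU part, so it fails like `rasa train nlu` does (the helper is hypothetical; `TrainingData.intent_examples` is the real attribute):

    def validate_nlu_data(training_data):
        """Fail fast instead of 'successfully' saving an empty NLU model."""
        if not training_data.intent_examples:
            raise ValueError(
                "No NLU training examples found. Add NLU data to train a "
                "combined model, or use `rasa train core` to train only Core."
            )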
`rasa test nlu` or `rasa test` get the NLU arguments from `--successes`, `--errors`, etc.
`rasa shell` without any flags returns the yellow prompt:
'None' not found. Using default location 'endpoints.yml' instead.
We probably shouldn't be printing `None` to the terminal.
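A minimal sketch of the fix, assuming the message comes from a path-fallback helper like this hypothetical one:

    import os

    def resolve_path(cli_value, default="endpoints.yml"):
        if cli_value is None:
            # Nothing was passed on the command line; use the default quietly.
            return default
        if not os.path.exists(cli_value):
            print("'{}' not found. Using default location '{}' instead."
                  .format(cli_value, default))
            return default
        return cli_value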
`rasa train core` thinks that NLU data passed to it is story data, and vice versa. Ideally you could still pass the data folder with both NLU and Core data to `rasa train core` and `rasa train nlu`. Each should implement the data type checking that `rasa train` implements and only train on the correct data formats (see the sketch below).
`rasa test core` / `rasa train`: training of the Core model is skipped when no stories are found. However, `rasa run` or `rasa shell` fails afterwards, as an "empty" Core model is contained in the tar.gz file. @tabergma i think in this case it should be `rasa train` that we want to make fail, right? Something like
"No stories found. To train a rasa model, pass story data. To train just an NLU model, use rasa train nlu."
`rasa shell` doesn't take any logging arguments like `--debug`.
@MetcalfeTom this is not my experience, debug is showing fine for me. Is this still the case on master rn? -ella
@erohmensing Yes. For instance:
Bot loaded. Type a message and press enter (use '/stop' to exit):
Your input -> hi
[2019-05-03 10:51:45 +0200] [30784] [INFO] Starting worker [30784]
Hey there! What do you want to talk about?
Buttons:
1: Incoming transfer (/incoming)
2: Outgoing transfer (/outgoing)
Where's all the tracker info, intent confidence, slots, etc.?
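If debug output really is being swallowed somewhere, one thing to check is whether the flag actually reaches the relevant loggers; a sketch (logger names assumed):

    import logging

    def configure_logging(debug=False):
        level = logging.DEBUG if debug else logging.INFO
        logging.basicConfig(level=level)
        for name in ("rasa", "rasa.core", "rasa.nlu"):
            logging.getLogger(name).setLevel(level)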
`rasa.nlu.train --fixed_model_name current` so that it doesn't create millions of models; also, having the model zipped is inconvenient, maybe there should be another config option for this.
`rasa run` should set up a REST channel by default, otherwise it's the same as `rasa shell`.
`rasa run --help`:
Python Logging Options:
loglevel Be verbose. Sets logging level to INFO
loglevel Print lots of debugging statements. Sets logging level
to DEBUG
loglevel Be quiet! Sets logging level to WARNING
Why aren't the levels showing up?
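The bare `loglevel` entries look like argparse printing the `dest` instead of the option strings. A sketch of declaring the flags so `--help` shows them (the `-v`/`-vv`/`--quiet` names are assumed from the intended behaviour):

    import argparse
    import logging

    parser = argparse.ArgumentParser(prog="rasa run")
    log_group = parser.add_argument_group("Python Logging Options")
    log_group.add_argument(
        "-v", "--verbose", dest="loglevel", action="store_const",
        const=logging.INFO, help="Be verbose. Sets logging level to INFO.",
    )
    log_group.add_argument(
        "-vv", "--debug", dest="loglevel", action="store_const",
        const=logging.DEBUG,
        help="Print lots of debugging statements. Sets logging level to DEBUG.",
    )
    log_group.add_argument(
        "--quiet", dest="loglevel", action="store_const",
        const=logging.WARNING, help="Be quiet! Sets logging level to WARNING.",
    )

With the option strings defined this way, `--help` lists the actual flags instead of the shared `loglevel` destination.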
`rasa run`: if the model located in the default directory is an NLU model and not a full stack model, the command line becomes an NLU interpreter. This is fine, but we should probably print a logging statement or something saying that this is the case (a sketch follows the output below).
Toms-MBP-2:starter-pack-rasa-stack tom$ rasa run
Parameter 'endpoints' not set. Using default location 'endpoints.yml' instead.
hi
{
"intent": {
"name": "greet",
"confidence": 0.9877838162617396
},
"entities": [],
"intent_ranking": [
{
"name": "greet",
"confidence": 0.9877838162617396
},
{
"name": "affirm",
"confidence": 0.003935009033688395
},
{
"name": "thanks",
"confidence": 0.0023952154297195855
},
{
"name": "goodbye",
"confidence": 0.0022507302219739508
},
{
"name": "joke",
"confidence": 0.0019255637819250888
},
{
"name": "name",
"confidence": 0.0017096652709531776
}
],
"text": "hi"
}
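A sketch of the suggested notice (how the model contents are inspected is assumed; `rasa.model` has helpers for this, but the exact call may differ):

    import logging

    logger = logging.getLogger(__name__)

    def warn_if_nlu_only(has_core_model, has_nlu_model):
        if has_nlu_model and not has_core_model:
            logger.info(
                "The provided model contains only an NLU model and no Core "
                "model. Starting an NLU-only interpreter: messages will be "
                "parsed, but no dialogue will run."
            )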
"Calling `rasa.core.train` directly is no longer supported. Please use `rasa train core` instead."
Then they train/run their models separately with `rasa train core` and `rasa train nlu` because they were told to, and then run into issues when e.g. `rasa run core` doesn't pick up the NLU model. Do you think we should add something to these messages, like below?
"Calling `rasa.core.train` directly is no longer supported. Please use `rasa train` to train a combined Core and NLU model or `rasa train core` to train a Core model."
then similarly something like
"Calling `rasa.core.run` directly is no longer supported. Please use `rasa run` to run a combined Core and NLU model (trained with `rasa train`) or `rasa train core` to run a Core model with no NLU."
Enhancement (moved to separate issue): detect config changes correctly.
I think the improved change detection is something we should tackle separately.
Toms-MBP-2:cxai-nlp-rasa tom$ rasa test nlu --mode crossvalidation
'crossvalidation' not found. Using default location 'models' instead.
`rasa test nlu`: you get an error (here when no model is passed); we should probably print something useful:
Toms-MBP-2:cxai-nlp-rasa tom$ rasa test nlu -u data/training_data.md --report
Parameter 'model' not set. Using default location 'models' instead.
Traceback (most recent call last):
File "/Users/tom/.pyenv/versions/3.6.8/bin/rasa", line 11, in <module>
load_entry_point('rasa', 'console_scripts', 'rasa')()
File "/Users/tom/Documents/RASA/rasa/rasa/__main__.py", line 64, in main
cmdline_arguments.func(cmdline_arguments)
File "/Users/tom/Documents/RASA/rasa/rasa/cli/test.py", line 205, in test_nlu
test_nlu(model_path, nlu_data, vars(args))
File "/Users/tom/Documents/RASA/rasa/rasa/test.py", line 91, in test_nlu
nlu_model = os.path.join(unpacked_model, "nlu")
File "/Users/tom/.pyenv/versions/3.6.8/lib/python3.6/posixpath.py", line 80, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
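A sketch of a friendlier guard, based on the traceback above (the surrounding function layout in rasa/test.py is assumed):

    import sys

    from rasa.model import get_model

    def test_nlu(model_path, nlu_data, kwargs):
        unpacked_model = get_model(model_path)  # None when nothing is found
        if unpacked_model is None:
            print(
                "No model found at '{}'. Train one with `rasa train nlu` or "
                "point to an existing model with '--model'.".format(model_path)
            )
            sys.exit(1)
        # ... continue with the existing NLU evaluation ...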
`rasa train core ... --debug` duplicates tensorflow warnings:
WARNING:tensorflow:From /Users/ghost/Documents/rasa/rasa/rasa/core/policies/embedding_policy.py:451: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
2019-05-08 11:12:18 WARNING tensorflow - From /Users/ghost/Documents/rasa/rasa/rasa/core/policies/embedding_policy.py:451: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
it didn't do it before
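The duplication looks like tensorflow emitting records both through its own handler and through the root handler once `--debug` configures logging; a sketch of the usual fix:

    import logging

    def dedupe_tensorflow_logging():
        tf_logger = logging.getLogger("tensorflow")
        # Keep tensorflow's records from also reaching the root handler.
        tf_logger.propagate = False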
rasa test core --stories data/core/test --core models/dialogue -o results/
Parameter 'endpoints' not set. Using default location 'endpoints.yml' instead.
The path 'config.yml' does not exist. Please make sure to use the default location ('config.yml') or specify it with '--config'.
`rasa run` vs `rasa shell` issue, addressed below.
`rasa run` and `rasa shell`:
- `rasa run` should always start a server. It does not matter what kind of model is provided; the server can run with all kinds of models.
- `rasa shell` runs the bot on the command line if the model includes a Core model. If just an NLU model is given, an NLU interpreter is started.
- `rasa shell nlu` always starts an NLU interpreter, as long as an NLU model is included in the provided model.
`rasa data nlu split`: split data is automatically in JSON, regardless of input data format.
`rasa train nlu -c nlu_tensorflow.yml --nlu data/bert/train.md -o models --debug`:
usage: rasa [-h] [--version]
{init,run,shell,train,interactive,test,show,data,x} ...
rasa: error: unrecognized arguments: --debug
make: *** [train-nlu] Error 2
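The `unrecognized arguments: --debug` error suggests the logging flags aren't attached to the `train nlu` subparser, only to the root parser. A sketch of sharing them across parsers (the parser layout is assumed):

    import argparse

    def add_logging_options(parser):
        parser.add_argument("--debug", action="store_true",
                            help="Enable debug logging.")

    root = argparse.ArgumentParser(prog="rasa")
    subparsers = root.add_subparsers(dest="command")
    train = subparsers.add_parser("train")
    train_subparsers = train.add_subparsers(dest="subcommand")
    train_nlu = train_subparsers.add_parser("nlu")
    for p in (root, train, train_nlu):
        add_logging_options(p)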
`--report` flag for `rasa test nlu` doesn't work again.