Hello,
I am facing issues while trying to use Cognitive Services for a QnA Azure chatbot. I get answers if I type a question, but when I try to get an answer using the microphone, the bot crashes immediately after returning the text spoken into the microphone. The behavior is the same in Chrome and Firefox on desktop, as well as Safari on iOS.
These are the two errors I see in the Chrome browser console.
webchat.js:2 Error: The first argument must be a Cognitive Services audio stream.
at new t (webchat.js:2)
at t.default (webchat.js:2)
at webchat.js:2
at Object.useMemo (webchat.js:2)
at useMemo (webchat.js:2)
at e (webchat.js:2)
at Ji (webchat.js:2)
at webchat.js:2
at Ha (webchat.js:2)
at Wa (webchat.js:2)
la @ webchat.js:2
webchat.js:2 WebSocket is already in CLOSING or CLOSED state.
I am using very simple code for this:
(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'eastus',
      subscriptionKey: 'MY_SPEECH_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));
Any help on this is much appreciated.
Just to add to it: if I check the activity logs for my Speech service, I see that my requests are processed successfully, so I think speech-to-text is working fine. It seems to be failing somewhere while sending this text to my QnA chatbot to get an answer, although I might be completely wrong about that.
Hi @shakil-san, I am wondering if the problem you are experiencing is the 4.8.0 change in credentials signature. https://github.com/microsoft/BotFramework-WebChat/pull/2916
Could you tell me what version of Web Chat you are using?
You can check this in the <meta> tags on your Web Chat page.
@corinagum thank you for your reply. Apologies for my ignorance, but I am not sure how to check the Web Chat version. I did go to my chatbot's Channels -> Web Chat but did not see any meta tags there. I created this bot just 3 days ago, if that helps.
Also, to investigate further, I created another simple Echo chatbot and integrated it with my HTML page the same way as the previous one, and that integration works. I am able to use the microphone without any issues or crashing. That suggests it is something to do with my QnA bot.
To determine what version of Web Chat you are running, open your browser's development tools, and paste the following line of code into the console.
[].map.call(document.head.querySelectorAll('meta[name^="botframework-"]'), function (meta) { return meta.outerHTML; }).join('\n')

If you are using Web Chat outside of a browser, please specify your hosting environment. For example, React Native on iOS, Cordova on Android, SharePoint, PowerApps, etc.
On your running Web Chat app, you can also just open developer tools and check the meta tags inside the <head> of the HTML.
I will spin up a QnA bot with Cognitive Services and see if I can reproduce your problem. I will report back here.
@corinagum Thanks for your help
Here is the output from the browser console. It seems to be version 4.8.0.
<meta name="botframework-directlinespeech:version" content="4.8.0">
<meta name="botframework-webchat:bundle:variant" content="full">
<meta name="botframework-webchat:bundle:version" content="4.8.0">
I am able to reproduce this issue with a QnA bot that has DLS set up. We will need to investigate the cause.
@compulim can we get your eyes on this?
Will investigate this.
@shakil-san, when your bot sends an activity back to DLS, is it setting the speak tag on the activity?
In sample 11, the bot sends an activity without first setting the speak tag:
const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user.
if (qnaResults[0]) {
    // Passing in a string does not result in an activity with a speak tag
    await context.sendActivity(qnaResults[0].answer);
}
(link)
When DLS receives an activity without a speak tag, it will not generate an audio stream to send to the client, which is the cause of the error.
Changing the code to the following should address the issue:
const qnaResults = await this.qnaMaker.getAnswers(context);

// If an answer was received from QnA Maker, send the answer back to the user.
if (qnaResults[0]) {
    await context.sendActivity({ text: qnaResults[0].answer, speak: qnaResults[0].answer });
}
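To make the idea concrete, here is a minimal, self-contained sketch of the same fix (plain JavaScript; `toSpokenActivity` is a hypothetical helper name, not part of the SDK or the sample) that wraps a QnA answer in an activity carrying both `text` and `speak`:

```javascript
// Hypothetical helper: wrap a QnA Maker answer in an activity object
// that carries both a display text (shown in the transcript) and a
// speak property (synthesized to audio by Direct Line Speech).
function toSpokenActivity(answer) {
  return {
    type: 'message',
    text: answer,
    speak: answer
  };
}

// Sketch of usage inside the bot's message handler:
// const qnaResults = await this.qnaMaker.getAnswers(context);
// if (qnaResults[0]) {
//   await context.sendActivity(toSpokenActivity(qnaResults[0].answer));
// }
```

This is just one way to structure it; the essential point is that the activity object handed to sendActivity() carries a speak property.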
Please try this and let us know if it works.
@corinagum Thanks for the details; it makes sense. However, I am wondering how I would send the speak tag when I am using a basic version of the chatbot without sending or receiving the audio stream. Please pardon my ignorance here, as I am new to this. Do I need to copy the latest QnABot.js from the SDK, change it as you suggested, put it in the root folder where I am using my bot, and include the file in my HTML file?
Below is my simple code within the HTML page to set up the chatbot:
(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'westus',
      subscriptionKey: 'MY_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));
Sorry, we are not experts at QnA Maker. Let me grab someone from the support team and see if they can help you.
BTW, the error message is fixed in a very recent PR #3059. But without the speak property, Web Chat will just stay silent (instead of erroring) and not synthesize what the bot said to the user.
Thank you @compulim .
@stevkan please let me know how I would send the speak property to the Speech service in the simple HTML bot I have built (as I mentioned in my previous comment). I am not able to find a way to do this online. Thanks in advance.
@stevkan @corinagum @compulim Can you please help, as I am still stuck on this issue.
@stevkan please take a look
@shakil-san, thank you for your patience. I apologize for the delay. I thought I had posted a message advising I was researching this, but apparently not. I have been looking this over while trying slight variations in the code in order to get the speech to initialize correctly. First, in my own testing, the error you are receiving goes away if the initial activity sent from the bot following speech input includes the speak property. Second (and this I need further confirmation of, but see below), it appears speech will only play if the speak property is defined both as .speak on the activity and as the speak argument of .sendActivity(). However, it only speaks the value from .speak.
There may be other bits to change, but you can first try updating your code to the following and let me know the results. As mentioned above, the speak property will be defined in two places: .speak and in the .sendActivity().
const message = { type: ActivityTypes.Message, text: qnaResults[0].answer };
message.speak = qnaResults[0].answer;
await context.sendActivity(message, qnaResults[0].answer);
Thanks for the response @stevkan. Your explanation makes sense. However, what I am not able to understand clearly is: if I am using the QnA chatbot in my simple HTML page with boilerplate code like the below, where would I set the speak property? As I am a bit new to chatbots, I am not able to understand that part.
(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials: {
      region: 'westus',
      subscriptionKey: 'MY_SUBSCRIPTION_KEY'
    }
  });

  // Pass the set of adapters to Web Chat.
  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
})().catch(err => console.error(err));
@stevkan can you provide an update please?
@shakil-san, the code snippet I provided would be placed (or the existing code updated) in the bot's files. The code you referenced above is for the Web Chat client, which is just the interface between your bot and any user.
If you don't already know, you can access your bot's files on Azure. Simply log in, locate the resource group for the QnA bot you created, navigate to the bot registration (i.e. Web App Bot), click the "Build" blade, and from there you can download the bot's source code.
With the bot's source code, you can run it locally and test before redeploying. Once you are confident in your changes, there is a redeploy script that will push them to Azure.
The second attached image is from the Web App online editor, but you should see a file structure similar to this. You can see there are sendActivity() functions used; it is these you would apply any speak properties to. The above code snippet should be used as a reference for how to structure the activity. For example, using the lines starting at 116, the code might be updated to the following. Keep in mind, you may need to make adjustments to meet your needs.
116 var message = QnACardBuilder.GetSuggestionCard(suggestedQuestions, qnaDialogResponseOptions.activeLearningCardTitle, qnaDialogResponseOptions.cardNoMatchText);
117 message.speak = "Here are some suggested questions for you to consider.";
118 await stepContext.context.sendActivity(message, "Here are some suggested questions for you to consider.");
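As a standalone sketch of that pattern (`withSpeak` is a hypothetical helper, not part of the SDK; `QnACardBuilder` comes from the QnA Maker sample), the speak-attachment step can be factored out so any existing activity, such as a suggestion card, gets a speak property without being mutated:

```javascript
// Hypothetical helper: return a copy of an activity with a speak
// property attached, leaving the original activity untouched.
function withSpeak(activity, speakText) {
  return Object.assign({}, activity, { speak: speakText });
}

// Sketch of usage with the suggestion-card flow above:
// var message = QnACardBuilder.GetSuggestionCard(suggestedQuestions, ...);
// await stepContext.context.sendActivity(
//   withSpeak(message, 'Here are some suggested questions for you to consider.')
// );
```

This only illustrates attaching the speak property; as noted above, you may also need to pass the spoken text as the second argument of sendActivity().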
This should be enough to get you set up. I'm going to close this issue as resolved. If, however, you continue to experience issues or errors around this, please feel free to re-open. Any "how to" questions should be posted on Stack Overflow.