Today, we mix typing and speech into the send box.
There is no clear indication to the end user of whether the microphone is recording, especially without a waveform animation.
Please look at #1839 when working on this.
Today, we mix the input via keyboard and speech into a single input box.
The send box in Cortana and Siri is either for keyboard or for speech, but not hybrid.
Tomorrow, we could use the microphone button to switch between two or more types of input, and the end user should have a very clear indication of which type of input they are using.

[Enhancement]
Punting this to R9.
Thanks @compulim - I think we will need to do some testing on how best to do this, given the number of different settings in which Web Chat is used.
This issue is blocking my development so if you need a use case I can provide one.
Please consider this a +1 for this feature :)
Following on from the comment in https://github.com/microsoft/BotFramework-WebChat/issues/2474#issuecomment-543299098: would adding the speaking flag be done in the same way as the Redux store example at https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/04.api/j.redux-actions?
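For what it's worth, here is a minimal sketch of how that could look following the middleware pattern from the `04.api/j.redux-actions` sample. The action type `'WEB_CHAT/SET_DICTATE_STATE'` and its payload shape are assumptions on my part; please verify them against the Web Chat source before relying on them.

```javascript
// Sketch: a Web Chat store middleware that derives a "speaking" flag.
// ASSUMPTION: 'WEB_CHAT/SET_DICTATE_STATE' with payload { dictateState }
// (non-zero while listening) is dispatched when dictation starts/stops.
function createSpeakingFlagMiddleware(onSpeakingChange) {
  return () => next => action => {
    if (action.type === 'WEB_CHAT/SET_DICTATE_STATE') {
      // Report the derived flag to the host page.
      onSpeakingChange(!!action.payload.dictateState);
    }

    // Always pass the action through unchanged.
    return next(action);
  };
}
```

In the browser it would be wired up the same way as the sample, e.g. `window.WebChat.createStore({}, createSpeakingFlagMiddleware(speaking => console.log('speaking:', speaking)))`.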
Dear Team,
Any update on when "Mic Button Does Not Get Changed to Send Button when Started Typing #3110" is expected to be released?
Thanks in Advance.
Regards
Akshat Agarwal
Adding @Kaiqb to assignees for tracking purposes.
[Edit] unintentionally closed, sorry!
UX recommendations:
As discussed during our sync, this modality switching calls for a supporting design pattern. The hybrid interruption fix is very hard to achieve, so let's first go for the chat-versus-speech fix.
(1) 1 modality
Only a "chat" or a "speech" modality; the bot is accessible via one modality only:
_"Chat"_
User types = (no > icon needed)
User shares/uploads = action button (clip or + icon) should be separate from the modality, typically positioned left of the input field.
_"Speech"_
User talks = (microphone icon)
(>1) Switch between modalities, sending content to the bot via only one modality at a time:
Hypothesis: when an agent supports both modalities, whenever the user starts typing in the "chat field" they can send their message by pressing Enter. If they want to "speak" to the agent, speech can be accessed by clicking the speech icon.
The proposed pattern is to show the speech icon by default (on the right) and the upload icon (on the left), with supporting text "Type a message" in the entry field, so whenever the user starts typing they can just press Enter to send their message; the supporting text changes to "Listening" while the user is recording.
Flow =
Whenever a user starts typing and decides halfway through that they actually want to speak their content, they need to erase the typed text for the mic to become clickable, then kick off the voice conversation by clicking that mic icon. They can follow up by speaking again and clicking the microphone, or start typing again in the "chat" field.

(See design link for exploration & design pattern)
(>1) Moving halfway from chat to speech is out of scope for now.
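To make the proposed flow concrete, here is a small sketch of the send-box state logic described above. All names here are illustrative, not Web Chat APIs; the only rules encoded are the ones from the flow: typing puts the box in chat mode, the mic is clickable only when the text is erased, and the supporting text reads "Listening" while recording.

```javascript
// Sketch of the proposed send-box flow (names are hypothetical).
function createSendBox() {
  const state = { text: '', mode: 'chat', listening: false };

  return {
    type(text) {
      // Typing always switches to chat mode and stops any recording.
      state.text = text;
      state.mode = 'chat';
      state.listening = false;
    },
    micEnabled() {
      // Per the flow above, the mic is clickable only once text is erased.
      return state.text === '';
    },
    clickMic() {
      if (!this.micEnabled()) {
        return false;
      }

      // Toggle voice input in speech mode.
      state.mode = 'speech';
      state.listening = !state.listening;

      return true;
    },
    placeholder() {
      return state.listening ? 'Listening' : 'Type a message';
    }
  };
}
```

Usage: typing "hello" disables the mic; erasing the text re-enables it, and clicking the mic then flips the placeholder to "Listening".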
Looking forward to this, is there any update?
We're still in the design phase for this, but the team is also excited for this feature! We are hoping to get to coding phase soon!
Any update? I noticed the front-burner label got removed. We would like to implement this on our bot.