
Commit d99e3c2

Merge branch 'development' into 988638-Mcp-Server-hf
2 parents 03bb618 + 01c13b8 commit d99e3c2

33 files changed: +3,308 −12 lines

ej2-asp-core-mvc/ai-assistview/EJ2_ASP.MVC/speech/speech-to-text.md

Lines changed: 20 additions & 1 deletion
@@ -24,7 +24,18 @@ Before integrating `Speech-to-Text`, ensure the following:
 
 ## Configure Speech-to-Text
 
-To enable Speech-to-Text functionality, modify the `Index.cshtml` file to incorporate the Web Speech API. The [SpeechToText](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/getting-started) control listens for microphone input, transcribes spoken words, and updates the AI AssistView's editable footer with the transcribed text. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView control.
+To enable Speech-to-Text functionality in the ASP.NET MVC AI AssistView control, update the `Index.cshtml` file to incorporate the Web Speech API.
+
+The [SpeechToText](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and displays the transcription in the AssistView's editable footer, which is defined through the [FooterTemplate](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.InteractiveChat.AIAssistView.html#Syncfusion_EJ2_InteractiveChat_AIAssistView_FooterTemplate) property. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView control.
+
+### Configuration Options
+
+* **[`Lang`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:
+  * `en-US` for American English
+  * `fr-FR` for French
+* **[`AllowInterimResults`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
 
 {% tabs %}
 {% highlight razor tabtitle="CSHTML" %}
@@ -37,6 +48,14 @@ To enable Speech-to-Text functionality, modify the `Index.cshtml` file to incorp
 
 ![Integrating Speech-to-Text with AI AssistView](images/aiassist-stt.png)
 
+## Error Handling
+
+The `SpeechToText` control provides events for handling errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#error-handling) section of the documentation.
+
+## Browser Compatibility
+
+The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
+
 ## See Also
 
 * [Text-to-Speech](./text-to-speech)

ej2-asp-core-mvc/ai-assistview/EJ2_ASP.NETCORE/speech/speech-to-text.md

Lines changed: 20 additions & 1 deletion
@@ -24,7 +24,18 @@ Before integrating `Speech-to-Text`, ensure the following:
 
 ## Configure Speech-to-Text
 
-To enable Speech-to-Text functionality, modify the `Index.cshtml` file to incorporate the Web Speech API. The [SpeechToText](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/getting-started) control listens for microphone input, transcribes spoken words, and updates the AI AssistView's editable footer with the transcribed text. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView control.
+To enable Speech-to-Text functionality in the ASP.NET Core AI AssistView control, update the `Index.cshtml` file to incorporate the Web Speech API.
+
+The [SpeechToText](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and displays the transcription in the AssistView's editable footer, which is defined through the [footerTemplate](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.InteractiveChat.AIAssistView.html#Syncfusion_EJ2_InteractiveChat_AIAssistView_FooterTemplate) property. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView control.
+
+### Configuration Options
+
+* **[`lang`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:
+  * `en-US` for American English
+  * `fr-FR` for French
+* **[`allowInterimResults`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
 
 {% tabs %}
 {% highlight razor tabtitle="CSHTML" %}
@@ -37,6 +48,14 @@ To enable Speech-to-Text functionality, modify the `Index.cshtml` file to incorp
 
 ![Integrating Speech-to-Text with AI AssistView](images/aiassist-stt.png)
 
+## Error Handling
+
+The `SpeechToText` control provides events for handling errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#error-handling) section of the documentation.
+
+## Browser Compatibility
+
+The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
+
 ## See Also
 
 * [Text-to-Speech](./text-to-speech)
Lines changed: 51 additions & 0 deletions
---
layout: post
title: Speech-to-Text With ##Platform_Name## Chat UI Control | Syncfusion
description: Check out and learn about the configuration of Speech-to-Text with Azure OpenAI in the ##Platform_Name## Chat UI control of Syncfusion Essential JS 2, and more.
platform: ej2-asp-core-mvc
control: Azure Open AI
publishingplatform: ##Platform_Name##
documentation: ug
---

# Speech-to-Text in ASP.NET MVC Chat UI

The Syncfusion ASP.NET MVC Chat UI control integrates `Speech-to-Text` functionality through the browser's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API). This enables the conversion of spoken words into text using the device's microphone, allowing users to interact with the Chat UI through voice input.

## Configure Speech-to-Text

To enable Speech-to-Text functionality in the ASP.NET MVC Chat UI control, update the `Index.cshtml` file to incorporate the Web Speech API.

The [SpeechToText](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and displays the transcription in the Chat UI's editable footer, which is defined through the [FooterTemplate](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.InteractiveChat.ChatUI.html#Syncfusion_EJ2_InteractiveChat_ChatUI_FooterTemplate) property. Once the transcription appears in the footer, users can send it as a message.

### Configuration Options

* **[`Lang`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:
  * `en-US` for American English
  * `fr-FR` for French
* **[`AllowInterimResults`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
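
As an illustration, these options could be set on a standalone SpeechToText control using the ASP.NET MVC fluent API. This is an unverified sketch, not the page's actual snippet: the method names mirror the `Lang` and `AllowInterimResults` properties linked above, and the `TranscriptChanged` handler name is hypothetical.

```razor
@* Hypothetical configuration: French recognition with live interim results. *@
@Html.EJS().SpeechToText("speechToText")
    .Lang("fr-FR")
    .AllowInterimResults(true)
    .TranscriptChanged("onTranscriptChange")
    .Render()
```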
{% tabs %}
{% highlight razor tabtitle="CSHTML" %}
{% include code-snippet/chat-ui/stt/razor %}
{% endhighlight %}
{% highlight c# tabtitle="SpeechToText.cs" %}
{% include code-snippet/chat-ui/stt/speechtotext.cs %}
{% endhighlight %}
{% endtabs %}

![Integrating Speech-to-Text with Chat UI](images/chatui-stt.png)

## Error Handling

The `SpeechToText` control provides events for handling errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#error-handling) section of the documentation.
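
The error codes ultimately come from the browser's `SpeechRecognitionErrorEvent.error` field. As a minimal, framework-free sketch (the mapping below is illustrative; wiring it into the control's error event is up to the page):

```javascript
// Maps Web Speech API error codes to user-facing messages.
// The codes are defined by SpeechRecognitionErrorEvent.error.
function describeSpeechError(code) {
  const messages = {
    'no-speech': 'No speech was detected. Please try again.',
    'audio-capture': 'No microphone was found or it is unavailable.',
    'not-allowed': 'Microphone permission was denied.',
    'network': 'A network error interrupted speech recognition.'
  };
  return messages[code] || 'Speech recognition failed (' + code + ').';
}

console.log(describeSpeechError('not-allowed')); // Microphone permission was denied.
```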
## Browser Compatibility

The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.

## See Also

* [Messages](./messages)
Lines changed: 51 additions & 0 deletions
---
layout: post
title: Speech-to-Text With ##Platform_Name## Chat UI Control | Syncfusion
description: Check out and learn about the configuration of Speech-to-Text with Azure OpenAI in the ##Platform_Name## Chat UI control of Syncfusion Essential JS 2, and more.
platform: ej2-asp-core-mvc
control: Azure Open AI
publishingplatform: ##Platform_Name##
documentation: ug
---

# Speech-to-Text in ASP.NET Core Chat UI

The Syncfusion ASP.NET Core Chat UI control integrates `Speech-to-Text` functionality through the browser's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API). This enables the conversion of spoken words into text using the device's microphone, allowing users to interact with the Chat UI through voice input.

## Configure Speech-to-Text

To enable Speech-to-Text functionality in the ASP.NET Core Chat UI control, update the `Index.cshtml` file to incorporate the Web Speech API.

The [SpeechToText](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and displays the transcription in the Chat UI's editable footer, which is defined through the [footerTemplate](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.InteractiveChat.ChatUI.html#Syncfusion_EJ2_InteractiveChat_ChatUI_FooterTemplate) property. Once the transcription appears in the footer, users can send it as a message.

### Configuration Options

* **[`lang`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:
  * `en-US` for American English
  * `fr-FR` for French
* **[`allowInterimResults`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
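
For the tag helper syntax, a minimal configuration sketch might look like the following. This is illustrative only, not the page's actual snippet: the attribute names are inferred from the `lang` and `allowInterimResults` properties linked above.

```cshtml
@addTagHelper *, Syncfusion.EJ2
<!-- Hypothetical configuration: French recognition with live interim results. -->
<ejs-speechtotext id="speechToText" lang="fr-FR" allowInterimResults="true"></ejs-speechtotext>
```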
{% tabs %}
{% highlight razor tabtitle="CSHTML" %}
{% include code-snippet/chat-ui/stt/tagHelper %}
{% endhighlight %}
{% highlight c# tabtitle="SpeechToText.cs" %}
{% include code-snippet/chat-ui/stt/speechtotext.cs %}
{% endhighlight %}
{% endtabs %}

![Integrating Speech-to-Text with Chat UI](images/chatui-stt.png)

## Error Handling

The `SpeechToText` control provides events for handling errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#error-handling) section of the documentation.

## Browser Compatibility

The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
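
Because support is limited, pages often feature-detect the API before enabling voice input. A minimal sketch (Chromium-based browsers expose the constructor under a `webkit` prefix):

```javascript
// Returns the SpeechRecognition constructor if the browser provides one,
// checking the standard name first and the webkit-prefixed name second.
function getSpeechRecognition(scope) {
  return scope.SpeechRecognition || scope.webkitSpeechRecognition || null;
}

// In a browser, call getSpeechRecognition(window) and fall back to a
// text-only footer when it returns null.
```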
## See Also

* [Messages](./messages)
Lines changed: 144 additions & 0 deletions
@using Syncfusion.EJ2.InteractiveChat;
@using Newtonsoft.Json;

<div class="integration-speechtotext">
    @Html.EJS().ChatUI("chatui").Created("onCreate").FooterTemplate("#footerContent").Messages(ViewBag.ChatMessagesData).User(ViewBag.CurrentUser).Render()
</div>

<script>
    var chatuiObj;
    var chatuiFooter;
    var sendButton;
    var speechToTextObj;

    function onCreate() {
        chatuiObj = ej.base.getComponent(document.getElementById("chatui"), "chat-ui");
        // Initialize the Speech-to-Text control
        speechToTextObj = new ej.inputs.SpeechToText({
            transcriptChanged: onTranscriptChange,
            onStop: onListeningStop,
            created: onCreated,
            cssClass: 'e-flat'
        });
        speechToTextObj.appendTo('#speechToText');
    }

    // Updates the editable footer as speech is transcribed
    function onTranscriptChange(args) {
        document.querySelector('#chatui-footer').innerText = args.transcript;
    }

    // Handles actions when speech listening stops
    function onListeningStop() {
        toggleButtons();
    }

    // Wires up footer events after the Speech-to-Text control is created
    function onCreated() {
        chatuiFooter = document.querySelector('#chatui-footer');
        sendButton = document.querySelector('#chatui-sendButton');
        sendButton.addEventListener('click', sendIconClicked);
        chatuiFooter.addEventListener('input', toggleButtons);

        chatuiFooter.addEventListener('keydown', function (e) {
            if (e.key === 'Enter' && !e.shiftKey) {
                sendIconClicked();
                e.preventDefault();
            }
        });
        toggleButtons();
    }

    // Toggles the visibility of the send and speech-to-text buttons
    function toggleButtons() {
        var hasText = chatuiFooter.innerText.trim() !== '';
        sendButton.classList.toggle('visible', hasText);
        speechToTextObj.element.classList.toggle('visible', !hasText);
        if (!hasText && (chatuiFooter.innerHTML === '<br>' || !chatuiFooter.innerHTML.trim())) {
            chatuiFooter.innerHTML = '';
        }
    }

    // Adds the footer content as a new message when the send button is clicked
    function sendIconClicked() {
        const messageContent = chatuiFooter.innerText;
        if (messageContent.trim()) {
            chatuiObj.addMessage({
                author: @Html.Raw(JsonConvert.SerializeObject(ViewBag.CurrentUser)),
                text: messageContent
            });
            chatuiFooter.innerHTML = '';
            toggleButtons();
        }
    }
</script>

<script id="footerContent" type="text/x-jsrender">
    <div class="e-footer-wrapper">
        <div id="chatui-footer" class="content-editor" contenteditable="true" placeholder="Click to speak or start typing..."></div>
        <div class="option-container">
            <button id="speechToText"></button>
            <button id="chatui-sendButton" class="e-assist-send e-icons" role="button"></button>
        </div>
    </div>
</script>

<style>
    .integration-speechtotext {
        height: 400px;
        width: 450px;
        margin: 0 auto;
    }

    .integration-speechtotext #chatui-sendButton {
        width: 40px;
        height: 40px;
        font-size: 15px;
        border: none;
        background: none;
        cursor: pointer;
    }

    .integration-speechtotext #speechToText.visible,
    .integration-speechtotext #chatui-sendButton.visible {
        display: inline-block;
    }

    .integration-speechtotext #speechToText,
    .integration-speechtotext #chatui-sendButton {
        display: none;
    }

    @@media only screen and (max-width: 750px) {
        .integration-speechtotext {
            width: 100%;
        }
    }

    .integration-speechtotext .e-footer-wrapper {
        display: flex;
        border: 1px solid #c1c1c1;
        margin: 5px 5px 0 5px;
        border-radius: 10px;
        padding: 5px;
    }

    .integration-speechtotext .content-editor {
        width: 100%;
        overflow-y: auto;
        font-size: 14px;
        min-height: 20px;
        max-height: 150px;
        padding: 10px;
    }

    .integration-speechtotext .content-editor[contentEditable='true']:empty:before {
        content: attr(placeholder);
        color: #6b7280;
        font-style: italic;
    }

    .integration-speechtotext .option-container {
        align-self: flex-end;
    }
</style>
Lines changed: 30 additions & 0 deletions
using Syncfusion.EJ2.InteractiveChat;

public ChatUIUser CurrentUser { get; set; }
public List<ChatUIMessage> ChatMessagesData { get; set; } = new List<ChatUIMessage>();
public ChatUIUser CurrentUserModel { get; set; } = new ChatUIUser() { Id = "user1", User = "Albert" };
public ChatUIUser MichaleUserModel { get; set; } = new ChatUIUser() { Id = "user2", User = "Michale Suyama" };

public ActionResult SpeechToText()
{
    CurrentUser = CurrentUserModel;
    ChatMessagesData.Add(new ChatUIMessage()
    {
        Text = "Hi Michale, are we on track for the deadline?",
        Author = CurrentUserModel
    });
    ChatMessagesData.Add(new ChatUIMessage()
    {
        Text = "Yes, the design phase is complete.",
        Author = MichaleUserModel
    });
    ChatMessagesData.Add(new ChatUIMessage()
    {
        Text = "I’ll review it and send feedback by today.",
        Author = CurrentUserModel
    });
    ViewBag.ChatMessagesData = ChatMessagesData;
    ViewBag.CurrentUser = CurrentUser;
    ViewBag.MichaleUser = MichaleUserModel;
    return View();
}
