I have a date-picker Adaptive Card that I display during an intent call. I can't work out how to get the value the user enters and pass it to my bot's LUIS model, where an intent should be triggered with those values.
I have tried parsing the Adaptive Card JSON, but what I need is the updated JSON containing the user-entered values when the Submit button is clicked.
private Attachment CreateAdaptiveCardAttachment()
{
    // Combine path for cross-platform support
    string[] paths = { ".", "Cards", "AddingLeaveDetails.json" };
    string fullPath = Path.Combine(paths);
    var adaptiveCard = File.ReadAllText(fullPath);
    return new Attachment()
    {
        ContentType = "application/vnd.microsoft.card.adaptive",
        Content = JsonConvert.DeserializeObject(adaptiveCard),
    };
}
private Activity CreateResponse(IActivity activity, Attachment attachment)
{
    var response = ((Activity)activity).CreateReply();
    response.Attachments = new List<Attachment>() { attachment };
    return response;
}
@NikhilBansal, you can skip the "Waterfall Dialog" part of this answer (it's there more for posterity) and head to the "Capture User Input" section. It will likely be helpful to read the "Additional Context" and "Additional Resources" links as well.
Using Adaptive Cards with Waterfall Dialogs
Natively, Adaptive Cards don't work like prompts. A prompt displays and waits for user input before continuing. But with an Adaptive Card (even one that contains an input box and a submit button), there is no code in the card that will cause a Waterfall Dialog to wait for user input before continuing the dialog.
So, if you're using an Adaptive Card that takes user input, you generally want to handle whatever the user submits outside of the context of a Waterfall Dialog.
That being said, if you want to use an Adaptive Card as part of a Waterfall Dialog, there is a workaround. Basically, you:
- Display the Adaptive Card
- Display a Text Prompt
- Convert the user's Adaptive Card input into the input of a Text Prompt
In your Waterfall Dialog class (steps 1 and 2):
private async Task<DialogTurnResult> DisplayCardAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // Display the Adaptive Card
    var cardPath = Path.Combine(".", "AdaptiveCard.json");
    var cardJson = File.ReadAllText(cardPath);
    var cardAttachment = new Attachment()
    {
        ContentType = "application/vnd.microsoft.card.adaptive",
        Content = JsonConvert.DeserializeObject(cardJson),
    };
    var message = MessageFactory.Text("");
    message.Attachments = new List<Attachment>() { cardAttachment };
    await stepContext.Context.SendActivityAsync(message, cancellationToken);

    // Create the text prompt
    var opts = new PromptOptions
    {
        Prompt = new Activity
        {
            Type = ActivityTypes.Message,
            Text = "waiting for user input...", // You can comment this out if you don't want to display any text. It still works.
        }
    };

    // Display a Text Prompt and wait for input
    return await stepContext.PromptAsync(nameof(TextPrompt), opts, cancellationToken);
}
private async Task<DialogTurnResult> HandleResponseAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // Do something with stepContext.Result
    // Adaptive Card submissions are objects, so you likely need to parse them,
    // e.g. JObject.Parse(stepContext.Result.ToString())
    await stepContext.Context.SendActivityAsync($"INPUT: {stepContext.Result}", cancellationToken: cancellationToken);
    return await stepContext.NextAsync(cancellationToken: cancellationToken);
}
Capture User Input
In your main bot class (<your-bot>.cs), in OnTurnAsync(), near the beginning of the method, somewhere before await dialogContext.ContinueDialogAsync(cancellationToken) is called (step 3). This copies the Adaptive Card submission from Activity.Value into Activity.Text, which is what the Text Prompt from step 2 is waiting on:
var activity = turnContext.Activity;
if (string.IsNullOrWhiteSpace(activity.Text) && activity.Value != null)
{
    activity.Text = JsonConvert.SerializeObject(activity.Value);
}
Update: For your code specifically, you need to modify turnContext directly before sending it to your recognizer. Since RecognizeAsync doesn't work with objects, we need to ensure we send the appropriate value. Something like:
protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    // Capture input from the Adaptive Card
    if (string.IsNullOrEmpty(turnContext.Activity.Text) && turnContext.Activity.Value != null)
    {
        // Conditionally convert based on the input ID of the Adaptive Card
        if ((turnContext.Activity.Value as JObject)["<adaptiveCardInputId>"] != null)
        {
            turnContext.Activity.Text = (turnContext.Activity.Value as JObject)["<adaptiveCardInputId>"].ToString();
        }
    }

    // First, we use the dispatch model to determine which cognitive service (LUIS or QnA) to use.
    var recognizerResult = await _botServices.Dispatch.RecognizeAsync(turnContext, cancellationToken);

    // The top intent tells us which cognitive service to use.
    var topIntent = recognizerResult.GetTopScoringIntent();

    // Next, we call the dispatcher with the top intent.
    await DispatchToTopIntentAsync(turnContext, topIntent.intent, recognizerResult, cancellationToken);
}
The reason the code two blocks above didn't work is simply that it wasn't set up for your code. RecognizeAsync looks at turnContext.Activity.Text, which is null for an Adaptive Card (since Adaptive Card inputs come in on Activity.Value). So this new code sets turnContext.Activity.Text from turnContext.Activity.Value. However, the recognizer needs a string and not an object, so be sure to change <adaptiveCardInputId> to whatever ID you have on your Adaptive Card.
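If your card submits more than one field (a leave-details card with a date picker, say), you can build a single utterance for the recognizer out of several fields. A hedged sketch; the leaveType and leaveDate IDs here are hypothetical placeholders, not IDs from your actual card:

// Hypothetical: combine several Adaptive Card fields into one LUIS utterance.
// "leaveType" and "leaveDate" are placeholders for your card's input IDs.
if (string.IsNullOrEmpty(turnContext.Activity.Text) && turnContext.Activity.Value is JObject submission)
{
    var leaveType = submission["leaveType"]?.ToString();
    var leaveDate = submission["leaveDate"]?.ToString();
    if (leaveType != null && leaveDate != null)
    {
        turnContext.Activity.Text = $"apply {leaveType} leave on {leaveDate}";
    }
}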
Additional Context
Adaptive Cards send their Submit results a little differently than regular user text. When a user types in the chat and sends a normal message, it ends up in Context.Activity.Text. When a user fills out an input on an Adaptive Card, it ends up in Context.Activity.Value, which is an object whose key names are the ids in your card and whose values are the field values from the Adaptive Card.
For example, this JSON:
{
  "type": "AdaptiveCard",
  "body": [
    {
      "type": "TextBlock",
      "text": "Test Adaptive Card"
    },
    {
      "type": "ColumnSet",
      "columns": [
        {
          "type": "Column",
          "items": [
            {
              "type": "TextBlock",
              "text": "Text:"
            }
          ],
          "width": 20
        },
        {
          "type": "Column",
          "items": [
            {
              "type": "Input.Text",
              "id": "userText",
              "placeholder": "Enter Some Text"
            }
          ],
          "width": 80
        }
      ]
    }
  ],
  "actions": [
    {
      "type": "Action.Submit",
      "title": "Submit"
    }
  ],
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.0"
}
... creates a card with a "Test Adaptive Card" title, a "Text:" label beside a text input box, and a Submit button.
If a user enters "Testing Testing 123" in the text box and hits Submit, Context.Activity will look something like:
{ type: 'message',
  value: { userText: 'Testing Testing 123' },
  from: { id: 'xxxxxxxx-05d4-478a-9daa-9b18c79bb66b', name: 'User' },
  locale: '',
  channelData: { postback: true },
  channelId: 'emulator',
  conversation: { id: 'xxxxxxxx-182b-11e9-be61-091ac0e3a4ac|livechat' },
  id: 'xxxxxxxx-182b-11e9-ad8e-63b45e3ebfa7',
  localTimestamp: 2019-01-14T18:39:21.000Z,
  recipient: { id: '1', name: 'Bot', role: 'bot' },
  timestamp: 2019-01-14T18:39:21.773Z,
  serviceUrl: 'http://localhost:58453' }
The user submission can be seen in Context.Activity.Value.userText.
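In C#, reading that field back out looks something like the following sketch; it assumes the userText ID from the card above and uses the JObject cast the code earlier in this answer relies on:

// Read the Adaptive Card submission; "userText" matches the Input.Text id above
var value = turnContext.Activity.Value as JObject;
var userText = value?["userText"]?.ToString();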
Note that Adaptive Card submissions are sent as a postBack, which means the submission data doesn't appear in the chat window as part of the conversation; it stays on the Adaptive Card.
Additional Resources
- Blog Post on Using Adaptive Cards
- AdaptiveCardPrompt - this may be added to the SDK someday; in the meantime, you can use it as a development reference
Source: https://stackoverflow.com/questions/57251221/how-to-retrieve-user-entered-input-in-adaptive-card-to-the-c-sharp-code-and-how