Question
I have a C# Azure Functions app (on the App Service plan) that uses HTTP triggers and queue triggers. The application works by installing a script on a client's machine that pulls various files from the client database using SQL queries and moves that output to temporary Azure Blob Storage. After each file is completed, an HTTP trigger is called that creates a queue message; the queue trigger picks up that message and moves the file from temporary blob storage to a permanent spot in blob storage. After the HTTP trigger completes and puts a message in the queue, execution returns to the client script, which begins processing the next SQL query.
My concern is that these queue messages will stack up and the client script will complete with a false success message while the queue trigger is actually still doing work, or potentially failing, especially when multiple clients are being processed in parallel. Is there a way to make sure a queue message was successfully processed before moving on to the next SQL query?
Edit: add code example
I may have 3 clients with the application installed on their machines; each client is set to execute these scripts at 12 AM, and they can run concurrently since they are hosted on the client machines.
Client Scripts
// perform SQL query to extract data from client database
// move extracted data to temporary Storage Blob hosted on the App Service storage account
return await httpClient.PostAsync(uri of the file in temporary blob storage)
This first await posts to the HTTP trigger when the file is ready to be processed.
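The per-file client flow described above could be sketched roughly as follows (in Python for brevity). The `extract`, `upload_to_temp`, and `notify` callables are hypothetical stand-ins for the real SQL query, blob upload, and HTTP POST:

```python
# Illustrative sketch of the client-side loop; the three callables are
# hypothetical stand-ins for the real SQL/Blob/HTTP calls.

def run_extraction(queries, extract, upload_to_temp, notify):
    """For each SQL query: extract data, stage it in temporary blob
    storage, then notify the HTTP trigger before moving on."""
    submitted = []
    for query in queries:
        data = extract(query)            # run the SQL query
        blob_uri = upload_to_temp(data)  # stage the output in temp blob storage
        notify(blob_uri)                 # POST to the HTTP trigger
        submitted.append(blob_uri)
    return submitted
```

Note that `notify` returning only means the HTTP trigger accepted the file, not that the downstream queue work finished, which is exactly the gap the question is about.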
Azure Functions HTTP Trigger
// get storage account credentials
// write message to storage queue "job-submissions"
return new OkResult();
Now we have files from multiple clients in the "job-submissions" queue.
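To make the hand-off concrete, the HTTP trigger's job can be modeled like this (a sketch, with a plain list standing in for the Azure storage queue and hypothetical message fields): it does no file work itself, it only records a job message and returns OK.

```python
import json

# Sketch of the HTTP trigger's role: enqueue a job message and return.
# `queue` is an in-memory stand-in for the "job-submissions" storage queue.

def handle_http_trigger(queue, client_name, temp_blob_uri):
    message = json.dumps({"ClientName": client_name,
                          "TempBlobUri": temp_blob_uri})
    queue.append(message)   # real code: QueueClient.send_message(...)
    return 200              # OkResult: the file is queued, not yet moved
```

The 200 response therefore only confirms enqueueing, which is why the client cannot treat it as proof the file reached its permanent spot.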
Azure Functions Queue Trigger
// pick up message from "job-submissions" queue
// use the Microsoft.Azure.Storage.Blob library to move files
// to a permanent spot in the data lake
// create meta file with info about the file
// meta file contains info for when the extraction started and completed
// delete the temporary file
// job completed and the next queue message can be picked up
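The queue trigger's steps above can be sketched like this (dicts stand in for the temp container and the data lake; the message shape is the hypothetical one from the HTTP-trigger sketch):

```python
import json
import time

# Sketch of the queue-triggered move: copy the staged blob to its
# permanent spot, write a meta file with timing info, then delete
# the temporary blob. `temp_store` and `data_lake` model containers.

def handle_queue_message(message, temp_store, data_lake):
    job = json.loads(message)
    name = job["TempBlobUri"]
    started = time.time()
    data_lake[name] = temp_store[name]   # move to a permanent spot
    data_lake[name + ".meta"] = json.dumps(
        {"started": started, "completed": time.time()})
    del temp_store[name]                 # delete the temporary file
```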
So the issue is that when the HTTP trigger writes a message to the queue, I have no way of knowing that the queue trigger has finished processing the file. Right now this isn't a big issue, because the process happens so quickly: by the time the HTTP trigger has sent a message to the queue, it only takes at most a few seconds for the queue trigger to process the file. The reason I would like to know when the individual jobs have completed is that I have a final step in the client scripts:
Client Scripts
// after all jobs for a client have been submitted by HTTP
// get storage account credentials
// write message to a queue "client-tasks-completed"
// queue message contains client name in the message
// initialVisibilityDelay set to 2 minutes
// this ensures queue has finished processing the files
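The effect of initialVisibilityDelay can be modeled roughly like this (dicts in a list stand in for queue messages; real code would pass the delay to the SDK's add-message call): the message exists immediately but stays invisible to consumers until the delay elapses.

```python
import time

# Rough model of initialVisibilityDelay: a message is enqueued at once
# but a consumer cannot see it until its visibility time has passed.

def enqueue(queue, body, visibility_delay=0):
    queue.append({"body": body, "visible_at": time.time() + visibility_delay})

def dequeue(queue):
    now = time.time()
    for msg in queue:
        if msg["visible_at"] <= now:
            queue.remove(msg)
            return msg["body"]
    return None  # nothing visible yet
```

Note the delay only hides the message for a fixed time; it does not actually verify that the "job-submissions" work has finished, so the 2-minute wait is a heuristic rather than a guarantee.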
Then a separate Python Azure Function listens on that queue to do further processing:
Python QueueTrigger
# pick up message from "client-tasks-completed" queue
if 'client1' == queue_msg['ClientName']:
    # standardize information within the files and write to our Azure SQL database
elif 'client2' == queue_msg['ClientName']:
    # standardize information within the files and write to our Azure SQL database
elif 'client3' == queue_msg['ClientName']:
    # standardize information within the files and write to our Azure SQL database
The Python Azure Function is on the consumption plan with a batchSize set to 1, because the client files can sometimes be large and I don't want to exceed the 1.5 GB memory limit. So I have two issues. The first is: how can I know the first queue trigger completed its work? The second is: how can I ensure that the Python QueueTrigger doesn't start to accumulate messages? I think both issues could potentially be solved by creating separate Azure Functions for both queue triggers that listen on the same queues. That would lighten the load on both sides, but I'm not sure if that is best practice. See my question here, where I asked for more guidance on the second issue: Using multiple Azure Functions QueueTriggers to listen on the same storage queue
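For clarity, batchSize = 1 amounts to the worker loop below (a sketch, not the Functions runtime itself): one message is fetched and fully processed before the next is touched, so at most one client file is in memory at a time.

```python
# Model of batchSize = 1: fetch and completely process a single
# message before fetching the next one.

def drain_queue(queue, process):
    handled = []
    while queue:
        msg = queue.pop(0)   # fetch one message (a batch of 1)
        process(msg)         # finish it completely...
        handled.append(msg)  # ...before touching the next one
    return handled
```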
Answer 1:
Update:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.Threading;

namespace FunctionApp31
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            string a = "111";
            // await the async work so the function does not return until
            // processing has finished (blocking on .Result risks deadlock)
            a = await XX(a);
            return new OkObjectResult(a);
        }

        public static async Task<string> XX(string x)
        {
            await Task.Run(() =>
            {
                Thread.Sleep(3000);
                x = x + "222";
                Console.WriteLine(x);
            });
            return x;
        }
    }
}
Original Answer:
I suggest you execute the processing logic sequentially rather than asynchronously, or wait for the asynchronous operation to complete before returning. That way you can ensure the execution actually succeeded before reporting success, and avoid returning a result while the queue is still processing, as you described in the comment.
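One concrete way to "wait for completion" without keeping the HTTP call open is to have the queue trigger write a small per-job status marker (a blob or table entry), and have the client poll it before moving to the next SQL query. A minimal sketch of that pattern, with a dict standing in for the status store and all names hypothetical:

```python
import time

# Sketch: the queue trigger records per-job status; the client polls
# until the job is "done" or "failed" instead of assuming success.

def mark_done(status_store, job_id, ok=True):
    status_store[job_id] = "done" if ok else "failed"  # queue-trigger side

def wait_for_job(status_store, job_id, timeout=120, poll=0.01):
    deadline = time.time() + timeout                   # client side
    while time.time() < deadline:
        state = status_store.get(job_id)
        if state in ("done", "failed"):
            return state
        time.sleep(poll)
    return "timeout"
```

With this, the client's final "client-tasks-completed" message could be sent only after every job reports "done", instead of relying on a fixed visibility delay.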
I noticed that you asked a new question. I think you can scale out the instances instead of creating multiple function apps (although there is no problem with creating multiple function apps). On a consumption plan, instances will automatically scale out according to the load.
Source: https://stackoverflow.com/questions/64685629/how-to-make-sure-a-queue-message-has-been-successfully-processed-in-azure-functi