Question:
I can see that Azure Durable Functions uses a storage account for managing state and instrumentation. When running durable functions in an environment with a high amount of data, the tables and queues will get larger and larger, and probably slower and slower. Do durable functions clean up their logs themselves, or is this a task you need to do yourself?
Answer 1:
After researching this, it appears that it's up to the developer to implement their own handler for this. I read through the GitHub issue that the accepted answer posted, and it looks like the Functions team only implemented the APIs needed, but no automated cleanup.
Here's the example from the official docs: a timer-triggered function that purges all history older than 30 days.
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
    [DurableClient] IDurableOrchestrationClient client,
    [TimerTrigger("0 0 12 * * *")] TimerInfo myTimer)
{
    return client.PurgeInstanceHistoryAsync(
        DateTime.MinValue,
        DateTime.UtcNow.AddDays(-30),
        new List<OrchestrationStatus> { OrchestrationStatus.Completed });
}
The docs also note that purging a large number of records may be... slow. They're not kidding. Before automating the purge, you can catch up on the backlog with a one-off HTTP API call:
DELETE /runtime/webhooks/durabletask/instances
?taskHub={taskHub}
&connection={connectionName}
&code={systemKey}
&createdTimeFrom={timestamp}
&createdTimeTo={timestamp}
&runtimeStatus={runtimeStatus1,runtimeStatus2,...}
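For a one-off catch-up purge you can call that endpoint directly from a small console program. A sketch using HttpClient, where the app name, task hub, connection name, and system key are all placeholders you'd replace with your own values:

```csharp
// Sketch: one-off call to the purge-history HTTP API.
// "myfunctionapp", "MyTaskHub", "Storage", and <systemKey> below are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PurgeHistory
{
    static async Task Main()
    {
        // ISO 8601 timestamps, URL-encoded: everything up to 30 days ago.
        var createdTimeFrom = Uri.EscapeDataString(DateTime.MinValue.ToString("o"));
        var createdTimeTo = Uri.EscapeDataString(DateTime.UtcNow.AddDays(-30).ToString("o"));

        var url = "https://myfunctionapp.azurewebsites.net/runtime/webhooks/durabletask/instances" +
                  "?taskHub=MyTaskHub" +
                  "&connection=Storage" +
                  "&code=<systemKey>" +
                  $"&createdTimeFrom={createdTimeFrom}" +
                  $"&createdTimeTo={createdTimeTo}" +
                  "&runtimeStatus=Completed";

        using var http = new HttpClient();
        var response = await http.DeleteAsync(url);
        Console.WriteLine($"Purge request returned {(int)response.StatusCode}");
    }
}
```

Expect this call to take a while on a large backlog; the purge runs synchronously against the storage account.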
Answer 2:
Orchestration history will be deleted some number of days (e.g. 30 days) after the orchestration completes, fails, or terminates. Once this data is deleted, it will no longer be possible to query the status of the purged instances. The number of days will be configurable at the task hub level and the cleanup will be done automatically by the runtime.
For more details, refer to this github issue.
Source: https://stackoverflow.com/questions/58492776/azure-durable-functions-and-retention-of-data