Question
I just have a couple of questions regarding the use of Azure Functions with an Event Hub in an IoT scenario.
- Event Hubs have partitions. Typically, messages from a specific device go to the same partition. How are the instances of an Azure Function distributed across Event Hub partitions? Is it based on performance? If one instance of an Azure Function manages to process events from all partitions, is that one instance enough, or might one end up with one instance of the Azure Function per Event Hub partition?
- What about the read offset? Does this binding somehow record where it stopped reading the event stream? I thought functions are meant to be stateless, yet here we have some state.
Thanks
Answer 1:
Function Apps are based on the WebJobs SDK, which uses EventProcessorHost to consume events from Event Hubs. So you can look up information about EventProcessorHost and it will be applicable to your Function App. In particular, you can find the WebJobs SDK's implementation of IEventProcessor here.
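For orientation, the IEventProcessor contract that EventProcessorHost drives looks roughly like this. This is a sketch based on the Microsoft.Azure.EventHubs.Processor package; the WebJobs SDK ships its own implementation, so you never write one yourself for an Event Hub trigger:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;            // EventData
using Microsoft.Azure.EventHubs.Processor;  // PartitionContext, CloseReason

// One processor instance is created per partition lease the host owns.
public interface IEventProcessor
{
    // Called when the host acquires a lease on a partition.
    Task OpenAsync(PartitionContext context);

    // Called when the lease is lost or the host shuts down.
    Task CloseAsync(PartitionContext context, CloseReason reason);

    // Called with each batch of events read from the partition.
    Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages);

    // Called when the receive pump hits an error on the partition.
    Task ProcessErrorAsync(PartitionContext context, Exception error);
}
```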
To your questions:
Not sure what you mean by "one instance". One listener will be created per partition, but they can all be hosted inside a single App Plan instance if the load is low. At a high level, you should not care much: in the Consumption Plan you pay per execution time, no matter how many servers/processes/threads are running. Of course, you should care whether the auto-scaling works well enough for high load, but that needs to be tested anyway.
Functions are stateless in the sense that you can't save anything in memory between two function executions; you are totally fine saving state in external storage. The Function App will use PartitionContext.CheckpointAsync() to checkpoint the current offset. Azure Storage is used internally; again, you can read more about how this works in the Event Hubs and EventProcessorHost docs, e.g. here.
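As a rough sketch of where that call sits (not the actual WebJobs SDK code), a processor typically checkpoints after it has handled a batch:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

// Illustrative only: the real processor lives inside the WebJobs SDK.
public class SketchProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.CompletedTask;

    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;

    public Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            // Here the SDK would invoke your function with the event(s).
        }

        // Persist the offset of the last handled event for this partition.
        // For a Function App the checkpoint is written to a blob in the
        // storage account configured by AzureWebJobsStorage.
        await context.CheckpointAsync();
    }
}
```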
Answer 2:
Each instance of an Event Hub-triggered Function is backed by only 1 EventProcessorHost (EPH) instance. Event Hub ensures that only 1 EPH can hold a lease on a given partition.
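Outside of Functions, hosting an EPH yourself makes the lease mechanics visible: the storage container passed to the host holds one lease blob per partition, which is what keeps two EPHs from reading the same partition at once. A minimal sketch with placeholder names, reusing the SketchProcessor type from the earlier sketch:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs.Processor;

public static class EphHostSample
{
    public static async Task RunAsync()
    {
        // All names and connection strings below are placeholders.
        var host = new EventProcessorHost(
            "my-event-hub",                   // event hub path
            "$Default",                       // consumer group
            "<event-hub-connection-string>",  // Event Hubs connection string
            "<storage-connection-string>",    // storage account for leases/checkpoints
            "eph-leases");                    // lease container: one lease blob per partition

        // Starts the receive pumps; this host competes for partition leases with
        // any other EPH registered against the same consumer group and container.
        await host.RegisterEventProcessorAsync<SketchProcessor>();

        Console.WriteLine("Receiving. Press Enter to stop.");
        Console.ReadLine();

        await host.UnregisterEventProcessorAsync();
    }
}
```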
Answer to Question 1: Let's elaborate on this with a contrived example. Suppose we begin with the following setup and assumptions for an EventHub:
- 10 partitions.
- 1000 events distributed evenly across all partitions => 100 messages in each partition.
When your Function is first enabled, there is only 1 instance of the Function. Let's call this Function instance Function_0. Function_0 will have 1 EPH that manages to get a lease on all 10 partitions. Let this EPH be called EPH_0, and it will start reading events from partitions 0-9. From this point forward, one of the following will happen:
1. Only 1 Function instance is needed - Function_0 is able to process all 1000 messages before Azure Functions' scaling logic kicks in. Hence, all 1000 messages are processed by Function_0.
2. Add 1 more Function instance - Azure Functions' scaling logic determines that Function_0 seems sluggish, so a new instance Function_1 is created, resulting in EPH_1. Event Hub detects that a new EPH instance is trying to read messages and starts load balancing the partitions across the EPH instances, e.g., partitions 0-4 are assigned to EPH_0 and partitions 5-9 are assigned to EPH_1. If all Function executions succeed without errors, both EPH_0 and EPH_1 checkpoint successfully and all 1000 messages are processed. Once check-pointing succeeds, those 1000 messages will never be retrieved again.
3. Add N more Function instances - Azure Functions' scaling logic determines that Function_0 and Function_1 are still sluggish and repeats workflow 2 for Function_2 through Function_N, where N > 9. Event Hub will load balance the partitions across the Function_0 through Function_9 instances, since with 10 partitions only 10 EPHs can hold leases at any one time.
Unique to Azure Functions' current scaling logic is the fact that N can be greater than the number of partitions. This is done to ensure that there are always EPH instances readily available to quickly acquire a lease on the partition(s). As a customer, you are only charged for the resources used while your Function instances execute; you are not charged for this over-provisioning.
Answer to Question 2: EPH uses a check-pointing mechanism to mark the last known successfully read message. An Event Hub-triggered Function can be set up to process one message or a batch of messages at a time (see the sketch after this list). The option you choose needs to take the following into account:
1. Speed of message processing - Processing messages in batches instead of a single message at a time is one of the factors that will speed up the ability of your Azure Function workflow to keep up with the incoming messages in your Event Hub.
2. Tolerance for duplicates - If check-pointing fails due to errors in your Function code, a timeout, or a lost partition lease, then the next EPH that acquires a lease on that partition will start retrieving messages from the last known checkpoint. Event Hub guarantees at-least-once delivery, not at-most-once delivery, and Azure Functions does not attempt to change that behavior. If avoiding duplicate messages is a priority, you will need to mitigate it in your workflow. Consequently, when check-pointing fails, there are more duplicate messages to deal with if your Function processes messages in batches rather than one at a time.
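For illustration, here is what a batch-receiving trigger looks like in a C# class library function; the hub name and connection setting name are placeholders, and the Microsoft.Azure.WebJobs.Extensions.EventHubs binding is assumed. Binding to EventData[] rather than a single EventData is what enables batch processing, and idempotent handling is the usual mitigation for the at-least-once duplicates described above:

```csharp
using System.Text;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DeviceTelemetryFunction
{
    [FunctionName("DeviceTelemetry")]
    public static void Run(
        // "device-telemetry" and "EventHubConnection" are placeholder names.
        [EventHubTrigger("device-telemetry", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (var eventData in events)
        {
            var payload = Encoding.UTF8.GetString(
                eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);

            // Delivery is at-least-once: make this step idempotent (for example,
            // upsert keyed on device id + sequence number) so that a replay after
            // a failed checkpoint does no harm.
            log.LogInformation("Processed event: {Payload}", payload);
        }

        // If the invocation returns without throwing, the trigger checkpoints
        // past this batch for the partition it came from.
    }
}
```

Receiving EventData[] per invocation trades a larger blast radius on failure (the whole batch may be replayed) for higher throughput, which is exactly the trade-off described in points 1 and 2 above.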
Source: https://stackoverflow.com/questions/42901284/azure-functions-event-hub-trigger-bindings