I am writing a storage writer for Spark Structured Streaming that will partition the given DataFrame and write it to a different blob store account. The Spark documentation sa
When you use foreachBatch, Spark guarantees only that foreachBatch is called with each micro-batch (and its batch ID) at least once: if an exception is thrown while foreachBatch is executing, Spark will call it again for the same batch. In that case you can end up with duplicates if you store to multiple storages and the failure happens after some of the writes have succeeded. So you have to handle failures during the writes yourself to avoid duplication.
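A minimal sketch of that manual handling, using the batchId that foreachBatch passes in to make the body safe to re-run. The source, the paths, and the `committed` bookkeeping are assumptions standing in for whatever durable mechanism you choose, not Spark APIs:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder.appName("multi-store-sketch").getOrCreate()

// Placeholder streaming source; substitute your real input.
val df = spark.readStream.format("rate").load()

// In-memory stand-in for a durable record of finished batch ids. In practice
// this must survive restarts (e.g. a marker blob per batch), because retries
// can also happen after the driver is restarted from the checkpoint.
val committed = scala.collection.mutable.Set.empty[Long]

val writeBothStores: (DataFrame, Long) => Unit = (batch, batchId) => {
  // Spark may re-invoke this function with the same batchId after a failure,
  // so the whole body has to be safe to run twice.
  if (!committed.contains(batchId)) {
    batch.persist() // avoid recomputing the batch for the second write
    // batchId-scoped paths + overwrite keep each write idempotent even if a
    // retry interrupts it halfway. Both URIs are placeholders.
    batch.write.mode("overwrite")
      .parquet(s"abfss://data@account1.dfs.core.windows.net/out/batch=$batchId")
    batch.write.mode("overwrite")
      .parquet(s"abfss://data@account2.dfs.core.windows.net/out/batch=$batchId")
    batch.unpersist()
    committed += batchId // record success only after both writes finished
  }
}

df.writeStream
  .option("checkpointLocation", "/checkpoints/multi-store") // assumed path
  .foreachBatch(writeBothStores)
  .start()
```

Because each batch lands under its own `batch=$batchId` directory in overwrite mode, even a retry that interrupted the previous attempt halfway simply rewrites the same data rather than appending duplicates.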
In my practice, when I need to store to multiple storages, I create a custom sink and use the DataSource API v2, which supports commits.
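For illustration, here is roughly what such a sink can look like against Spark's internal `Sink` contract (`org.apache.spark.sql.execution.streaming.Sink`; internal API, so subject to change). The `BatchCommitLog` abstraction and the writer functions are assumptions, and wiring the sink into a query additionally requires registering a `StreamSinkProvider`, omitted here:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.streaming.Sink

// Hypothetical commit-log abstraction; back it with something durable
// (a marker file per batch, a database row, etc.).
trait BatchCommitLog {
  def isCommitted(batchId: Long): Boolean
  def commit(batchId: Long): Unit
}

class MultiStoreSink(commitLog: BatchCommitLog,
                     writeToStoreA: DataFrame => Unit, // first blob account
                     writeToStoreB: DataFrame => Unit) // second blob account
    extends Sink {

  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    // On recovery Spark can replay a batch it already handed to the sink,
    // so bail out early if this batchId was fully committed before.
    if (commitLog.isCommitted(batchId)) return
    writeToStoreA(data)
    writeToStoreB(data)
    commitLog.commit(batchId) // mark success only after both stores succeed
  }
}
```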
I assume that the question is about Micro-Batch Stream Processing (not Continuous Stream Processing).
Exactly-once semantics are guaranteed based on the available and committed offsets internal registries (for the current stream execution, aka `runId`) as well as regular checkpoints (to persist processing state across restarts).
Exactly-once semantics are only possible if the source is re-playable and the sink is idempotent.
It is possible that whatever has already been processed but not recorded properly internally (see below) gets re-processed:

- That means that all streaming sources in a streaming query should be re-playable, to allow polling again for data that has once been requested.
- That also means that the sink should be idempotent, since data that was processed successfully and added to the sink may be added again if a failure happens just before Structured Streaming manages to record the offsets as successfully processed (in the checkpoint). A concrete pairing is sketched below.
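As a concrete pairing that satisfies both conditions: Kafka is re-playable because the recorded offsets let Spark re-read the same range after a failure, and the built-in file sink is idempotent because it tracks completed files in its `_spark_metadata` log. Broker, topic, and paths below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("replayable-to-idempotent").getOrCreate()

// Re-playable source: offsets let Spark poll again for data it already requested.
val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // placeholder
  .option("subscribe", "events")                    // placeholder topic
  .load()

// Idempotent sink: the file sink skips files already recorded in _spark_metadata.
events.writeStream
  .format("parquet")
  .option("path", "/warehouse/events")              // placeholder
  .option("checkpointLocation", "/checkpoints/events")
  .start()
```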
Before the available data (by offset) of any of the streaming sources or readers is processed, `MicroBatchExecution` commits the offsets to the Write-Ahead Log (WAL) and prints out the following INFO message to the logs:

`Committed offsets for batch [currentBatchId]. Metadata [offsetSeqMetadata]`
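For orientation, the checkpoint location typically looks like the sketch below: `offsets/` is the WAL written before a batch runs, while `commits/` (discussed further down) is written only after a batch completes. Exact contents vary with the query and Spark version:

```
checkpoint/
  metadata      <- query id
  offsets/      <- WAL: one file per batch, written BEFORE processing
    0
    1
  commits/      <- one file per batch, written AFTER it completes
    0
    1
  sources/      <- source-specific metadata (for some sources)
  state/        <- stateful-operator state (if any)
```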
A micro-batch is executed only when there is new data available (based on offsets) or when the last execution requires another micro-batch for state management.
In the addBatch phase, `MicroBatchExecution` requests the one and only `Sink` or `StreamWriteSupport` to process the available data.
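For reference, the `Sink` side of that contract boils down to a single method (paraphrased from Spark's internal sources, so subject to change between versions):

```scala
import org.apache.spark.sql.DataFrame

// MicroBatchExecution hands the sink the batch's data together with a unique
// batchId; that id is what makes idempotent sinks implementable, because the
// sink can detect a replayed batch and skip the duplicate delivery.
trait Sink {
  def addBatch(batchId: Long, data: DataFrame): Unit
}
```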
Once a micro-batch finishes successfully, `MicroBatchExecution` commits the available offsets to the commits checkpoint, and from then on the offsets are considered processed.
`MicroBatchExecution` prints out the following DEBUG message to the logs:

`Completed batch [currentBatchId]`
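To actually see these INFO/DEBUG messages, raise the log level for the class in your logging config; a sketch assuming the log4j 1.x properties format that older Spark versions ship with (newer versions use log4j2 syntax):

```
log4j.logger.org.apache.spark.sql.execution.streaming.MicroBatchExecution=DEBUG
```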