In PySpark Structured Streaming, how can I discard already generated output before writing to Kafka?


I am trying to do Structured Streaming (Spark 2.4.0) on Kafka source data, where I read the latest data and perform aggregations over a 10-minute window. I am using "update" output mode while writing the aggregated results to a Kafka sink. How can I discard the output that was already generated in earlier triggers, so that only new results are written to Kafka?
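
Roughly, the pipeline looks like this (the topic names, schema, and checkpoint path below are placeholders, not my real values):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window, count
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("WindowedAggToKafka").getOrCreate()

# Placeholder schema for the incoming events.
schema = StructType([
    StructField("event_time", TimestampType()),
    StructField("key", StringType()),
])

# Read the latest data from the Kafka source topic.
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "input-topic")                   # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Aggregate counts per key over a 10-minute event-time window.
agg = (events
    .withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "10 minutes"), col("key"))
    .agg(count("*").alias("cnt")))

# In "update" mode, each trigger emits only the windows whose results
# changed since the last trigger -- so the same window can be written
# to Kafka repeatedly as late data arrives. This repeated output is
# what I want to discard.
query = (agg
    .selectExpr("CAST(key AS STRING) AS key",
                "to_json(struct(window, cnt)) AS value")
    .writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("topic", "output-topic")                      # placeholder topic
    .option("checkpointLocation", "/tmp/checkpoints/agg") # placeholder path
    .outputMode("update")
    .start())

query.awaitTermination()
```

With this setup, every trigger re-emits any window whose aggregate changed, so downstream consumers of the output topic see multiple versions of the same window.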
