Sending request to ASP.Net Core Web API running on a specific node in a Service Fabric Cluster

Asked 2021-01-14 23:44

I am working on a Service Fabric application that runs a bunch of ASP.NET Core Web APIs. Now when I run my application on my local

2 Answers
  • 2021-01-15 00:40

    Your problem arises because you are using your API requests to do a worker's job.

    You should use your API only to schedule the work in a background process/worker and return a token or operation id to the user. The user can then use this token to request the status of the task or to cancel it.

    The first step: When your API is called, generate a GUID (or insert a row in a DB), put a message in a queue (e.g. Service Bus), and then return the GUID to the caller.
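    The first step could be sketched roughly as below. This is a hypothetical example using the Azure.Messaging.ServiceBus client; the controller name, route, `JobRequest` type, and queue wiring are illustrative assumptions, not from the original answer:

    ```csharp
    using System;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
    using Microsoft.AspNetCore.Mvc;

    public record JobRequest(string Payload); // illustrative request body

    [ApiController]
    [Route("api/[controller]")]
    public class JobsController : ControllerBase
    {
        private readonly ServiceBusSender _sender;

        public JobsController(ServiceBusSender sender) => _sender = sender;

        [HttpPost]
        public async Task<IActionResult> StartJob([FromBody] JobRequest request)
        {
            // Generate the tracking id the caller will use to poll or cancel.
            var jobId = Guid.NewGuid();

            var message = new ServiceBusMessage(JsonSerializer.Serialize(request))
            {
                MessageId = jobId.ToString()
            };
            await _sender.SendMessageAsync(message);

            // 202 Accepted: the work is scheduled, not done.
            return Accepted(new { jobId });
        }
    }
    ```

    Returning 202 Accepted (rather than 200 OK) makes it explicit to the caller that the work has only been queued.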

    The second step: A worker process runs in your cluster, listening for messages on this queue and processing them as they arrive. You can make this a single-threaded service that processes one message at a time in a loop, or a multi-threaded service that processes multiple messages, one thread per message. Which one depends on how complex you want the service to be:

    • With a single-threaded listener, to scale your application you have to spawn multiple instances so that multiple tasks run in parallel. You can do that in SF with a simple scale command, and SF will distribute the service instances across your available nodes.

    • With a multi-threaded listener, you have to manage the concurrency yourself for good performance: you might have to consider memory, CPU, disk and so on, otherwise you risk putting too much load on a single node.
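    With the Azure Service Bus SDK, the single- and multi-threaded variants above can both be sketched with a `ServiceBusProcessor`; only `MaxConcurrentCalls` changes. The queue name and connection string are placeholders:

    ```csharp
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    var connectionString = "<service-bus-connection-string>"; // placeholder
    await using var client = new ServiceBusClient(connectionString);

    var processor = client.CreateProcessor("jobs", new ServiceBusProcessorOptions
    {
        // 1 = single-threaded listener; raise it for the multi-threaded variant,
        // keeping an eye on memory/CPU/disk as noted above.
        MaxConcurrentCalls = 1
    });

    processor.ProcessMessageAsync += async args =>
    {
        var jobId = args.Message.MessageId;
        // ... run the actual job here ...
        await args.CompleteMessageAsync(args.Message);
    };
    processor.ProcessErrorAsync += args => Task.CompletedTask; // log in a real service

    await processor.StartProcessingAsync();
    ```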

    The third step, cancellation: the cancellation process is easy, and there are many approaches:

    • Use a similar approach and enqueue a cancellation message.
      • Your service listens for the cancellation on a separate thread and cancels the task if it is running.
      • Using a separate queue for the cancellation messages is better.
      • If you run multiple listener instances, consider a topic instead of a queue.
    • Use a cache key to store the job status, and check on every iteration whether cancellation has been requested.
    • Use a table with the job status, which you check on every iteration just as you would the cache key.
    • Create a remoting endpoint to make a direct call to the service and trigger a cancellation token.

    There are many approaches; these are simple ones, and you can combine several of them to get better control of your tasks.
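    The last option, triggering a cancellation token via a direct call, could be sketched like this. The `JobRegistry` class and its method names are hypothetical, not part of any SDK:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Maps job ids to cancellation sources; the endpoint that receives the
    // direct cancel call invokes JobRegistry.Cancel(jobId).
    public static class JobRegistry
    {
        private static readonly ConcurrentDictionary<Guid, CancellationTokenSource> _jobs = new();

        // Called by the worker when it starts a job; pass the token into the task.
        public static CancellationToken Register(Guid jobId)
            => _jobs.GetOrAdd(jobId, _ => new CancellationTokenSource()).Token;

        // Called by the cancel endpoint; returns false if the job is unknown/finished.
        public static bool Cancel(Guid jobId)
        {
            if (_jobs.TryRemove(jobId, out var cts))
            {
                cts.Cancel();
                cts.Dispose();
                return true;
            }
            return false;
        }
    }
    ```

    Inside the worker loop, the job checks `token.IsCancellationRequested` (or calls `token.ThrowIfCancellationRequested()`) on every iteration, exactly like the cache-key and table checks described above.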

  • 2021-01-15 00:41

    You'll need some storage to do that.

    Create a table (e.g. JobQueue). Before starting to process the job, store it in the database with a status (e.g. Running; it could be an enum), and then return the ID to the caller.

    When you need to abort/cancel the job, call the abort method of the API, passing the ID you want to abort. In the abort method, you just update the status of the job to Aborting. Inside the first method (the one that runs the job), you check this table once in a while; if the status is Aborting, you stop the job (and update the status to Aborted). Or you could simply delete the row once the job has been aborted or finished.
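    The run/abort flow above could be sketched as follows. This is a hypothetical example: the enum values come from the answer, but `JobTable` is an in-memory stand-in for the JobQueue database table, and the chunked job shape is an assumption:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public enum JobStatus { Running, Aborting, Aborted, Finished }

    // In-memory stand-in for the JobQueue table; a real service would query the DB.
    public static class JobTable
    {
        private static readonly ConcurrentDictionary<Guid, JobStatus> _rows = new();
        public static void SetStatus(Guid id, JobStatus status) => _rows[id] = status;
        public static JobStatus GetStatus(Guid id) => _rows[id];
    }

    public static class JobRunner
    {
        // The abort endpoint simply flips the status to Aborting.
        public static void Abort(Guid jobId) => JobTable.SetStatus(jobId, JobStatus.Aborting);

        public static async Task RunJobAsync(Guid jobId, IEnumerable<Func<Task>> chunks)
        {
            JobTable.SetStatus(jobId, JobStatus.Running);
            foreach (var chunk in chunks)
            {
                // Check the table once in a while for an abort request.
                if (JobTable.GetStatus(jobId) == JobStatus.Aborting)
                {
                    JobTable.SetStatus(jobId, JobStatus.Aborted);
                    return;
                }
                await chunk();
            }
            JobTable.SetStatus(jobId, JobStatus.Finished);
        }
    }
    ```

    Splitting the job into chunks is what makes the periodic status check possible; a job that blocks in one long call cannot observe the Aborting status until it returns.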

    Alternatively, if you want the data to be temporary, you could use a separate cache server and store the data there. This cache server could be clustered as well, but then you would need to use something like Redis.
