Why are these fetch methods asynchronous?

Backend · Open · 4 answers · 1880 views
轮回少年 · 2021-01-18 05:46

Fetch is the new Promise-based API for making network requests:

fetch('https://www.everythingisawesome.com/')
  .then(response => console.log('status:', response.status));
4 Answers
  •  被撕碎了的回忆
    2021-01-18 06:15

    Why are these fetch methods asynchronous?

    The naïve answer is "because the specification says so":

    • The arrayBuffer() method, when invoked, must return the result of running consume body with ArrayBuffer.
    • The blob() method, when invoked, must return the result of running consume body with Blob.
    • The formData() method, when invoked, must return the result of running consume body with FormData.
    • The json() method, when invoked, must return the result of running consume body with JSON.
    • The text() method, when invoked, must return the result of running consume body with text.
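
    A quick way to see what "consume body" means in practice: every one of these methods hands back a Promise, never the parsed data itself. A minimal sketch, runnable in Node 18+ or any browser, using a locally constructed Response so no network is involved:

```javascript
// All body-consuming methods return Promises, not the data directly.
const res = new Response('{"answer": 42}');

console.log(res.json() instanceof Promise); // true

// The parsed value only becomes available asynchronously.
// (A body can be consumed only once, hence a second Response here.)
new Response('{"answer": 42}').json()
  .then(data => console.log(data.answer)); // 42
```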

    Of course, that doesn't really answer anything, because it leaves open the question of "Why does the spec say so?"

    And this is where it gets complicated, because I'm fairly certain of the reasoning, but I have no evidence from an official source to prove it. I'm going to attempt to explain the rationale to the best of my understanding, but be aware that everything after this point should be treated largely as opinion.


    When you request data from a resource using the fetch API, you have to wait for the resource to finish downloading before you can use it. This should be reasonably obvious. JavaScript uses asynchronous APIs to handle this behavior so that the work involved doesn't block other scripts, and—more importantly—the UI.
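
    You can see the non-blocking behavior without touching the network, again by building a Response locally. The call to text() returns immediately; the code after it keeps running while the body is consumed later:

```javascript
// The async API hands back a Promise immediately, so the calling
// code keeps running while the body is consumed in the background.
const order = [];
const res = new Response('hello');

const done = res.text().then(body => order.push(`parsed: ${body}`));
order.push('after the call'); // runs first — nothing blocked

done.then(() => console.log(order)); // ['after the call', 'parsed: hello']
```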

    When the resource has finished downloading, the data might be enormous. There's nothing that prevents you from requesting a monolithic JSON object that exceeds 50MB.

    What do you think would happen if you attempted to parse 50MB of JSON synchronously? It would block other scripts, and—more importantly—the UI.
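
    You can measure this yourself. JSON.parse is synchronous: while it runs, no other callbacks fire and no rendering happens. The sketch below builds a multi-megabyte payload in memory purely for illustration:

```javascript
// JSON.parse blocks the thread for the entire duration of the call —
// exactly the stall that the asynchronous response.json() avoids.
const big = JSON.stringify({
  items: Array.from({ length: 1_000_000 }, (_, i) => i),
});

const t0 = Date.now();
const parsed = JSON.parse(big); // nothing else runs during this call
console.log(`parsed ${parsed.items.length} items in ${Date.now() - t0} ms`);
```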

    Other programmers have already solved the problem of handling large amounts of data performantly: streams. In JavaScript, streams are implemented with an asynchronous API so that they don't block, and if you read the consume body details, it's clear that streams are used to parse the data:

    Let stream be body's stream if body is non-null, or an empty ReadableStream object otherwise.
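
    To make that concrete, here is a sketch of reading a Response body through its ReadableStream, one chunk at a time, instead of buffering everything up front. countBytes is a hypothetical helper, not part of the fetch API:

```javascript
// Hypothetical helper: consume a Response body chunk-by-chunk via
// its ReadableStream rather than buffering the whole body at once.
async function countBytes(response) {
  const reader = response.body.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read(); // one Uint8Array chunk
    if (done) return total;
    total += value.byteLength; // process each chunk as it arrives
  }
}

countBytes(new Response('hello')).then(n => console.log(n)); // 5
```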

    Now, it's certainly possible that the spec could have defined two ways of accessing the data: one synchronous API meant for smaller amounts of data, and one asynchronous API for larger amounts of data, but this would lead to confusion and duplication.

    Besides, Ya Ain't Gonna Need It. Everything that can be expressed using synchronous code can be expressed in asynchronous code; the reverse is not true. Because of this, a single asynchronous API was created that could handle all use cases.
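
    That asymmetry is easy to demonstrate. Any synchronous result can be exposed asynchronously in one line; a pending Promise, by contrast, offers no way to pull its value out synchronously:

```javascript
// Sync → async is trivial: wrap the value in a resolved Promise.
function textSync() { return 'already here'; }

const asAsync = Promise.resolve(textSync());
asAsync.then(v => console.log(v)); // 'already here'

// The reverse is impossible: there is no synchronous accessor
// that extracts the value out of a still-pending Promise.
```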
