Load and execution sequence of a web page?

盖世英雄少女心 2020-11-22 03:15

I have done some web-based projects, but I don't think too much about the load and execution sequence of an ordinary web page. But now I need to know the details. It's hard to

7 Answers
  • 2020-11-22 03:59

    Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.

  • 2020-11-22 04:00

    1) HTML is downloaded.

    2) HTML is parsed progressively. When a request for an asset is reached, the browser attempts to download that asset. The default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel in IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and place the request just prior to the closing body tag in the HTML.

    3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.

    4) Only after the DOM is completely rendered and requests for all assets in the page have either resolved or timed out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received from the asset request. This means an asset requested by inline JavaScript, that is, JavaScript written into HTML tags and not contained in a function, can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.

    Of the above steps the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.

    Keep in mind that each asset you request from your HTML or even from your CSS/JavaScript assets is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible then reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging page weight at 180k from HTML alone. Many developers subscribe to some fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds and then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
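
    The phases above can be observed from a page's own scripts. This is an illustrative sketch, not part of the original answer; `loadPhase` is a hypothetical helper that maps the standard `document.readyState` values onto the four steps described.

```javascript
// Hypothetical helper: maps document.readyState to the phases above.
function loadPhase(readyState) {
  switch (readyState) {
    case 'loading':     return 'HTML still being parsed';          // steps 1-2
    case 'interactive': return 'DOM built, assets still loading';  // step 3
    case 'complete':    return 'all assets resolved; onload fired'; // step 4
    default:            return 'unknown';
  }
}

// In a browser, tie the phases to real events (inert elsewhere):
if (typeof document !== 'undefined') {
  console.log(loadPhase(document.readyState));
  window.addEventListener('load', function () {
    console.log(loadPhase(document.readyState)); // step 4 reached
  });
}
```

    A script placed just before the closing body tag, as recommended above, typically observes `loading` when it runs and `complete` once the load event fires.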

  • 2020-11-22 04:08

    AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag it will request that image as soon as the img tag has been parsed. And that can happen even before it has received the totality of the HTML document... that is, it could still be downloading the HTML document when that happens.

    For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example it will not attempt to download more than 8 files at once from the same server... the additional requests will be queued. I think there are per-domain limits, per-proxy limits, and other stuff, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.

    The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. It is made clear in the jQuery documentation.

    If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
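
    A minimal sketch of that JavaScript-controlled approach, assuming a browser environment; `loadScript` is an illustrative helper, not a library API.

```javascript
// Illustrative helper: injects one script tag and resolves once it has
// loaded, so callers can impose an explicit order.
function loadScript(url) {
  return new Promise(function (resolve, reject) {
    var s = document.createElement('script');
    s.src = url;
    s.onload = function () { resolve(url); };
    s.onerror = function () { reject(new Error('failed to load ' + url)); };
    document.head.appendChild(s);
  });
}

// Chaining the promises fixes the execution order regardless of which
// download finishes first, e.g.:
//   loadScript('a.js').then(function () { return loadScript('b.js'); });
```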

  • 2020-11-22 04:13

    If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site, which collects a long list of such techniques in one place.

  • 2020-11-22 04:17

    Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.

    Found this on archivist.incutio.com:

    http://archivist.incutio.com/viewlist/css-discuss/76444

    When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

    When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

    However, the HTTP/1.1 specification originally recommended that a browser make no more than two concurrent connections to the same domain (modern browsers typically allow around six). So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

    The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files, and five image files, all of differing sizes:

    1. GET script1 and script2; queue request for script3 and images1-5.
    2. script2 arrives (it's smaller than script1): GET script3, queue images1-5.
    3. script1 arrives; GET image1, queue images2-5.
    4. image1 arrives, GET image2, queue images3-5.
    5. script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
    6. image2 arrives, script3 still not here; GET image3, queue images4-5.
    7. image3 arrives; GET image4, queue image5, script3 still on the way.
    8. image4 arrives, GET image5;
    9. image5 arrives.
    10. script3 arrives.

    In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

    Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

    (Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
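
    One way to see this unpredictable ordering for yourself is the standard Resource Timing API; `resourceOrder` below is an illustrative helper, not part of the original answer.

```javascript
// Illustrative helper: sorts resource timing entries by when each
// request actually started and returns the URLs in that order.
function resourceOrder(entries) {
  return entries
    .slice()
    .sort(function (a, b) { return a.startTime - b.startTime; })
    .map(function (e) { return e.name; });
}

// In a browser, log the order in which this page's assets were fetched:
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  console.log(resourceOrder(performance.getEntriesByType('resource')));
}
```

    Running this on a real page usually shows assets completing in an order quite different from their order in the HTML, for exactly the reasons described above.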

  • 2020-11-22 04:18

    The chosen answer does not seem to apply to modern browsers, at least not to Firefox 52. What I observed is that requests for resources like CSS and JavaScript are issued before the HTML parser reaches the element, for example:

    <html>
      <head>
        <!-- prints the date before parsing and blocks HTML parsing -->
        <script>
          console.log("start: " + (new Date()).toISOString());
          for(var i=0; i<1000000000; i++) {};
        </script>

        <script src="jquery.js" type="text/javascript"></script>
        <script src="abc.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="abc.css">
        <style>h2{font-weight:bold;}</style>
        <script>
          $(document).ready(function(){
            $("#img").attr("src", "kkk.png");
          });
        </script>
      </head>
      <body>
        <img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
        <script src="kkk.js" type="text/javascript"></script>
      </body>
    </html>
    

    What I found was that the requests to load the CSS and JavaScript resources were not blocked by the running script. It looks like Firefox scans the HTML ahead of the parser and identifies key resources (img resources are not included) before it starts parsing the HTML.
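
    One way to check for such look-ahead behaviour is to compare a resource's `fetchStart` (from the standard Resource Timing API) with the time at which the blocking inline loop finished; `requestedDuringBlock` is an illustrative helper, not browser API.

```javascript
// Illustrative helper: true if the resource request began before the
// blocking inline script finished -- evidence of a look-ahead scanner.
function requestedDuringBlock(fetchStart, blockEndTime) {
  return fetchStart < blockEndTime;
}

// In a browser, inspect real entries (resource names would match the
// example markup above; adjust for your own page):
if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  performance.getEntriesByType('resource').forEach(function (e) {
    console.log(e.name, 'fetchStart:', e.fetchStart);
  });
}
```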
