What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

Front-end · Unresolved · 6 answers · 2171 views
挽巷 2020-12-25 11:55

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.

I'm speci…

6 Answers
  • 2020-12-25 12:11

You could run whatever benchmarks you want in a Web Worker and then, based on the results, store your verdict about the machine's performance in a cookie. This is only for browsers that support Web Workers, of course.
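    A minimal sketch of that idea follows; the loop workload, the 50 ms threshold, and the `perfVerdict` cookie name are all illustrative assumptions, not part of the original answer:

    ```javascript
    // Run a crude CPU benchmark off the main thread in an inline Web Worker,
    // then cache the verdict in a cookie so the test runs only once per day.
    const workerSource = `
      onmessage = function () {
        var start = Date.now();
        var x = 0;
        for (var i = 0; i < 1e6; i++) { x += Math.sqrt(i); }
        postMessage(Date.now() - start);
      };
    `;
    const worker = new Worker(
      URL.createObjectURL(new Blob([workerSource], { type: 'application/javascript' }))
    );

    worker.onmessage = (e) => {
      const verdict = e.data < 50 ? 'fast' : 'slow'; // 50 ms cutoff is a guess
      document.cookie = 'perfVerdict=' + verdict + '; max-age=86400; path=/';
      worker.terminate();
    };
    worker.postMessage('start');
    ```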

  • 2020-12-25 12:17

You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's Boomerang for good ways of timing things browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance and a rewarding user experience.
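    As a concrete starting point, here is a small sketch of timing one such basic operation with `performance.now()`; the node count and the particular workload are arbitrary choices:

    ```javascript
    // Time how long it takes to build 1,000 detached DOM nodes.
    // The fragment is never attached, so this measures JS + DOM-creation cost only.
    const t0 = performance.now();
    const frag = document.createDocumentFragment();
    for (let i = 0; i < 1000; i++) {
      const div = document.createElement('div');
      div.textContent = 'benchmark node ' + i;
      frag.appendChild(div);
    }
    const elapsed = performance.now() - t0;
    console.log('DOM build took ' + elapsed.toFixed(1) + ' ms');
    ```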

    If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
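    For instance, a sketch of such a preference toggle; the element id, storage key, and class name are made up for illustration:

    ```javascript
    // Persist the user's eye-candy preference across visits via localStorage.
    const toggle = document.querySelector('#effects-toggle'); // hypothetical checkbox
    if (toggle) {
      toggle.checked = localStorage.getItem('effectsEnabled') !== 'false';
      document.body.classList.toggle('fancy-effects', toggle.checked);
      toggle.addEventListener('change', () => {
        localStorage.setItem('effectsEnabled', String(toggle.checked));
        document.body.classList.toggle('fancy-effects', toggle.checked);
      });
    }
    ```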

  • 2020-12-25 12:19

A different approach, one that needs no explicit benchmark, would be to enable features progressively.

    You could apply features in priority order and, after each one, drop the rest if a certain amount of time has passed.

    By ensuring that the most expensive features come last, you would present the user with a selection of features roughly appropriate to how speedy the browser is.
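    A minimal sketch of that loop; the feature list, class names, and the 100 ms budget are assumptions:

    ```javascript
    // Apply features in priority order (cheapest first) and stop once a
    // time budget is exhausted, so slower browsers get fewer features.
    const features = [
      () => document.body.classList.add('rounded-corners'), // cheap
      () => document.body.classList.add('shadows'),         // moderate
      () => document.body.classList.add('animations'),      // expensive
    ];

    const budgetMs = 100; // assumed budget; tune per site
    const start = performance.now();
    for (const enable of features) {
      enable();
      if (performance.now() - start > budgetMs) break; // drop the rest
    }
    ```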

  • 2020-12-25 12:21

    Take a look at some of Google's (copyrighted!) benchmarks for V8:

    • http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js

    • http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js

I chose a couple of the simpler ones to give an idea of the kind of benchmarks you could create yourself to test feature sets. As long as you keep the run time of your tests - from the time logged at start to the time logged at completion - under 100 ms on the slowest systems (these Google tests take vastly longer than that), you should get the information you need without hurting the user experience. The Google benchmarks are built to discriminate at fine granularity between the faster systems; you don't need that. All you need to know is which systems take longer than XX ms to complete.

Things you could test include regular expression operations (similar to the above), string concatenation, page scrolling - anything that causes a browser repaint or reflow.
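    A hedged sketch of such a self-made micro-benchmark; the workloads and the 100 ms cutoff are illustrative, not taken from the V8 suite:

    ```javascript
    // Time a couple of cheap operations and flag the browser as "slow"
    // if the combined run time exceeds a threshold.
    function timeIt(fn) {
      const t0 = performance.now();
      fn();
      return performance.now() - t0;
    }

    const total =
      timeIt(() => {                       // regular-expression workload
        let s = '';
        for (let i = 0; i < 2000; i++) s += 'abc' + i + ' ';
        s.replace(/([a-z]+)\d+/g, '$1');
      }) +
      timeIt(() => {                       // string-concatenation workload
        let s = '';
        for (let i = 0; i < 20000; i++) s += 'x';
      });

    const isSlow = total > 100;            // 100 ms cutoff, per the answer
    console.log('benchmark total:', total.toFixed(1), 'ms, slow:', isSlow);
    ```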

  • 2020-12-25 12:33

Not to be a killjoy here, but in my opinion this is not a feat that is currently possible in any meaningful way.

    There are several reasons for this, the main ones being:

1. Whatever measurement you take, if it is to have any meaning, has to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.

2. Even if you could, it would be a meaningless snapshot: you have no idea what load other applications are putting on the CPU while your test runs, nor whether that situation will continue while the user is visiting your website.

3. Even if you could do that, every browser has its own strengths and weaknesses. You'd have to test every DOM-manipulation function to know how fast the browser completes it; in my experience there is no "general" or "average" that makes sense here. And even if there were, the speed at which DOM-manipulation commands execute depends on what is currently in the DOM, which changes as you manipulate it.

The best you can do is either:

    1. Let your users decide what they want, and make it easy for them to change that decision if they regret it,

      or, better yet,

    2. Give them something you can be reasonably sure the greater part of your target audience will be able to enjoy.

Slightly off topic, but following this train of thought: if your users are not the tech leaders of their social circles (as most users here are, but most people in the world are not), don't give them too much choice - that is, any choice that is not absolutely necessary. They don't want it, and they won't understand the technical consequences of their decision until it is too late.

  • 2020-12-25 12:36

    Some Ideas:

• Putting a time limit on the tests seems like an obvious choice.
    • Storing test results in a cookie also seems obvious.
    • Poor performance on a test could pause further scripts
      • and trigger a non-blocking prompt UI (like the save-password prompts common in modern web browsers)
      • asking the user whether they want to opt into further scripting effects - storing the answer in a cookie.
      • While the prompt is unanswered, periodically repeat the tests and auto-accept it if consecutive runs finish faster than the first one.
    • On a side note, slow network speeds could probably also be tested
      • by timing the download of external resources (like the page's own CSS or JavaScript files)
      • and comparing that result with the JavaScript benchmark results.
      • This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s (see the sketch after this list).
    • DOM rendering/manipulation benchmarks seem difficult to run before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
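    As a sketch of the network-speed side note above, the Resource Timing API can report how long an already-loaded resource took to download; filtering for a `.css` file is a hypothetical example:

    ```javascript
    // Estimate network speed from the download time of an existing resource.
    const entries = performance.getEntriesByType('resource');
    const css = entries.find((e) => e.name.endsWith('.css'));
    if (css) {
      const downloadMs = css.responseEnd - css.startTime;
      console.log('CSS download took ' + downloadMs.toFixed(0) + ' ms');
    }
    ```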