Question
The traditional answer to "why is JavaScript slower than native code?" is "because it's interpreted". The problem with this claim is that interpretation is not a property of the language itself. In fact, most JavaScript code nowadays is JIT-compiled, yet it still doesn't come close to native speed.
What if we remove interpretation from the equation and compile JavaScript ahead of time (AOT)? Will it then match the performance of native code? If yes, why isn't this widely done on the web*? If no, where is the performance bottleneck now?
If the new bottleneck is the DOM, what if we eliminate that too? Would a DOM-less, compiled JavaScript be as efficient as native code? If yes, why isn't this widely done on the web**? If no, where is the performance bottleneck now?
After stripping away the DOM and the interpretation, the only big difference I can see between JavaScript and C/C++ is that the former has dynamic types. Suppose we eliminate that too and end up with a DOM-less, statically typed, ahead-of-time-compiled JavaScript. How would that compare to native code? If it would be as efficient, why isn't this widely used? If not, where is the bottleneck now? In this state, JavaScript is nearly identical to C.
*One might say that JIT is faster to load, but that wouldn't explain why AOT isn't used for resource-intensive web apps such as 3D video games, where the performance benefit is well worth the initial compilation delay (and a significant "game loading" delay is present anyway).
**A DOM-less JavaScript would use WebGL/Canvas to interface with the user. This currently requires a minimal DOM that defines the initial HTML5 canvas, but that could in theory be eliminated by revising the technology if the performance benefit is worth it. Assume that DOM-less WebGL/Canvas is possible when answering.
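To make the footnote concrete, the "minimal DOM" it refers to amounts to a single canvas element created from script, after which everything is drawn by hand. A minimal sketch, assuming a browser environment (the setup values are illustrative):

    // Create the one and only DOM node the app needs: a full-window canvas.
    const canvas = document.createElement('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
    document.body.appendChild(canvas);

    // From here on, all "UI" is drawn through WebGL (or '2d' for the Canvas API).
    const gl = canvas.getContext('webgl');
    gl.clearColor(0.0, 0.0, 0.0, 1.0);   // paint the whole surface black
    gl.clear(gl.COLOR_BUFFER_BIT);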
EDIT: I am talking about client-side compilation.
Answer 1:
Important:
You seem to advocate a stripped, statically typed, compilable version of JS. The first thing that shows is that you have no clue as to what JS is: a multi-paradigm programming language that supports prototype-based OO, imperative, and functional programming. The key is the functional paradigm. Apart from Haskell, which can be sort of strongly typed once you've defined your own infix operators, a functional language can't be statically typed AFAIK. Imagine C-like function definitions that return closures:
function a = (function (Object g)
{
    char[] closureChar = g.location.href;
    Object foo = {};
    Function foo.bar = char* function ()
    {   // This is a right mess
        return &closureChar;
    };
}(this));
A function is a first-class object, too. With tons of lambda functions that return objects, which reference functions that might return themselves, other functions, objects or primitives... how on earth are you going to write types for all that? JS functions are as much a way of creating scopes, structuring your code, and controlling the flow of your program as they are things that you assign to variables.
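To illustrate the point in plain (untyped) JS, here is the pattern the answer describes: a function returning a closure that captures state from its enclosing scope. A minimal sketch; the names are invented:

    // makeCounter returns a function; that function is a first-class value
    // and it closes over the local variable `count`.
    function makeCounter(start) {
        let count = start;
        return function () {
            count += 1;       // state lives in the closure, not in an object
            return count;
        };
    }

    const next = makeCounter(10);
    next();   // 11
    next();   // 12

Writing a static type for makeCounter is easy enough, but once closures return objects holding further closures, the annotations quickly become the "right mess" shown above.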
The problem with compiling JS ahead of time is quite simple: you compile code that will have to run on a vast array of different platforms: desktops and laptops running Windows, OS X, Linux and UNIX, as well as tablets and smartphones with their various mobile browsers...
Even if you did manage to write and compile JS that runs on all platforms, its speed would still be limited by the fact that it is single-threaded and runs on a JS engine (much like Java runs on a VM).
Compiling the code client-side is already being done. True, it takes some time, but not an awful lot. It's quite resource-intensive, so most modern browsers will cache the code in such a way that a lot of the preprocessing has already been done. Things that can always be compiled are cached in their compiled state, too. V8 is an open-source, fast JS engine; if you want, you can check its source to see how it determines which parts of JS code get compiled and which don't.
Even so, that's only how V8 works... The JS engine has a lot to do with how fast your code runs: some engines are very fast, others aren't; some are faster at one thing, while others outperform the competition in another area.
Stripping out the DOM isn't stripping anything from the language. The DOM API isn't part of JS itself. JS is a very expressive but, at its core, small language, just like C. Left to their own devices, neither has IO capabilities, nor can they parse a DOM. For that, browser implementations of JS have access to a DOMParser object.
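A quick illustration of where core JS ends and the host environment begins, assuming a browser (the markup string is made up):

    // DOMParser is provided by the browser, not by the language itself.
    const parser = new DOMParser();
    const doc = parser.parseFromString('<p id="greeting">hello</p>', 'text/html');
    console.log(doc.getElementById('greeting').textContent);   // "hello"

    // The same script run in a bare JS engine shell would throw a
    // ReferenceError, because DOMParser is not part of ECMAScript.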
You suggest a minimal DOM... hey, everybody with any sense is all for a revamped DOM API; it's far from the best thing about the web. But you have to realize that the DOM and JS are separate entities. The DOM (and the DOM API) is managed by the W3C, whereas ECMA is responsible for JS; the two have little to do with each other. That's why the DOM can't be "stripped" from JS: it was never part of it to begin with.
Since you compare JS to C++: you can write C++ code that compiles on both Windows and Linux machines, but that's not as easy as it sounds. Since you refer to C++ yourself, I expect you know that, too.
Speaking of which, if the only real difference you see between C++ and JS is the static vs dynamic typing, you really should spend a bit more time learning about JS.
While its syntax is C-like, the language itself bears much more resemblance to Lisp (i.e., functional programming). It doesn't have classes as such, but uses prototypes... the dynamic typing is really not that big of a deal, to be honest.
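For readers unfamiliar with prototypes, a minimal sketch of what "no classes, just prototypes" looks like; the Point type here is invented for illustration:

    // No class declaration: a constructor function plus a shared prototype object.
    function Point(x, y) {
        this.x = x;
        this.y = y;
    }
    Point.prototype.norm = function () {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    };

    const p = new Point(3, 4);
    p.norm();                                        // 5
    Object.getPrototypeOf(p) === Point.prototype;    // true: delegation, not copying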
So, bottom line:
Compiling JS to run on every machine would lead to something like Microsoft's .NET framework. The philosophy behind that was "write once, run everywhere"... which didn't turn out to be true at all.
Java is cross-platform, but only because it isn't compiled to native code; it runs on a virtual machine.
Lastly, the ECMAScript standard (JS being its most common implementation) is not all that good, and it's the result of a joint effort by all the big competitors in the field: Mozilla, Google, Microsoft and some irrelevant Swiss company. It's one huge compromise. Just imagine those three big names agreeing to build a compiler for JS together: Microsoft would put forward its JScript compiler as the best, Google would have its own ideas, and Mozilla would probably have three different compilers ready, depending on what the community wants.
Edit:
You made an edit clarifying that you're talking about client-side JS. Because you felt the need to specify that, I get the feeling you're not entirely sure where JS ends and where the browser takes over.
JS was designed as a very portable language: it has no IO capabilities of its own, supports multiple programming paradigms, and (initially) was a fully interpreted language. True, it was developed with the web in mind, but you could, and some do, use it to query a database (MongoDB), as an alternative batch scripting language (JScript), or as a server-side scripting language (Node.js, ...). Some use ECMAScript (the standard JS is based on) as the basis for their own programming language (yes, I'm talking about Flash ActionScript).
Depending on the use case, JS is given access to objects/APIs that aren't native to the language (document, http.createServer, fs.readFileSync) for DOM access, web-server capabilities, and IO respectively. Those often form the bottlenecks, not the language itself.
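As a sketch of what such host-provided objects look like in practice, here is a minimal Node.js example (the file name and port are made up):

    // `http` and `fs` are handed to the script by the Node.js host,
    // just as `document` is handed to it by the browser; none of them
    // are part of the ECMAScript language.
    const http = require('http');
    const fs = require('fs');

    http.createServer(function (req, res) {
        const body = fs.readFileSync('./index.html');   // blocking, host-provided IO
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(body);
    }).listen(8080);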
As I hinted at, JS was initially an interpreted language. As is the way these days, the dividing line between compiled and interpreted languages has been fading for the past decade, to be honest.
C/C++ used to be strictly compiled languages, but in some cases (.NET) C++ code needn't be compiled to machine code anymore...
At the same time, scripting languages like Python are used for so many purposes that they're generally referred to as programming languages, since the term "scripting language" somehow implies a lesser language.
A few years ago, with the release of PHP 5, the Zend Engine 2 was released, too. Since then, PHP has been compiled to bytecode and run on a virtual machine, and you can cache the bytecode using APC. The bcompiler extension lets you generate standalone executables from PHP code, and Facebook's (now deprecated) HPHPc used to compile PHP to C++ and then to native code. Nowadays Facebook uses HHVM, a custom virtual machine.
The same evolution can be seen in JavaScript interpreters (which are called engines nowadays). They're not the plain parse-and-execute loops of old, as you still seem to think they are. There's a lot of wizardry going on in terms of memory management, JIT compilation (even tail-call optimization), optimization and what have you...
All great things, but they make it rather hard to determine where the actual bottlenecks are. The way each engine optimizes differs even more than IE6 differs from IE10, so it's next to impossible to pinpoint the bottlenecks definitively. If one browser takes 10 seconds for a DOM-intensive task, another might take only 1-2 seconds. If, however, the same browsers were pitted against each other on the performance of the RegExp object, the boot might be on the other foot.
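One rough way to see this is to time the same two workloads in different browsers and compare. A sketch only; the workloads and iteration counts are arbitrary:

    // Tiny timing helper built on the standard performance.now() clock.
    function time(label, fn) {
        const t0 = performance.now();
        fn();
        console.log(label, (performance.now() - t0).toFixed(1) + ' ms');
    }

    // A DOM-heavy task: one engine/renderer may be far faster here...
    time('DOM churn', function () {
        for (let i = 0; i < 10000; i++) {
            const el = document.createElement('div');
            el.textContent = 'row ' + i;
            document.body.appendChild(el);
        }
    });

    // ...while another wins on a pure RegExp task.
    time('RegExp', function () {
        const re = /(\w+)@(\w+)\.com/;
        for (let i = 0; i < 100000; i++) re.test('user' + i + '@example.com');
    });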
Let's not forget that, after you've written your blog post about your findings, you'll have to check whether either of the browsers has released a new version or update that claims to speed up certain tasks.
Answer 2:
Actually, to answer your question: yes. Or sort of, since of course you can compile anything ahead of time given the appropriate compiler.
It is true that AOT compilation of JavaScript is a bit of a bizarre concept. AOT compilation and "write once, run anywhere" are contradictory, since by compiling the code you are saying, "I want it to run on this particular CPU".
However, there are some attempts at this. Take a look at asm.js. You write your C program and then, via a few hoops, convert it into a JavaScript module. This module is then loaded by Firefox and, because it is tagged in a certain way (a "use asm" directive at the top of the module), the browser attempts to compile it ahead of time. The results are ALMOST native speed. However, there are so many restrictions on what the code can attempt to do (pretty much all those you cite above) that I can't see many use cases for it beyond algorithms.
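For reference, a hand-written sketch of what such a module looks like; in practice it would be generated by a compiler such as Emscripten rather than written by hand:

    function AsmAdder(stdlib) {
        "use asm";                    // the tag that marks this as an asm.js module
        function add(a, b) {
            a = a | 0;                // parameter types declared via coercions (int32)
            b = b | 0;
            return (a + b) | 0;       // return type is int32 as well
        }
        return { add: add };
    }

    var adder = AsmAdder(window);     // engines that recognise asm.js can compile this
    adder.add(2, 3);                  // 5; other engines just run it as ordinary JS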
All that said, I feel that the other contributors were unduly harsh in their answers, because you are touching on something people are actually trying to do.
Answer 3:
I'm inclined to think it would be faster by a significant margin, but it also limits what can be done easily, because the DOM actually works really well for certain things.
The DOM was designed for documents; it wasn't really designed with UIs in mind. One of the hardest problems with the DOM is keeping it in sync with the model, since the DOM is static-ish in nature and even the smallest change can cause a reflow (which can be slow). React and other virtual-DOM frameworks try to sidestep this by re-rendering a virtual copy of the DOM and diffing it against the previous one, so that only a minimal set of real changes is applied. Using a canvas, on the other hand, makes it trivial to keep the screen in sync with the model, because you can just repaint the canvas with the new data and move on.
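A sketch of that "just repaint it" model, assuming a hypothetical canvas element with id "view" and a small model array of circles:

    const ctx = document.getElementById('view').getContext('2d');
    const model = [{ x: 40, y: 40, r: 10 }, { x: 90, y: 60, r: 15 }];   // single source of truth

    function render() {
        // Wipe the frame and redraw everything from the model; no diffing, no reflow.
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        for (const c of model) {
            ctx.beginPath();
            ctx.arc(c.x, c.y, c.r, 0, Math.PI * 2);
            ctx.fill();
        }
        requestAnimationFrame(render);   // repaint again next frame
    }
    requestAnimationFrame(render);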
Moving the rendering to a canvas cuts out the middle man, allowing you to do your own rendering, layout, and styling. The tradeoff is obvious, though: you now have to manage all of the event handling and scrolling manually, and embedding actual documents would be a nightmare. If some framework were to devise a solution to this, it might be a viable, performant alternative without DOM cruft, but this is probably not going to make sense for news sites, blogs, social media, or really the majority of text-heavy websites. This sort of thing is likely to bring a massive performance increase to a drawing app, a game, or possibly even a chat app, but not many other domains will benefit. Or will they? I'd be happy to be proven wrong.
The DOM has very much been a square peg that we've managed to fit into all sorts of weirdly-shaped holes. It isn't the best tool for everything, but when all you have is a hammer, everything looks like a nail.
Answer 4:
Not everything is about performance; performance still comes second to capability. You first get some language in place (i.e. JavaScript) to make websites possible, and then you improve it. If you strip DOM manipulation out of JS and make it compile to native code, what would we use for the web? One of the reasons JavaScript is around, and we are not using C/C++ on the web, is that it can do DOM manipulation and doesn't need to be compiled into a machine-specific format, and is therefore universally executable.
Besides, I have a name for a JavaScript that's been stripped of DOM manipulation, statically typed, and compiled ahead of time: it's Java :)
Your question really is: why aren't we using Java for websites, since it performs so much better? Think about it. One day we will get there, but not yet.
Source: https://stackoverflow.com/questions/16376476/how-would-a-dom-less-statically-typed-ahead-of-time-compiled-javascript-code-co