Question
I don't see anything in the JavaScript spec, the proposed DOM spec extensions related to SharedArrayBuffer, or the current WHATWG HTML spec to suggest that shared memory will be synchronized/updated across threads when one thread posts a message to another and the other processes the message. (After the one has already sent the shared memory to the other.) However, I'm also unable to verify experimentally that it doesn't happen (in my tests, I don't see stale values). Is there some such guarantee that I'm missing, and if so, where is it guaranteed? For instance, is it documented for postMessage and I've missed it, or is there something about yielding back to the event loop / job queue that guarantees it (since handling a message from another thread involves doing that), etc.? Or alternatively, is it definitely not guaranteed (and is that information in a spec somewhere)?
Please don't speculate or make a "reasonable guess." I'm looking for hard information: citations from canonical sources, a replicable experiment that shows it isn't guaranteed (although I suppose then there's the question of whether it's just an implementation error), that sort of thing.
Below is the source for my tests, which have not yet been able to catch unsynchronized memory. To run it, you'll need to be using a browser that currently supports SharedArrayBuffer, which I think at the moment means Chrome v67 or higher (Firefox, Edge, and Safari all had support but disabled it in response to Spectre and Meltdown in January 2018; Chrome did too, but re-enabled it in v67 [July 2018] on platforms where its site-isolation feature is enabled).
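(To check whether a given browser/configuration exposes it at all, a trivial guard like the following can be run first; this is just a sketch, not part of the test files below.)
if (typeof SharedArrayBuffer === "undefined") {
    console.log("SharedArrayBuffer is not available here; the test below won't run");
} else {
    console.log("SharedArrayBuffer is available");
}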
sync-test-postMessage.html:
<!doctype html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Sync Test postMessage</title>
</head>
<body>
    <script src="sync-test-postMessage-main.js"></script>
</body>
</html>
sync-test-postMessage-main.js:
const array = new Uint32Array(new SharedArrayBuffer(Uint32Array.BYTES_PER_ELEMENT));
const worker = new Worker("./sync-test-postMessage-worker.js");
let counter = 0;
const limit = 1000000;
const report = Math.floor(limit / 10);
let mismatches = 0;
const now = performance.now();
const log = msg => {
    console.log(`${msg} - ${mismatches} mismatch(es) - ${performance.now() - now}ms`);
};
worker.addEventListener("message", e => {
    if (e.data && e.data.type === "ping") {
        ++counter;
        const value = array[0];
        if (counter !== value) {
            ++mismatches;
            console.log(`Out of sync! ${counter} !== ${value}`);
        }
        if (counter % report === 0) {
            log(`${counter} of ${limit}`);
        }
        if (counter < limit) {
            worker.postMessage({type: "pong"});
        } else {
            console.log("done");
        }
    }
});
worker.postMessage({type: "init", array});
console.log(`running to ${limit}`);
sync-test-postMessage-worker.js:
let array;
this.addEventListener("message", e => {
    if (e.data) {
        switch (e.data.type) {
            case "init":
                array = e.data.array;
                // fall through to "pong"
            case "pong":
                ++array[0];
                this.postMessage({type: "ping"});
                break;
        }
    }
});
Using that code, if memory weren't synchronized, I'd expect the main thread to see a stale value in the shared array at some point. But it's entirely likely (in my view) that this code only happens to work because of the relatively large timescales involved in the message passing...
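For what it's worth, a variant along the following lines (just a sketch of mine, using an inline Blob worker and made-up names like bigArray so it fits in one file) tries to widen the window for a stale read: the worker fills a larger shared array with plain writes before each ping, and the main thread checks every element when the ping arrives:
const size = 4096;
const bigArray = new Uint32Array(new SharedArrayBuffer(size * Uint32Array.BYTES_PER_ELEMENT));
const workerSource = `
    let arr;
    this.addEventListener("message", e => {
        if (e.data.type === "init") {
            arr = e.data.arr;
        }
        // On both "init" and "pong": plain (non-atomic) writes to every element...
        const next = arr[0] + 1;
        arr.fill(next);
        // ...then the postMessage that should act as the synchronization edge.
        this.postMessage({type: "ping"});
    });
`;
const blobWorker = new Worker(URL.createObjectURL(new Blob([workerSource], {type: "text/javascript"})));
let expected = 0;
let staleReads = 0;
blobWorker.addEventListener("message", () => {
    ++expected;
    for (let i = 0; i < bigArray.length; ++i) {
        if (bigArray[i] !== expected) { // plain (non-atomic) read
            ++staleReads;
            console.log(`Stale read at index ${i}: ${bigArray[i]} !== ${expected}`);
        }
    }
    if (expected < 100000) {
        blobWorker.postMessage({type: "pong"});
    } else {
        console.log(`done - ${staleReads} stale read(s)`);
    }
});
blobWorker.postMessage({type: "init", arr: bigArray});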
Answer 1:
TL;DR: Yes, it does.
In a thread on es-discuss, the author of the shared memory proposal, Lars Hansen, wrote:
In a browser, postMessage send and receive was always intended to create a synchronization edge in the same way a write-read pair is. http://tc39.github.io/ecmascript_sharedmem/shmem.html#WebBrowserEmbedding
Not sure where this prose ended up when the spec was transferred to the es262 document.
I followed up with:
Thanks!
Looks like it's at least partially here: https://tc39.github.io/ecma262/#sec-host-synchronizes-with
So, a question (well, two questions) just for those of us not deeply versed in the terminology of the Memory Model section. Given:
1. Thread A sends a 1k shared block to Thread B via postMessage
2. Thread B writes to various locations in that block directly (not via Atomics.store)
3. Thread B does a postMessage to Thread A (without referencing the block in the postMessage)
4. Thread A receives the message and reads data from the block (not via Atomics.load)
...am I correct that in Step 4 it's guaranteed that Thread A will reliably see the writes to that block by Thread B from Step 2, because the postMessage was a "synchronization edge" ensuring (amongst other things) that CPU L1d caches are up-to-date, etc.?
Similarly, if (!) I'm reading it correctly, in your Mandelbrot example, you have an Atomics.wait on a single location in a shared block, and when the thread wakes up it seems to assume other data in the block (not in the wait range) can reliably be read directly. That's also a "synchronization edge"?
To which he replied:
...ensuring (amongst other things) that CPU L1d caches are up-to-date, etc.?
Yes, that was the intent of that language. The writes to the memory should happen-before the postMessage and the receive-message should happen-before the reads.
... That's also a "synchronization edge"?
Yes, same argument. The writes happen-before the wake, and the wakeup of the wait happens-before the reads.
All of this is by intent so as to allow data to be written and read with cheap unsynchronized writes and reads, and then for the (comparatively expensive) synchronization to ensure proper observability.
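To make that concrete, the Atomics.wait pattern he's describing looks roughly like the sketch below. This is not the actual Mandelbrot code; the layout (index 0 as a flag word, the rest as payload) and the function names are just assumptions for illustration:
// `shared` is an Int32Array over a SharedArrayBuffer; in practice it would be
// sent to both workers via postMessage before either function runs.
const shared = new Int32Array(new SharedArrayBuffer(1024 * Int32Array.BYTES_PER_ELEMENT));

// Producer side (in one worker): plain writes, then the atomic store/notify
// that the reader synchronizes with.
function produce(shared) {
    for (let i = 1; i < shared.length; ++i) {
        shared[i] = i * 2;       // cheap unsynchronized writes
    }
    Atomics.store(shared, 0, 1); // publish the flag
    Atomics.notify(shared, 0);   // wake the waiter
}

// Consumer side (in another worker; Atomics.wait isn't allowed on the main thread):
function consume(shared) {
    Atomics.wait(shared, 0, 0);  // sleep while shared[0] is still 0
    // The writes happen-before the wake, and the wake-up of the wait
    // happens-before these reads, so they shouldn't observe stale values.
    let sum = 0;
    for (let i = 1; i < shared.length; ++i) {
        sum += shared[i];        // cheap unsynchronized reads
    }
    return sum;
}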
Source: https://stackoverflow.com/questions/52445807/does-postmessage-or-yielding-to-the-event-loop-or-similar-sync-shared-memory