Question
I want to import two scripts in a web worker with importScripts() as follows, but the import fails. How can I deal with it?
self.importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs');
self.importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter');
(Error screenshot omitted.)
Answer 1:
Currently it is not possible to use the WebGL backend in a web worker, since OffscreenCanvas is still an experimental feature. However, it is possible to use the CPU backend.
Here is an example that delegates a computation to the web worker:
<head>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.14.2/dist/tf.min.js"></script>
  <script>
    const worker_function = () => {
      onmessage = () => {
        console.log('from web worker')
        // this version of tfjs expects a global `window`; alias the worker global so it loads
        this.window = this
        // setImmediate polyfill required by this version of tfjs
        importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
        importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
        // only the CPU backend is available inside a worker
        tf.setBackend('cpu')
        const res = tf.zeros([1, 2]).add(tf.ones([1, 2]))
        res.print()
        postMessage({res: res.dataSync(), shape: res.shape})
      };
    }
    if (window != self)
      worker_function();
  </script>
  <script>
    // build the worker from the stringified function so no separate worker file is needed
    const worker = new Worker(URL.createObjectURL(new Blob(["(" + worker_function.toString() + ")()"], { type: 'text/javascript' })));
    worker.postMessage({});
    worker.onmessage = (message) => {
      console.log('from main thread')
      const {data} = message
      tf.tensor(data.res, data.shape).print()
    }
  </script>
</head>
With tensors, the data shared between the main thread and the web worker can be large. This data is either cloned or transferred.
The difference is that when the data is cloned, the web worker keeps a copy for further processing; when it is transferred, ownership of the underlying buffer moves to the receiver as well. The advantage over cloning is the speed of the transfer: it can be viewed as passing by reference (for those coming from a language with pointers).
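As a minimal sketch of that difference (plain postMessage, no TensorFlow involved; the `res`/`shape` keys just mirror the snippets below): a typed array sent via the transfer list has its underlying ArrayBuffer detached on the sending side, whereas a cloned one stays usable there.
// Inside a worker, with a freshly produced Float32Array:
const data = new Float32Array(4 * 1024 * 1024);

// Clone: the structured-clone algorithm copies the bytes; `data` stays usable here.
postMessage({res: data, shape: [2048, 2048]});

// Transfer: ownership of the buffer moves to the receiver; `data` is detached here.
postMessage({res: data.buffer, shape: [2048, 2048]}, [data.buffer]);
console.log(data.buffer.byteLength); // 0 after the transfer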
Let's compare the performance of these two snippets.
<head>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.14.2/dist/tf.min.js"></script>
  <script>
    const worker_function = () => {
      onmessage = () => {
        console.log('from web worker')
        this.window = this
        importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
        importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
        tf.setBackend('cpu')
        const res = tf.randomNormal([2000, 2000, 3])
        const t0 = performance.now()
        const data = res.dataSync()
        // transfer: ownership of the buffer moves to the main thread instead of being copied
        postMessage({res: data.buffer, shape: res.shape}, [data.buffer])
        console.log(`Transfer took ${(performance.now() - t0).toFixed(1)} ms`)
      };
    }
    if (window != self)
      worker_function();
  </script>
  <script>
    const worker = new Worker(URL.createObjectURL(new Blob(["(" + worker_function.toString() + ")()"], { type: 'text/javascript' })));
    worker.postMessage({});
    worker.onmessage = (message) => {
      console.log('from main thread')
      const {data} = message
      // rebuild the typed array from the transferred ArrayBuffer
      tf.tensor(new Float32Array(data.res), data.shape)
    }
  </script>
</head>
<head>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.14.2/dist/tf.min.js"></script>
  <script>
    const worker_function = () => {
      onmessage = () => {
        console.log('from web worker')
        this.window = this
        importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
        importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
        tf.setBackend('cpu')
        const res = tf.randomNormal([2000, 2000, 3])
        const t0 = performance.now()
        // clone: the Float32Array is copied by the structured-clone algorithm
        postMessage({res: res.dataSync(), shape: res.shape})
        console.log(`Clone took ${(performance.now() - t0).toFixed(1)} ms`)
      };
    }
    if (window != self)
      worker_function();
  </script>
  <script>
    const worker = new Worker(URL.createObjectURL(new Blob(["(" + worker_function.toString() + ")()"], { type: 'text/javascript' })));
    worker.postMessage({});
    worker.onmessage = (message) => {
      console.log('from main thread')
      const {data} = message
      tf.tensor(data.res, data.shape)
    }
  </script>
</head>
We can see a difference of around 10 ms between the two snippets. When performance matters, one needs to take into account how the data is shared, i.e. whether it is cloned or transferred.
Answer 2:
TensorFlow.js needs a canvas to do its GPU computation, and a worker currently doesn't have one.
OffscreenCanvas is a feature that is being worked on, but before TFJS adopts it, it probably needs wide enough browser support.
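As a rough sketch (not part of the original answer), worker code can already feature-detect OffscreenCanvas and fall back to the CPU backend otherwise; this only illustrates the check, it does not make tfjs 0.x use WebGL in a worker.
// Inside the worker script, after tfjs has been imported:
if (typeof OffscreenCanvas !== 'undefined') {
  // A WebGL context can in principle be created off the main thread here.
  const canvas = new OffscreenCanvas(1, 1);
  const gl = canvas.getContext('webgl');
  console.log('OffscreenCanvas available, WebGL context created:', gl !== null);
} else {
  // No OffscreenCanvas: fall back to the CPU backend as in the snippets above.
  tf.setBackend('cpu');
}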
Source: https://stackoverflow.com/questions/54359728/tensorflow-js-in-webworkers