Question
Currently I'm running some benchmarks comparing JavaScript, asm.js, and WebAssembly. For that I've implemented a little program that searches for primes with the Sieve of Atkin algorithm.
To make the ramp-up negligible, I'm calculating primes up to 500,000,000. My problem is that the JavaScript implementation runs out of memory, because the result array becomes huge.
This is my current implementation:
const AMOUNT = 500000000;

function getPrimes() {
    const limit = AMOUNT;
    const width = Math.sqrt(limit);
    let i, x, y, z;
    let primes = new Array(limit);
    const begin = performance.now();
    for (x = 1; x <= width; x++) {
        for (y = 1; y <= width; y++) {
            z = 4 * x * x + y * y;
            if (z < limit && (z % 60 == 1 || z % 60 == 13 || z % 60 == 17 || z % 60 == 29
                    || z % 60 == 37 || z % 60 == 41 || z % 60 == 49 || z % 60 == 53)) {
                primes[z] = !primes[z];
            }
            z = 3 * x * x + y * y;
            if (z < limit && (z % 60 == 7 || z % 60 == 19 || z % 60 == 31 || z % 60 == 43)) {
                primes[z] = !primes[z];
            }
            z = 3 * x * x - y * y;
            if (x > y && z < limit && (z % 60 == 11 || z % 60 == 23 || z % 60 == 47 || z % 60 == 59)) {
                primes[z] = !primes[z];
            }
        }
    }
    const end = performance.now();
    let last_prime = 0;
    for (i = limit - 1; i >= 0; i--) {
        if (primes[i] == 1) {
            last_prime = i;
            break;
        }
    }
    primes = null;
    const time_spent = end - begin;
    console.log(time_spent / 1000 + ', ' + last_prime);
}

document.addEventListener("DOMContentLoaded", function() {
    let i;
    for (i = 0; i <= 10; i++) {
        getPrimes();
    }
});
This is my HTML to run it:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Find Primes</title>
    <script src="./find-primes.js"></script>
</head>
<body>
Check the console.
</body>
</html>
What I've tried so far:
- Making sure the primes array gets collected after each run by nullifying it.
- Allocating the whole array once.
Do you have any more ideas for optimizing the memory usage of my JavaScript implementation?
What I want to avoid is splitting the array or computing the result in multiple passes, because I want to benchmark the raw calculation performance. Memory management shouldn't influence the test.
I'm using:
- Google Chrome: Version 75.0.3770.90 (Official Build) (64-bit)
- Mozilla Firefox: Version 67.0.2 (64-bit)
Thank you in advance!
Answer 1:
Instead of Array, which naïvely stores each of your flags as a double (and probably has even more memory overhead on top of that for its ability to store arbitrary mixed types), you could use a Uint8Array or another unsigned-integer typed array and pack the flags into individual bits, which would reduce your memory usage by at least 64 times.
You'll need to modify how you initialize, toggle, and check the flags, though.
Initialization:
const BitArray = Uint8Array; // or Uint16Array, Uint32Array
const BITS_PER_ELEMENT = BitArray.BYTES_PER_ELEMENT * 8;
const BIT_SHIFT = Math.log2(BITS_PER_ELEMENT);
const BIT_MASK = BITS_PER_ELEMENT - 1;
let primes = new BitArray(Math.ceil(limit / BITS_PER_ELEMENT));
Toggling:
primes[z >> BIT_SHIFT] ^= 1 << (z & BIT_MASK);
Checking:
if (primes[i >> BIT_SHIFT] & (1 << (i & BIT_MASK)))
Any choice between Uint8Array, Uint16Array, and Uint32Array would use the same amount of memory, since the bits are packed either way.
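Putting the three snippets above together, here is a minimal, self-contained sketch of the bit-array idea (the helper names `toggle` and `isSet` are just illustrative, not part of any API):

```javascript
// Bit-packed flag array, as described above. Uint8Array backing is assumed;
// Uint16Array or Uint32Array work the same way with the derived constants.
const BitArray = Uint8Array;
const BITS_PER_ELEMENT = BitArray.BYTES_PER_ELEMENT * 8; // 8 bits per element
const BIT_SHIFT = Math.log2(BITS_PER_ELEMENT);           // 3: index / 8 via >>
const BIT_MASK = BITS_PER_ELEMENT - 1;                   // 7: index % 8 via &

const limit = 100; // small limit for demonstration
const flags = new BitArray(Math.ceil(limit / BITS_PER_ELEMENT));

// Toggle the bit for index z (replaces primes[z] = !primes[z]).
function toggle(z) {
    flags[z >> BIT_SHIFT] ^= 1 << (z & BIT_MASK);
}

// Check the bit for index i (replaces primes[i] == 1).
function isSet(i) {
    return (flags[i >> BIT_SHIFT] & (1 << (i & BIT_MASK))) !== 0;
}

toggle(13);
console.log(isSet(13)); // true
toggle(13);
console.log(isSet(13)); // false: toggling twice clears the flag again
```

For your limit of 500,000,000 this backing store is about 62.5 MB (one bit per candidate) instead of one array slot per candidate, which is what keeps the run inside the heap limit.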
Source: https://stackoverflow.com/questions/56617250/javascript-out-of-memory-for-huge-result-array-in-sieve-of-atkin-implementation