I understand the general advice against using synchronous AJAX calls: a synchronous call blocks UI rendering.
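To make the blocking concrete, here is a minimal sketch (the /slow-endpoint URL and the demoBlocking name are just placeholders): while the synchronous send() is waiting for the server, the tab processes no clicks, repaints or timers.

var demoBlocking = function() {
    var request = new XMLHttpRequest();
    request.open('GET', '/slow-endpoint', false); // `false` = synchronous
    request.send(null); // blocks the UI thread until the server answers
    // only after send() returns can the page react to the user again
};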
The other reason generally given is memory leaks. I think memory leaks happen mainly when the garbage collector can't do its job, i.e. something still holds a reference to an object, so the GC cannot free it. I wrote a simple example:
var getDataSync = function(url) {
    console.log("getDataSync");
    var request = new XMLHttpRequest();
    request.open('GET', url, false); // `false` makes the request synchronous
    try {
        request.send(null);
        if (request.status === 200) {
            return request.responseText;
        } else {
            return "";
        }
    } catch (e) {
        // a failed synchronous send() throws, so the error must be caught here
        console.log("!ERROR");
        return ""; // without this the function silently returns undefined
    }
};
var getDataAsync = function(url, callback) {
    console.log("getDataAsync");
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true); // `true` makes the request asynchronous
    xhr.onload = function (e) {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                callback(xhr.responseText);
            } else {
                callback("");
            }
        }
    };
    xhr.onerror = function (e) {
        // network failures never throw here; they arrive through this handler
        callback("");
    };
    xhr.send(null);
};
var requestsMade = 0;
var requests = 1;
var url = "http://missing-url";
for (var i = 0; i < requests; i++) {
    requestsMade++;
    getDataSync(url);
    // swap in getDataAsync(url, function(response) { ... }); to compare
}
Apart from the fact that the synchronous function blocks a lot of stuff, there is another big difference: the error handling. If you use getDataSync, remove the try-catch block and refresh the page, you will see that an error is thrown, because the URL doesn't exist. The question now is how the garbage collector works when an error is thrown. Does it clear all the objects connected with the error, does it keep the error object, or something like that? I'll be glad if someone who knows more about that writes it here.
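For comparison, here is a minimal sketch of the two failure paths for the unreachable URL, assuming the try-catch inside getDataSync has been removed so the exception propagates to the caller:

try {
    getDataSync("http://missing-url"); // synchronous send() throws a NetworkError DOMException
} catch (e) {
    console.log("sync failure arrives as an exception: " + e.name);
}

getDataAsync("http://missing-url", function (text) {
    // the async failure arrives via onerror -> callback(""), never as an exception
    console.log("async failure arrives as an empty result: " + JSON.stringify(text));
});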