Sort of like this:
function getfoo() {
    var foo = "";
    $.get("foofile.html", function (data) {
        foo = data;
    });
    return foo;
}
The first "a" of ajax stands for asynchronous, so what you are trying to do goes somewhat against the philosophy of ajax. It is however possible to use a blocking request; this is not supported by the simplified $.get interface, so you must use the $.ajax function:
var foo = "hmmm";
$.ajax("jquery.js", {async:false, success:function(x){foo=x}});
alert(foo);
The basic execution model of javascript is event-based and single threaded (there are web workers that provide multithreading capability, but each worker lives in its own address space and cannot share any memory with other workers or with the main thread... so in a sense they're more similar to processes than to threads).
In javascript you cannot "wait" in a loop for other things to happen; your functions must always terminate quickly, possibly attaching callbacks to be invoked when something happens. If you write such a loop, the javascript engine just gets stuck in it and is not allowed to do any other processing (at least none visible from javascript). For example:
// Warning: WRONG example... this won't work!
var foo = 0;
setTimeout(function(){foo = 1}, 100); // Set foo=1 after 100ms
while (foo == 0) ; // Wait for that to happen
alert(foo);
is NOT going to work, because the browser engine is not allowed to execute any other javascript code (including the timeout callback) until the main code path runs to completion.
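The fix is to move the dependent work into the callback itself. A minimal sketch (using console.log instead of alert so it also runs outside a browser):

```javascript
var foo = 0;
setTimeout(function () {
    foo = 1;           // fires roughly 100ms later...
    console.log(foo);  // ...after the main code path has already ended
}, 100);
// No busy-wait: the main code path terminates here, which is exactly
// what allows the engine to run the timeout callback later.
```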
This event-based approach simplifies programming quite a bit (no locking is needed, because there is always just one thread manipulating the state), but it forces you to use a different flow design for long operations.
The fact that there is a single thread of execution also means that while your javascript code is doing a long computation, everything else is blocked and the browser looks unresponsive to the user.
This unresponsiveness is why synchronous calls to retrieve resources are considered bad practice: the difficult job of acquiring different resources concurrently from multiple streams is already implemented by the browser, but your javascript code is required to use the callback model to take advantage of this feature.
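As a concrete illustration of that callback model, here is a plain-javascript sketch. fakeGet is a hypothetical stand-in for $.get (a timer instead of a real network request), and barfile.html is an invented second resource:

```javascript
// Hypothetical stand-in for $.get: delivers data asynchronously via a timer.
function fakeGet(url, callback) {
    setTimeout(function () {
        callback("contents of " + url);
    }, 10);
}

// Acquire two resources concurrently: both requests are started at once,
// and the combining code runs only when the last one has arrived.
var pending = 2, results = {};
function oneArrived(name, data) {
    results[name] = data;
    pending -= 1;
    if (pending === 0) {
        console.log(results.foo, results.bar); // both are ready here
    }
}
fakeGet("foofile.html", function (data) { oneArrived("foo", data); });
fakeGet("barfile.html", function (data) { oneArrived("bar", data); });
```

With real jQuery the same shape is available ready-made as $.when($.get(...), $.get(...)).done(...).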
You should pass a callback to the function and let it handle your data.
function getfoo(callback) {
    var foo = "";
    $.get("foofile.html", function (data) {
        callback(data);
        // do some other things
        // ...
    });
}
getfoo(function (data) {
    console.log(data);
});
When using ajax, you should write your code a little differently: separate the caller and callee logic.
Suppose your existing code looks like this:
function getfoo() {
    var foo = "";
    $.get("foofile.html", function (data) {
        foo = data;
    });
    return foo;
}
function usefoo() {
    var data = getfoo();
    // do something with data
}
It should really be written like this:
function getfoo() {
    $.get("foofile.html", function (data) {
        usefoo(data);
    });
}
function usefoo(data) {
    // do something with data
}