I have a simple Lambda function that asynchronously makes an API call and then returns data. 99% of the time this works great. Whenever the API takes longer than the Lambda's configured timeout, the invocation fails.
I think the problem is the IP address allowed in the AWS RDS inbound/outbound rules.
If you are testing for now and your Node.js code works in your local IDE but not on AWS, then you have to do the following:
Go to AWS RDS.
Click on the DB instance.
Open its security group's inbound/outbound rules and allow traffic from the relevant IP address.
All set.
Well, if you defined 3 seconds in your function configuration, that timeout will override the time inside your code. Make sure to increase the timeout in your Lambda function's configuration, then try the wait() again and it should work!
I've run into the same issue, in fact there are many cases when Lambda becomes unresponsive, e.g.:
Parsing invalid JSON:
exports.handler = function(event, context, callback) {
    var nonValidJson = "Not even JSON";
    var jsonParse = JSON.parse(nonValidJson); // throws SyntaxError
};
Accessing a property of an undefined value:
exports.handler = function(event, context, callback) {
    var emptyObject = {};
    var value = emptyObject.Item.Key; // TypeError: Cannot read property 'Key' of undefined
};
Not closing a MySQL connection after accessing RDS leads to a Lambda timeout, and then the function becomes non-responsive.
When I say unresponsive, it's literally not even loading, i.e. the first print inside the handler isn't printed, and Lambda just exits every run with a timeout:
exports.handler = function(event, context, callback) {
    console.log("Hello there"); // never printed while the container is stuck
};
It's a bug, known to the AWS team for almost a year:
https://forums.aws.amazon.com/thread.jspa?threadID=238434&tstart=0
Unfortunately it's still not fixed. After some tests it turns out that Lambda does try to restart (reload the container?); there is just not enough time. If you set the timeout to 10s, Lambda starts working after ~4s of execution time, and subsequent runs behave normally. I've also tried playing with the setting:
context.callbackWaitsForEmptyEventLoop = false;
and putting all 'require' blocks inside the handler; nothing really worked. The only way to prevent Lambda from becoming dead is to set a bigger timeout; 10s should be more than enough as a workaround protection against this bug.
I just had to increase the timeout and the error subsided. I increased it to 5 seconds. This was okay for me because I wasn't going to use this Lambda in production.
In the AWS console's function configuration you have to change the default timeout from 3 seconds to something higher (the maximum is now 15 minutes; it was previously 5 minutes).
You should look at how your function handler works with context.callbackWaitsForEmptyEventLoop. If that boolean is false, the setTimeout will never fire, because you might have answered/handled the Lambda invocation earlier. But if the value of callbackWaitsForEmptyEventLoop is true, then your code will do what you are looking for.
Also - it's probably easier to handle everything via callbacks directly, without the need for "hand-written" timeouts, changing configuration timeouts and so on...
E.g.
function doneFactory(cb) { // closure factory returning a callback function which knows about res (response)
    return function(err, res) {
        if (err) {
            return cb(JSON.stringify(err));
        }
        return cb(null, res);
    };
}

// you're going to call this Lambda function from your code
exports.handle = function(event, context, handleCallback) {
    // allows for using callbacks as finish/error-handlers
    context.callbackWaitsForEmptyEventLoop = false;
    doSomeAsyncWork(event, context, doneFactory(handleCallback));
};