node and Error: EMFILE, too many open files

终归单人心 2020-11-28 17:58

For some days I have searched for a working solution to an error

Error: EMFILE, too many open files

It seems that many people have the same problem.

18 Answers
  • 2020-11-28 18:14

    With bagpipe, you just need to change

    FS.readFile(filename, onRealRead);
    

    =>

    var bagpipe = new Bagpipe(10);
    
    bagpipe.push(FS.readFile, filename, onRealRead);
    

    Bagpipe helps you limit the concurrency. More details: https://github.com/JacksonTian/bagpipe
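
    A fuller sketch, assuming bagpipe is installed (npm install bagpipe) and filenames is a hypothetical array of paths to read:

    var Bagpipe = require('bagpipe');
    var FS = require('fs');

    // Allow at most 10 concurrent readFile calls; the rest wait in a queue
    var bagpipe = new Bagpipe(10);

    filenames.forEach(function(filename) {
        bagpipe.push(FS.readFile, filename, function onRealRead(err, data) {
            // invoked once the queued read actually runs
        });
    });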

  • 2020-11-28 18:16

    You're reading too many files. Node reads files asynchronously, so it'll be reading all the files at once, and you're probably hitting the 10240 limit.
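
    For illustration, the typical failure mode looks like this (filenames here is a hypothetical array of thousands of paths): every read is started immediately, so the descriptor limit is exhausted before any of them finish.

    var fs = require('fs');

    // Anti-pattern: starts every read at once and exhausts file descriptors
    filenames.forEach(function(name) {
        fs.readFile(name, function(err, data) {
            // with enough files, err.code will be 'EMFILE' here
        });
    });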

    See if this works:

    var fs = require('fs');
    var events = require('events');
    var util = require('util');
    var path = require('path');

    // Reads the files in a directory a few at a time instead of all at once
    var FsPool = module.exports = function(dir) {
        events.EventEmitter.call(this);
        this.dir = dir;
        this.files = [];   // names waiting to be read
        this.active = [];  // names currently being read
        this.threads = 1;  // max concurrent reads
        this.on('run', this.runQuota.bind(this));
    };
    // So it will act like an event emitter
    util.inherits(FsPool, events.EventEmitter);

    FsPool.prototype.runQuota = function() {
        if(this.files.length === 0 && this.active.length === 0) {
            return this.emit('done');
        }
        // Only start another read if a file is left and a slot is free
        if(this.files.length > 0 && this.active.length < this.threads) {
            var name = this.files.shift();

            this.active.push(name);
            var fileName = path.join(this.dir, name);
            var self = this;
            fs.stat(fileName, function(err, stats) {
                if(err)
                    throw err;
                if(stats.isFile()) {
                    fs.readFile(fileName, function(err, data) {
                        if(err)
                            throw err;
                        // Free the slot, then schedule the next read
                        self.active.splice(self.active.indexOf(name), 1);
                        self.emit('file', name, data);
                        self.emit('run');
                    });
                } else {
                    self.active.splice(self.active.indexOf(name), 1);
                    self.emit('dir', name);
                    self.emit('run');
                }
            });
        }
        return this;
    };
    FsPool.prototype.init = function() {
        var dir = this.dir;
        var self = this;
        fs.readdir(dir, function(err, files) {
            if(err)
                throw err;
            self.files = files;
            self.emit('run');
        });
        return this;
    };

    var fsPool = new FsPool(__dirname);

    fsPool.on('file', function(fileName, fileData) {
        console.log('file name: ' + fileName);
        console.log('file data: ', fileData.toString('utf8'));
    });
    fsPool.on('dir', function(dirName) {
        console.log('dir name: ' + dirName);
    });
    fsPool.on('done', function() {
        console.log('done');
    });
    fsPool.init();

  • 2020-11-28 18:16

    Here's my two cents: considering a CSV file is just lines of text, I streamed the data (strings) to avoid this problem.

    It was the easiest solution that worked for my use case.

    It can be used with graceful-fs or the standard fs. Just note that the file won't have headers when created this way.

    // import graceful-fs or normal fs
    const fs = require("graceful-fs"); // or use: const fs = require("fs") 
    
    // Create output file and set it up to receive streamed data
    // Flag is to say "append" so that data can be recursively added to the same file 
    let fakeCSV = fs.createWriteStream("./output/document.csv", {
      flags: "a",
    });
    

    The data that needs to be streamed to the file is written like this:

    // create custom streamer that can be invoked when needed
    const customStreamer = (dataToWrite) => {
      fakeCSV.write(dataToWrite + "\n");
    };
    

    Note that dataToWrite is simply a string with a custom separator like ";" or ",", i.e.

    const dataToWrite = "batman" + ";" + "superman"
    customStreamer(dataToWrite);
    

    This writes "batman;superman" to the file.


    • Note that there's no error handling whatsoever in this example; a minimal sketch of that follows below.
    • Docs: https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options
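
    A minimal sketch of the missing error handling, plus closing the stream once all rows are written (fakeCSV is the write stream created above):

    // Surface write errors instead of letting them crash the process
    fakeCSV.on("error", (err) => {
      console.error("write stream error:", err);
    });

    // Call end() when done so the underlying descriptor is released
    fakeCSV.end();
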
  • 2020-11-28 18:17

    I tried installing watchman, changing limits, etc., and it didn't work with Gulp.

    Restarting iterm2 actually helped though.

  • 2020-11-28 18:18

    For when graceful-fs doesn't work, or you just want to understand where the leak is coming from, follow this process.

    (e.g. graceful-fs isn't gonna fix your wagon if your issue is with sockets.)
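
    (For reference, the usual graceful-fs setup is either a drop-in require, or a global patch so third-party code benefits too; both assume the package is installed:)

    // Drop-in replacement for fs that queues and retries on EMFILE
    var fs = require('graceful-fs');

    // Or patch the core fs module in place
    var realFs = require('fs');
    var gracefulFs = require('graceful-fs');
    gracefulFs.gracefulify(realFs);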

    From My Blog Article: http://www.blakerobertson.com/devlog/2014/1/11/how-to-determine-whats-causing-error-connect-emfile-nodejs.html

    How To Isolate

    This command will output the open file handles for nodejs processes:

    lsof -i -n -P | grep nodejs
    
    COMMAND     PID    USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
    ...
    nodejs    12211    root 1012u  IPv4 151317015      0t0  TCP 10.101.42.209:40371->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1013u  IPv4 151279902      0t0  TCP 10.101.42.209:43656->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1014u  IPv4 151317016      0t0  TCP 10.101.42.209:34450->54.236.3.168:80 (ESTABLISHED)
    nodejs    12211    root 1015u  IPv4 151289728      0t0  TCP 10.101.42.209:52691->54.236.3.173:80 (ESTABLISHED)
    nodejs    12211    root 1016u  IPv4 151305607      0t0  TCP 10.101.42.209:47707->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1017u  IPv4 151289730      0t0  TCP 10.101.42.209:45423->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1018u  IPv4 151289731      0t0  TCP 10.101.42.209:36090->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1019u  IPv4 151314874      0t0  TCP 10.101.42.209:49176->54.236.3.172:80 (ESTABLISHED)
    nodejs    12211    root 1020u  IPv4 151289768      0t0  TCP 10.101.42.209:45427->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1021u  IPv4 151289769      0t0  TCP 10.101.42.209:36094->54.236.3.170:80 (ESTABLISHED)
    nodejs    12211    root 1022u  IPv4 151279903      0t0  TCP 10.101.42.209:43836->54.236.3.171:80 (ESTABLISHED)
    nodejs    12211    root 1023u  IPv4 151281403      0t0  TCP 10.101.42.209:43930->54.236.3.172:80 (ESTABLISHED)
    ....
    

    Notice the 1023u (last line): that's the 1024th file handle, which is the default maximum.

    Now look at the last column, which indicates which resource is open. You'll probably see a number of lines all with the same resource name. Hopefully, that tells you where to look in your code for the leak.

    If you're running multiple node processes and don't know which is which, first look up which process has pid 12211; that'll tell you the process.
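
    For example, a quick way to map that pid back to its command line (plain ps, nothing node-specific):

    ps -p 12211 -o pid,comm,args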

    In my case above, I noticed that there were a bunch of very similar IP addresses, all 54.236.3.###. By doing IP address lookups, I was able to determine that in my case it was pubnub related.

    Command Reference

    To get a count of open files for a certain pid, use this syntax. I used this command to test the number of files that were opened after doing various events in my app.

    lsof -i -n -P | grep "8465" | wc -l
    
    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    28
    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    31
    # lsof -i -n -P | grep "nodejs.*8465" | wc -l
    34
    

    What is your process limit?

    ulimit -a
    

    The line you want will look like this:

    open files                      (-n) 1024
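
    For a quick, session-only bump (it resets when the shell exits), you can raise the soft limit in the current shell before starting node:

    ulimit -n 4096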
    

    Permanently change the limit:

    • tested on Ubuntu 14.04, nodejs v. 7.9

    In case you are expecting to open many connections (websockets is a good example), you can permanently increase the limit:

    • file: /etc/pam.d/common-session (add to the end)

        session required pam_limits.so
      
    • file: /etc/security/limits.conf (add to the end, or edit if already exists)

        root soft  nofile 40000
        root hard  nofile 100000
      
    • restart your nodejs and logout/login from ssh.

    • this may not work for older NodeJS; you may need to restart the server

    • use your user name instead of root if your node runs with a different uid.

  • 2020-11-28 18:19

    Use the latest fs-extra.

    I had that problem on Ubuntu (16 and 18) with plenty of file/socket-descriptor space (counted with lsof | wc -l), using fs-extra version 8.1.0. After updating to 9.0.0, the "Error: EMFILE, too many open files" vanished.

    I've experienced diverse problems with node handling filesystems across diverse OSes. Filesystems are obviously not trivial.
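
    To pull in the latest release (assuming npm):

    npm install fs-extra@latest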
