Argument list too long error for rm, cp, mv commands

长情又很酷 2020-11-22 04:50

I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).

When I try to delete all PDFs together using the following command:

    rm -f *.pdf

I get the error:

    /bin/rm: Argument list too long

27 answers
  • 2020-11-22 05:22

    I ran into this problem a few times. Many of the solutions will run the rm command for each individual file that needs to be deleted, which is very inefficient:

    find . -name "*.pdf" -print0 | xargs -0 rm -rf
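
    For comparison, the truly one-rm-per-file form that the paragraph above warns about is the -exec ... \; variant, which forks a separate rm process for every match:

    find . -name "*.pdf" -exec rm {} \;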
    

    I ended up writing a Python script to delete the files based on the first four characters of the filename:

    import os
    filedir = '/tmp/'  # The directory you wish to run rm on
    filelist = os.listdir(filedir)  # Gets a listing of all files in the specified dir
    newlist = []  # Will hold the unique 4-character filename prefixes
    for i in filelist:
        if i[:4] not in newlist:  # Makes sure the prefixes in newlist are unique
            newlist.append(i[:4])  # Takes only the first 4 characters of the folder/file name
    for i in newlist:
        if 'tmp' in i:  # If statement looking for 'tmp' in the file/dir name
            print('Running command rm -rf ' + filedir + i + '* : File Count: ' + str(len(os.listdir(filedir))))  # Prints the command to be run and a total file count
            os.system('rm -rf ' + filedir + i + '*')  # Actual shell command
    print('DONE')
    

    This worked very well for me. I was able to clear out over 2 million temp files in a folder in about 15 minutes. I commented the tar out of the little bit of code, so anyone with minimal to no Python knowledge can manipulate it.

  • 2020-11-22 05:23

    If you’re trying to delete a very large number of files at one time (I deleted a directory with 485,000+ today), you will probably run into this error:

    /bin/rm: Argument list too long.
    

    The problem is that when you type something like rm -rf *, the * is replaced with a list of every matching file, like "rm -rf file1 file2 file3 file4" and so on. There is a relatively small buffer of memory allocated to storing this list of arguments, and if it fills up, the shell will not execute the program.
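
    A quick way to see the limit in action (a sketch; assumes bash, an external /bin/echo, and a typical 2 MB ARG_MAX, so the count may need adjusting on your system):

    # Brace expansion builds roughly 6.9 MB of arguments, exceeding ARG_MAX,
    # so the execve() of the external echo fails:
    /bin/echo {1..1000000} > /dev/null
    # bash: /bin/echo: Argument list too long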

    To get around this problem, a lot of people will use the find command to find every file and pass them one-by-one to the “rm” command like this:

    find . -type f -exec rm -v {} \;
    

    My problem is that I needed to delete 500,000 files and it was taking way too long.

    I stumbled upon a much faster way of deleting files – the “find” command has a “-delete” flag built right in! Here’s what I ended up using:

    find . -type f -delete
    

    Using this method, I was deleting files at a rate of about 2000 files/second – much faster!

    You can also show the filenames as you’re deleting them:

    find . -type f -print -delete
    

    …or even show how many files will be deleted, then time how long it takes to delete them:

    root@devel# ls -1 | wc -l && time find . -type f -delete
    100000
    real    0m3.660s
    user    0m0.036s
    sys     0m0.552s
    
  • 2020-11-22 05:24

    tl;dr

    It's a kernel limitation on the size of the command-line argument list. Use a for loop instead.

    Origin of problem

    This is a system issue, related to execve and the ARG_MAX constant. There is plenty of documentation about it (see man execve and Debian's wiki).

    Basically, the expansion produces a command (with its parameters) that exceeds the ARG_MAX limit. On kernel 2.6.23, the limit was set at 128 kB. This constant has since been increased, and you can get its value by executing:

    getconf ARG_MAX
    # 2097152 # on 3.5.0-40-generic
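
    On systems with GNU findutils, xargs can also report the limits it will actually respect (redirecting from /dev/null so it doesn't wait for input):

    xargs --show-limits < /dev/null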
    

    Solution: Using for Loop

    Use a for loop, as recommended on BashFAQ/095; there is no limit other than RAM/memory space:

    Dry run to ascertain it will delete what you expect:

    for f in *.pdf; do echo rm "$f"; done
    

    And execute it:

    for f in *.pdf; do rm "$f"; done
    

    This is also a portable approach, as globbing has strong and consistent behavior among shells (it is part of the POSIX spec).

    Note: As noted by several comments, this is indeed slower but more maintainable, as it can adapt to more complex scenarios, e.g. where one wants to perform more than a single action; see the sketch below.
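
    For instance, a sketch of a loop performing two actions per file (the archive directory here is a made-up example):

    # Move each PDF to an archive directory, then log what happened
    for f in *.pdf; do
        mv -- "$f" /tmp/pdf-archive/ && echo "archived: $f"
    done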

    Solution: Using find

    If you insist, you can use find, but really don't use xargs, as it "is dangerous (broken, exploitable, etc.) when reading non-NUL-delimited input":

    find . -maxdepth 1 -name '*.pdf' -delete 
    

    Using -maxdepth 1 ... -delete instead of -exec rm {} + allows find to simply execute the required system calls itself without spawning an external process, and is hence faster (thanks to @chepner's comment).
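
    For reference, the batching -exec form mentioned above looks like this; it still spawns rm, but as few times as possible rather than once per file:

    find . -maxdepth 1 -name '*.pdf' -exec rm {} +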

    References

    • I'm getting "Argument list too long". How can I process a large list in chunks? @ wooledge
    • execve(2) - Linux man page (search for ARG_MAX)
    • Error: Argument list too long @ Debian's wiki
    • Why do I get “/bin/sh: Argument list too long” when passing quoted arguments? @ SuperUser
  • 2020-11-22 05:24

    Another approach is to force xargs to process the commands in batches. For instance, to delete the files 100 at a time, cd into the directory and run this:

    echo *.pdf | xargs -n 100 rm
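
    Note that echo with an unquoted glob splits names on whitespace; a sketch of a NUL-safe variant (assuming your xargs supports -0):

    printf '%s\0' *.pdf | xargs -0 -n 100 rm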

  • 2020-11-22 05:24

    And another one:

    cd  /path/to/pdf
    printf "%s\0" *.[Pp][Dd][Ff] | xargs -0 rm
    

    printf is a shell builtin and, as far as I know, always has been. Because it is a builtin rather than an external command, it is not subject to the "argument list too long ..." fatal error (that limit applies to execve, which builtins never go through).
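
    You can confirm this in your own shell (bash shown here; many systems also ship an external /usr/bin/printf):

    type printf
    # printf is a shell builtin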

    So we can safely use it with shell globbing patterns such as *.[Pp][Dd][Ff], then pipe its output to the remove (rm) command through xargs, which packs enough file names into each command line that rm, an external command, never hits the limit.

    The \0 in the printf format serves as a NUL separator for the file names, which are then processed by xargs (with -0 telling it to use NUL as the separator), so rm does not fail when there are white spaces or other special characters in the file names.

  • 2020-11-22 05:25

    You can also combine find with time filters: -mtime +N matches files older than N days, and -mtime -N matches files newer than N days.

    Ex: to delete files older than 90 days (i.e. files modified 91, 92, ... days ago):

    find <path> -type f -mtime +90 -exec rm -rf {} \;

    Ex: to delete only files modified within the last 30 days (-):

    find <path> -type f -mtime -30 -exec rm -rf {} \;

    To gzip files older than 2 days:

    find <path> -type f -mtime +2 -exec gzip {} \;

    To list only the files/folders from the past month:

    find <path> -type f -mtime -30 -exec ls -lrt {} \;

    To list only the files/folders older than 30 days:

    find <path> -type f -mtime +30 -exec ls -lrt {} \;
    find /opt/app/logs -type f -mtime +30 -exec ls -lrt {} \;
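
    As the earlier answers note, -delete (where your find supports it) avoids spawning rm at all for the delete cases:

    find <path> -type f -mtime +90 -delete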
    