Why does reading from and writing to the same file in a pipeline produce unreliable results?

予麋鹿 2020-12-04 03:04

I have a bunch of files that contain many blank lines, and I want to remove any repeated blank lines to make the files easier to read. I wrote the following script:

        for file in *; do
            cat "$file" | cat -s > "$file"
        done
2 answers
  • 2020-12-04 03:44

    You cannot read from a file while you are writing to it at the same time. The > redirection truncates the file before the pipeline even starts, so there is nothing left to read.
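
    A minimal demonstration of that truncation, assuming a POSIX shell (demo.txt is a throwaway file name, not from the question):

```shell
printf 'hello\n' > demo.txt

# The shell performs the > truncation while setting up the command,
# before sed ever opens the file, so sed reads an already-empty file:
sed -e '' demo.txt > demo.txt

wc -c < demo.txt    # 0 bytes left: the original contents are gone
```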

    You can use sed -i -e '/^$/d' to remove the empty lines (if your sed supports -i); it creates a temporary file under the hood.
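
    A sketch of the same idea done by hand, for seds without -i (the file names are hypothetical):

```shell
printf 'a\n\n\nb\n' > demo.txt

# Write the cleaned output to a different file, then replace the
# original; this is essentially what sed -i does internally:
sed -e '/^$/d' demo.txt > demo.txt.tmp && mv demo.txt.tmp demo.txt

cat demo.txt    # a and b, with the blank lines gone
```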

  • 2020-12-04 03:45

    The unpredictability comes from a race condition between the two stages of the pipeline, cat "$file" and cat -s > "$file".

    The first stage tries to open the file and read from it, while the shell, setting up the redirection for the second stage, truncates (empties) it.

    • If it's emptied before it's read, you get an empty file.
    • If it's read before it's emptied, you get some data (but the file is emptied shortly after and the result is truncated unless it's very short).
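
    The first outcome can be forced by delaying the reading stage, which shows that the truncation happens as soon as the pipeline is set up (a sketch; demo.txt is a hypothetical file name):

```shell
printf 'line1\nline2\n' > demo.txt

# Delay the reader: by the time cat runs, the > redirection of the
# second stage has already truncated demo.txt, so nothing survives.
{ sleep 1; cat demo.txt; } | cat -s > demo.txt

wc -c < demo.txt    # 0: the file was emptied before it was read
```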

    If you have GNU sed, you can simply do sed -i 'expression' *
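
    The answer leaves 'expression' as a placeholder; one concrete choice (my own, not from the answer) is the classic sed idiom that squeezes runs of blank lines down to one, matching what cat -s does:

```shell
printf 'a\n\n\n\nb\n' > demo.txt

# Squeeze consecutive blank lines down to one, in place.
# GNU sed's -i edits via a temporary file, avoiding the race.
sed -i '/^$/{N;/^\n$/D}' demo.txt

cat demo.txt    # a, one blank line, b
```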
