I have a complex command I am passing via ssh to a remote server. I am trying to gunzip a file and then change its naming structure and extension in a second ssh command.
The most probable reason (you don't show the contents of root's home directory on the server) is that you are uncompressing the file in the /tmp directory, but feeding awk filenames that would have to exist in root's home directory.
A double-quoted string (") allows escape sequences with \, so the correct way to write it is:
ssh root@server1 "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
When you pass the command that way (like you wrote in your question), the following is executed by a shell on the server machine:
gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print $1$3".log"}'
You are executing two commands. The first gunzips /tmp/file.out-20171119.gz (beware: it is gunzipped in /tmp). The second can be the source of the problem: it echoes all the files in the current directory (that is, the root user's home directory, probably /root on the server) whose names begin with file (probably none), and feeds that list to the awk command.
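To see the fix in action without touching a server, here is a sketch that reproduces the rename locally in a scratch directory standing in for /tmp (the filename is the one from the question; the scratch directory is an assumption for illustration). Note that the awk program by itself only prints the new name; an actual rename needs mv:

```shell
# Reproduce the rename locally; cd first so the glob matches the files.
dir=$(mktemp -d)
cd "$dir"
printf 'log data\n' > file.out-20171119   # stand-in for the gunzipped file
for f in file*; do
  mv -- "$f" "$(echo "$f" | awk -F'[.-]' '{print $1$3".log"}')"
done
ls                                        # -> file20171119.log
```

The cd matters: without it, the glob file* matches nothing in the current directory, which is exactly the bug described above.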
As a general rule: test your command locally, and once it works locally, escape all the special characters that would otherwise be consumed unescaped by the first (local) shell.
Another way to solve the problem is to use gzip(1) as a filter, so you can decide the name of the output file:
ssh root@server1 "gzip -d </tmp/file.out-20171119.gz >file20171119.log"
This way you save an awk(1) execution just to format the output filename. Or, if you have the date in an environment variable:
DATE=$(date +%Y%m%d)
ssh root@server1 "gzip -d </tmp/file.out-${DATE}.gz >file${DATE}.log"
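The redirection form can be sanity-checked locally before wrapping it in ssh; this sketch uses a temp directory as a stand-in for the server-side paths:

```shell
# Local check of the gzip-as-filter approach: compress a sample, then
# decompress it straight into the desired output name.
dir=$(mktemp -d)
printf 'hello log\n' | gzip > "$dir/file.out-20171119.gz"
gzip -d < "$dir/file.out-20171119.gz" > "$dir/file20171119.log"
cat "$dir/file20171119.log"               # -> hello log
```

Because the output name is chosen by the redirection, no awk step is needed at all.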
Finally, some advice: don't use /tmp to uncompress files. Several distributions mount /tmp as a high-speed temporary directory; it is often RAM-based, very fast but limited in space, and log files typically expand a lot, so uncompressing one there can fill up the kernel memory backing the RAM-based filesystem, which is not a good idea. Also, /tmp is a system-wide shared directory where other users can create files named file<something>, so you can clash with those files when you search with wildcard patterns like the one in your command. Lastly, once you know the name of the file, it is common to assign it to an environment variable and use that variable everywhere, so that if you need to change the filename format, you change it in only one place.
You need to escape the " characters to prevent them from closing your quoted string early, and you need to escape the $ in the awk script to prevent local parameter expansion.
ssh root@server1 "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
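You can preview what the remote shell will receive by letting the local shell expand the double-quoted string, exactly as it would before handing it to ssh:

```shell
# The escaped \$ and \" survive local expansion, so the remote shell
# sees the awk script intact.
printf '%s\n' "gzip -d /tmp/file.out-20171119.gz; echo file* | awk -F'[.-]' '{print \$1\$3\".log\"}'"
```

If the printed line is not the command you meant to run remotely, fix the escaping before involving ssh at all.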
The easiest way to deal with this problem is to avoid it. Don't bother trying to escape your script to go on a command line: pass it on stdin instead.
ssh root@server1 bash -s <<'EOF'
gzip -d /tmp/file.out-20171119.gz
# note that (particularly w/o a cd /tmp) this doesn't do anything at all related to the
# line above; thus, probably buggy as given in the original question.
echo file* | awk -F'[.-]' '{print $1$3".log"}'
EOF
A quoted heredoc -- one started with <<'EOF' or <<\EOF instead of <<EOF -- is passed literally, without any shell expansions; thus, $1 or $3 will not be replaced by the calling shell as they would be with an unquoted heredoc.
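A minimal local demonstration of the difference (the variable name is arbitrary, chosen just for illustration):

```shell
# Quoted heredoc: body is literal. Unquoted heredoc: body is expanded.
name=world
quoted=$(cat <<'EOF'
$name
EOF
)
unquoted=$(cat <<EOF
$name
EOF
)
printf '%s / %s\n' "$quoted" "$unquoted"  # -> $name / world
```

The quoted form is what makes it safe to ship an awk script containing $1 and $3 through ssh unmangled.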
If you don't want to go the avoidance route, you can have the shell do the quoting for you itself. For example:
external_function() {
gzip -d /tmp/file.out-20171119.gz
echo file* | awk -F'[.-]' '{print $1$3".log"}'
}
ssh root@server1 "$(declare -f external_function); external_function"
declare -f prints the definition of a function. Sending that definition, followed by a call to the function, as your SSH command ensures that the function body runs remotely, with all its quoting preserved by the shell itself.
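The same round trip can be demonstrated locally by serializing a function into a fresh shell (the function name here is a made-up example; declare -f is a bash builtin, so run this with bash, not a plain POSIX sh):

```shell
# Define a function, serialize it with declare -f, and re-create and
# call it in a child bash -- the same mechanism as the ssh example.
greet() { printf 'hello %s\n' "$1"; }
bash -c "$(declare -f greet); greet world"   # -> hello world
```

Because declare -f emits syntactically valid shell source, the calling shell handles all the quoting that you would otherwise have to escape by hand.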