I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are Ubuntu Linux. I use the following command, and it works well:
As far as I know, you cannot chown files to somebody other than yourself unless you are root. So you would either have to rsync using the www-data account, as all files will be created with the connecting user as owner, or chown the files afterwards.
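A minimal local sketch of that two-step pattern, using `cp` in place of rsync and a temp directory in place of the remote server so it runs without a remote host (the chown step is shown as a comment because it needs root):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/index.html"
# Step 1: sync as the unprivileged user (cp stands in for rsync here)
cp -r "$src/." "$dst/"
# Step 2: on a real server you would then run, as root:
#   chown -R www-data:www-data /var/www/html/website
ls "$dst"
```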
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
The root users for the local system and the remote system are different.
What does this mean? The root user is UID 0. How are they different?
Any user with read permission on the directories you want to copy can determine which usernames own which files. Only root can change the ownership of the files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can then ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that tar runs as root on both ends (via sudo), so the ownership and permissions recorded in the archive are preserved when the files are extracted.
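The pipe mechanics can be tried locally without ssh (temp directories stand in for the two hosts; ssh would simply sit between the two tar processes):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "data" > "$src/file"
# The first tar writes a gzipped archive to stdout; the second tar
# reads it from stdin and unpacks it, just as it would at the far
# end of the ssh connection
tar zcf - -C "$src" . | tar zxf - -C "$dst"
cat "$dst/file"   # → data
```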
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
the && runs the chown against the folder only when the rsync completes successfully (a single ';' instead would run the chown regardless of rsync's exit status, and a single '&' would background the rsync and start the chown immediately, without waiting)
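The difference is easy to verify with a command that always fails (`false` stands in here for an rsync that failed):

```shell
# && runs the second command only when the first succeeds,
# so a failed rsync would skip the chown entirely
false && echo "after &&"
# ; runs the second command regardless of the first's exit status
false ; echo "after ;"
# only "after ;" is printed
```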
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify the sudoers file on the target host to stop sudo from asking for your password. See man sudoers or run sudo visudo for instructions and samples.
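A minimal sudoers fragment for this could look as follows; it is only a sketch, the username and rsync path are placeholders for your own setup, and it should be edited with `sudo visudo -f /etc/sudoers.d/rsync` so syntax errors are caught before they lock you out:

```
# Allow "user" to run rsync as root without a password prompt,
# which is what --rsync-path="sudo rsync" needs non-interactively
user ALL=(root) NOPASSWD: /usr/bin/rsync
```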
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you run chown or a second pass of rsync afterwards to update the ownership. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
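The `find -user` half of the pipeline can at least be checked on its own locally; in this sketch the current user stands in for www-data so it runs anywhere:

```shell
d=$(mktemp -d)
touch "$d/mine"
# -user selects files owned by the named account; rsync's
# --files-from=- would then read this list from stdin
find "$d" -user "$(id -un)" -type f
```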