I know that if I have files in a bare repository, I can access them using git show HEAD:path/to/file.
But can I add new content to a bare repository without cloning and modifying a working tree?
If I add one file, only that one file is in the commit; i.e., we added something new and it's now set in stone.
There are several convenient ways to add a single-file commit to the tip of the master branch in a bare repo.
So it appears I need to create a blob object, attach it to a tree, then attach that tree object to a commit.
All the ways to commit anything boil down to doing that; it's just a question of how well the convenience commands suit your purpose. git add creates a blob and makes an entry for it in the index; git commit does a git write-tree that adds any new trees for what's in the index, a git commit-tree that adds a commit of the top-level resulting tree, and a git update-ref to keep HEAD up to date. Bare repos do have a HEAD commit, generally attached to (aka a symbolic ref for) a branch like master.
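For reference, here's a rough sketch (mine, not part of the original answer) of the plumbing sequence those convenience commands wrap, using a hypothetical file path and commit message:

blob=`git hash-object -w data/logs/today.log`        # store the file's content as a blob
git update-index --add --cacheinfo 100644 $blob data/logs/today.log  # record it in the index
tree=`git write-tree`                                 # write tree objects for the index contents
parent=`git rev-parse HEAD`                           # current tip, to be the new commit's parent
commit=`git commit-tree $tree -p $parent -m 'new logfile'`  # create the commit object
git update-ref refs/heads/master $commit              # advance the branch HEAD points at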
So git's convenience commands are already doing almost exactly what you want. Especially with just the one file, this is going to be very easy.
Say, for example, your files appear in ~server/data/logs/, the bare repo you're using for distribution is at ~server/repo.git, you want the committed files to be at data/logs in the repo, and you always want to commit the latest logfile:
#!/bin/sh
cd ~server
# supply locations git ordinarily does on its own in working i.e. non-bare repos:
export GIT_DIR=$PWD/repo.git # bare repos don't have defaults for these
export GIT_WORK_TREE=$PWD # so supply some to suit our purpose
export GIT_INDEX_FILE=$GIT_DIR/scratch-index # ...
# payload: commit (only) the latest file in data/logs:
git read-tree --empty # make the index all pretty, and
git add data/logs/`ls -1t data/logs|sed q` # everything's ordinary from here - add and
git commit -m'new logfile' # commit
git read-tree loads index entries from committed trees. It's what underlies checkout and merge and reset and probably some others I'm forgetting atm. Here, we just want an empty index to start, hence --empty.
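If, on the other hand, you wanted each new commit to carry forward the files committed earlier rather than containing only the newest one, a variation (my sketch, not from the original answer) would be to seed the scratch index from the current tip instead:

git read-tree HEAD                          # load the index from the current tip's tree
git add data/logs/`ls -1t data/logs|sed q`  # then add the newest logfile on top of it
git commit -m'new logfile'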
As for "use push/pull/remote to synchronize data while using a tool already available on every machine":
You said "millions" of files over time, and if you don't want that full history distributed, rsync
as I gather you already suspect might be a better bet. But -- one at a time, one new file per minute, it'll take two years to accumulate just one million. So, ?
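For what it's worth, a minimal rsync sketch of that alternative (the destination host and path are made up) could be:

rsync -av ~server/data/logs/ backup-host:/srv/data/logs/   # copy raw logfiles, no git history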
In any case, the above procedure extends pretty efficiently to any smallish number of files per commit. For bulk work there are better ways.
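As a rough sketch of that extensibility (the filenames here are hypothetical), you could add several files to the scratch index before making the single commit:

git read-tree --empty
for f in data/logs/app-01.log data/logs/app-02.log data/logs/app-03.log
do
    git add "$f"                 # stage each file in the scratch index
done
git commit -m'several new logfiles'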
Source: https://stackoverflow.com/questions/29389897/adding-things-to-a-git-bare-repository