> I like using Git software to push commits, but the ones I use (Gitbox, Github, SourceTree) all ask for a local repo when adding a new repo to them. Thing is, my repo is
Remember, Git is a DVCS. The fact that you don't connect to a remote server to commit stuff is by design.
What you want to do is have local Git repos that push code to your integration server (the one that actually runs the code). It's like deploying, only you deploy to a test server instead of production.
This is normally achieved by having a shared Git repository you push to. This repo should be bare. Besides the bare shared repo, you'll want a non-bare clone of the shared Git repo which will serve as your Apache docroot.
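As a sketch, the two repositories can be created like this (the paths are hypothetical; adjust them to your layout):

```shell
# On the server.

# The bare shared repository everyone pushes to:
git init --bare /srv/git/myapp.git

# A non-bare clone of it that serves as the Apache docroot:
git clone /srv/git/myapp.git /var/www/html
```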
When the shared repo receives a commit, it will make the docroot repo execute `git pull`.
This can be achieved by using post-receive hooks on the shared repo.
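A minimal `post-receive` hook along those lines might look like this (the docroot path and branch name are assumptions; adapt them to your setup):

```shell
#!/bin/sh
# hooks/post-receive in the bare shared repo (sketch).
# Git feeds one "oldrev newrev refname" line per updated ref on stdin.

DOCROOT=/var/www/html   # hypothetical path to the non-bare docroot clone
BRANCH=develop          # only pushes to this branch trigger a deploy

while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/$BRANCH" ]; then
        # Run git pull inside the docroot clone
        git -C "$DOCROOT" pull origin "$BRANCH"
    fi
done
```

Because the hook checks `refname`, pushes to any other branch fall through without touching the docroot.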
The docroot repo is checked out on a specific branch (let's say `develop`). So even if you commit stuff to other branches and push them, the server won't be affected.
This allows you to set up multiple deployment repositories, so you could have another branch, `prod`, associated with one that would actually update production code when you push to it.
It also allows you to store incomplete, ongoing work on a shared branch that doesn't deploy at all. That way you know the thing you've been working on on your laptop is safe on the shared repo, even though it can't be sent to the test server yet: it's not complete, and it would break the test server and stop other people from working.
This article explains in detail how to set all that up. I've done it before; it works well.
Seven years later, the goal is for Git to be able to use a virtual disk, through VFS for Git:
> The Virtual Filesystem for Git (formerly GVFS) is an open source system that enables Git to operate at enterprise scale. It makes using and managing massive Git repositories possible. VFS for Git virtualizes the filesystem beneath your Git repository so that Git tools see what appears to be a normal repository when, in fact, the files are not actually present on disk. VFS for Git only downloads files as they are needed.
This is not (yet) part of Git itself, but:
For that, Git 2.22 (Q2 2019) will help manage such a virtual disk by introducing a new hook, "`post-index-change`", which is called when the on-disk index file changes: that can help, e.g., a virtualized working tree implementation.
See commit 1956ecd (15 Feb 2019) by Ben Peart (`benpeart`).
(Merged by Junio C Hamano -- `gitster` -- in commit 5795a75, 25 Apr 2019)
> **`read-cache`: add `post-index-change` hook**
>
> Add a `post-index-change` hook that is invoked after the index is written in `do_write_locked_index()`.
>
> This hook is meant primarily for notification, and cannot affect the outcome of git commands that trigger the index write.
>
> The hook is passed a flag to indicate whether the working directory was updated or not, and a flag indicating if a `skip-worktree` bit could have changed. These flags enable the hook to optimize its response to the index change notification.
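As a sketch, a `.git/hooks/post-index-change` script (assuming Git 2.22+) receives those two flags as positional arguments; the log path below is an arbitrary choice for illustration:

```shell
#!/bin/sh
# .git/hooks/post-index-change (sketch)
# $1 = "1" if the index write updated the working directory, "0" otherwise
# $2 = "1" if a skip-worktree bit could have changed, "0" otherwise

LOG=/tmp/index-change.log

if [ "$1" = "1" ]; then
    echo "working directory was updated" >> "$LOG"
fi
if [ "$2" = "1" ]; then
    echo "skip-worktree bits may have changed" >> "$LOG"
fi

# Notification only: the exit status cannot change the outcome of the
# command that wrote the index.
exit 0
```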
I had this exact problem about a year ago. Unfortunately I couldn't find any consistent, reliable answer: I googled for weeks, trying every which way, thinking my search terms were the problem.
[The setup we had was that each developer had their own dev server. Keeping the servers separate from working machines meant that sites could be developed anywhere, the dev servers could be set up to match live environments exactly, and the sysadmin could keep them upgraded and backed up. I fully see the advantage of a development server separate from the working machine, with one of the few downsides being no Git apps!]
Everything that relies on file mounts, or on spoofing your computer into thinking a remote drive is local, is fine for general file browsing, but Git apps tend to flake out when the connection is intermittent. Other times you have to do things in a certain way just to see a `git status`.
I know this is not the answer you want to hear as I was in your situation a while ago and know exactly how you feel, but the best thing you can do is use git on the command line.
I hate to be one of those "the command line is better" Stack Overflow answerers, but in this situation I couldn't find anything that was up to scratch for use all day, every day, by multiple developers.
I was also against it at the time; I preferred the prettier, easier-to-use UIs. But since learning the command line and Git, I have never looked back. When starting my own projects at home I find myself reaching for the terminal over any app, as I find many of them confusing!
Not only does it build your command-line confidence, but my Git knowledge has improved tenfold since using the terminal, as the apps often hide a lot of what is actually happening.
I found an easy way for myself: in Transmit (FTP client) there is an option to 'Mount favorite as disk'. SourceTree can work as expected with this 'virtual' disk.
One limitation, though: you can only mount the disk and start SourceTree after you've made all your code changes and are ready to commit/push. It will not work if you keep SourceTree and the SSH disk mounted while working on the code. For some reason a disk mounted by Transmit doesn't update file contents live, but only after an unmount/mount.
One solution, which doesn't rely on the front-end to support manipulating a remote repo directly, would be to mount the remote as a networked filesystem. If you only have SSH access to the remote machine, you could try using SSHFS via FUSE (on Linux) or OSXFUSE on Mac OS X. Or depending on your preferences and setup, you could use SMB, NFS, DAV, or another network filesystem.
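For example, with SSHFS the mount/unmount cycle looks roughly like this (the host and paths are placeholders, and these commands assume `sshfs` is installed):

```shell
# Mount the remote repo directory locally over SSH (placeholder host/paths):
mkdir -p ~/mnt/devserver
sshfs user@devserver:/var/www/myapp ~/mnt/devserver

# Point your Git GUI at ~/mnt/devserver as if it were a local repo.

# When done, unmount:
fusermount -u ~/mnt/devserver   # Linux (FUSE)
# umount ~/mnt/devserver        # macOS (OSXFUSE)
```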
Another way to do it, that I bring up in the comments, is to export the network filesystem from your development machine to your server. I do this so that I can mount my current working copy on multiple machines at once, and also so that I still have my local working copy even when I'm not connected to the server.
You write:

> I am surprised git software can't deal with remote repos as the working version.
Most Git GUIs do some of their work by calling out to the `git` command. In order for them to support remote operation, core Git would have to as well. It is written in a mix of C and shell script; all of that would have to be rewritten to cope with remote files.
A text editor has a much easier job; it reads one file when you open it and writes when you save, while Git reads and writes many files in the course of a single operation like `commit`.
A networked filesystem would mean that all tools (Git and otherwise) will work on your remote files. Instead of building a layer into each and every application to support networked file access, doing it in the kernel (or via FUSE) and then just treating it like a local filesystem gives you that support in every application for free.