remote: Counting objects: 3442754, done.
remote: Compressing objects: 100% (515633/515633), done.
remote: Total 3442754 (delta 2898137), reused 3442437
First, make sure you hit the issue consistently, and check whether the remote host is reporting problems on its side; the status update at the time read:
1:04 UTC:
As a result of our ongoing DDoS mitigation, we're experiencing high rates of packet loss from users in the Asia-Pacific region.
We're working on reducing this disruption to service and will provide additional information when it becomes available.
I have seen this issue when the machine running "repo sync" or "git pull" was very low on memory. It happened multiple times, and every time I checked, the machine had effectively 0 GB of free memory. It also appeared after the machine's OS was upgraded to Ubuntu 14.04: memory had been handled fine under the previous OS version, but was no longer sufficient afterwards.
Running the same command from a more powerful machine, or from a different OS, worked.
This is not a complete answer, but I have seen this issue myself, so take it as an observation and a workaround in case the problem is memory related.
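If low memory is the suspect, it can help to check free memory before retrying and to lower git's packing limits so the operation fits in what is available. A rough sketch, assuming a Linux client; the limit values below are only placeholders to adjust for your machine:

    # check how much memory is actually free before running the clone/sync
    free -h

    # cap the memory git uses for delta compression and pack files
    git config --global pack.threads 1
    git config --global pack.windowMemory 100m
    git config --global pack.packSizeLimit 100m

These settings trade speed for a smaller memory footprint, which can be enough to get a large clone through on a constrained machine.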
I recently ran into the same problem when cloning a large git repository (>500 MB) onto an NFS share.
If I ran the git clone directly on the server (i.e. not over NFS), the errors went away.
Further testing identified that occasionally data was getting corrupted while being copied to the server via NFSv4.
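One way to confirm that kind of corruption independently of git is to checksum a file on both ends of the copy; a minimal sketch, where the file names and mount paths are placeholders:

    # on the client: checksum a test file, then copy it over the NFS mount
    sha256sum testfile.bin
    cp testfile.bin /mnt/nfs-share/

    # on the server: checksum the copy and compare the digests
    sha256sum /export/share/testfile.bin

    # inside a repository, git can also report corrupted objects directly
    git fsck --full

A digest mismatch points at the transport (NFS, the network stack, or the NIC) rather than at git itself.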
After much debugging, it turned out the issue was caused by a buggy network driver that corrupted TCP packets when segmentation and rx/tx checksumming were offloaded to the network interface card.
After disabling segmentation and rx/tx checksum offload on the network interface card (following the instructions in the blog post "How to solve ssh disconnect packet corrupt problems"), the data corruption on the NFS share went away, and so did my issues with git.
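For reference, those offloads can be toggled with ethtool; a rough sketch, assuming the interface is eth0 and run as root (the change does not survive a reboot unless made persistent through your distribution's network configuration):

    # show the current offload settings for the interface
    ethtool -k eth0

    # disable TCP segmentation offload and rx/tx checksum offload
    ethtool -K eth0 tso off gso off rx off tx off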