This is the beginning of resurrecting multi-head pulling support
for the git-fetch-pack command. The git-fetch-script wrapper still
only knows about fetching a single head, without renaming, so it is
not very useful yet unless you call git-fetch-pack directly.
It also fixes a long-standing, obsolete description of how the command
discovers the list of local commits.
Some http servers return an HTML error page and git reads it as normal
data. Adding the -f option makes curl fail silently instead.
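For illustration, the transfer boils down to curl invocations of this
shape; the URL and object path below are made up, and $base/$path are
placeholder names, not the script's own:

# -f makes curl exit non-zero on an HTML error page instead of
# storing the page as object data
base=http://git.example.com/project.git
path=3b/18e512dba79e4c8300dd08aeb37f8e728b8dad
curl -sf "$base/objects/$path" -o ".git/objects/$path" || exit 1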
Signed-off-by: Catalin Marinas <catalin.marinas@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Implement fetching from a packed repository over http/https
using the dumb server support files.
I think some parts of the logic should live in a separate C
program, but it appears to work in my simple tests. I have
back-burnered it for a bit too long for my liking, so let's throw
it out in the open and see what happens.
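Roughly speaking (this is a sketch, not the actual script), using the
dumb server support files means the fetcher reads objects/info/packs to
learn which packs the server has, and downloads them when a wanted
object is not available loose:

base=http://git.example.com/project.git   # example URL
curl -sf "$base/objects/info/packs" |
while read type packname
do
	test "$type" = P || continue
	curl -sf "$base/objects/pack/$packname" \
		-o ".git/objects/pack/$packname"
	# the matching .idx file is fetched the same way
done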
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make setting the GIT_SSL_NO_VERIFY environment variable turn off
curl's SSL peer verification.
Only use curl for http transfers, instead of curl and wget.
Make curl check ~/.netrc for credentials.
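For example (host and credentials below are made up), fetching from a
server with a self-signed certificate while letting curl read ~/.netrc
could look like:

# ~/.netrc entry consulted by curl:
#   machine git.example.com login me password secret
GIT_SSL_NO_VERIFY=1 git-fetch-script https://git.example.com/project.git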
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We were trying to fetch using the merge-head name rather than the
merge-head SHA1 that we just got.
Now, http:// transport is broken for packed repositories anyway, but
this should make it work for non-packed repositories again.
Since pull and fetch are often run repeatedly against the same remote
repository, keeping the URL to pull from along with
the name of the head to use in $GIT_DIR/branches/$name makes a
lot of sense. Adopt that convention from Cogito, and try to stay
compatible where possible, although storing a partial URL and
completing it with a trailing path may not be understood by Cogito.
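As a made-up example of the convention:

echo http://git.example.com/project.git >.git/branches/example
git-pull-script example
# with only a partial URL stored, a trailing path completes it:
echo http://git.example.com/ >.git/branches/ex
git-pull-script ex/project.git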
While we are at it, fix pulling a tag. Earlier, we updated only
refs/tags/$tag without updating FETCH_HEAD, and called
resolve-script using a stale (or absent) FETCH_HEAD.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
It sets up the normal git environment variables and a few helper
functions (currently just "die()"), and returns success if everything
looks like a git archive. So use it like
. git-sh-setup-script || die "Not a git archive"
to make the rest of the git scripts more careful and readable.
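Roughly, such a setup script amounts to no more than this (a sketch;
the real script may differ in details):

: ${GIT_DIR=.git}
export GIT_DIR
die() {
	echo >&2 "$*"
	exit 1
}
# succeed only if this looks like a git archive
[ -d "$GIT_DIR/objects" ] && [ -d "$GIT_DIR/refs" ]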
We codify the following different heads (in addition to the main "HEAD",
which points to the current branch, of course):
- FETCH_HEAD
Populated by "git fetch"
- ORIG_HEAD
The old HEAD before a "git pull/resolve" (successful or not)
- LAST_MERGE
The HEAD we're currently merging in "git pull/resolve"
- MERGE_HEAD
The previous head of an unresolved "git pull", which gets committed by
a "git commit" after manually resolving the result
We used to have "MERGE_HEAD" be populated directly by the fetch, and we
removed ORIG_HEAD and LAST_MERGE too aggressively.
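These are all plain files under $GIT_DIR containing a SHA1, so they are
easy to inspect:

cat .git/ORIG_HEAD     # where HEAD was before the last pull/resolve
cat .git/FETCH_HEAD    # what the last fetch brought in
cat .git/MERGE_HEAD    # present only while a pull is left unresolved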
During the mailing list discussion on renaming GIT_ environment
variables, people felt that having one environment variable that lets the
user (or Porcelain) specify both SHA1_FILE_DIRECTORY (now
GIT_OBJECT_DIRECTORY) and GIT_INDEX_FILE for the default layout
would be handy. This change introduces GIT_DIR environment
variable, from which the defaults for GIT_INDEX_FILE and
GIT_OBJECT_DIRECTORY are derived. When GIT_DIR is not defined,
it defaults to ".git". GIT_INDEX_FILE defaults to
"$GIT_DIR/index" and GIT_OBJECT_DIRECTORY defaults to
"$GIT_DIR/objects".
Special thanks for ideas and discussions go to Petr Baudis and
Daniel Barkalow. Bugs are mine ;-)
Signed-off-by: Junio C Hamano <junkio@cox.net>
Separate out the merge resolve from the actual getting of the
data. Also, update the resolve phase to take advantage of the
fact that we don't need to do the commit->tree object lookup
by hand, since all the actors involved happily just act on a
commit object these days.
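With the split, a pull becomes a two-step affair along these lines (a
sketch only; the variable names and exact arguments are illustrative,
not the scripts' real interface):

merge_repo=http://git.example.com/project.git
# step 1: get the data and record it in FETCH_HEAD
git-fetch-script "$merge_repo" master || exit 1
# step 2: resolve, acting directly on the commit objects
git-resolve-script \
	"$(cat "$GIT_DIR/HEAD")" \
	"$(cat "$GIT_DIR/FETCH_HEAD")" \
	"$merge_repo"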
If you set SHA1_FILE_DIRECTORY to something other than .git/objects,
git-pull-script will store the fetched files in a location the rest of
the tools do not expect.
git-prune-script also ignores this setting, but I think this is good,
because pruning a shared tree to fit a single project means throwing
away a lot of useful data. :-)
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When the trivial "read-tree" merge fails, fall back on the (equally
trivial) automatic merge script instead of forcing the user to do
it by hand.
When _that_ fails, you get to do a manual merge.
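Schematically, glossing over details of the real script (the SHA1
variables are placeholders), the fallback chain looks like:

if read-tree -m "$common" "$head" "$merge_head" &&
   result_tree=$(write-tree)
then
	: # the trivial merge worked
else
	# fall back to the automatic per-file merge
	merge-cache git-merge-one-file-script -a || {
		echo "Automatic merge failed; fix things up by hand"
		exit 1
	}
	result_tree=$(write-tree) || exit 1
fi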
reading a single tree too. That should speed up a
trivial merge noticeably.
Also, don't bother reading back the tree we just wrote
when we committed a real merge. It had better be the
same one we still have..
They sure as hell aren't perfect, but they allow you to do:
./git-pull-script {other-git-directory}
to do the initial merge, and if that had content clashes, you do
merge-cache ./git-merge-one-file-script -a
which tries to auto-merge. When/if the auto-merge fails, it will
leave the last file in your working directory, and you can edit
it and then when you're happy you can do "update-cache filename"
on it. Re-do the merge-cache thing until there are no files left
to be merged, and now you can write the tree and commit:
write-tree
commit-tree .... -p $(cat .git/HEAD) -p $(cat .git/MERGE_HEAD)
and you're done.