Otherwise, git-clone silently failed to clone a remote
repository where redirections (i.e., a response with a
"Location" header line) are used.
This includes the fixes from Nick Hengeveld.
Signed-off-by: Josef Weidendorfer <Josef.Weidendorfer@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make a bunch of needlessly global functions static, and replace two
K&R-style declarations.
Signed-off-by: Peter Hagervall <hager@cs.umu.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When curl_message is released using curl_multi_remove_handle(), its
contents are undefined. Therefore, get the information before releasing
it.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
After using cg-update to pull, empty files named *.temp are left in
the various subdirectories of .git/objects/. These are created by
git-http-fetch to hold data as it's being fetched from the remote
repository. They are left behind after a transfer error so that the
next time git-http-fetch runs it can pick up where it left off. If
they are empty, though, it makes more sense to delete them than to
leave them behind for the next attempt.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-http-fetch spits out a curl 404 error message when it is unable to
fetch an object, but that is confusing since no real error happened: the
object is usually found in a pack it tries right after that. And if the
object still cannot be retrieved, another error message is printed
anyway. OTOH, other HTTP errors (403, etc.) are likely fatal and the
user should still be informed about them.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add configuration settings to abort HTTP requests if the transfer rate
drops below a threshold for a specified length of time. Environment
variables override config file settings.
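A minimal sketch of the mechanism, assuming libcurl's low-speed
options are used to implement the abort; the environment variable
names shown here are illustrative, not necessarily the ones the patch
introduces:

    #include <curl/curl.h>
    #include <stdlib.h>

    static long curl_low_speed_limit = -1;  /* bytes per second */
    static long curl_low_speed_time = -1;   /* seconds below the limit */

    static void set_curl_speed_limits(CURL *curl)
    {
        const char *limit = getenv("GIT_HTTP_LOW_SPEED_LIMIT");
        const char *seconds = getenv("GIT_HTTP_LOW_SPEED_TIME");

        /* Environment variables override config file settings. */
        if (limit)
            curl_low_speed_limit = strtol(limit, NULL, 10);
        if (seconds)
            curl_low_speed_time = strtol(seconds, NULL, 10);

        if (curl_low_speed_limit > 0 && curl_low_speed_time > 0) {
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT,
                             curl_low_speed_limit);
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME,
                             curl_low_speed_time);
        }
    }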
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch cleans out all sparse warnings from http-fetch.c.
I'm a bit uncomfortable with adding extra #ifdefs to avoid either
'mixing declaration with code' or 'unused variable' warnings, but I
figured that since those functions are already littered with #ifdefs I
might just get away with it. Comments?
[jc: I adjusted Peter's patch to address uncomfortableness issues.]
Signed-off-by: Peter Hagervall <hager@cs.umu.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Hi,
On Fri, 14 Oct 2005, Junio C Hamano wrote:
> Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:
>
> > This patch looks bigger than it really is: The code to get the
> > default handle was refactored into a function, and is called
> > instead of curl_easy_duphandle() if that does not exist.
>
> I'd like to take Nick's config file patch first, which
> unfortunately interferes with your patch. I'd hate to ask you
> this, but could you rebase it on top of Nick's patch, [...]
No need to hate it. Here comes the rebased patch, and this time, I
actually tested it a bit.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Use "http." config file settings if they exist. Environment variables
still work, and they will override config file settings.
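As an illustration of the precedence only (the key and variable names
below are examples, not necessarily the ones added by this patch): a
config callback sets the value first, and the environment overrides it
afterwards.

    #include <stdlib.h>
    #include <string.h>

    static int max_requests = 5;

    /* Config callback in the style of git_config(); runs first. */
    static int http_options(const char *var, const char *value)
    {
        if (!strcmp(var, "http.maxrequests"))
            max_requests = atoi(value);
        return 0;
    }

    /* Runs after the config file has been read. */
    static void http_init(void)
    {
        const char *env = getenv("GIT_HTTP_MAX_REQUESTS");

        if (env)
            max_requests = atoi(env);  /* environment wins */
    }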
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-http-fetch received objects/info/packs into a fixed-size buffer
and started to fail when this file became larger than the buffer.
Change it to grow the buffer dynamically, and do the same thing for
objects/info/alternates. Also add missing free() calls for these
buffers.
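The idea in a nutshell, as a sketch (struct and function names are
illustrative): a curl write callback that grows its buffer with
realloc instead of writing into a fixed-size array.

    #include <stdlib.h>
    #include <string.h>

    struct growing_buffer {
        char *buf;
        size_t len;
        size_t alloc;
    };

    /* curl-style write callback: returns the number of bytes consumed. */
    static size_t fwrite_buffer(char *ptr, size_t eltsize, size_t nmemb,
                                void *data)
    {
        struct growing_buffer *b = data;
        size_t size = eltsize * nmemb;

        if (b->len + size > b->alloc) {
            size_t newalloc = b->alloc ? b->alloc * 2 : 4096;
            char *newbuf;

            while (newalloc < b->len + size)
                newalloc *= 2;
            newbuf = realloc(b->buf, newalloc);
            if (!newbuf)
                return 0;  /* signals an error to curl */
            b->buf = newbuf;
            b->alloc = newalloc;
        }
        memcpy(b->buf + b->len, ptr, size);
        b->len += size;
        return size;
    }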
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
Signed-off-by: Junio C Hamano <junkio@cox.net>
curl_escape ought to do this, but we should not let it quote
slashes (nobody said refs/tags cannot have subdirectories), so
we roll our own safer version. With this, the last part of git-clone
from Martin's moodle repository, which used to fail, now works; it
reads:
$ git-http-fetch -v -a -w 'tags/MOODLE_15_MERGED **INVALID**' \
'tags/MOODLE_15_MERGED **INVALID**' \
http://locke.catalyst.net.nz/git/moodle.git/
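A rough sketch of such a quoting helper, assuming the usual
percent-encoding but leaving '/' (and a few other safe characters)
alone; this is an illustration, not the exact function added here:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *quote_ref_for_url(const char *ref)
    {
        /* worst case every byte becomes "%XX" */
        char *out = malloc(strlen(ref) * 3 + 1);
        char *p = out;

        if (!out)
            return NULL;
        for (; *ref; ref++) {
            unsigned char ch = *ref;
            if (isalnum(ch) || strchr("-_.~/", ch))
                *p++ = ch;
            else
                p += sprintf(p, "%%%02X", ch);
        }
        *p = '\0';
        return out;
    }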
Signed-off-by: Junio C Hamano <junkio@cox.net>
The http commit walker cannot use the same temporary file
creation code because it needs to use predictable temporary
filename for partial fetch continuation purposes, but the code
to move the temporary file to the final location should be
usable from the ordinary object creation codepath.
Export move_temp_to_file from sha1_file.c and use it, while
losing the custom relink_or_rename function from http-fetch.c.
Also the temporary object file creation part needs to make sure
the leading path exists, in preparation for the really lazy
fan-out directory creation.
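Roughly, the shared move step looks like the sketch below (assuming a
hard-link-then-unlink sequence with EEXIST treated as benign; this is
an illustration, not the exact sha1_file.c code):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    static int move_temp_to_final(const char *tmpfile, const char *filename)
    {
        int ret = 0;

        /* link() never clobbers an existing object file */
        if (link(tmpfile, filename))
            ret = errno;
        unlink(tmpfile);

        if (ret && ret != EEXIST) {
            /* e.g. the leading fan-out directory may not exist yet */
            fprintf(stderr, "unable to write file %s (errno %d)\n",
                    filename, ret);
            return -1;
        }
        /* EEXIST means the object was already there, which is fine */
        return 0;
    }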
Signed-off-by: Junio C Hamano <junkio@cox.net>
The parallel request changes didn't properly implement the previous patch to
allow caching of retrieved objects by proxy servers. Restore the previous
functionality such that by default requests include the "Pragma: no-cache"
header, and this header is removed on requests for pack indexes, packs, and
objects.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Be sure not to fetch objects that already exist in the local repository.
The main process loop no longer performs this check; http-fetch now
checks before starting a new request queue entry and when
fetch_object() is called,
and local-fetch now checks when fetch_object() is called.
As discussed in this thread: http://marc.theaimsgroup.com/?t=112854890500001
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Use an environment variable rather than a command-line argument to set the
parallel HTTP request limit. This allows the setting to work whether
git-http-fetch is run directly or via git-fetch.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Only compile parallel HTTP support with CURL >= 7.9.8
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add support for parallel HTTP transfers. Prefetch populates a queue of
objects to transfer and starts feeding requests to an active request
queue for processing; fetch_object keeps the active queue moving
while the specified object is being transferred. The size of the active
queue can be restricted using -r and defaults to 5 concurrent transfers.
Requests for objects that are not prefetched are also processed via the
active queue.
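A very small sketch of the active-queue idea using the curl multi
interface (queue bookkeeping, slot reuse and error handling omitted;
names are illustrative):

    #include <curl/curl.h>

    #define DEFAULT_MAX_REQUESTS 5

    static CURLM *curlm;
    static int active_requests;

    static void start_request(CURL *handle)
    {
        if (active_requests < DEFAULT_MAX_REQUESTS) {
            curl_multi_add_handle(curlm, handle);
            active_requests++;
        }
        /* otherwise the object stays on the prefetch queue for later */
    }

    static void step_active_slots(void)
    {
        int running;

        /* keep all queued transfers moving */
        while (curl_multi_perform(curlm, &running)
               == CURLM_CALL_MULTI_PERFORM)
            ;
    }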
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Don't unlink the temp file when an object transfer fails, so the next
attempt will pick up where the failed transfer left off.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add the sanity checks discussed on the list with Nick Hengeveld in
<20050927000931.GA15615@reactrix.com>.
* unlink of previous and rename from temp to previous can fail for
reasons other than benign ones (missing previous and missing temp).
Report these failures when we encounter them, to make diagnosing
problems easier.
* when rewinding the partially written result, make sure to
  truncate the file (see the sketch below).
Also verify the pack after downloading by calling
verify_packfile().
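For the truncation point above, the fix amounts to something like this
sketch (illustrative only): seek back to the number of bytes worth
keeping and cut the file off there, so stale bytes cannot survive into
the resumed write.

    #include <sys/types.h>
    #include <unistd.h>

    static int rewind_partial_file(int fd, off_t keep)
    {
        if (lseek(fd, keep, SEEK_SET) != keep)
            return -1;
        if (ftruncate(fd, keep) < 0)
            return -1;
        return 0;
    }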
Signed-off-by: Junio C Hamano <junkio@cox.net>
HTTP partial transfer support for object, pack, and index transfers
[jc: this should not be placed in "master" -- it does not have any
fixes requested on the list.]
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
... so try to set it only in later versions.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Added support for additional CURL SSL settings via environment variables.
Client certificate/key files can be specified as well as alternate CA
information.
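A sketch of the wiring, assuming GIT_SSL_* style variable names (shown
for illustration) mapped onto the corresponding curl options:

    #include <curl/curl.h>
    #include <stdlib.h>

    static void set_ssl_options(CURL *curl)
    {
        const char *cert = getenv("GIT_SSL_CERT");
        const char *key = getenv("GIT_SSL_KEY");
        const char *cainfo = getenv("GIT_SSL_CAINFO");
        const char *capath = getenv("GIT_SSL_CAPATH");

        if (cert)
            curl_easy_setopt(curl, CURLOPT_SSLCERT, cert);
        if (key)
            curl_easy_setopt(curl, CURLOPT_SSLKEY, key);
        if (cainfo)
            curl_easy_setopt(curl, CURLOPT_CAINFO, cainfo);
        if (capath)
            curl_easy_setopt(curl, CURLOPT_CAPATH, capath);
    }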
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Return CURL error message when object transfer fails
[jc: added similar curl_errorstr errors to places where we
use curl_easy_perform() to run fetch that _must_ succeed.]
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With the --recover option, we verify that we have absolutely
everything reachable from the target, not assuming that things
reachable from refs will be complete.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Unlike write_sha1_file(), which creates the object file in a temporary
location and then moves it to the final location, fetch_object() could
have been interrupted in the middle, leaving a corrupt file.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This allows the remote repository to refer to additional repositories
in a file objects/info/http-alternates or
objects/info/alternates. Each line may be:
* a relative path, starting with ../, to get from the objects
  directory of the starting repository to the objects directory of
  the added repository;
* an absolute path of the objects directory of the added repository
  (on the same server);
* (only in http-alternates) a full URL of the objects directory of
  the added repository.
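A hypothetical objects/info/http-alternates illustrating the three
forms (the paths and URL are invented for this example):

    ../../another-project.git/objects
    /var/www/git/shared.git/objects
    http://example.com/mirror.git/objects/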
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This tries .../objects/info/http-alternates and then
.../objects/info/alternates, looking for a file which specifies where
else to download objects and packs from.
It currently only supports absolute paths, and doesn't support full URLs.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
By default the curl library adds "Pragma: no-cache" header to all
requests, which disables caching by proxy servers. However, most
files in a GIT repository are immutable, and caching them is safe and
could be useful.
This patch removes the "Pragma: no-cache" header from requests for all
files except the pack list (objects/info/packs) and references
(refs/*), which are really mutable and should not be cached.
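In libcurl terms the change boils down to keeping two header lists and
picking the right one per request; a sketch, not the literal
http-fetch.c code:

    #include <curl/curl.h>

    static struct curl_slist *pragma_header;    /* mutable: refs, info/packs */
    static struct curl_slist *no_pragma_header; /* immutable: objects, packs */

    static void init_headers(void)
    {
        pragma_header = curl_slist_append(NULL, "Pragma: no-cache");
        /* an empty value tells curl to drop its default Pragma header */
        no_pragma_header = curl_slist_append(NULL, "Pragma:");
    }

    static void setup_object_request(CURL *curl)
    {
        /* loose objects never change once written; let proxies cache them */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, no_pragma_header);
    }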
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from 3b2a4c46fd5093ec79fb60e1b14b8d4a58c74612 commit)
We have deprecated the old environment variable names for quite a
while and now it's time to remove them. Gone are:
SHA1_FILE_DIRECTORIES AUTHOR_DATE AUTHOR_EMAIL AUTHOR_NAME
COMMIT_AUTHOR_EMAIL COMMIT_AUTHOR_NAME SHA1_FILE_DIRECTORY
Signed-off-by: Junio C Hamano <junkio@cox.net>
As promised, this is the "big tool rename" patch. The primary differences
since 0.99.6 are:
(1) git-*-script are no more. The commands installed do not
have any such suffix so users do not have to remember if
something is implemented as a shell script or not.
(2) Many command names with 'cache' in them are renamed with
'index' if that is what they mean.
There are backward-compatibility symbolic links so that you and
Porcelains can keep using the old names, but the backward
compatibility support is expected to be removed in the near
future.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This processes objects in two simultaneous passes. Each object will
first be given to prefetch(), as soon as it is possible to tell that
it will be needed, and then will be given to fetch(), when it is the
next object that needs to be parsed. Unless an implementation does
something with prefetch(), this should have no effect.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds support to http-pull for finding the list of pack files
available on the server, downloading the index files for those pack
files, and downloading pack files when they contain needed objects not
available individually. It retains the index files even if the pack
files were not needed, but downloads the list of pack files once per
run if an object is not found separately.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Old libcurl has curl_easy_setopt(), which http-pull requires; it just
doesn't have one of the options.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Some newer features of libcurl are used which are not strictly necessary
for http-pull. Use them only if libcurl is new enough to know about them.
[jc: I just reworked #ifdef sprinkled all over the code into a
single section that defines a set of macros.]
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make setting the environment variable GIT_SSL_NO_VERIFY turn off
curl's SSL peer verification.
Only use curl for http transfers, instead of curl and wget.
Make curl check ~/.netrc for credentials.
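The SSL part is essentially the following sketch (illustrative, not
the exact code):

    #include <curl/curl.h>
    #include <stdlib.h>

    static void set_ssl_verify(CURL *curl)
    {
        /* verify the peer certificate unless GIT_SSL_NO_VERIFY is set */
        long verify = getenv("GIT_SSL_NO_VERIFY") ? 0 : 1;

        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verify);
    }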
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Packed delta files created by git-pack-objects seem to be the
way to go, and existing "delta" object handling code has exposed
the object representation details to too many places. Remove it
while we refactor code to come up with a proper interface in
sha1_file.c.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This adds support for refs to http-pull, both the -w option and reading
the target from a served file.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This adds support to pull.c for requesting a reference and writing it to a
file. All of the git-*-pull programs get stubs for now.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This addresses a concern raised by Jason McMullan in the mailing
list discussion. After retrieving and storing a potentially
deltified object, pull logic tries to check and fulfil its delta
dependency. When the pull procedure was killed at this point,
however, there was no easy way to recover by re-running pull, since
the next run would have found that we already had that deltified
object and happily reported success, without really checking that its
delta dependency was satisfied.
This patch introduces a --recover option to the git-*-pull family,
which causes them to re-validate the dependencies of the deltified
objects
we are fetching. A new test t5100-delta-pull.sh covers such a
failure mode.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When a remote repository is deltified, we need to get the
objects that a deltified object we want to obtain is based upon.
The initial part of each retrieved SHA1 file is inflated and
inspected to see if it is deltified, and its base object is requested
from the remote side when it is. Since this partial inflation and
inspection has a small performance hit, it can optionally be skipped
by giving the -d flag to the git-*-pull commands.
This flag should be used only when the remote repository is
known to have no deltified objects.
Rsync transport does not have this problem since it fetches
everything the remote side has.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add <limits.h> to the include files handled by "cache.h", and remove
extraneous #include directives from various .c files. The rule is that
"cache.h" gets all the basic stuff, so that we'll have as few system
dependencies as possible.
- Raw hashes should be unsigned char.
- String functions want signed char.
- Hash and compress functions want unsigned char.
Signed-off-by: Brian Gerst <bgerst@didntduck.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This moves the private "say()" function to pull.c, renames it to
"pull_say()", and introduces a global variable "get_verbosely" that
makes the pull backends report what they fetch. The -v option is
added to git-rpull and git-http-pull to match git-local-pull.
The documentation is updated to describe these pull commands.
Signed-off-by: Junio C Hamano <junkio@cox.net>