In transport.c, the proxy setting (the one from the remote configuration)
was set through a curl_easy_setopt() call, while http.c already does the
same for the http.proxy setting. We now just use that infrastructure
instead, and make http_init() take the struct remote as an argument so
that it can pick up the http_proxy setting from there, as well as any
other property that may be added later.
At the same time, we make get_http_walker() take a struct remote argument
too and pass it to http_init(), so that the remote-defined proxy is used
for more than just get_refs_via_curl().
We purposefully leave out http-fetch and http-push, which don't use
remotes at the moment.
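A minimal sketch of the idea, assuming a remote->http_proxy field and the
usual libcurl proxy option (names are illustrative, not a literal hunk):

    /* Illustrative sketch; field and variable names are assumptions. */
    struct remote { const char *http_proxy; /* ... */ };

    static const char *curl_http_proxy;

    void http_init(struct remote *remote)
    {
        /* ... existing global curl setup ... */
        if (remote && remote->http_proxy)
            curl_http_proxy = remote->http_proxy;
        /*
         * Each curl handle then gets:
         *   curl_easy_setopt(handle, CURLOPT_PROXY, curl_http_proxy);
         */
    }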
Signed-off-by: Mike Hommey <mh@glandium.org>
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make the necessary changes to cope with their differences, and rename
the function http_fetch_ref.
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a downloaded ref doesn't contain a sha1, the error message displays
a random sha1 because of uninitialized memory. This happens when cloning
a repository that is already a clone of another one, in which case
refs/remotes/origin/HEAD is a symref.
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we failed to open a temporary file to be renamed to something else,
we reported the final filename, not the temporary one we failed to open.
Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This turns the extern functions to be provided by the backend into a
struct of pointers, renames the functions to be more
namespace-friendly, and updates http-fetch to this interface. It
removes the unused include from http-push.c. It makes git-http-fetch a
builtin (with the implementation a separate file, accessible
directly).
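A minimal illustrative sketch of a backend expressed as a struct of
function pointers (member names here are assumptions, not the exact
walker interface):

    struct walker {
        void (*prefetch)(struct walker *walker, unsigned char *sha1);
        int (*fetch)(struct walker *walker, unsigned char *sha1);
        void (*cleanup)(struct walker *walker);
        void *data;    /* backend-private state, e.g. the http machinery */
    };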
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This eliminates the last function that code using http.h had to provide
as a global symbol, so it should be possible to have multiple programs
using http.h in the same executable. It also adds an argument to that
callback, so that information can be passed into the callback without
being global.
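A minimal sketch of the shape this takes: the callback is registered
together with a data pointer that is later handed back to it (names are
assumptions, not necessarily the exact http.h declarations):

    struct fill_entry {
        int (*fill)(void *data); /* returns non-zero while it queued work */
        void *data;              /* caller state passed back to the callback */
    };

    static struct fill_entry fill_function;

    void add_fill_function(void *data, int (*fill)(void *))
    {
        fill_function.fill = fill;
        fill_function.data = data;
    }

    static void fill_active_slots(void)
    {
        while (fill_function.fill && fill_function.fill(fill_function.data))
            ; /* keep asking the callback for more requests */
    }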
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This removes all of the boilerplate and http-internal stuff from
fill_active_slots() and makes it easy to turn into a callback.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This uses "git-apply --whitespace=strip" to fix whitespace errors that have
crept in to our source files over time. There are a few files that need
to have trailing whitespaces (most notably, test vectors). The results
still passes the test, and build result in Documentation/ area is unchanged.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Please see http://bugs.debian.org/409887
http-fetch expected the URL given on the command line to have a trailing
slash anyway, and then added '/objects...' when requesting object files
from the http server.
Now it no longer requires the trailing slash in <url>, and strips any
trailing slashes that are given nonetheless.
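A minimal, self-contained sketch of stripping the trailing slashes from
the command-line URL (names and details are assumptions, not a literal
hunk from the patch):

    #include <stdlib.h>
    #include <string.h>

    static char *strip_trailing_slashes(const char *url)
    {
        size_t len = strlen(url);
        char *copy;

        while (len && url[len - 1] == '/')
            len--;                      /* drop trailing slashes */
        copy = malloc(len + 1);
        if (!copy)
            return NULL;
        memcpy(copy, url, len);
        copy[len] = '\0';
        return copy;
    }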
Signed-off-by: Gerrit Pape <pape@smarden.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There were instances of strncmp() that were formatted improperly
(e.g. whitespace around parameter before closing parenthesis)
that caused the earlier mechanical conversion step to miss
them. This step cleans them up.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This mechanically converts strncmp() to use prefixcmp(), but only when
the parameters match specific patterns, so that they can be verified
easily. Leftovers from this will be fixed in a separate step, including
idiotic conversions like
if (!strncmp("foo", arg, 3))
=>
if (!(-prefixcmp(arg, "foo")))
This was done by using this script in px.perl
#!/usr/bin/perl -i.bak -p
if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
}
if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
}
and running:
$ git grep -l strncmp -- '*.c' | xargs perl px.perl
Signed-off-by: Junio C Hamano <junkio@cox.net>
Back when the handful of commands that created commits and tags were the
only users of committer identity information, it made sense to explicitly
call setup_ident() to pre-fill the default value from the gecos
information. But these days it is much simpler to make the call automatic
when get_ident() is called, since many more programs want to use the
information when updating the reflog.
Signed-off-by: Junio C Hamano <junkio@cox.net>
My sp/mmap changes to pack-check.c modified the function such that
it expects packed_git.pack_size to be populated with the total
bytecount of the packfile by the caller.
But that isn't the case for packs obtained by git-http-fetch as
pack_size was not initialized before being accessed. This caused
verify_pack to think it had 2^32-21 bytes available when the
downloaded pack perhaps was only 305 bytes in length. The use_pack
function then later dies with "offset beyond end of packfile"
when computing the overall file checksum.
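A minimal, self-contained sketch of the kind of fix involved: populate
the size field from the downloaded file before verification (the real
struct packed_git lives in git's cache.h; this stand-in is illustrative
only):

    #include <sys/types.h>
    #include <sys/stat.h>

    struct pack_info {                 /* stand-in for git's packed_git */
        off_t pack_size;
    };

    static int set_pack_size(struct pack_info *p, const char *path)
    {
        struct stat st;

        if (stat(path, &st))
            return -1;                 /* caller reports the error */
        p->pack_size = st.st_size;     /* must be set before verify_pack() */
        return 0;
    }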
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked write() calls. Often we expect
write() to write exactly the size we requested or fail; this fails to
handle interrupts or short writes. Switch to using the new
write_in_full(). Otherwise we at a minimum need to check for EINTR and
EAGAIN; where that is appropriate, use xwrite().
Note: the changes to config handling are much larger and are handled
in the next patch in the sequence.
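A minimal, self-contained sketch of the write_in_full() semantics
described above (the real helper lives in git's wrapper code; this is
not the exact implementation):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t write_in_full_sketch(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t written = write(fd, p, count);
            if (written < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;            /* retry interrupted writes */
                return -1;
            }
            if (!written) {
                errno = ENOSPC;          /* no progress; treat as error */
                return -1;
            }
            p += written;
            total += written;
            count -= written;
        }
        return total;
    }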
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked read() calls. Often we expect read()
to read exactly the size we requested or fail; this fails to handle
interrupts or short reads. Add a read_in_full() providing those
semantics. Otherwise we at a minimum need to check for EINTR and
EAGAIN; where that is appropriate, use xread().
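A matching self-contained sketch of the read_in_full() semantics (again
not the exact implementation; a zero return from read() means end of
file, so we stop there and return what was read):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t read_in_full_sketch(int fd, void *buf, size_t count)
    {
        char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t loaded = read(fd, p, count);
            if (loaded < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;            /* retry interrupted reads */
                return -1;
            }
            if (!loaded)
                break;                   /* EOF */
            p += loaded;
            total += loaded;
            count -= loaded;
        }
        return total;
    }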
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Sean <seanlkml@sympatico.ca> writes:
> On Sat, 07 Oct 2006 21:52:02 -0700
> Junio C Hamano <junkio@cox.net> wrote:
>
>> Using DAV, if it works with the server, has the advantage of not
>> having to keep objects/info/packs up-to-date from repository
>> owner's point of view. But the repository owner ends up keeping
>> up-to-date as a side effect of keeping info/refs up-to-date
>> anyway (as I do not see a code to read that information over
>> DAV), so there is no point doing this over DAV in practice.
>>
>> Perhaps we should remove call to remote_ls() from
>> fetch_indices() unconditionally, not just protected with
>> NO_EXPAT and be done with it?
>
> That makes a lot of sense. A server really has to always provide
> a objects/info/packs anyway, just to be fetchable today by clients
> that are compiled with NO_EXPAT.
And even for an isolated group where everybody knows that
everybody else runs DAV-enabled clients, they need info/refs
prepared for ls-remote and git-fetch script, which means you
will run update-server-info to keep objects/info/packs up to
date.
Nick, do you see holes in my logic?
-- >8 --
http-fetch.c: drop remote_ls()
While doing remote_ls() over DAV potentially allows the server side not
to keep objects/info/packs up-to-date, misconfigured or buggy servers
can silently ignore or fail to respond to DAV requests and make the
client hang.
The server side (unfortunately) needs to run git-update-server-info
even if remote_ls() removes the need to keep the objects/info/packs file
up-to-date, because the caller of git-http-fetch (git-fetch) and other
clients that interact with the repository (e.g. git-ls-remote) need to
read from info/refs file (there is no code to make that unnecessary by
using DAV yet).
Perhaps the right solution in the longer-term is to make info/refs
also unnecessary by using DAV, and we would want to resurrect the
code this patch removes when we do so, but let's drop remote_ls()
implementation for now. It is causing problems without really
helping anything yet.
git will keep it for us until we need it next time.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Based on Sasha Khapyorsky's patch but adjusted to the refactored
"missing target" detection code.
It might have been better if the program were called
git-url-fetch but it is too late now ;-).
Signed-off-by: Junio C Hamano <junkio@cox.net>
At a handful of places we check two error codes from the curl library
to see if the file we asked for was missing from the remote (e.g.
we asked for a loose object when it is in a pack) to decide what
to do next. This consolidates the check into a single function.
NOTE: the original did not check for HTTP_RETURNED_ERROR when the
error code is 404, but this version does, to make sure the 404 is
from HTTP and not some other protocol.
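A minimal sketch of the consolidated check, assuming the request slot
records the last curl result and HTTP code (the struct here is a
stand-in; names are illustrative):

    #include <curl/curl.h>

    struct request_result {            /* stand-in for the request slot */
        CURLcode curl_result;
        long http_code;
    };

    static int missing_target(const struct request_result *r)
    {
        return (r->curl_result == CURLE_HTTP_RETURNED_ERROR &&
                r->http_code == 404) ||
               r->curl_result == CURLE_FILE_COULDNT_READ_FILE;
    }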
Signed-off-by: Junio C Hamano <junkio@cox.net>
Fetching over http from a repository that uses alternates to borrow
from neighbouring repositories was quite broken, apparently for
some time now.
We parse the input and count bytes to allocate the new buffer, and
when we copy into that buffer we know exactly how many bytes we
want to copy from where. Using strlcpy for it was simply
stupid, and the code forgot to take into account that strlcpy
terminates the string with NUL.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Like xmalloc and xrealloc, xstrdup dies with a useful message if
the native strdup() implementation returns NULL rather than a
valid pointer.
I just tried to use xstrdup in new code and found it to be missing.
However I expected it to be present as xmalloc and xrealloc are
already commonly used throughout the code.
[jc: removed the part that deals with last_XXX, which I am
finding more and more dubious these days.]
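A minimal illustrative sketch of the behaviour described (not
necessarily the exact implementation):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *xstrdup_sketch(const char *str)
    {
        char *ret = strdup(str);
        if (!ret) {
            fprintf(stderr, "fatal: Out of memory, strdup failed\n");
            exit(128);
        }
        return ret;
    }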
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This abstracts away the size of the hash values when copying them
from memory location to memory location, much as the introduction
of hashcmp abstracted away hash value comparison.
A few call sites were using char* rather than unsigned char* so
I added the cast rather than opening hashcpy up to take void*. This is a
reasonable tradeoff as most call sites already use unsigned char*
and the existing hashcmp is also declared to take unsigned char*.
[jc: Split the patch into the "master" part, to be followed by a
patch for merge-recursive.c which is not in "master" yet.
Fixed the cast in the latter hunk to combine-diff.c which was
wrong in the original.
Also converted ones left-over in combine-diff.c, diff-lib.c and
upload-pack.c ]
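A minimal sketch of the helper, assuming the 20-byte SHA-1 hash size
of the time:

    #include <string.h>

    static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
    {
        memcpy(sha_dst, sha_src, 20);   /* 20 == SHA-1 hash size */
    }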
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduces global inline:
hashcmp(const unsigned char *sha1, const unsigned char *sha2)
Uses memcmp for comparison and returns the result based on the length of
the hash name (a future runtime decision).
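A minimal sketch matching that description, again assuming the 20-byte
SHA-1 hash size:

    #include <string.h>

    static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
    {
        return memcmp(sha1, sha2, 20);
    }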
Acked-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[jc: I needed to hand merge the changes to the updated codebase,
so the result needs to be checked.]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As Fredrik points out the current interface of has_extension() is
potentially confusing. Its parameters include both a nul-terminated
string and a length-limited string.
This patch drops the length argument, requiring two nul-terminated
strings; all callsites are updated. I checked that all of them indeed
provide nul-terminated strings. Filenames need to be nul-terminated
anyway if they are to be passed to open() etc. The performance penalty
of the additional strlen() is negligible compared to the system calls
which inevitably surround has_extension() calls.
Additionally, change has_extension() to use size_t inside instead of
int, as that is the exact type strlen() returns and memcmp() expects.
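A minimal sketch of what the two-argument helper might look like after
this change (note the length check guarding against underrun):

    #include <string.h>

    static inline int has_extension(const char *filename, const char *ext)
    {
        size_t len = strlen(filename);
        size_t extlen = strlen(ext);

        return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
    }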
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The little helper has_extension() documents through its name what we are
trying to do and makes sure we don't forget the underrun check.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The function pull() in fetch.c calls write_ref_sha1(), which may
need the committer identity to update the reflog, so callers need to
call setup_ident() before calling the git_config() function.
Acked-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
pull() now takes an array of arguments instead of just one of each kind.
Currently, no users use the new capability, but that'll change.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently it's a bit weird that pull() takes a single argument
describing the commit but takes the write_ref from a global variable.
This makes it take that as a parameter as well, which might be nicer
for the libification in the future, but especially it will make for
nicer code when we implement pull()ing multiple commits at once.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a really ancient remnant of the short era of delta objects stored
directly in the object database.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This doesn't make the code uglier or harder to read, yet it makes the
code more portable. This also simplifies checking for other potential
incompatibilities. "gcc -std=c89 -pedantic" can flag many incompatible
constructs as warnings, but C99 comments will cause it to emit an error.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This cleans up the use of safe_strncpy() even more. Since it has the
same semantics as strlcpy(), use that name instead. Also move the
definition from inside path.c to its own file, compat/strlcpy.c, and use
it conditionally at compile time, since some platforms already have
strlcpy(). It's included in the same way as compat/setenv.c.
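A minimal sketch of the classic strlcpy() semantics being adopted here
(copy with truncation, always NUL-terminate, return the source length):

    #include <string.h>

    static size_t strlcpy_sketch(char *dest, const char *src, size_t size)
    {
        size_t ret = strlen(src);

        if (size) {
            size_t len = (ret >= size) ? size - 1 : ret;
            memcpy(dest, src, len);
            dest[len] = '\0';
        }
        return ret;
    }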
Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
ANSI C99 doesn't allow void-pointer arithmetic. This patch fixes it in
various ways. Usually the strategy that required the fewest changes was used.
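An illustrative example of the kind of change involved (not a literal
hunk from the patch):

    #include <stddef.h>

    static void *advance(void *buf, size_t offset)
    {
        /* "buf + offset" on a void pointer is a GNU extension; cast first */
        return (char *)buf + offset;
    }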
Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Though very nice and readable, the "case 'a'...'z':" construct is not ANSI C99
compliant. This patch unfolds the range in `quote.c' and substitutes the
switch-statement with an if-statement in `http-fetch.c' and `http-push.c'.
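An illustrative example of the substitution, replacing the GNU
case-range with a portable comparison (not a literal hunk):

    static int is_lower_alpha(int ch)
    {
        /* the GNU extension would be: switch (ch) { case 'a' ... 'z': ... } */
        return ch >= 'a' && ch <= 'z';
    }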
Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Initialize an object request's slot to a safe value. A non-NULL value
can cause a segfault if the request is aborted before it starts.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Free the curl string lists after running http_cleanup to
avoid an occasional segfault in the curl library. Seems
to only occur if the website returns a 405 error.
Signed-off-by: Sean Estabrooks <seanlkml@sympatico.ca>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If a ref is changed by http-fetch, local-fetch or ssh-fetch,
record the change and the remote URL/name in the log for the ref.
This requires loading the config file to check logAllRefUpdates.
Also fixed a bug in the ref lock generation; the log file name was
not being produced correctly due to a bad prefix length.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If git is not built with NO_EXPAT, this patch changes git-http-fetch to
attempt using DAV to get a list of remote packs and fall back to using
objects/info/packs if the DAV request fails.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When an otherwise properly prepared repository is served by a
dumb HTTP server that answers a request for a page that does not
exist with "No such page" output for human consumption and a 200
status, users will get an alarming "File X corrupt" error
message. Hint at the end that they might be dealing with such a
server, and suggest running fsck-objects to check if the result
is OK (the pack-fallback code does the right thing in this case,
so unless a loose object file was actually corrupt the result
should check out OK).
Signed-off-by: Junio C Hamano <junkio@cox.net>
When fetching alternates, http-fetch may reuse the slot to fetch non-http
alternates if http-alternates does not exist. When doing so, it now needs
to update the slot's finished status so run_active_slot waits for the
non-http alternates request to finish.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>