There were instances of strncmp() that were formatted improperly
(e.g. whitespace around a parameter before the closing parenthesis)
that caused the earlier mechanical conversion step to miss
them. This step cleans them up.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This mechanically converts strncmp() to use prefixcmp(), but only when
the parameters match specific patterns, so that they can be verified
easily. Leftovers from this will be fixed in a separate step, including
idiotic conversions like
if (!strncmp("foo", arg, 3))
=>
if (!(-prefixcmp(arg, "foo")))
This was done by using this script, px.perl:
#!/usr/bin/perl -i.bak -p
if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
    s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
}
if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
    s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
}
and running:
$ git grep -l strncmp -- '*.c' | xargs perl px.perl
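For reference, prefixcmp() as assumed by this conversion could look
roughly like this (a sketch only; the real declaration lives in git's
headers):

#include <string.h>

/* Sketch: returns 0 if "str" starts with "prefix", non-zero otherwise. */
static inline int prefixcmp(const char *str, const char *prefix)
{
    return strncmp(str, prefix, strlen(prefix));
}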
Signed-off-by: Junio C Hamano <junkio@cox.net>
Back when a handful of commands that created commits and tags were
the only users of the committer identity information, it made sense
to explicitly call setup_ident() to pre-fill the default value
from the gecos information. But these days it is much simpler for
programs to make the call automatic when get_ident() is called,
since many more programs want to use the information when updating
the reflog.
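A minimal sketch of the lazy pattern described above (the function
name here is illustrative, not the actual git internals; setup_ident()
is git's existing initializer, declared so the sketch stands alone):

/* git's existing initializer, declared here for the sketch only. */
extern void setup_ident(void);

static const char *get_ident_sketch(void)
{
    static int done;

    if (!done) {
        setup_ident();  /* pre-fill the defaults from gecos on first use */
        done = 1;
    }
    /* ... format and return "Name <email> timestamp" from the defaults ... */
    return NULL;        /* placeholder in this sketch */
}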
Signed-off-by: Junio C Hamano <junkio@cox.net>
My sp/mmap changes to pack-check.c modified the function such that
it expects packed_git.pack_size to be populated with the total
bytecount of the packfile by the caller.
But that isn't the case for packs obtained by git-http-fetch, as
pack_size was not initialized before being accessed. This caused
verify_pack to think it had 2^32-21 bytes available when the
downloaded pack was perhaps only 305 bytes in length. The use_pack
function then dies later with "offset beyond end of packfile"
when computing the overall file checksum.
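One way to address this kind of problem is to populate pack_size from
the downloaded file before verification; an illustrative sketch (the
stand-in struct below mirrors only the relevant field of git's own
struct packed_git):

#include <sys/stat.h>

/* Minimal stand-in for the relevant part of git's struct packed_git. */
struct packed_git_sketch {
    unsigned long pack_size;
    /* ... other fields elided ... */
};

/* Illustrative: make pack_size reflect the real on-disk size before
 * verify_pack() walks the file. */
static int init_pack_size(struct packed_git_sketch *p, const char *path)
{
    struct stat st;

    if (stat(path, &st))
        return -1;
    p->pack_size = st.st_size;
    return 0;
}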
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked write() calls. Often we are
expecting write() to write exactly the size we requested or fail;
this fails to handle interrupts or short writes. Switch to using
the new write_in_full(). Otherwise we at a minimum need to check
for EINTR and EAGAIN; where that is the appropriate behaviour, use
xwrite().
Note: the changes to config handling are much larger and are handled
in the next patch in the sequence.
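For illustration, the semantics write_in_full() is expected to provide
look roughly like this (a sketch, not the exact implementation in the
patch):

#include <errno.h>
#include <unistd.h>

/* Sketch: keep writing until "count" bytes are out, retrying on
 * EINTR/EAGAIN and on short writes; returns count, or -1 on error. */
static ssize_t write_in_full(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    size_t total = 0;

    while (total < count) {
        ssize_t written = write(fd, p + total, count - total);
        if (written < 0) {
            if (errno == EINTR || errno == EAGAIN)
                continue;
            return -1;
        }
        if (!written) {
            errno = ENOSPC;     /* nothing written and no error reported */
            return -1;
        }
        total += written;
    }
    return total;
}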
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked read() calls. Often we are
expecting read() to read exactly the size we requested or fail;
this fails to handle interrupts or short reads. Add a read_in_full()
providing those semantics. Otherwise we at a minimum need to check
for EINTR and EAGAIN; where that is the appropriate behaviour, use
xread().
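The read-side counterpart mirrors the write-side helper; roughly
(again a sketch of the intended semantics, not the exact code):

#include <errno.h>
#include <unistd.h>

/* Sketch: read until "count" bytes arrive or EOF, retrying on
 * EINTR/EAGAIN; returns bytes read (short only at EOF), -1 on error. */
static ssize_t read_in_full(int fd, void *buf, size_t count)
{
    char *p = buf;
    size_t total = 0;

    while (total < count) {
        ssize_t got = read(fd, p + total, count - total);
        if (got < 0) {
            if (errno == EINTR || errno == EAGAIN)
                continue;
            return -1;
        }
        if (!got)
            break;      /* EOF */
        total += got;
    }
    return total;
}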
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Sean <seanlkml@sympatico.ca> writes:
> On Sat, 07 Oct 2006 21:52:02 -0700
> Junio C Hamano <junkio@cox.net> wrote:
>
>> Using DAV, if it works with the server, has the advantage of not
>> having to keep objects/info/packs up-to-date from the repository
>> owner's point of view. But the repository owner ends up keeping it
>> up-to-date as a side effect of keeping info/refs up-to-date
>> anyway (as I do not see code to read that information over
>> DAV), so there is no point doing this over DAV in practice.
>>
>> Perhaps we should remove the call to remote_ls() from
>> fetch_indices() unconditionally, not just when protected with
>> NO_EXPAT, and be done with it?
>
> That makes a lot of sense. A server really always has to provide
> an objects/info/packs file anyway, just to be fetchable today by
> clients that are compiled with NO_EXPAT.
And even for an isolated group where everybody knows that
everybody else runs DAV-enabled clients, info/refs still needs to
be prepared for ls-remote and the git-fetch script, which means
update-server-info will be run anyway and objects/info/packs kept
up to date.
Nick, do you see holes in my logic?
-- >8 --
http-fetch.c: drop remote_ls()
While doing remote_ls() over DAV potentially allows the server
side not to keep objects/info/packs up-to-date, misconfigured or
buggy servers can silently ignore or fail to respond to DAV
requests and make the client hang.
The server side (unfortunately) needs to run git-update-server-info
even if remote_ls() removes the need to keep the objects/info/packs
file up-to-date, because the caller of git-http-fetch (git-fetch)
and other clients that interact with the repository (e.g.
git-ls-remote) need to read from the info/refs file (there is no
code to make that unnecessary by using DAV yet).
Perhaps the right solution in the longer-term is to make info/refs
also unnecessary by using DAV, and we would want to resurrect the
code this patch removes when we do so, but let's drop remote_ls()
implementation for now. It is causing problems without really
helping anything yet.
git will keep it for us until we need it next time.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Based on Sasha Khapyorsky's patch but adjusted to the refactored
"missing target" detection code.
It might have been better if the program were called
git-url-fetch but it is too late now ;-).
Signed-off-by: Junio C Hamano <junkio@cox.net>
At a handful of places we check two error codes from the curl
library to see if the file we asked for was missing from the remote
(e.g. we asked for a loose object when it is in a pack) to decide
what to do next. This consolidates the check into a single function.
NOTE: the original did not check for HTTP_RETURNED_ERROR when the
error code is 404, but this version does, to make sure the 404 came
from HTTP and not some other protocol.
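A sketch of what such a consolidated check can look like (the
constants are libcurl's; the exact helper and its name in the patch
may differ):

#include <curl/curl.h>

/* Sketch: did the request fail only because the file is not there? */
static int missing_target(long http_code, CURLcode curl_result)
{
    /* file:// URLs: curl reports a read failure, no HTTP code */
    if (curl_result == CURLE_FILE_COULDNT_READ_FILE)
        return 1;
    /* http(s):// URLs: trust the 404 only if it really came from HTTP */
    if (http_code == 404 && curl_result == CURLE_HTTP_RETURNED_ERROR)
        return 1;
    return 0;
}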
Signed-off-by: Junio C Hamano <junkio@cox.net>
Fetch over http from a repository that uses alternates to borrow
from neighbouring repositories was quite broken, apparently for
some time now.
We parse the input and count bytes to allocate the new buffer, and
when we copy into that buffer we know exactly how many bytes we
want to copy from where. Using strlcpy for it was simply
stupid, and the code forgot to take into account that strlcpy
terminates the string with NUL.
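In other words, when the byte count is already known, a plain memcpy
into a buffer sized for it is the right tool; for illustration (not
the actual http-fetch code):

#include <string.h>

/* Illustrative: copy exactly "len" known bytes; dst must have room
 * for len bytes plus the terminating NUL we add ourselves. */
static void copy_counted(char *dst, const char *src, size_t len)
{
    memcpy(dst, src, len);
    dst[len] = '\0';
    /* strlcpy(dst, src, len) would instead copy at most len - 1 bytes
     * and NUL-terminate inside the counted range, dropping a byte. */
}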
Signed-off-by: Junio C Hamano <junkio@cox.net>
Like xmalloc and xrealloc, xstrdup dies with a useful message if
the native strdup() implementation returns NULL rather than a
valid pointer.
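A sketch of the helper as described (die() is git's existing error
routine, declared here only so the sketch stands alone):

#include <stdlib.h>
#include <string.h>

extern void die(const char *fmt, ...);

/* Sketch: strdup() that dies instead of returning NULL. */
char *xstrdup(const char *str)
{
    char *ret = strdup(str);
    if (!ret)
        die("Out of memory, strdup failed");
    return ret;
}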
I just tried to use xstrdup in new code and found it to be missing.
However, I expected it to be present, as xmalloc and xrealloc are
already commonly used throughout the code.
[jc: removed the part that deals with last_XXX, which I am
finding more and more dubious these days.]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This abstracts away the size of the hash values when copying them
from one memory location to another, much as the introduction
of hashcmp abstracted away hash value comparison.
A few call sites were using char * rather than unsigned char *, so
I added casts there rather than opening hashcpy up to take void *.
This is a reasonable tradeoff, as most call sites already use
unsigned char * and the existing hashcmp is also declared to take
unsigned char *.
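The helper itself is tiny; a sketch (20 being the SHA-1 length,
matching the existing hashcmp):

#include <string.h>

/* Sketch: copy one SHA-1 hash value to another location. */
static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
{
    memcpy(sha_dst, sha_src, 20);
}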
[jc: Split the patch into the "master" part, to be followed by a
patch for merge-recursive.c which is not in "master" yet.
Fixed the cast in the latter hunk to combine-diff.c, which was
wrong in the original.
Also converted the ones left over in combine-diff.c, diff-lib.c and
upload-pack.c.]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduces a global inline:
hashcmp(const unsigned char *sha1, const unsigned char *sha2)
Uses memcmp for comparison and returns the result based on the length of
the hash name (a future runtime decision).
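A sketch matching that description (with 20, the SHA-1 length, as the
single place the length appears):

#include <string.h>

/* Sketch: compare two hash values; keeping the length in one spot is
 * what makes it a candidate for a runtime decision later. */
static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
{
    return memcmp(sha1, sha2, 20);
}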
Acked-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[jc: I needed to hand merge the changes to the updated codebase,
so the result needs to be checked.]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As Fredrik points out, the current interface of has_extension() is
potentially confusing: its parameters include both a nul-terminated
string and a length-limited string.
This patch drops the length argument, requiring two nul-terminated
strings; all call sites are updated. I checked that all of them indeed
provide nul-terminated strings. Filenames need to be nul-terminated
anyway if they are to be passed to open() etc. The performance penalty
of the additional strlen() is negligible compared to the system calls
which inevitably surround has_extension() calls.
Additionally, change has_extension() to use size_t inside instead of
int, as that is the exact type strlen() returns and memcmp() expects.
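After the change, the helper can be as simple as this sketch (two
nul-terminated strings, size_t throughout):

#include <string.h>

/* Sketch: does filename end with ext? */
static inline int has_extension(const char *filename, const char *ext)
{
    size_t len = strlen(filename);
    size_t extlen = strlen(ext);

    return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
}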
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The little helper has_extension() documents through its name what we are
trying to do and makes sure we don't forget the underrun check.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The function pull() in fetch.c calls write_ref_sha1(), which may
need the committer identity to update the ref-log, so the fetch
programs need to call setup_ident() before calling the git_config()
function.
Acked-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
pull() now takes an array of arguments instead of just one of each kind.
Currently no callers use the new capability, but that will change.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently it's a bit weird that pull() takes a single argument
describing the commit but takes the write_ref from a global variable.
This makes it take that as a parameter as well, which might be nicer
for the libification in the future, and especially it will make for
nicer code when we implement pull()ing multiple commits at once.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a really ancient remnant of the short era of delta objects stored
directly in the object database.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This doesn't make the code uglier or harder to read, yet it makes the
code more portable. This also simplifies checking for other potential
incompatibilities. "gcc -std=c89 -pedantic" can flag many incompatible
constructs as warnings, but C99 comments will cause it to emit an error.
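For illustration, the two comment forms in question:

/* This form is accepted by C89 and C99 alike. */
// This C99-only form is what "gcc -std=c89 -pedantic" chokes on.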
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This cleans up the use of safe_strncpy() even more. Since it has the
same semantics as strlcpy(), use this name instead. Also move the
definition from inside path.c to its own file compat/strlcpy.c, and use
it conditionally at compile time, since some platforms already have
strlcpy(). It's included in the same way as compat/setenv.c.
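For reference, a typical strlcpy() fallback looks roughly like this
(a sketch of the usual semantics, not necessarily the exact
compat/strlcpy.c, hence the hedged name):

#include <string.h>

/* Sketch: copy at most size-1 bytes, always NUL-terminate when size > 0,
 * and return the full length of src so truncation can be detected. */
size_t strlcpy_sketch(char *dest, const char *src, size_t size)
{
    size_t ret = strlen(src);

    if (size) {
        size_t len = (ret >= size) ? size - 1 : ret;
        memcpy(dest, src, len);
        dest[len] = '\0';
    }
    return ret;
}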
Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
ANSI C99 doesn't allow void-pointer arithmetic. This patch fixes this
in various ways. Usually the strategy that required the fewest changes
was used.
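A typical instance of the kind of change involved (illustrative only):

#include <string.h>

/* Illustrative: ISO C forbids arithmetic on void *, so cast to a
 * character pointer before computing the offset. */
static void copy_at_offset(void *buf, size_t offset, const void *data, size_t len)
{
    memcpy((char *)buf + offset, data, len);
}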
Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Though very nice and readable, the "case 'a'...'z':" construct is not ANSI C99
compliant. This patch unfolds the range in `quote.c' and substitutes the
switch-statement with an if-statement in `http-fetch.c' and `http-push.c'.
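For illustration, the kind of rewrite this implies (the range syntax
is a GNU extension):

/* GNU extension, not ISO C99:
 *
 *     case 'a' ... 'z':
 *
 * A portable equivalent tests the range explicitly: */
static int is_ascii_lower(int ch)
{
    return ch >= 'a' && ch <= 'z';
}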
Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Initialize an object request's slot to a safe value. A non-NULL value
can cause a segfault if the request is aborted before it starts.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Free the curl string lists after running http_cleanup to
avoid an occasional segfault in the curl library. Seems
to only occur if the website returns a 405 error.
Signed-off-by: Sean Estabrooks <seanlkml@sympatico.ca>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If a ref is changed by http-fetch, local-fetch or ssh-fetch,
record the change and the remote URL/name in the log for the ref.
This requires loading the config file to check logAllRefUpdates.
Also fixed a bug in the ref lock generation; the log file name was
not being produced correctly due to a bad prefix length.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If git is not built with NO_EXPAT, this patch changes git-http-fetch to
attempt using DAV to get a list of remote packs and fall back to using
objects/info/packs if the DAV request fails.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When a repository otherwise properly prepared is served by a
dumb HTTP server that responds to a request for a page that does
not exist with a "No such page" body and a 200 status meant for
human consumption, the users will get an alarming "File X corrupt"
error message. Hint at the end that they might be dealing with such
a server and suggest running fsck-objects to check if the result
is OK (the pack-fallback code does the right thing in this case,
so unless a loose object file was actually corrupt the result
should check out OK).
Signed-off-by: Junio C Hamano <junkio@cox.net>
When fetching alternates, http-fetch may reuse the slot to fetch non-http
alternates if http-alternates does not exist. When doing so, it now needs
to update the slot's finished status so run_active_slot waits for the
non-http alternates request to finish.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
In fetch_object, there's a call to release an object request if the
object mysteriously arrived, say in a pack. Unfortunately, the fetch
attempt for this object might already be in progress, and we'll leak the
descriptor. Instead, try to tidy away the request.
Signed-off-by: Mark Wooding <mdw@distorted.org.uk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
move_temp_to_file returns 0 or -1. This is not a good thing to pass to
strerror(3). Fortunately, someone already reported the error, so don't
worry too much.
Signed-off-by: Mark Wooding <mdw@distorted.org.uk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
In fill_active_slots() -- if we find an object which has already arrived,
say as part of a pack, /don't/ remove it from the list. It's already been
prefetched and someone will ask for it later. Just label it as done and
carry blithely on. (As it was, the code would dereference a freed object
to continue through the list anyway.)
Signed-off-by: Mark Wooding <mdw@distorted.org.uk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There's no need for these structures to be static, and it could potentially
cause problems down the road.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add a way to store the results of an HTTP request when a slot finishes
so the results can be processed after the slot has been reused.
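Conceptually this amounts to something like the following (the struct
and field names here are illustrative, not necessarily the ones in
the patch):

#include <curl/curl.h>

/* Sketch: a place to copy a request's outcome when its slot finishes,
 * so callers can still inspect it after the slot has been reused. */
struct slot_results_sketch {
    CURLcode curl_result;   /* curl's return code for the request */
    long http_code;         /* HTTP status code, if any */
};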
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Recognize missing files when using http-fetch with file:// URLs
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It failed to register the last pack in the objects/info/packs
file. Also it had an independent overrun error.
Signed-off-by: Junio C Hamano <junkio@cox.net>
These are whole-tree operations and there is not much point
making them operable from within a subdirectory, but it is easy
to do so, and using setup_git_directory() upfront helps the git://
proxy specification get picked up from the correct place.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Better response handling for pack list requests - a 404 means we do have
the list but it happens to be empty.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Rename object request functions and data to make it more clear which type
of request is being processed - this is a response to the introduction of
slot callbacks and the definition of different types of requests such as
alternates_request.
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Move shared HTTP request functionality out of http-fetch and http-push,
and replace the two fwrite_buffer/fwrite_buffer_dynamic functions with
one fwrite_buffer function that does dynamic buffering. Use slot
callbacks to process responses to fetch object transfer requests and
push transfer requests, and put all of http-push into an #ifdef check
for curl multi support.
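For illustration, a curl write callback that does dynamic buffering
can look roughly like this (a sketch; the real fwrite_buffer() differs
in its details, so the names below are hedged):

#include <stdlib.h>
#include <string.h>

/* Illustrative growing buffer for response data. */
struct growing_buffer {
    char *data;
    size_t len;
};

/* Sketch of a CURLOPT_WRITEFUNCTION callback: append everything curl
 * hands us to the buffer, growing it as needed. */
static size_t fwrite_buffer_sketch(char *ptr, size_t eltsize, size_t nmemb, void *userdata)
{
    struct growing_buffer *buf = userdata;
    size_t size = eltsize * nmemb;
    char *grown = realloc(buf->data, buf->len + size);

    if (!grown)
        return 0;       /* signals a write error to curl */
    memcpy(grown + buf->len, ptr, size);
    buf->data = grown;
    buf->len += size;
    return size;
}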
Signed-off-by: Nick Hengeveld <nickh@reactrix.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>