Let's keep thread stuff close together if possible. And in this case,
this even reduces the #ifdef noise, and allows for skipping the
autodetection altogether if delta search is not needed (like with a pure
clone).
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Test the major configuration settings which control access to
the repository:
http.getanyfile
http.uploadpack
http.receivepack
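A repository owner can, for example, allow smart fetch while
disabling dumb serving and push with (the values shown are just one
possible policy):

    git config http.getanyfile false
    git config http.uploadpack true
    git config http.receivepack false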
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some repository owners may wish to enable smart HTTP, but disallow
dumb content serving. Disallowing dumb serving might be because
the owners want to rely upon reachability to control which objects
clients may access from the repository, or they just want to
encourage clients to use the more bandwidth-efficient transport.
If http.getanyfile is set to false the backend CGI will return with
'403 Forbidden' when an object file is accessed by a dumb client.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The top level directory "/smart/" of the test Apache server is mapped
through our git-http-backend CGI, but uses the same underlying
repository space as the server's document root. This is the
simplest installation possible.
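For illustration, the mapping amounts to something along these
lines in the test's Apache configuration (the backend path is a
placeholder; the real test setup may differ in its details):

    ScriptAlias /smart/ /path/to/git-core/git-http-backend/

Since PATH_INFO is still translated against the server's document
root, /smart/repo.git and /repo.git refer to the same repository
on disk.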
Server logs are checked to verify the client has accessed only the
smart URLs during the test. During fetch testing the headers are
also logged from libcurl to ensure we are making a reasonably sane
HTTP request, and getting back reasonably sane response headers
from the CGI.
When validating the request headers used during smart fetch we munge
away the actual Content-Length and replace it with the placeholder
"xxx". This avoids unnecessary varability in the test caused by
an unrelated change in the requested capabilities in the first want
line of the request. However, we still want to look for and verify
that Content-Length was used, because smaller payloads should be
using Content-Length and not "Transfer-Encoding: chunked".
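One way to do that kind of munging (not necessarily the exact
command the test uses) is a small sed filter over the logged
request headers:

    sed -e 's/^Content-Length: .*/Content-Length: xxx/'

which lets the expected output carry the fixed placeholder while
still requiring that a Content-Length header was sent.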
When validating the server response headers we must discard both
Content-Length and Transfer-Encoding, as Apache2 can use either
format to return our response.
During development of this test I observed Apache returning both
forms, depending on when the processes got CPU time. If our CGI
returned the pack data quickly, Apache just buffered the whole
thing and returned a Content-Length. If our CGI took just a bit
too long to complete, Apache flushed its buffer and instead used
"Transfer-Encoding: chunked".
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To clarify what part of the HTTP transport is being tested we change
the URLs used by existing tests to include /dumb/ at the start,
indicating they use the non-Git aware code paths.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If LIB_HTTPD_PORT is not set already, lib-httpd will set it to the
default 8111.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The upload-pack requests are mostly plain text and they compress
rather well. Deflating them with Content-Encoding: gzip can easily
drop the size of the request by 50%, reducing the amount of data
to transfer as we negotiate the common commits.
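The wire format can be illustrated by hand with curl, assuming a
pregenerated request body (URL and file names are placeholders; the
helper itself compresses the buffer in-process):

    gzip -c upload-pack.req >upload-pack.req.gz
    curl --data-binary @upload-pack.req.gz \
         -H 'Content-Type: application/x-git-upload-pack-request' \
         -H 'Content-Encoding: gzip' \
         http://server.example.com/git/project.git/git-upload-pack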
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-remote-curl backend detects if the remote server supports
the git-upload-pack service, and if so, runs git-fetch-pack locally
in a pipe to generate the want/have commands.
The advertisements from the server that were obtained during the
discovery are passed into git-fetch-pack before the POST request
starts, permitting server capability discovery and enablement.
Common objects that are discovered are appended onto the request as
have lines and are sent again on the next request. This allows the
remote side to reinitialize its in-memory list of common objects
during the next request.
Because all requests are relatively short, below git-remote-curl's
1 MiB buffer limit, requests will use the standard Content-Length
header and be valid HTTP/1.0 POST requests. This makes the fetch
client more tolerant of proxy servers which don't support HTTP/1.1
or the chunked transfer encoding.
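A negotiation request therefore looks roughly like this on the
wire, where each body line is a pkt-line and the SHA-1s and
capability list are illustrative:

    POST /project.git/git-upload-pack HTTP/1.0
    Content-Type: application/x-git-upload-pack-request
    Content-Length: xxx

    want <sha1> <capabilities>
    flush-pkt
    have <sha1>
    have <sha1>
    done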
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-remote-curl backend detects if the remote server supports
the git-receive-pack service, and if so, runs git-send-pack in a
pipe to dump the command and pack data as a single POST request.
The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request
starts. This permits git-send-pack to operate largely unmodified.
For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server.
The 1 MiB limit is arbitrary, but is sufficient to fit most deltas
created by human authors against text sources with the occasional
small binary file (e.g. a few-KiB icon image). The configuration
option http.postBuffer can be used to increase (or shrink) this
buffer if the default is not sufficient.
For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST
request requires HTTP/1.1 and sets "Transfer-Encoding: chunked".
This permits the client to upload an unknown amount of data in one
HTTP transaction without needing to pregenerate the entire pack
file locally.
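The headers of such a request look roughly like this (the URL is a
placeholder), with the pkt-line commands and pack data streamed as
the chunked body:

    POST /project.git/git-receive-pack HTTP/1.1
    Content-Type: application/x-git-receive-pack-request
    Transfer-Encoding: chunked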
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of loading the cached info/refs, try to use the smart HTTP
version when the server supports it. Since the smart variant is
actually the pkt-line stream from the start of either upload-pack
or receive-pack we need to parse these through get_remote_heads,
which requires a background thread to feed its pipe.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend examples, only match git-receive-pack within
/git/.
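In other words, requiring authentication only for pushes can be
sketched with a match limited to the receive-pack URLs (the auth
directives are placeholders for a real authentication setup):

    <LocationMatch "^/git/.*/git-receive-pack$">
        AuthType Basic
        AuthName "Git Access"
        Require valid-user
    </LocationMatch>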
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend documentation, add an example of how to set up
gitweb and git-http-backend on the same URL by using a series of
mod_alias commands.
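A stripped-down sketch of the idea, covering only the smart URLs
(the documented example also routes the dumb-protocol object paths
to the backend, and all paths here are placeholders):

    ScriptAliasMatch "^/git/(.*/(info/refs|git-upload-pack|git-receive-pack))$" \
            /usr/libexec/git-core/git-http-backend/$1
    ScriptAlias /git/ /var/www/cgi-bin/gitweb.cgi/

Requests from Git clients match the first rule and reach the CGI;
everything else under /git/ is rendered by gitweb.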
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend documentation, use mod_alias exclusively, instead
of using a combination of mod_alias and mod_rewrite. This makes the
example slightly shorter and a bit clearer.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Clarify some of the git-http-backend documentation, particularly:
* In the Description, state that smart/dumb HTTP fetch and smart HTTP
push are supported, state that authenticated clients allow push, and
remove the note that this is only suited for read-only updates.
* At the start of Examples, state explicitly what URL is mapping to what
location on disk.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new environment variable, GIT_PROJECT_ROOT, to override the
method of using PATH_TRANSLATED to find the git repository on disk.
This makes it much easier to configure the web server, especially when
the web server's DocumentRoot does not contain the git repositories,
which is the usual case.
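For example (paths are placeholders):

    SetEnv GIT_PROJECT_ROOT /var/www/git
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

maps http://server.example.com/git/foo.git onto /var/www/git/foo.git,
no matter where DocumentRoot points.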
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Requests for $GIT_URL/git-receive-pack and $GIT_URL/git-upload-pack
are forwarded to the corresponding backend process by directly
executing it and leaving stdin and stdout connected to the invoking
web server. Prior to starting the backend process the HTTP response
headers are sent, thereby freeing the backend from needing to know
about the HTTP protocol.
Requests that are encoded with Content-Encoding: gzip are
automatically inflated before being streamed into the backend.
This is primarily useful for the git-upload-pack backend, which
receives highly repetitive text data from clients that easily
compresses to 50% of its original size.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When --stateless-rpc is passed as a command line parameter to
upload-pack or receive-pack the programs now assume they may
perform only a single read-write cycle with stdin and stdout.
This fits with the HTTP POST request processing model where a
program may read the request, write a response, and must exit.
When --advertise-refs is passed as a command line parameter only
the initial ref advertisement is output, and the program exits
immediately. This fits with the HTTP GET request model, where
no request content is received but a response must be produced.
HTTP headers and/or environment are not processed here, but
instead are assumed to be handled by the program invoking
either service backend.
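Roughly, an invoking CGI is expected to run the backends along
these lines (the repository path is a placeholder; receive-pack
follows the same pattern):

    # answer "GET .../info/refs?service=git-upload-pack"
    git upload-pack --stateless-rpc --advertise-refs /path/to/repo.git

    # answer a single "POST .../git-upload-pack" exchange
    git upload-pack --stateless-rpc /path/to/repo.git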
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-http-backend CGI can be configured into any Apache server
using ScriptAlias, such as with the following configuration:
LoadModule cgi_module /usr/libexec/apache2/mod_cgi.so
LoadModule alias_module /usr/libexec/apache2/mod_alias.so
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
Repositories are accessed via the translated PATH_INFO.
The CGI is backwards compatible with the dumb client, allowing all
older HTTP clients to continue to download repositories which are
managed by the CGI.
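Once a repository under that alias is exported (either with a
git-daemon-export-ok file or by setting GIT_HTTP_EXPORT_ALL), it can
be cloned with a plain URL, e.g.:

    git clone http://server.example.com/git/project.git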
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we use -c, -C, or --amend, we are trying one of two things: using the
source as a template or modifying a commit with corrections.
When these options are used, the authorship and timestamp recorded in the
newly created commit are always taken from the original commit. This is
inconvenient when we just want to borrow the commit log message or when
our change to the code is so significant that we should take over the
authorship (with the blame for bugs we introduce, of course).
The new --reset-author option is meant to solve this need by regenerating
the timestamp and setting the committer as the new author.
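For example (the commit name is a placeholder):

    # keep the amended content, but take over authorship now
    git commit --amend --reset-author

    # borrow only the log message from an existing commit
    git commit -C <commit> --reset-author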
Signed-off-by: Erick Mattos <erick.mattos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduced in 492cf3f (More precise description of 'git describe --abbrev', 2009-10-29)
Signed-off-by: Gisle Aas <gisle@aas.no>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the new GIT_PARSE_WITH_SET_MAKE_VAR macro to allow configuration
settings for ETC_GITCONFIG, DEFAULT_PAGER and DEFAULT_EDITOR.
Signed-off-by: Ben Walton <bwalton@artsci.utoronto.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add macro GIT_PARSE_WITH_SET_MAKE_VAR to configure.ac to allow --with
style options that set values for variables used during the make
process.
Arguments are the $name part of --with-$name, the name of
the variable to set in the Makefile (config.mak.autogen) and
the help text for the option.
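A hypothetical use in configure.ac, following that argument order
(the name and help text are illustrative):

    GIT_PARSE_WITH_SET_MAKE_VAR(pager, DEFAULT_PAGER,
        [Use VALUE as the fall-back pager])

which would let './configure --with-pager=/usr/bin/less' record
DEFAULT_PAGER=/usr/bin/less in config.mak.autogen.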
Signed-off-by: Ben Walton <bwalton@artsci.utoronto.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It's okay to use the curl helper without a local repository, so long
as you don't use "fetch". There aren't any git programs that would try
to use it, and it doesn't make sense to try it (since there's nowhere
to write the results), but we may as well be clear.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
cmd_ls_remote() was calling transport_get() with a NULL remote and a
non-NULL url in the case where it was run outside a git
repository. This involved a bunch of ill-tested special
cases. Instead, simply get the struct remote for the URL with
remote_get(), which works fine outside a git repository, and can also
take global options into account.
This fixes a tiny and obscure bug where "git ls-remote" without a repo
didn't support global url.*.insteadOf, even though "git clone" and
"git ls-remote" in any repo did.
Also, enforce that all callers provide a struct remote to transport_get().
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Starting with commit 51ea55190b,
git-compat-util.h includes compat/bswap.h.
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The openssl/CHANGES file says:
Let the TLSv1_method() etc. functions return a 'const' SSL_METHOD
pointer and make the SSL_METHOD parameter in SSL_CTX_new,
SSL_CTX_set_ssl_version and SSL_set_ssl_method 'const'.
In older versions, unqualified pointers were used, so we unfortunately
cannot unconditionally update the type of the variable we use.
Signed-off-by: Vietor Liu <vietor@vxwo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we introduced the "word diff" mode, we could have done one of three
things:
* change fn_out_consume() to "this is called every time a line worth of
diff becomes ready from the lower-level diff routine. This function
knows two sets of helpers (one for line-oriented diff, another for word
diff), and each set has various functions to be called at certain
places (e.g. hunk header, context, ...). The function's role is to
inspect the incoming line, and dispatch appropriate helpers to produce
either line- or word- oriented diff output."
* introduce fn_out_consume_word_diff() that is "this is called every time
a line worth of diff becomes ready from the lower-level diff routine,
and here is what we do to prepare word oriented diff using that line."
without touching fn_out_consume() at all.
* Do neither of the above, and keep fn_out_consume() to "this is called
every time a line worth of diff becomes ready from the lower-level diff
routine, and here is what we do to output line oriented diff using that
line." but sprinkle a handful of 'are we in word-diff mode? if so do
this totally different thing' at random places.
This patch is to at least abstract the details of "this totally different
thing" out from the main codepath, in order to improve readability.
We can later refactor it by introducing fn_out_consume_word_diff(), taking
the second route above, but that is a separate topic.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 0cc5691a8b.
There is not enough justification for doing this. We do not update
things in .git/branches and .git/remotes anymore, but still do read
information from there and will keep doing so.
Besides, this breaks quite a lot of tests in t55?? series.
* ja/fetch-doc:
Documentation/merge-options.txt: order options in alphabetical groups
Documentation/git-pull.txt: Add subtitles above included option files
Documentation/fetch-options.txt: order options alphabetically
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote helper interface now supports the push capability,
which can be used to ask the implementation to push one or more
specs to the remote repository. For remote-curl we implement this
by calling the existing WebDAV based git-http-push executable.
Internally the helper interface uses the push_refs transport hook
so that the complexity of the refspec parsing and matching can be
reused between remote implementations. When possible however the
helper protocol uses the source ref name rather than the source SHA-1,
thereby allowing the helper to access this name if it is useful.
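On the wire the helper sees a batch of push commands, roughly (ref
names are illustrative):

    push refs/heads/master:refs/heads/master
    push +refs/heads/topic:refs/heads/topic

terminated by a blank line, and answers with one "ok <dst>" or
"error <dst> <why>" line per ref.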
From Clemens Buchacher <drizzd@aon.at>:
update http tests according to remote-curl capabilities
o Pushing packed refs is now fixed.
o The transport helper fails if refs are already up-to-date. Add
a test for that.
o The transport helper will notice if refs are already
up-to-date. We therefore need to update server info in the
unpacked-refs test.
o The transport helper will purge deleted branches automatically.
o Use a variable ($ORIG_HEAD) instead of the full SHA-1 name.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some transports, like the native pack transport implemented by
fetch-pack, support useful features like depth or include tags.
These should be exposed if the underlying helper knows how to
use them.
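For instance, a helper declaring the relevant capabilities can be
handed these settings as option commands (values are illustrative):

    option depth 32
    option followtags true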
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some network protocols (e.g. native git://) are able to fetch more
than one ref at a time and reduce the overall transfer cost by
combining the requests into a single exchange. Instead of feeding
each fetch request one at a time to the helper, feed all of them
at once so the helper can decide whether or not it should batch them.
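The helper input for one batch is simply a run of fetch commands
(values are illustrative):

    fetch <sha1> refs/heads/master
    fetch <sha1> refs/heads/next
    fetch <sha1> refs/tags/v1.6.5

terminated by a blank line; the helper may fetch them one at a time
or combine them into a single exchange with the remote.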
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Helpers might want a higher level of verbosity than just +1 (the
porcelain default setting) and +2 (-v -v). Expand the field to
allow verbosity in the range -1..3.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We will need the walker, url and remote in other functions as the
code grows larger to support smart HTTP. Extract this out into a
set of globals we can easily reference once configured.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When multi_ack_detailed is enabled the ACK continue messages returned
by the remote upload-pack are broken out to describe the different
states within the peer. This permits the client to better understand
the server's in-memory state.
The fetch-pack/upload-pack protocol now looks like:
NAK
---------------------------------
Always sent in response to "done" if there was no common base
selected from the "have" lines (or no have lines were sent).
* no multi_ack or multi_ack_detailed:
Sent when the client has sent a pkt-line flush ("0000") and
the server has not yet found a common base object.
* either multi_ack or multi_ack_detailed:
Always sent in response to a pkt-line flush.
ACK %s
-----------------------------------
* no multi_ack or multi_ack_detailed:
Sent in response to "have" when the object exists on the remote
side and is therefore an object in common between the peers.
The argument is the SHA-1 of the common object.
* either multi_ack or multi_ack_detailed:
Sent in response to "done" if there are common objects.
The argument is the last SHA-1 determined to be common.
ACK %s continue
-----------------------------------
* multi_ack only:
Sent in response to "have".
The remote side wants the client to consider this object as
common, and immediately stop transmitting additional "have"
lines for objects that are reachable from it. The reason
the client should stop is not given, but it is one of the two
cases described below for multi_ack_detailed.
ACK %s common
-----------------------------------
* multi_ack_detailed only:
Sent in response to "have". Both sides have this object.
Like with "ACK %s continue" above the client should stop
sending have lines reachable for objects from the argument.
ACK %s ready
-----------------------------------
* multi_ack_detailed only:
Sent in response to "have".
The client should stop transmitting have lines for objects which
are reachable from the argument, and send "done" soon to get the
objects.
If the remote side has the specified object, it should
first send an "ACK %s common" message prior to sending
"ACK %s ready".
Clients may still submit additional "have" lines if there are
more side branches for the client to explore that might be added
to the common set and reduce the number of objects to transfer.
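Putting the above together, one possible exchange under
multi_ack_detailed could look like this (object names are
placeholders, every line is a pkt-line, and "flush" stands for the
pkt-line flush "0000"):

    C: want <id-W> multi_ack_detailed ...
    C: flush
    C: have <id-A>
    C: have <id-B>
    C: flush
    S: ACK <id-A> common
    S: ACK <id-B> common
    S: ACK <id-B> ready
    S: NAK
    C: done
    S: ACK <id-B>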
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>