This uses the new "git stash create" interface to stash away the dirty state
you have in your working tree before starting a rebase, and then replays
it once the rebase is done.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This subcommand creates a stash from the current state and writes out the
resulting commit object ID to the standard output, without updating the
stash ref or resetting the working tree. It is intended to be used by scripts
to temporarily rewind the working tree to a clean state.
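For illustration, a script might use it along these lines (this is only a
sketch; the actual rebase wrapper may do things differently):

    stash_id=$(git stash create)        # prints nothing if the tree is clean
    if test -n "$stash_id"
    then
            git reset --hard            # rewind the working tree to a clean state
    fi
    # ... do the scripted work, e.g. the rebase ...
    if test -n "$stash_id"
    then
            git stash apply "$stash_id" # replay the saved changes
    fi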
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/partial-remove:
Document ls-files --with-tree=<tree-ish>
git-commit: partial commit of paths only removed from the index
git-commit: Allow partial commit of file removal.
Because a partial commit is meant to be a way to ignore what is
staged in the index, "git rm --cached A && git commit A" should
just record what is in A on the filesystem. The previous patch
made this command sequence barf, saying that A has not been
added yet. This fixes it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/grep-c:
Split grep arguments in a way that does not require adding /dev/null.
Documentation/git-config.txt: AsciiDoc tweak to avoid leading dot
Add test to check recent fix to "git add -u"
Documentation/git-archive.txt: a couple of clarifications.
Fix the rename detection limit checking
diff --no-index: do not forget to run diff_setup_done()
In order to (almost) always show the name of the file without
relying on the "-H" option of GNU grep, we used to add /dev/null
to the argument list unless we were doing -l or -L. This caused
"/dev/null:0" to show up in the output when -c is given.
It is not enough to add -c to the set of options we do not pass
/dev/null for. When we have too many files, we invoke grep
multiple times and we need to avoid giving a widow filename to
the last invocation -- otherwise we will not see the name.
This keeps two filenames when the argv[] buffer is about to
overflow and we have not finished iterating over the index, so
that the last round will always have at least two paths to work
with (and not require /dev/null).
The obvious, and only, exception is when just one file is given
to the underlying grep; in that case we avoid passing /dev/null
and let the external "grep -c" report only the number of matches.
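To illustrate the symptom and the reason two paths are kept (file names
here are made up):

    $ grep -c hello greeting.txt /dev/null
    greeting.txt:2
    /dev/null:0
    $ grep -c hello greeting.txt
    2

The first run shows the "/dev/null:0" noise; the second shows how a lone
path loses the "greeting.txt:" prefix entirely, which is why the last
invocation must always get at least two paths.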
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bram Schoenmakers noticed that the git-config documentation was formatted
incorrectly. Depending on the version of the AsciiDoc and docbook
toolchain, it is sometimes taken as a numbered example by AsciiDoc, and
at other times passed intact to the roff output, confusing "man".
Since we refer to the repository metadata directory as $GIT_DIR
elsewhere, work around the problem by using that symbolic name here as well.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An earlier commit fixed the type-change case in "git add -u".
This adds a test to make sure we do not introduce a regression.
At the same time, it fixes a stupid typo in the error message.
Signed-off-by: Benoit Sigoure <tsuna@lrde.epita.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The description of the option used three dots, giving the impression
that several formats were available. No formats other than tar and
zip are currently supported.
Clarify that the archive goes to the standard output.
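For example, the archive is written to standard output and is typically
redirected or piped (file names are only examples):

    $ git archive --format=tar HEAD | gzip >project.tar.gz
    $ git archive --format=zip HEAD >project.zip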
Signed-off-by: Jari Aalto <jari.aalto@cante.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds more proper rename detection limits. Instead of just checking
the limit against the number of potential rename destinations, we verify
that the rename matrix (which is what really matters) doesn't grow
ridiculously large, and we also make sure that we don't overflow when
doing the matrix size calculation.
This also changes the default limits from unlimited, to a rename matrix
that is limited to 100 entries on a side. You can raise it with the config
entry, or by using the "-l<n>" command line flag, but at least the default
is now a sane number that avoids spending lots of time (and memory) in
situations that likely don't merit it.
The choice of default value is of course very debatable. Limiting the
rename matrix to 100x100 means that even if you have just one obvious
rename but also create (or delete) 10,000 files, the rename
matrix will be so big that we disable the heuristics. Sounds reasonable to
me, but let's see if people hit this (and, perhaps more importantly,
actually *care*) in real life.
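For instance, when a larger rename matrix is genuinely wanted (assuming
the config entry referred to above is diff.renamelimit):

    $ git diff -M -l400 HEAD~1 HEAD      # raise the limit for one invocation
    $ git config diff.renamelimit 400    # or raise it permanently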
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
git-format-patch --in-reply-to: accept <message@id> with angle brackets
git-add -u: do not barf on type changes
Remove duplicate note about removing commits with git-filter-branch
git-clone: improve error message if curl program is missing or not executable
hooks--update: Explicitly check for all zeros for a deleted ref.
This will allow RFC-literate users to say:
format-patch --in-reply-to='<message.id@site.name>'
without forcing them to strip the surrounding angle brackets
like this:
format-patch --in-reply-to='message.id@site.name'
We accept both forms, and the latter gets the necessary < and >
around it, as before.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A duplicate of an already existing section in the documentation of
git-filter-branch was added in commit
f95eef15f2.
This patch removes that redundant section.
Signed-off-by: Ulrik Sverdrup <ulrik.sverdrup@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the curl program is not available (or not executable) and git clone is
started to clone a repository through http, this is the output:
Initialized empty Git repository in /tmp/puppet/.git/
/usr/bin/git-clone: line 37: curl: command not found
Cannot get remote repository information.
Perhaps git-update-server-info needs to be run there?
This patch improves the error message by checking the return code when
running curl and exiting immediately if it is 126 or 127; the error
output now is:
Initialized empty Git repository in /tmp/puppet/.git/
/usr/bin/git-clone: line 37: curl: command not found
Adrian Bridgett noticed this and reported through
http://bugs.debian.org/440976
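The check itself is roughly as follows (variable and file names here are
illustrative, not the actual script's):

    curl -sf "$url/info/refs" >"$tmpdir/refs"
    curl_exit=$?
    case "$curl_exit" in
    126|127)
            # curl is not executable (126) or not found (127); the shell
            # already printed a diagnostic, so just propagate the failure.
            exit $curl_exit ;;
    esac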
Signed-off-by: Gerrit Pape <pape@smarden.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The default behavior for each state can be customized, and it can also
be toggled directly from the status buffer.
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes insertions and updates much more efficient.
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous check caused the hook to reject as unannotated any tag
whose SHA1 starts with a zero.
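The idea, roughly (only a sketch, not the exact hook code):

    zero=0000000000000000000000000000000000000000
    if test "$newrev" = "$zero"
    then
            newrev_type=delete      # the ref is being removed
    else
            newrev_type=$(git cat-file -t "$newrev")
    fi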
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When making a partial commit, git-commit uses git-ls-files with
the --error-unmatch option to expand and sanity check the user
supplied path patterns. When any path pattern does not match
any path known to the index, it errors out, in order to catch
the common mistake of saying "git commit Makefiel cache.h" and
ending up with a commit that touches only cache.h (notice the
misspelled "Makefile"). This detection, however, does not work
well when the path has already been removed from the index.
If you drop a path from the index and try to commit that
partially, i.e.
$ git rm COPYING
$ git commit -m 'Remove COPYING' COPYING
the command complains because git does not know anything about
COPYING anymore.
This introduces a new option --with-tree to git-ls-files and
uses it in git-commit when we build a temporary index to
write a tree object for the partial commit.
When --with-tree=<tree-ish> option is specified, names from the
given tree are added to the set of names the index knows about,
so we can treat COPYING file in the example as known.
Of course, there is no reason to use "git rm"; git-aware
people have long done:
$ rm COPYING
$ git commit -m 'Remove COPYING' COPYING
which works just fine. But this has caused constant confusion.
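A small illustration of the new option, using the example above:

    $ git rm COPYING
    $ git ls-files --error-unmatch COPYING                    # fails
    $ git ls-files --with-tree=HEAD --error-unmatch COPYING   # now succeeds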
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
stash: end index commit log with a newline
git-commit: Disallow amend if it is going to produce an empty non-merge commit
git-send-email.perl: Add angle brackets to In-Reply-To if necessary
Fix a test failure (t9500-*.sh) on cygwin
There was no newline at the end of the index commit message, putting
the shell prompt at its end after a 'git cat-file commit $id'. This is
similar to what was fixed in 843103d693.
Signed-off-by: Jean-Luc Herren <jlh@gmx.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Right now one can amend the last non-merge commit using a dirty index
and in the process maybe cause the last commit to have the same tree
as its parent. In such a case one would want to discard the last commit
instead of amending it.
This reverts commit 8588452ceb.
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Although a message-id by definition should have surrounding angle
brackets, there is no point forcing people to type them in.
Signed-off-by: David Kastrup <dak@gnu.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On filesystems where it is appropriate to set core.filemode
to false, test 29 ("commitdiff(0): mode change") fails when
git-commit does not notice a file (execute) permission change.
A fix requires noting the new file execute permission in the
index with a "git update-index --chmod=+x", prior to the commit.
Add a function (note_chmod) which implements this idea, and
insert a call in each test that modifies the x permission.
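A sketch of the idea (the actual helper in the test script may differ
in detail):

    note_chmod () {
            # With core.filemode=false the filesystem cannot represent
            # the x bit, so record the mode change in the index explicitly.
            if test -x "$1"
            then
                    git update-index --chmod=+x "$1"
            else
                    git update-index --chmod=-x "$1"
            fi
    }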
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* rs/archive:
archive - leakfix for format_subst()
Define NO_MEMMEM on Darwin as it lacks the function
archive: rename attribute specfile to export-subst
archive: specfile syntax change: "$Format:%PLCHLDR$" instead of just "%PLCHLDR" (take 2)
add memmem()
Remove unused function convert_sha1_file()
archive: specfile support (--pretty=format: in archive files)
Export format_commit_message()
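The export-subst attribute and the $Format:...$ placeholder syntax
mentioned above look like this in use (the file name is only an example):

    $ echo 'version.txt export-subst' >>.gitattributes
    $ echo 'commit: $Format:%H$' >version.txt
    $ git add .gitattributes version.txt && git commit -m 'add version stamp'
    $ git archive HEAD | tar -xOf - version.txt
    commit: <full object name of the archived commit>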
* sp/maint-no-thin:
Make --no-thin the default in git-push to save server resources
fix doc for --compression argument to pack-objects
git-tag -s must fail if gpg cannot sign the tag.
1) pushes happen less often than fetches, so the bandwidth saving is
much less visible in that case overall.
2) thin packs have to be complemented with missing delta bases to be
valid, so many received thin packs will take more disk space.
3) the bother of repacking should be distributed amongst "clients"
i.e. fetchers and pushers as much as possible, and not the server
being fetched or pushed, to keep disk and CPU usage low on the
server.
This is why a fetch should get thin packs but a push should not.
Both Nico and I have been assuming that --no-thin was the default
behavior of git-push ever since Nico introduced --fix-thin into the
index-pack process, which allowed fetch and receive-pack to avoid
exploding packfiles received during transfer. This patch finally
makes it so.
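With this change a plain push sends a self-contained pack; thin transfer
can still be asked for explicitly when the bandwidth saving matters
(branch name is only an example):

    $ git push origin master          # now defaults to a non-thin pack
    $ git push --thin origin master   # opt back in over a slow link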
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove obsolete details (core.legacyheaders is always true now).
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds a --threads=<n> parameter to 'git pack-objects' with
documentation.
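For example, via the companion configuration variable pack.threads
(assumed here; the option itself can also be passed to git pack-objects
directly):

    $ git config pack.threads 4       # use four delta-search threads
    $ git repack -a -d -f             # repacking now picks this up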
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Try to keep objects with the same name hash together.
Suggested by Martin Koegler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With this, each thread gets repeatedly assigned the next available chunk of
objects to process until the whole list is done. The idea is to have
reasonably small chunks so that all CPUs remain busy with a minimum
number of threads for as long as there is data to process.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of this patch's code and message was written by Shawn O. Pearce.
I ran some tests to find out what the problem was, and then changed
the code related to the SIGPIPE signal.
If the user has misconfigured `user.signingkey` in their .git/config,
or just doesn't have any secret keys on their keyring, and they ask
for a signed tag with `git tag -s`, we had better make sure the
resulting tag was actually signed by gpg.
Prior versions of builtin git-tag allowed this failure to slip
by without error as they were not checking the return value of
the finish_command() so they did not notice when gpg exited with
an error exit status. They also did not fail if gpg produced an
empty output or if read_in_full received an error from the read
system call while trying to read the pipe back from gpg.
Finally, we did not actually honor any return value from the do_sign
function as it returns ssize_t but was being stored into an unsigned
long. This caused the compiler to optimize out the die condition,
allowing git-tag to continue along and create the tag object.
However, when gpg is given a wrong username it exits before doing
any read; the writing process then receives SIGPIPE and the program
is terminated. Even if this signal is ignored, write_or_die gets
EPIPE from write_in_full and exits, returning 0 to the system
without a message. So here we call write_in_full directly instead,
so that we can fail, print a message, and return safely to the
caller.
With these issues fixed `git-tag -s` will now fail to create the
tag and will report a non-zero exit status to its caller, thereby
allowing automated helper scripts to detect (and recover from)
failure if gpg is not working properly.
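The net effect for callers, roughly (the key id is obviously made up):

    $ git config user.signingkey DEADBEEF      # no such secret key
    $ git tag -s v1.2.3 -m 'signed release' || echo 'tag was not created'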
Proposed-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Carlos Rica <jasampler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous hash bucket culling resulted in a somewhat unpredictable
number of hash bucket entries on the order of HASH_LIMIT.
Replace this with a Bresenham-like algorithm leaving us with exactly
HASH_LIMIT entries by uniform culling.
Signed-off-by: David Kastrup <dak@gnu.org>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In normal use cases, the performance wins are not overly impressive:
we get something like 5-10% due to the slightly better locality of
memory accesses using the packed structure.
However, since the data structure for index entries saves 33% of
memory on 32-bit platforms and 40% on 64-bit platforms, the behavior
when memory gets limited should be nicer.
This is a rather well-contained change. One obvious improvement would
be sorting the elements in one bucket according to their hash, then
using binary probing to find the elements with the right hash value.
As it stands, the output should be strictly the same as previously
unless one uses the option for limiting the amount of used memory, in
which case the created packs might be better.
Signed-off-by: David Kastrup <dak@gnu.org>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>