Our CI needs to be aware of the location of the test output directory so
that it knows where to find test results. Some of our CI jobs achieve
this by setting the `TEST_OUTPUT_DIRECTORY` environment variable, which
ensures that the output will be written to that directory. Other jobs,
especially on GitHub Workflows, don't set that environment variable and
instead expect test results to be located in the source directory in
"t/".
The latter logic does not work with Meson though, as the test results
are not written into the source directory by default, but instead into
the build directory. As such, any job that uses Meson without setting
the environment variable will be unable to locate and aggregate results.
Fix this by explicitly setting the test output directory when we set up
the Meson build directory. This way, we default to "t/" in the source
directory when the value hasn't been set explicitly.
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we dropped `contrib/buildsystems/generate` to generate Visual
Studio Solution files, it is time to also drop the `vcxproj` Makefile
target that depended on that script.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before we had CMake support, the only way to build Git in Visual Studio
was via this hacky `generate` script.
For a while I tried to fix it whenever things got broken, in particular to
allow building confidence in embargoed releases by running the CI builds
in Azure Pipelines in a private Azure DevOps project. I even carried the
patches in Git for Windows with the intention of upstreaming them,
eventually.
However, it is a lot of work with too little benefit. CMake is much
better supported by Visual Studio. So let's drop this hacky script (plus
support code).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is not useful because we do not have any persisted directory anymore,
not since dropping our Travis CI support.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
7b399322a2 (doc: apply new format to git-branch man page, 2025-03-19)
updated the formatting for this doc to, among other things, use backticks
for some elements. In the process an `è` was used by accident instead
of a backtick.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tilde (~) count doesn’t match the length of the heading. As a result,
you get a bunch of `<sub>~</sub>` instead of the intended `<h3>` in the
HTML output.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the `win+Meson` job was added to Git's CI, modeled after the
`win+vs` job, it overlooked that the latter built the Git artifacts in
release mode.
The reason for this is that there is code in `compat/mingw.c` that turns
on the modal assertion dialogs in debug mode, which are very useful when
debugging interactively (as they offer to attach Visual Studio's
debugger), but they are scarcely useful in CI builds (where that modal
dialog would sit around, waiting for a human being to see and deal with
it, which obviously won't ever happen).
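For context, the debug CRT offers a way to redirect assertion reports to
stderr instead of a modal dialog. The following is only a hedged sketch of
that mechanism, not the code in `compat/mingw.c`; the patch itself simply
builds the artifacts in release mode instead:
```c
#include <crtdbg.h> /* MSVC debug CRT report functions */

static void report_asserts_to_stderr(void)
{
	/*
	 * In a debug-CRT build, a failed assert() normally pops up a modal
	 * dialog. Switching the report mode to a file handle makes the
	 * failure print to stderr and abort, which is what an unattended
	 * CI run needs.
	 */
	_CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE);
	_CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDERR);
	_CrtSetReportMode(_CRT_ERROR, _CRTDBG_MODE_FILE);
	_CrtSetReportFile(_CRT_ERROR, _CRTDBG_FILE_STDERR);
}
```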
This problem was not realized immediately because of a separate bug: the
`win+Meson` job erroneously built using the `gcc` that is in the `PATH`
by default on hosted GitHub Actions runners. Since that bug was fixed by
switching to `--vsenv`, though, the t7001-mv test quite often timed out
after six hours in the CI builds on GitHub, wasting build minutes without
any benefit in return.
The timeout was a symptom of the aforementioned debug mode problem: the
test case 'nonsense mv triggers assertion failure and partially updated
index' in t7001-mv triggered an assertion.
I originally proposed this patch to address the timeouts in CI builds.
The Git project decided to address the timeout differently, though, by
fixing the bug that the t7001-mv test case demonstrated. That does not
address the debug mode problem, however: an `assert()` call could be
triggered in other ways in CI, and it should still not cause the CI
build to hang but should cause Git to error out instead. To avoid having
to accept this patch, it was then proposed to replace all `assert()`
calls in Git's code base with `BUG()` calls. That might be reasonable
for independent reasons, but it still does not address the debug mode
problem: `assert()` calls could easily be re-introduced by mistake, and
besides, Git has a couple of dependencies that may have their own
`assert()` calls, which are safely outside the control of the Git
project to remove. Therefore this patch is still needed.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Acked-by: Patrick Steinhardt <ps@pks.im>
[jc: rebased on 'maint' to enable fast-tracking the change down]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a path with wildcard characters, e.g. 'f*o', exists in the
working tree, "git add -- 'f*o'" stops after happily finding
that there is 'f*o' and adding it to the index, without
realizing there may be other paths, e.g. 'foooo', that may match
the given pathspec.
This is because dir.c:do_match_pathspec() disables further
matches with pathspec when it finds an exact match.
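As a standalone illustration of the expected behaviour (using POSIX
fnmatch(3) rather than Git's own pathspec machinery), a pattern like
'f*o' should keep matching other paths even though a file literally named
'f*o' exists:
```c
#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
	const char *pathspec = "f*o";
	const char *paths[] = { "f*o", "foooo", "bar" };

	for (int i = 0; i < 3; i++) {
		/*
		 * fnmatch() returns 0 on a match; 'f*o' matches both the
		 * literal 'f*o' and 'foooo', so stopping at the exact match
		 * would silently miss 'foooo'.
		 */
		if (!fnmatch(pathspec, paths[i], 0))
			printf("'%s' matches '%s'\n", paths[i], pathspec);
	}
	return 0;
}
```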
Reported-by: piotrsiupa <piotrsiupa@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: K Jayatheerth <jayatheerthkulkarni2005@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When commit 50ddb089ff (tree-walk.c: remove the_repo from
get_tree_entry(), 2019-06-27) added an extra parameter to
get_tree_entry(), it did not fix the ordering comment about the meaning
of the parameters. Rather than just changing "third"->"fourth" and
"fourth"->"fifth", give the parameters meaningful names (or actually,
just take the existing names from the get_tree_entry() definition in the
tree-walk.c file) and, while at it, tweak the rest of the description to
incorporate the other parameter names as well.
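For reference, the declaration in tree-walk.h reads roughly as follows,
with the parameter names taken from the definition in tree-walk.c:
```c
int get_tree_entry(struct repository *r,
		   const struct object_id *tree_oid,
		   const char *name,
		   struct object_id *oid,
		   unsigned short *mode);
```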
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The use of asserts is discouraged in our codebase because they lead to
different behaviour depending on how Git is built. If one is unsure
enough about whether a condition always holds to add an assert for it,
then that assert should probably trigger regardless of how Git is being
built.
Drop the call to assert(3p) in git-mv(1) and instead use `BUG()`.
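The shape of the conversion is the usual one; a hedged sketch (the exact
condition and message in builtin/mv.c may differ):
```c
/* before: compiled out entirely when NDEBUG is defined */
assert(pos >= 0);

/* after: always checked, reported as a bug in Git itself */
if (pos < 0)
	BUG("could not find source path in the index");
```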
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We have a known issue in git-mv(1) where moving both a child and any of
its parents causes an assert to trigger because the child cannot be
found anymore in the index. We have added a test for this in commit
0fcd473fdd (t7001: add failure test which triggers assertion,
2024-10-22) without addressing the issue, which is why the test itself
is marked as `test_expect_failure`.
The behaviour of that test relies on a call to assert(3p) though, which
may or may not be compiled into the resulting binary depending on
whether or not we pass `-DNDEBUG`. When these asserts are compiled into
Git, this may cause our CI to hang on Windows, because asserts may
cause a modal window to be shown.
While we could work around the issue by converting this into a call to
`BUG()`, let's rather address the root cause of the issue by bailing out
in case we see that both a child and any of its parents are being moved
in the same command.
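Conceptually, the bail-out amounts to checking whether any source path is
nested inside another source path. A minimal standalone sketch with a
hypothetical helper, not the actual builtin/mv.c code:
```c
#include <stdio.h>
#include <string.h>

/* Return 1 if 'child' lies strictly inside the directory 'parent'. */
static int is_nested_under(const char *child, const char *parent)
{
	size_t len = strlen(parent);
	return !strncmp(child, parent, len) && child[len] == '/';
}

int main(void)
{
	const char *sources[] = { "dir/file", "dir" };
	int nr = 2;

	/* Reject moving both a directory and something inside it. */
	for (int i = 0; i < nr; i++)
		for (int j = 0; j < nr; j++)
			if (i != j && is_nested_under(sources[i], sources[j])) {
				fprintf(stderr,
					"fatal: cannot move '%s' and its parent '%s' together\n",
					sources[i], sources[j]);
				return 1;
			}
	return 0;
}
```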
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reduce requirement for Perl in our documentation build and a few
scripts.
* ps/fewer-perl:
Documentation: stop depending on Perl to generate command list
Documentation: stop depending on Perl to massage user manual
request-pull: stop depending on Perl
filter-branch: stop depending on Perl
Overhaul of the reftable API.
* ps/reftable-api-revamp:
reftable/table: move printing logic into test helper
reftable/constants: make block types part of the public interface
reftable/table: introduce iterator for table blocks
reftable/table: add `reftable_table` to the public interface
reftable/block: expose a generic iterator over reftable records
reftable/block: make block iterators reseekable
reftable/block: store block pointer in the block iterator
reftable/block: create public interface for reading blocks
git-zlib: use `struct z_stream_s` instead of typedef
reftable/block: rename `block_reader` to `reftable_block`
reftable/block: rename `block` to `block_data`
reftable/table: move reading block into block reader
reftable/block: simplify how we track restart points
reftable/blocksource: consolidate code into a single file
reftable/reader: rename data structure to "table"
reftable: fix formatting of the license header
Since repo_config() can be called with repo set to NULL these days, a
command that is marked as RUN_SETUP in the builtin command table does
not have to check repo for NULL before making the call.
* ua/call-repo-config-with-possibly-null-repository:
builtin/difftool: remove unnecessary if statement
builtin/add: remove unnecessary if statement
The cleanup_records function marks some lines as changed before running
the actual diff algorithm. For most lines, this is a good performance
optimization, but it also marks lines that are surrounded by many
changed lines as changed. This can cause redundant changes and
longer-than-necessary diffs.
Whether this results in better-looking diffs is subjective. However, the
--minimal flag explicitly requests the shortest possible diff.
The change results in shorter diffs in about 1.3% of all diffs in Git's
history. Performance-wise, I have measured the impact on
"git log -p -3000 --minimal > /dev/null". With this change, I get
Time (mean ± σ): 2.363 s ± 0.023 s (25 runs)
and without this patch I measured
Time (mean ± σ): 2.362 s ± 0.035 s (25 runs).
As the difference is well within the margin of error, this does not seem
to have an impact on performance.
Signed-off-by: Niels Glodny <n.glodny@campus.lmu.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before accessing an array element at a given index, it should be
verified that the index is within the desired bounds, not afterwards,
otherwise it may not make sense to even access the array element in the
first place. This is the point of CodeQL's
`cpp/offset-use-before-range-check` rule.
This CodeQL rule unfortunately is also triggered by the
`fill_es_indent_data()` code, even though the condition `off < len - 1`
does not even need to guarantee that the offset is in bounds (`s` points
to a NUL-terminated string, for which `s[off] == '\r'` would fail before
running out of bounds).
Let's work around this rare false positive: helping us use an otherwise
mostly useful tool is a worthy thing to do.
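For illustration, this is the general shape the rule is after (not the
actual xdiff code); the workaround simply performs the range check before
the element access:
```c
/* flagged: the element at 'off' is read before the range check */
if (s[off] == '\r' && off < len - 1 && s[off + 1] == '\n')
	crlf = 1;

/* preferred ordering: range check first, element access second */
if (off < len - 1 && s[off] == '\r' && s[off + 1] == '\n')
	crlf = 1;
```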
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the preceding commits we have converted all users of
`repo_has_object_file()` and its `_with_flags()` variant to instead use
`has_object()`. Drop these functions.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As the comment of `repo_has_object_file()` and its `_with_flags()`
variant tells us, these functions are considered to be deprecated in
favor of `has_object()`. There are a couple of slight benefits in favor
of the replacement:
- The new function has a short-and-sweet name.
- More explicit defaults: by default, `has_object()` neither fetches
missing objects via promisor remotes nor reloads packfiles when an
object isn't found. This ensures that it becomes immediately obvious
when a simple object existence check may result in expensive actions.
Most importantly though, it is confusing that we have two sets of
functions that ultimately do the same thing, but with different
defaults.
Start sunsetting `repo_has_object_file()` and its `_with_flags()`
sibling by replacing all callsites with `has_object()`:
- `repo_has_object_file(...)` is equivalent to
`has_object(..., HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., 0)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., HAS_OBJECT_RECHECK_PACKED)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK)`
is equivalent to `has_object(..., HAS_OBJECT_FETCH_PROMISOR)`.
The replacements should be functionally equivalent.
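For example, a callsite relying on the old default behaviour converts
like this (a sketch following the mapping above):
```c
/* before */
if (!repo_has_object_file(repo, oid))
	return -1;

/* after: the same defaults, spelled out explicitly */
if (!has_object(repo, oid,
		HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
	return -1;
```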
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We're about to fully remove `repo_has_object_file()` in favor of
`has_object()`. The latter function does not yet have a way to fetch
missing objects via a promisor remote though, which means that it cannot
fully replace all use cases of `repo_has_object_file()`.
Introduce a new flag `HAS_OBJECT_FETCH_PROMISOR` that causes the
function to optionally fetch missing objects which are part of a
promisor pack. This flag will be used in the subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We carry declarations for a couple of functions in "object-store.h" that
are not defined in "object-store.c", but in a different subsystem. Move
these declarations to the respective headers whose matching code files
carry the corresponding definition.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `odb_pack_keep()` creates a file at the passed-in path. If
this fails, then the function re-tries by first creating any potentially
missing leading directories and then trying to create the file once
again. As such, this function doesn't host any kind of logic that is
specific to the object store, but is rather a generic helper function.
Rename the function to `safe_create_file_with_leading_directories()` and
move it into "path.c". While at it, refactor it so that it loses its
dependency on `the_repository`.
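The general pattern looks roughly like the following self-contained
sketch (POSIX calls only, with a hypothetical directory-creating helper;
the real function in path.c differs in its details):
```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

/* Create every missing leading directory of 'path' (hypothetical helper). */
static int create_leading_directories(char *path)
{
	for (char *p = path + 1; *p; p++) {
		if (*p != '/')
			continue;
		*p = '\0';
		if (mkdir(path, 0777) < 0 && errno != EEXIST) {
			*p = '/';
			return -1;
		}
		*p = '/';
	}
	return 0;
}

/* Try to create the file; on ENOENT, create the directories and retry. */
int create_file_with_leading_directories(char *path)
{
	int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0666);

	if (fd < 0 && errno == ENOENT) {
		if (create_leading_directories(path) < 0)
			return -1;
		fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0666);
	}
	return fd;
}
```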
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `loose_object_path()` is a trivial wrapper around
`odb_loose_path()`, with the only exception that it always uses the
primary object database of the given repository. This doesn't really add
a ton of value though, so let's drop the function and inline it at every
callsite.
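The conversion at each callsite is mechanical; a sketch, assuming
`r->objects->odb` denotes the primary object database:
```c
/* before */
const char *path = loose_object_path(r, &buf, oid);

/* after: inlined, making the choice of object database explicit */
const char *path = odb_loose_path(r->objects->odb, &buf, oid);
```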
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "object-store.h" header contains the definition of `struct
packed_git`. As this structure hosts all kinds of information about a
specific packfile, it is arguably a bit out of place in a generic header
like "object-store.h".
Move the structure as well as `pack_map_entry_cmp()` into "packfile.h".
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an option to allow users to specifically enable or disable
retrieving the Message-ID from the Outlook SMTP server. This can be used
for other hosts mimicking the behaviour of Outlook, or for users who set
a custom domain to be a CNAME for the Outlook SMTP server.
While at it, let's also add a missing `*` in the description of
`--no-smtp-auth`.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Aditya Garg <gargaditya08@live.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the series merged at bf0a430f70 (Merge branch 'en/strmap',
2020-11-21), strmap was built on top of hashmap and hashmap was extended
in a few ways to support strmap and be more generally useful. One of
the extensions was that hashmap_partial_clear() was introduced to allow
reuse of the hashmap without freeing the table. Peff believed that it
also made sense to introduce a hashmap_clear() which freed everything
while allowing reuse.
I added hashmap_clear(), but in doing so, overlooked the fact that for
a hashmap to be reusable, it needs a defined cmpfn and data (the
HASHMAP_INIT macro requires these fields as parameters, for example).
So, if we want the hashmap to be reusable, we shouldn't zero out those
fields. We probably also shouldn't zero out do_count_items. (We could
zero out grow_at and shrink_at, but whether we zero those or not is
irrelevant as they'll be automatically updated whenever a new entry is
inserted.)
Since clearing is associated with freeing map->table, and the only thing
required for consistency after freeing map->table is zeroing tablesize
and private_size, let's only zero those fields out.
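In other words, the freeing part of the clear should look roughly like
this sketch, with field names as used by hashmap.h:
```c
/*
 * Free the table but keep cmpfn, cmpfn_data and do_count_items intact,
 * so that the map stays usable for new insertions afterwards.
 */
free(map->table);
map->table = NULL;
map->tablesize = 0;
map->private_size = 0;
```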
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As detailed in the previous changes to t5309-pack-delta-cycles.sh, the
logic within 'git index-pack' to analyze an incoming thin packfile with
REF_DELTAs is suspect. The algorithm is overly cautious around delta
cycles, and that in fact leads to failures even when there is no cycle.
This change adjusts the algorithm to no longer fail in these cases: the
cycle cases no longer fail, and more importantly, neither do the valid
cases. The resulting packfile from the --fix-thin operation will not
contain cycles either, since REF_DELTAs are forbidden from the on-disk
format and OFS_DELTAs are impossible to write as a cycle.
The crux of the matter is how the algorithm works when the REF_DELTAs
point to base objects that exist in the local repository. When reading
the thin packfile, the object IDs for the delta objects are unknown so
we do not have the delta chain structure automatically. Instead, we need
to start somewhere by selecting a delta whose base is inside our current
object database.
Consider the case where the packfile has two REF_DELTA objects, A and B,
and the delta chain looks like "A depends on B" and "B depends on C" for
some third object C, where C is already in the current repository. The
algorithm _should_ start with all objects that depend on C, finding B,
and then moving on to all objects depending on B, finding A.
However, if the repository also already has object B, then the delta
chain can be analyzed in a different order. The deltas with base B can
be analyzed first, finding A, and then the deltas with base C are
analyzed, finding B. The algorithm currently continues to look for
objects that depend on B, finding A again. This fails due to A's
'real_type' member already being overwritten from OBJ_REF_DELTA to the
correct object type.
This scenario is possible in a typical 'git fetch' where the client does
not advertise B as a 'have' but requests A as a 'want' (and C is noticed
as a common object based on other 'have's). The reason this isn't
typically seen is that most Git servers use OFS_DELTAs to represent
deltas within a packfile. However, if a server uses only REF_DELTAs,
then this kind of issue can occur. There is nothing in the explicit
packfile format that states this use of inter-pack REF_DELTA is
incorrect, only that REF_DELTAs should not be used in the on-disk
representation to avoid cycles.
This die() was introduced in ab791dd138 (index-pack: fix race condition
with duplicate bases, 2014-08-29). Several refactors have adjusted the
error message and the surrounding logic, but this issue has existed for
a longer time as that was only a conversion from an assert().
The tests in t5309 originated in 3b910d0c5e (add tests for indexing
packs with delta cycles, 2013-08-23) and b2ef3d9ebb (test index-pack on
packs with recoverable delta cycles, 2013-08-23). These changes make
note that the current behavior of handling "resolvable" cycles is mostly
a documentation-only test, not that this behavior is the best way for
Git to handle the situation.
The fix here is somewhat complicated due to the amount of state being
adjusted by the loop within threaded_second_pass(). Instead of trying to
resume the start of the loop while adjusting the necessary context, I
chose to scan the REF_DELTAs depending on the current 'parent' and skip
any that have already been processed. This necessarily leaves us in a
state where 'child' and 'child_obj' could be left as NULL and that must
be handled later. There is also some careful handling around skipping
REF_DELTAs when there are also OFS_DELTAs depending on that parent.
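The heart of that skip can be sketched as follows; this is simplified and
uses hypothetical helpers (`ref_delta_child()`, `ref_delta_base_is()`,
`resolve_child()`), not the actual threaded_second_pass() code:
```c
/*
 * Walk all REF_DELTAs whose base is 'parent' and skip the ones that were
 * already resolved through another copy of the same base object, instead
 * of dying on them.
 */
for (i = 0; i < nr_ref_deltas; i++) {
	struct object_entry *child = ref_delta_child(i);

	if (!ref_delta_base_is(i, &parent_oid))
		continue;
	if (child->real_type != OBJ_REF_DELTA)
		continue; /* already resolved via a duplicate base; skip */
	resolve_child(parent, child);
}
```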
There may be value in extending 'test-tool pack-deltas' to allow writing
OFS_DELTAs in order to exercise this logic across the delta types.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This new test demonstrates some behavior where a valid packfile is being
rejected by the Git client due to the order in which it is resolving
REF_DELTAs.
The thin packfile has a REF_DELTA chain A->B->C where C is not included
in the packfile. However, the client repository contains both C and B
already. Thus, 'git index-pack' is able to resolve A before resolving B.
When resolving B, it then attempts to resolve any other REF_DELTAs that
are pointing to B as a base. This "revisits" A and complains as if there
is a cycle, but it did not actually detect a cycle.
A fix will arrive in the next change.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When trying to demonstrate certain behavior in tests, it can be helpful
to create packfiles that have specific delta structures. 'git
pack-objects' uses various algorithms to select deltas based on their
compression rates, but that does not always demonstrate all possible
packfile shapes. This becomes especially important when wanting to test
'git index-pack' and its ability to parse certain pack shapes.
We have prior art in t/lib-pack.sh, where certain delta structures are
produced by manually writing certain opaque pack contents. However,
producing these script updates is cumbersome and difficult to do as a
contributor.
Instead, create a new test-tool, 'test-tool pack-deltas', that reads a
list of instructions for which objects to include in a packfile and how
those objects should be written in delta form.
At the moment, this only supports REF_DELTAs as those are the kinds of
deltas needed to exercise a bug in 'git index-pack'.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wire up a couple of benchmarking options that we end up writing into our
"GIT-BUILD-OPTIONS" file. These options allow users to control how
exactly benchmarks are executed.
Note that neither `GIT_PERF_MAKE_COMMAND` nor `GIT_PERF_MAKE_OPTS` is
exposed as a build option. Those options are used by "t/perf/run", which
is not used by Meson.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wire up benchmarks in Meson. The setup is mostly the same as how we wire
up our tests. The only difference is that benchmarks get wired up via
the `benchmark()` option instead of via `test()`, which gives them a bit
of special treatment:
- Benchmarks never run in parallel.
- Benchmarks aren't run by default when tests are executed.
- Meson does not inject the `MALLOC_PERTURB` environment variable.
Using benchmarks is quite simple:
```
$ meson setup build
# Run all benchmarks.
$ meson test -C build --benchmark
# Run a specific benchmark.
$ meson test -C build --benchmark p0000-*
```
Other than that the usual command line arguments accepted when running
tests are also accepted when running benchmarks.
Note that the benchmarking target is somewhat limited because it will
only run benchmarks for the current build. Other use cases, like running
benchmarks against multiple different versions of Git, are not currently
supported. Users should continue to use "t/perf/run" for those use
cases. The script should be extended at some point to support
Meson, but this is outside of the scope of this series.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "perf-lib.sh" script is sourced by all of our benchmarking suites to
make available common infrastructure. The script assumes that build and
source directory are the same, which works for our Makefile. But the
assumption breaks with both CMake and Meson, where the build directory
can be located in an arbitrary place.
Adapt the script so that it works with out-of-tree builds. Most
importantly, this requires us to figure out the location of the build
directory:
- When running benchmarks via our Makefile the build directory is the
same as the source directory. We already know to derive the test
directory ("t/") via `$(pwd)/..`, which works because we chdir into
"t/perf" before executing benchmarks. We can thus derive the build
directory by appending another "/.." to that path.
- When running benchmarks via Meson the build directory is located at
an arbitrary location. The build system thus has to make the path
known by exporting the `GIT_BUILD_DIR` environment variable.
This change prepares us for wiring up benchmarks in Meson.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our benchmarks use a couple of Perl scripts to compute results. These
Perl scripts get executed directly, and as the shebang is hardcoded to
"/usr/bin/perl" this will fail on any system where the Perl interpreter
is located in a different path.
Our build infrastructure already lets users configure the location of
Perl, which ultimately gets written into the GIT-BUILD-OPTIONS file.
This file is being sourced by "test-lib.sh", and consequently we already
have the "PERL_PATH" variable available that contains its configured
location.
Use "PERL_PATH" to execute Perl scripts, which makes them work on more
esoteric systems like NixOS. Furthermore, adapt the shebang to use
env(1) to execute Perl so that users who have Perl in PATH, but in a
non-standard location, can execute the script directly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many of our benchmarks operate on a user-defined repository that we copy
over before running the benchmarked logic. To keep unintentional side
effects caused by on-disk state at bay we skip copying some files. This
includes for example hooks, but also the repo's configuration.
It is quite sensible to not copy over the configuration, as it is quite
easy to inadvertently carry over configuration that may significantly
impact the performance measurements. But we cannot fully ignore the
configuration either, as it may contain information about the repository
format. This will cause failures when, for example, using a repository
with the SHA256 object format or the reftable ref format.
Fix the issue by parsing the reference and object formats from the
source repository and passing them to git-init(1).
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The change to the bundle-uri unbundling refspec now includes tags, so this
adds a very, very simple test to make sure that tags in a bundle are
properly added to the cloned repository and will be included in ref
negotiation with the subsequent fetch.
Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When downloading bundles via the bundle-uri functionality, we only copy the
references from refs/heads into the refs/bundle space. I'm not sure why this
refspec is hardcoded to be so limited, but it makes the ref negotiation on
the subsequent fetch suboptimal, since it won't use objects that are
referenced outside of the current heads of the bundled repository.
This change to copy everything in refs/ in the bundle to refs/bundles/
significantly helps the subsequent fetch, since nearly all the references
are now included in the negotiation.
The update to the bundle-uri unbundling refspec puts all the heads from a
bundle file into refs/bundles/heads/ instead of directly into
refs/bundles/, so the tests also need to be updated to look in the new
hierarchy.
Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The script generates a Message-ID along with the other headers when
gen_header is called, and that Message-ID is sent along with the email.
For most email providers, including Gmail, the Message-ID reaches the
recipient unchanged.
This does not seem to be the case with Outlook, though. When we send our
own Message-ID as part of the headers, Outlook discards it, generates a
new random Message-ID, and that is what the recipient gets.
This is a problem because the Message-ID is crucial when we are sending
multiple emails in a thread. The current implementation for threads in
the script replies to the Message-ID it generated, but due to Outlook's
behavior, it is not the same as the one that the recipient got, thus
breaking threads. So a need arises to retrieve the Message-ID from the
server response and set it in the In-Reply-To and References email
headers instead of using the self generated one for the purpose of
replies.
The $smtp->message variable in this script for Outlook is something like
this:
2.0.0 OK <Message-ID> [Hostname=Some-hostname]
The Message-ID here is the one the recipient gets, rather than the one
the script generated.
This patch uses the fact above and retrieves the Message-ID from the
server response. It then changes the value of the $message_id variable
to the one received from the server. This value will be used when next
and subsequent messages are sent as replies to the message, thus
preserving the threading of the messages.
Signed-off-by: Aditya Garg <gargaditya08@live.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Meson detects the path of the target shell via `find_program("sh")`,
which essentially does a lookup via `PATH`. This may easily lead to a
subtly-broken Git distribution when the build host has its shell in a
location that the target host doesn't know about.
Fix the issue by appending "/bin" to the custom program path, which
causes us to prefer "/bin/sh" over a `PATH`-based lookup. While
"/bin/sh" isn't standardized, this path tends to work alright on Linux
and BSD distributions. Furthermore, "/bin/sh" is also the path we pick
in our Makefile by default, which further demonstrates that this shell
fulfills our needs.
Note that we intentionally append, not prepend, to the custom program
path. This is because the program path can be configured by the user via
the `-Dsane_tool_path=` build option, which should take precedence over
any defaults we pick for the user.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git needs to know about a couple of executable paths to pick at runtime.
This includes the system shell, but may also optionally include the Perl
and Python interpreters. Meson detects the location of these paths
automatically via `find_program()`, which does a lookup via the `PATH`
environment variable. As such, it may not be immediately obvious to the
developer which paths have been autodetected.
Improve this by exposing runtime executable paths at setup time.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These are added in the Makefile, but not in Meson. They probably won't
work well on systems without them.
CMake adds them, but only on non-Windows. Actually, it only performs
compiler checks for hstrerror, but excludes that check on Windows with
the note that it is "incompatible with the Windows build". This seems to
be misleading -- it is not incompatible, it simply doesn't exist. Still,
the compat version should not be used.
I interpret this cmake logic to mean we shouldn't even be checking for
symbol availability on Windows. In addition to making it simple to add
compat definitions, this also probably shaves off a second or two of
configure time on Windows as no compiler check needs to be performed.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Nowhere in the codebase do we otherwise check for strerror. Nowhere in
the codebase do we make use of -DNO_STRERROR. `strerror` is not a
networking function at all.
We do utilize `hstrerror` though, which is a networking function we
should have been checking here.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As evidenced in config.mak.uname and configure.ac, there are various
possible scenarios where these libraries are default-enabled in the
build, which mainly boils down to: SunOS. -lresolv is simply not the
only library that, when it exists, probably needs to be linked to for
networking.
Check for and add -lnsl -lsocket as well.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>