In 37dc6d8104 (builtin/repack.c: implement support for
`--max-cruft-size`, 2023-10-02), we exposed new functionality that
allowed repositories to control when multiple cruft packs should be
combined.
This feature was designed to ensure that we never repacked cruft packs
which were larger than the given threshold in order to provide tighter
I/O bounds for repositories that have many unreachable objects. In
essence, specifying '--max-cruft-size=N' instructed 'repack' to
aggregate cruft packs together (in order of ascending size) until the
combined size grows past 'N', and then make a new cruft pack whose
contents include the packs we rolled up.
But this isn't quite how it works in practice. Suppose for example that
we have two cruft packs which are each 100MiB in size. One might expect
specifying "--max-cruft-size=200M" would combine these two packs
together, and then avoid repacking them until a pruning GC takes place.
In reality, 'repack' would try to aggregate these together, but would
end up writing a pack that is strictly smaller than 200 MiB (since
pack-objects' "--max-pack-size" provides a strict bound for packs
containing more than one object).
So instead we'll write out a pack that is, say, 199 MiB in size, and
then another 1 MiB pack containing the balance. If we later repack the
repository without adding any new unreachable objects, we'll repeat the
same exercise again, making the same 199 MiB and 1 MiB packs each time.
This happens because of a poor choice to bolt the '--max-cruft-size'
functionality onto pack-objects' '--max-pack-size', forcing us to
generate packs which are always smaller than the provided threshold and
thus subject to repacking.
The following commit will introduce a new flag that implements something
similar to the behavior above. Let's prepare for that by making repack's
'--max-cruft-size' flag behave as a cruft pack-specific override for
'--max-pack-size'.
Do so by temporarily repurposing the 'collapse_small_cruft_packs()'
function to instead generate a cruft pack using the same instructions as
if we didn't specify any maximum pack size. The calling code looks
something like:
    if (args->max_pack_size && !cruft_expiration) {
            collapse_small_cruft_packs(in, args->max_pack_size, existing);
    } else {
            for_each_string_list_item(item, &existing->non_kept_packs)
                    fprintf(in, "-%s.pack\n", item->string);
            for_each_string_list_item(item, &existing->cruft_packs)
                    fprintf(in, "-%s.pack\n", item->string);
    }
This patch makes collapse_small_cruft_packs() behave identically to the
'else' arm of the conditional above. This repurposing of
'collapse_small_cruft_packs()' is intentional, since it will set us up
nicely to introduce the new behavior in the following commit.
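To illustrate, here is a minimal sketch of what the repurposed function
could look like, assuming the same helpers as the calling code quoted
above (the exact signature, in particular the type of the size
argument, is an assumption; the unused parameter is only kept so the
call site stays unchanged):

    static void collapse_small_cruft_packs(FILE *in, size_t max_size,
                                           struct existing_packs *existing)
    {
            struct string_list_item *item;

            /*
             * Ignore 'max_size' here for now; bounding the size of the
             * generated pack(s) is left to pack-objects' own
             * '--max-pack-size'. Emit the same instructions as the
             * 'else' arm above: roll every existing non-kept pack and
             * cruft pack into the new cruft pack.
             */
            for_each_string_list_item(item, &existing->non_kept_packs)
                    fprintf(in, "-%s.pack\n", item->string);
            for_each_string_list_item(item, &existing->cruft_packs)
                    fprintf(in, "-%s.pack\n", item->string);
    }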
Naturally, there is some fallout in the test which exercises the old
meaning of '--max-cruft-size'. Mark that test as failing for now, to be
dealt with in the following commit. Likewise, add a new test which
explicitly checks that '--max-cruft-size' places a hard limit on the
size of any generated cruft pack(s).
Note that this is a breaking change, as it alters the user-visible
behavior of '--max-cruft-size'. But I'm OK changing this behavior in
this instance, since the behavior wasn't accurate to begin with.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A previous commit moved a handful of tests from a different script into
t7704, including one that relies on generating random blobs.
Incidentally, the original home of this test defined its own helper
"write_blob" for doing so, which is identical in function to our
"generate_random_blob" (and is slightly inferior to the latter, which
cleans up after itself).
Rewrite the test that uses "write_blob" to no longer do so and then
remove the function.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that a number of new tests have landed in t7704, make sure that they
all make sense and are testing the things they say they are.
Things are mostly OK, but a handful of tests needed tweaks. Those tweaks
are as follows:
- Use the terms "too large" or "too small" in tests that exercise the
'--max-cruft-size' behavior. This has historically been treated as a
threshold beneath which to combine cruft packs, but that will change
in a subsequent commit. Prepare for that by using a more generic
term.
- Remove references to "--max-cruft-size" in the freshening tests.
These tests provide coverage of our ability to record updated mtimes
for objects already in cruft packs whose mtimes are upserted from
various sources (loose objects, finding that object in a new pack,
another cruft pack, etc.).
These have nothing to do with the '--max-cruft-size' feature, and in
fact none of the tests even *use* '--max-cruft-size'. Name them
appropriately to make it clear that these tests exercise freshening
behavior, not '--max-cruft-size' behavior.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The cruft pack feature has two primary test scripts which exercise
various parts of it, which are:
- t5329-pack-objects-cruft.sh
- t7704-repack-cruft.sh
The former is designed to test low-level pack generation mechanics at
the 'git pack-objects --cruft'-level, which is plumbing. The latter, on
the other hand, is designed to test the user-facing behavior through
'git repack --cruft', which is porcelain (under the "ancillary
manipulators" sub-section).
At some point a handful of tests which should have been added to the
latter script were instead written to the former. This isn't a huge
deal, but rectifying it is straightforward. Move a handful of
'repack'-related tests out of t5329 and into their rightful home in
t7704.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Once an object is written into a cruft pack, we can only freshen it by
writing a new loose or packed copy of that object with a more recent
mtime.
Prior to 61568efa95 (builtin/pack-objects.c: support `--max-pack-size`
with `--cruft`, 2023-08-28), we typically had at most one cruft pack in
a repository at any given time. So freshening unreachable objects was
straightforward when already rewriting the cruft pack (and its *.mtimes
file).
But 61568efa95 changes things: 'pack-objects' now supports writing
multiple cruft packs when invoked with `--cruft` and the
`--max-pack-size` flag. Cruft packs are rewritten until they reach some
size threshold, at which point they are considered "frozen", and will
only be modified in a pruning GC, or if the threshold itself is
adjusted.
Prior to this patch, however, this process breaks down when we attempt
to freshen an object packed in an earlier cruft pack, and that cruft
pack is larger than the threshold and thus will survive the repack.
When this is the case, it is impossible to freshen that object, because
we would avoid writing it into the new cruft pack entirely, for a
couple of reasons:
1. When enumerating packed objects via 'add_objects_in_unpacked_packs()'
we pass the SKIP_IN_CORE_KEPT_PACKS flag, which is used to avoid looping
over the packs we're going to retain (which are marked as kept
in-core by 'read_cruft_objects()').
This means that we will avoid enumerating additional packed copies
of objects found in any cruft packs which are larger than the given
size threshold. Thus there is no opportunity to call
'create_object_entry()' whatsoever.
2. We likewise will discard the loose copy (if one exists) of any
unreachable object packed in a cruft pack that is larger than the
threshold. Here our call path is 'add_unreachable_loose_objects()',
which uses the 'add_loose_object()' callback.
That function will eventually land us in 'want_object_in_pack()'
(via 'add_cruft_object_entry()'), and we'll discard the object as it
appears in one of the packs which we marked as kept in-core.
This means in effect that it is impossible to freshen an unreachable
object once it appears in a cruft pack larger than the given threshold.
Instead, we should pack an additional copy of an unreachable object we
want to freshen even if it appears in a cruft pack, provided that the
cruft copy has an mtime which is before the mtime of the copy we are
trying to pack/freshen. This is sub-optimal in the sense that it
requires keeping an additional copy of unreachable objects upon
freshening, but we don't have a better alternative without the ability
to make in-place modifications to existing *.mtimes files.
In order to implement this, we have to adjust the behavior of
'want_found_object()'. When 'pack-objects' is told that we're *not*
going to retain any cruft packs (i.e. the set of packs marked as kept
in-core does not contain a cruft pack), the behavior is unchanged.
But when there *is* at least one cruft pack that we're holding onto, it
is no longer sufficient to reject a copy of an object found in that
cruft pack for that reason alone. In this case, we only want to reject a
candidate object when a copy of that object either:
- exists in a non-cruft pack that we are retaining, regardless of that
pack's mtime, or
- exists in a cruft pack with an mtime at least as recent as the copy
we are debating whether or not to pack, in which case freshening
would be redundant.
To do this, keep track of whether or not we have any cruft packs in our
in-core kept list with a new 'ignore_packed_keep_in_core_has_cruft'
flag. When we end up in this new special case, we replace a call to
'has_object_kept_pack()' with 'want_cruft_object_mtime()', and only reject
objects when we have a copy in an existing cruft pack with at least as
recent an mtime as our candidate (in which case "freshening" would be
redundant).
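In rough terms, the new special case in 'want_found_object()' has the
following shape. This is a simplified sketch only: the helper names are
the ones mentioned above, but their exact signatures and the
surrounding variables (e.g. 'oid', 'found_mtime') are assumptions, not
the literal patch.

    if (ignore_packed_keep_in_core) {
            if (ignore_packed_keep_in_core_has_cruft) {
                    /*
                     * Some kept pack is a cruft pack: instead of calling
                     * has_object_kept_pack() and rejecting outright,
                     * consult the kept cruft packs' recorded mtimes and
                     * only reject the candidate if an existing copy is
                     * at least as recent, in which case freshening
                     * would be redundant.
                     */
                    if (!want_cruft_object_mtime(oid, found_mtime))
                            return 0;
            } else if (has_object_kept_pack(oid, IN_CORE_KEEP_PACKS)) {
                    /* Unchanged behavior: a retained pack has it. */
                    return 0;
            }
    }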
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git version --build-options" stopped showing zlib version by
mistake due to recent refactoring, which has been corrected.
* tc/zlib-ng-fix:
help: print zlib-ng version number
help: include git-zlib.h to print zlib version
Commit bc26f7690a (clone: make it possible to specify --tags,
2025-02-06) added a new paragraph in the middle of this list item. By
adding an empty line rather than using a list continuation, we broke the
list, with the new paragraph ending up funnily indented.
Restore the chain of list continuations.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- Update for 2.49.0.
- Fix numerous typos found by spelling checker.
- Fix more straight quotes.
- Harmonize translation of "blob" (to "blob", not "blobb").
- Harmonize translation of "reflog" (to "referenslogg").
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
Commit 3f763ddf28 (fetch: set remote/HEAD if it does not exist,
2024-11-22) unconditionally adds "HEAD" to the list of ref prefixes we
send to the
server.
This breaks a core assumption that the list of prefixes we send to the
server is complete. We must either send all prefixes we care about, or
none at all (in the latter case the server then advertises everything).
The tag following code is careful to only add "refs/tags/" to the list
of prefixes if there are already entries in the prefix list. But because
the new code from 3f763ddf28 runs after the tag code, and because it
unconditionally adds to the prefix list, we may end up with a prefix
list that _should_ have "refs/tags/" in it, but doesn't.
When that is the case, the server does not advertise any tags, and our
auto-following breaks because we never learned about any tags in the
first place.
Fix this by only adding "HEAD" to the ref prefixes when we know that we
are already limiting the advertisement. In either case we'll learn about
HEAD (either through the limited advertisement, or implicitly through a
full advertisement).
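Conceptually, the fix is a guard along these lines ('ref_prefixes' is
illustrative shorthand for wherever the prefix list is collected; in
git it is a 'struct strvec'):

    /*
     * Only ask specifically about HEAD if we are already limiting the
     * advertisement; an empty prefix list means the server advertises
     * everything, which covers HEAD anyway.
     */
    if (ref_prefixes->nr)
            strvec_push(ref_prefixes, "HEAD");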
Reported-by: Igor Todorovski <itodorov@ca.ibm.com>
Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When building against zlib-ng, the header file `zlib.h` is not included,
but `zlib-ng.h` is included instead. It's `zlib.h` that defines
`ZLIB_VERSION`, and that macro is used to print out the zlib version in
`git-version(1)` with `--build-options`. But when it's not defined, no
version is printed.
`zlib-ng.h` defines another macro: `ZLIBNG_VERSION`. Use that macro to
print the zlib-ng version in `git version --build-options` when it's
set. Otherwise fall back to `ZLIB_VERSION`.
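In help.c this can be expressed as a preprocessor fallback along these
lines (a sketch; the exact label strings and the surrounding strbuf are
assumptions based on how other `--build-options` lines are formatted):

    #if defined(ZLIBNG_VERSION)
            strbuf_addf(buf, "zlib-ng: %s\n", ZLIBNG_VERSION);
    #elif defined(ZLIB_VERSION)
            strbuf_addf(buf, "zlib: %s\n", ZLIB_VERSION);
    #endif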
Signed-off-by: Toon Claes <toon@iotcl.com>
Helped-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 41f1a8435a (git-compat-util: move include of "compat/zlib.h" into
"git-zlib.h", 2025-01-28) some code was refactored to enable easier
linking against zlib-ng.
This removed the indirect inclusion of `zlib.h` in `help.c`. As this
file uses `ZLIB_VERSION` to print the version number of zlib when
running git-version(1) with `--build-options`, this resulted in a
regression.
Include `git-zlib.h` directly into `help.c` to print zlib version
information. This brings back the zlib version in the output of
`git version --build-options`.
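The fix amounts to a single include in help.c, so that the version
macro is visible again where the build options are printed:

    #include "git-zlib.h"   /* provides ZLIB_VERSION (via compat/zlib) */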
Signed-off-by: Toon Claes <toon@iotcl.com>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Hotfix to help building Git-for-Windows.
* js/win-2.49-build-fixes:
cmake: generalize the handling of the `CLAR_TEST_OBJS` list
meson: fix sorting
ident: stop assuming that `gw_gecos` is writable
Some future breaking changes would remove certain parts of the
default repository, which were still described even when the
documents were built for the future with WITH_BREAKING_CHANGES.
* pw/repo-layout-doc-update:
docs: fix repository-layout when building with breaking changes
A late-comer to the v2.49.0 party, `sk/unit-test-oid`, added yet another
array item to `CLAR_TEST_OBJS`, causing the `win+VS build` job to fail
with symptoms like this one:
unit-tests-lib.lib(u-oid-array.obj) : error LNK2019: unresolved
external symbol cl_parse_any_oid referenced in function fill_array
This is a similar scenario to the one that forced me to write
8afda42fce (cmake: generalize the handling of the `UNIT_TEST_OBJS`
list, 2024-09-18): The hard-coded echo of `CLAR_TEST_OBJS` in
`CMakeLists.txt` that recapitulates faithfully what was already
hard-coded in `Makefile` would either have to be updated whack-a-mole
style, or generalized.
Just like I chose the latter option for `UNIT_TEST_OBJS`, I now do the
same for `CLAR_TEST_OBJS`.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 904339edbd (Introduce support for the Meson build system,
2024-12-06) the `meson.build` file was introduced, adding also a
Windows-specific list of source files. This list was obviously meant to
be sorted alphabetically, but there is one mistake. Let's fix that.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 590e081dea (ident: add NO_GECOS_IN_PWENT for systems without
pw_gecos in struct passwd, 2011-05-19), code was introduced to iterate
over the `gw_gecos` field; the loop variable is of type `char *`, which
assumes that `gw_gecos` is writable.
However, it is not necessarily writable (and it is a bad idea to have it
writable in the first place), so let's switch the loop variable type to
`const char *`.
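A simplified sketch of the fix in ident.c's 'copy_gecos()' (abbreviated;
the real function's exact signature and its handling of '&' expansion
are glossed over here):

    static void copy_gecos(const struct passwd *w, struct strbuf *name)
    {
            const char *src;        /* was 'char *', assuming writability */

            /*
             * Copy the full-name portion of the GECOS field, which ends
             * at the first comma; the real code also expands '&' to the
             * capitalized login name.
             */
            for (src = get_gecos(w); *src && *src != ','; src++)
                    strbuf_addch(name, *src);
    }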
This is not a new problem, but what is new is the Meson build. While it
does not trigger in CI builds, imitating the commands of
`ci/run-build-and-tests.sh` in a regular Git for Windows SDK (`meson
setup build . --fatal-meson-warnings --warnlevel 2 --werror --wrap-mode
nofallback -Dfuzzers=true` followed by `meson compile -C build --`)
results in this beautiful error:
"cc" [...] -o libgit.a.p/ident.c.obj "-c" ../ident.c
../ident.c: In function 'copy_gecos':
../ident.c:68:18: error: assignment discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
68 | for (src = get_gecos(w); *src && *src != ','; src++) {
| ^
cc1.exe: all warnings being treated as errors
Now, why does this not trigger in CI? The answer is as simple as it is
puzzling: The `win+Meson` job completely side-steps Git for Windows'
development environment, opting instead to use the GCC that is on the
`PATH` in GitHub-hosted `windows-latest` runners. That GCC is pinned to
v12.2.0 and targets the UCRT (unlikely to change any time soon, see
https://github.com/actions/runner-images/blob/win25/20250303.1/images/windows/toolsets/toolset-2022.json#L132-L141).
That is in stark contrast to Git for Windows, which uses GCC v14.2.0 and
targets MSVCRT. Git for Windows' `Makefile`-based build also obviously
uses different compiler flags, otherwise this compile error would have
had plenty of opportunity in almost 14 years to surface.
In other words, contrary to my expectations, the `win+Meson` job is
ill-equipped to replace the `win build` job because it exercises a
completely different tool version/compiler flags vector than what Git
for Windows needs.
Nevertheless, there is currently this huge push, including breaking
changes after -rc1 and all, for switching to Meson. Therefore, we need
to make it work, somehow, even in Git for Windows' SDK, hence this
patch, at this point in time.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Meson-based build procedure forgot to build some docs, which has
been corrected.
* pw/build-meson-technical-and-howto-docs:
meson: fix building technical and howto docs