Earlier we stopped relying on the commit-graph when it (still) records
information about commits that have been lost from the object store,
but this extra check has negative performance implications. The default
has been flipped to disable this pessimization.
* ps/commit-graph-less-paranoid:
commit-graph: disable GIT_COMMIT_GRAPH_PARANOIA by default
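Those who prefer the older, more careful behaviour can opt back in per
invocation through the environment. A minimal sketch, assuming the
variable accepts a boolean value:

    GIT_COMMIT_GRAPH_PARANOIA=true git log --oneline -5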
Newer versions of Getopt::Long started giving warnings about our
(ab)use of it in "git send-email". Bump the minimum version
requirement for Perl to 5.8.1 (released in September 2002) to allow
simplifying our implementation.
* tz/send-email-negatable-options:
send-email: avoid duplicate specification warnings
perl: bump the required Perl version to 5.8.1 from 5.8.0
The way CI testing used "prove" could lead to running the test
suite twice needlessly, which has been corrected.
* js/ci-discard-prove-state:
ci: avoid running the test suite _twice_
ci: add support for GitLab CI
ci: install test dependencies for linux-musl
ci: squelch warnings when testing with unusable Git repo
ci: unify setup of some environment variables
ci: split out logic to set up failed test artifacts
ci: group installation of Docker dependencies
ci: make grouping setup more generic
ci: reorder definitions for grouping functions
A caller of index_file_exists(), which takes a string expressed as a
<ptr, length> pair, passed a wrong length; this has been corrected.
* jh/sparse-index-expand-to-path-fix:
sparse-index: pass string length to index_file_exists()
The wincred credential backend has been taught to support OAuth refresh
tokens the same way as the credential-cache and credential-libsecret backends.
* mh/credential-oauth-refresh-token-with-wincred:
credential/wincred: store oauth_refresh_token
Build dependency around unit tests has been fixed.
* jk/unit-tests-buildfix:
t/Makefile: say the default target upfront
t/Makefile: get UNIT_TESTS list from C sources
Makefile: remove UNIT_TEST_BIN directory with "make clean"
Makefile: use mkdir_p_parent_template for UNIT_TEST_BIN
The "--fsck-objects" option of "git index-pack" now can take the
optional parameter to tweak severity of different fsck errors.
* jc/index-pack-fsck-levels:
index-pack: --fsck-objects to take an optional argument for fsck msgs
index-pack: test and document --strict=<msg-id>=<severity>...
The priority queue test has been migrated to the unit testing
framework.
* cp/unit-test-prio-queue:
tests: move t0009-prio-queue.sh to the new unit testing framework
When copying a ref with the reftable backend we also copy the
corresponding log records. However, when seeking the first log record
that we are about to copy fails, we return directly from
`write_copy_table()` without doing any cleanup, leaking several
allocated data structures. Fix this by exiting via our common cleanup
logic instead.
Reported-by: Jeff King <peff@peff.net> via Coverity
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add CI jobs for both GitHub Workflows and GitLab CI to run Git with the
new reftable backend.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Due to scalability issues, Shawn Pearce originally proposed a new
"reftable" format more than six years ago [1]. Initially, this new
format was implemented in JGit with promising results. Around two years
ago, we then added the "reftable" library to the Git codebase via
a4bbd13be3 (Merge branch 'hn/reftable', 2021-12-15). With this we have
landed all the low-level code to read and write reftables. Notably
missing though was the integration of this low-level code into the Git
code base in the form of a new ref backend that ties all of this
together.
This gap is now finally closed by introducing a new "reftable" backend
into the Git codebase. This new backend promises to bring some notable
improvements to Git repositories:
- It becomes possible to do truly atomic writes where either all refs
are committed to disk or none are. This was not possible with the
"files" backend because ref updates were split across multiple loose
files.
- The disk space required to store many refs is reduced, both compared
to loose refs and packed-refs. This is enabled both by the reftable
format being a binary format, which is more compact, and by prefix
compression.
- We can ignore filesystem-specific behaviour as ref names are not
encoded via paths anymore. This means there is no need to handle
case sensitivity on Windows systems or Unicode precomposition on
macOS.
- There is no need to rewrite the complete refdb anymore every time a
ref is deleted, as was the case with packed-refs. This
means that ref deletions are now constant time instead of scaling
linearly with the number of refs.
- We can ignore file/directory conflicts so that it becomes possible
to store both "refs/heads/foo" and "refs/heads/foo/bar".
- Due to this property we can retain reflogs for deleted refs. We have
previously been deleting reflogs together with their refs to avoid
file/directory conflicts, which is not necessary anymore.
- We can properly enumerate all refs. With the "files" backend it is
not easily possible to distinguish between refs and non-refs because
they may live side by side in the gitdir.
Not all of these improvements are realized with the current "reftable"
backend implementation. At this point, the new backend is supposed to be
a drop-in replacement for the "files" backend that is used by basically
all Git repositories nowadays. It strives for 1:1 compatibility, which
means that, for the most part, a user can expect the same behaviour
regardless of whether they use the "reftable" backend or the "files"
backend.
Most notably, this means we artificially limit the capabilities of the
"reftable" backend to match the limits of the "files" backend. It is not
possible to create refs that would end up with file/directory conflicts,
we do not retain reflogs, and we perform stricter-than-necessary checks.
This is done intentionally due to two main reasons:
- It makes it significantly easier to land the "reftable" backend as
tests behave the same. It would be tough to argue for each and every
single test that doesn't pass with the "reftable" backend.
- It ensures compatibility between repositories that use the "files"
backend and repositories that use the "reftable" backend. This way,
hosting providers can migrate their repositories to the "reftable" backend
without causing issues for clients that use the "files" backend in
their clones.
It is expected that these artificial limitations may eventually go away
in the long term.
Performance-wise things very much depend on the actual workload. The
following benchmarks compare the "files" and "reftable" backends in the
current version:
- Creating N refs in separate transactions shows that the "files"
backend is ~50% faster. This is not surprising given that creating a
ref only requires us to create a single loose ref. The "reftable"
backend will also perform auto compaction on updates. In real-world
workloads we would likely also want to pack loose refs periodically,
which would likely change the picture.
Benchmark 1: update-ref: create refs sequentially (refformat = files, refcount = 1)
Time (mean ± σ): 2.1 ms ± 0.3 ms [User: 0.6 ms, System: 1.7 ms]
Range (min … max): 1.8 ms … 4.3 ms 133 runs
Benchmark 2: update-ref: create refs sequentially (refformat = reftable, refcount = 1)
Time (mean ± σ): 2.7 ms ± 0.1 ms [User: 0.6 ms, System: 2.2 ms]
Range (min … max): 2.4 ms … 2.9 ms 132 runs
Benchmark 3: update-ref: create refs sequentially (refformat = files, refcount = 1000)
Time (mean ± σ): 1.975 s ± 0.006 s [User: 0.437 s, System: 1.535 s]
Range (min … max): 1.969 s … 1.980 s 3 runs
Benchmark 4: update-ref: create refs sequentially (refformat = reftable, refcount = 1000)
Time (mean ± σ): 2.611 s ± 0.013 s [User: 0.782 s, System: 1.825 s]
Range (min … max): 2.597 s … 2.622 s 3 runs
Benchmark 5: update-ref: create refs sequentially (refformat = files, refcount = 100000)
Time (mean ± σ): 198.442 s ± 0.241 s [User: 43.051 s, System: 155.250 s]
Range (min … max): 198.189 s … 198.670 s 3 runs
Benchmark 6: update-ref: create refs sequentially (refformat = reftable, refcount = 100000)
Time (mean ± σ): 294.509 s ± 4.269 s [User: 104.046 s, System: 190.326 s]
Range (min … max): 290.223 s … 298.761 s 3 runs
- Creating N refs in a single transaction shows that the "files"
backend is significantly slower once we start to write many refs.
The "reftable" backend only needs to update two files, whereas the
"files" backend needs to write one file per ref.
Benchmark 1: update-ref: create many refs (refformat = files, refcount = 1)
Time (mean ± σ): 1.9 ms ± 0.1 ms [User: 0.4 ms, System: 1.4 ms]
Range (min … max): 1.8 ms … 2.6 ms 151 runs
Benchmark 2: update-ref: create many refs (refformat = reftable, refcount = 1)
Time (mean ± σ): 2.5 ms ± 0.1 ms [User: 0.7 ms, System: 1.7 ms]
Range (min … max): 2.4 ms … 3.4 ms 148 runs
Benchmark 3: update-ref: create many refs (refformat = files, refcount = 1000)
Time (mean ± σ): 152.5 ms ± 5.2 ms [User: 19.1 ms, System: 133.1 ms]
Range (min … max): 148.5 ms … 167.8 ms 15 runs
Benchmark 4: update-ref: create many refs (refformat = reftable, refcount = 1000)
Time (mean ± σ): 58.0 ms ± 2.5 ms [User: 28.4 ms, System: 29.4 ms]
Range (min … max): 56.3 ms … 72.9 ms 40 runs
Benchmark 5: update-ref: create many refs (refformat = files, refcount = 1000000)
Time (mean ± σ): 152.752 s ± 0.710 s [User: 20.315 s, System: 131.310 s]
Range (min … max): 152.165 s … 153.542 s 3 runs
Benchmark 6: update-ref: create many refs (refformat = reftable, refcount = 1000000)
Time (mean ± σ): 51.912 s ± 0.127 s [User: 26.483 s, System: 25.424 s]
Range (min … max): 51.769 s … 52.012 s 3 runs
- Deleting a ref in a fully-packed repository shows that the "files"
backend scales with the number of refs. The "reftable" backend has
constant-time deletions.
Benchmark 1: update-ref: delete ref (refformat = files, refcount = 1)
Time (mean ± σ): 1.7 ms ± 0.1 ms [User: 0.4 ms, System: 1.2 ms]
Range (min … max): 1.6 ms … 2.1 ms 316 runs
Benchmark 2: update-ref: delete ref (refformat = reftable, refcount = 1)
Time (mean ± σ): 1.8 ms ± 0.1 ms [User: 0.4 ms, System: 1.3 ms]
Range (min … max): 1.7 ms … 2.1 ms 294 runs
Benchmark 3: update-ref: delete ref (refformat = files, refcount = 1000)
Time (mean ± σ): 2.0 ms ± 0.1 ms [User: 0.5 ms, System: 1.4 ms]
Range (min … max): 1.9 ms … 2.5 ms 287 runs
Benchmark 4: update-ref: delete ref (refformat = reftable, refcount = 1000)
Time (mean ± σ): 1.9 ms ± 0.1 ms [User: 0.5 ms, System: 1.3 ms]
Range (min … max): 1.8 ms … 2.1 ms 217 runs
Benchmark 5: update-ref: delete ref (refformat = files, refcount = 1000000)
Time (mean ± σ): 229.8 ms ± 7.9 ms [User: 182.6 ms, System: 46.8 ms]
Range (min … max): 224.6 ms … 245.2 ms 6 runs
Benchmark 6: update-ref: delete ref (refformat = reftable, refcount = 1000000)
Time (mean ± σ): 2.0 ms ± 0.0 ms [User: 0.6 ms, System: 1.3 ms]
Range (min … max): 2.0 ms … 2.1 ms 3 runs
- Listing all refs shows no significant advantage for either of the
backends. The "files" backend is a bit faster, but not by a
significant margin. When repositories are not packed the "reftable"
backend outperforms the "files" backend because the "reftable"
backend performs auto-compaction.
Benchmark 1: show-ref: print all refs (refformat = files, refcount = 1, packed = true)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 2.0 ms 1729 runs
Benchmark 2: show-ref: print all refs (refformat = reftable, refcount = 1, packed = true)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 1.8 ms 1816 runs
Benchmark 3: show-ref: print all refs (refformat = files, refcount = 1000, packed = true)
Time (mean ± σ): 4.3 ms ± 0.1 ms [User: 0.9 ms, System: 3.3 ms]
Range (min … max): 4.1 ms … 4.6 ms 645 runs
Benchmark 4: show-ref: print all refs (refformat = reftable, refcount = 1000, packed = true)
Time (mean ± σ): 4.5 ms ± 0.2 ms [User: 1.0 ms, System: 3.3 ms]
Range (min … max): 4.2 ms … 5.9 ms 643 runs
Benchmark 5: show-ref: print all refs (refformat = files, refcount = 1000000, packed = true)
Time (mean ± σ): 2.537 s ± 0.034 s [User: 0.488 s, System: 2.048 s]
Range (min … max): 2.511 s … 2.627 s 10 runs
Benchmark 6: show-ref: print all refs (refformat = reftable, refcount = 1000000, packed = true)
Time (mean ± σ): 2.712 s ± 0.017 s [User: 0.653 s, System: 2.059 s]
Range (min … max): 2.692 s … 2.752 s 10 runs
Benchmark 7: show-ref: print all refs (refformat = files, refcount = 1, packed = false)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 1.9 ms 1834 runs
Benchmark 8: show-ref: print all refs (refformat = reftable, refcount = 1, packed = false)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.4 ms … 2.0 ms 1840 runs
Benchmark 9: show-ref: print all refs (refformat = files, refcount = 1000, packed = false)
Time (mean ± σ): 13.8 ms ± 0.2 ms [User: 2.8 ms, System: 10.8 ms]
Range (min … max): 13.3 ms … 14.5 ms 208 runs
Benchmark 10: show-ref: print all refs (refformat = reftable, refcount = 1000, packed = false)
Time (mean ± σ): 4.5 ms ± 0.2 ms [User: 1.2 ms, System: 3.3 ms]
Range (min … max): 4.3 ms … 6.2 ms 624 runs
Benchmark 11: show-ref: print all refs (refformat = files, refcount = 1000000, packed = false)
Time (mean ± σ): 12.127 s ± 0.129 s [User: 2.675 s, System: 9.451 s]
Range (min … max): 11.965 s … 12.370 s 10 runs
Benchmark 12: show-ref: print all refs (refformat = reftable, refcount = 1000000, packed = false)
Time (mean ± σ): 2.799 s ± 0.022 s [User: 0.735 s, System: 2.063 s]
Range (min … max): 2.769 s … 2.836 s 10 runs
- Printing a single ref shows no real difference between the "files"
and "reftable" backends.
Benchmark 1: show-ref: print single ref (refformat = files, refcount = 1)
Time (mean ± σ): 1.5 ms ± 0.1 ms [User: 0.4 ms, System: 1.0 ms]
Range (min … max): 1.4 ms … 1.8 ms 1779 runs
Benchmark 2: show-ref: print single ref (refformat = reftable, refcount = 1)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.4 ms … 2.5 ms 1753 runs
Benchmark 3: show-ref: print single ref (refformat = files, refcount = 1000)
Time (mean ± σ): 1.5 ms ± 0.1 ms [User: 0.3 ms, System: 1.1 ms]
Range (min … max): 1.4 ms … 1.9 ms 1840 runs
Benchmark 4: show-ref: print single ref (refformat = reftable, refcount = 1000)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 2.0 ms 1831 runs
Benchmark 5: show-ref: print single ref (refformat = files, refcount = 1000000)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 2.1 ms 1848 runs
Benchmark 6: show-ref: print single ref (refformat = reftable, refcount = 1000000)
Time (mean ± σ): 1.6 ms ± 0.1 ms [User: 0.4 ms, System: 1.1 ms]
Range (min … max): 1.5 ms … 2.1 ms 1762 runs
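The numbers above are in the output format typically produced by
hyperfine. As a rough, hypothetical sketch, one of the comparisons could
be reproduced along these lines (repository names and setup are
illustrative only):

    hyperfine --warmup 3 \
        'git -C repo-files    show-ref' \
        'git -C repo-reftable show-ref'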
So overall, performance depends on the use case. Except for many
sequential writes, the "reftable" backend is roughly on par with or
significantly faster than the "files" backend. Given that the
"files" backend has received 18 years of optimizations by now, this can
be seen as a win. Furthermore, we can expect that the "reftable" backend
will become faster over time as attention turns more towards
optimizations.
The complete test suite passes, except for those tests explicitly marked
to require the REFFILES prerequisite. Some tests in t0610 are marked as
failing because they depend on still-in-flight bug fixes. Tests can be
run with the new backend by setting the GIT_TEST_DEFAULT_REF_FORMAT
environment variable to "reftable".
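For example, the whole suite or a single script can be run against the
new backend like so (the test script shown is just an example):

    GIT_TEST_DEFAULT_REF_FORMAT=reftable make test
    cd t && GIT_TEST_DEFAULT_REF_FORMAT=reftable ./t1400-update-ref.sh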
There is a single known conceptual incompatibility with the dumb HTTP
transport. As "info/refs" SHOULD NOT contain the HEAD reference, and
because the "HEAD" file is not valid anymore, it is impossible for the
remote client to figure out the default branch without changing the
protocol. This shortcoming needs to be handled in a subsequent patch
series.
As the reftable library was already introduced a while ago, this
commit message will not go into the details of how exactly the on-disk
format works. Please refer to our preexisting technical documentation at
Documentation/technical/reftable for this.
[1]: https://public-inbox.org/git/CAJo=hJtyof=HRy=2sLP0ng0uZ4=S-DpZ5dR1aF+VHVETKG20OQ@mail.gmail.com/
Original-idea-by: Shawn Pearce <spearce@spearce.org>
Based-on-patch-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Cirrus CI jobs started breaking because we specified a version of
FreeBSD that is no longer available, which has been corrected.
* cb/use-freebsd-13-2-at-cirrus-ci:
ci: update FreeBSD cirrus job
The Makefile often had to say "-L$(path) -R$(path)", which repeats
the path to the same library directory for link time and runtime.
A Makefile template is used to reduce such repetition.
* jc/make-libpath-template:
Makefile: simplify output of the libpath_template
Makefile: reduce repetitive library paths
More tests that are supposed to pass the leak sanitizer are marked as such.
* rj/test-with-leak-check:
t0080: mark as leak-free
test-lib: check for TEST_PASSES_SANITIZE_LEAK
t6113: mark as leak-free
t5332: mark as leak-free
Prepare existing tests on refs to work better with non-default
backends.
* ps/tests-with-ref-files-backend:
t: mark tests regarding git-pack-refs(1) to be backend specific
t5526: break test submodule differently
t1419: mark test suite as files-backend specific
t1302: make tests more robust with new extensions
t1301: mark test for `core.sharedRepository` as reffiles specific
t1300: make tests more robust with non-default ref backends
"git diff --no-index file1 file2" segfaulted while invoking the
external diff driver, which has been corrected.
* jk/diff-external-with-no-index:
diff: handle NULL meta-info when spawning external diff
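For illustration, the kind of invocation affected looks like this (the
external diff command is hypothetical):

    GIT_EXTERNAL_DIFF=my-difftool git diff --no-index file1 file2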
The write codepath for the reftable data learned to honor the
core.fsync configuration.
* jc/reftable-core-fsync:
reftable/stack: fsync "tables.list" during compaction
reftable: honor core.fsync
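For example, assuming the existing "reference" component of core.fsync
also covers reftable writes, a repository could opt in with something
like:

    git config core.fsync reference
    git config core.fsyncMethod fsync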
Similar to how 2731d048 (Makefile: say the default target upfront.,
2005-12-01) added the default target to the very beginning of the
main Makefile to prevent a random rule that happens to be defined
first in an included makefile fragment from becoming the default
target, protect this Makefile the same way.
This started to matter as we started to include config.mak.uname
and that included makefile fragment does more than define Make
macros, unfortunately.
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation for the --exclude-per-directory option marked it
as deprecated, confusing readers into thinking there may be a
plan to remove it in the future, which was not our intention.
* jc/ls-files-doc-update:
ls-files: avoid the verb "deprecate" for individual options
Fetching via protocol v0 over Smart HTTP transport sometimes failed
to correctly auto-follow tags.
* jk/fetch-auto-tag-following-fix:
transport-helper: re-examine object dir after fetching
The labels on conflict markers for the common ancestor, our version,
and the other version are made available to custom 3-way merge drivers
via the %S, %X, and %Y placeholders.
* ad/custom-merge-placeholder-for-symbolic-pathnames:
merge-ll: expose revision names to custom drivers
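For illustration, a custom driver could pick up the new placeholders
like this (the driver name "mymerge" and its script are hypothetical;
%S, %X, and %Y expand to the ancestor, ours, and theirs labels):

    git config merge.mymerge.name "3-way merge driver with labels"
    git config merge.mymerge.driver "my-merge-script %O %A %B %L %P %S %X %Y"
    echo "*.cfg merge=mymerge" >>.gitattributes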
Tests on ref API are moved around to prepare for reftable.
* jc/reffiles-tests:
t5312: move reffiles specific tests to t0601
t4202: move reffiles specific tests to t0600
t3903: make drop stash test ref backend agnostic
t1503: move reffiles specific tests to t0600
t1415: move reffiles specific tests to t0601
t1410: move reffiles specific tests to t0600
t1406: move reffiles specific tests to t0600
t1405: move reffiles specific tests to t0601
t1404: move reffiles specific tests to t0600
t1414: convert test to use Git commands instead of writing refs manually
remove REFFILES prerequisite for some tests in t1405 and t2017
t3210: move to t0601
The completion script (in contrib/) learned more options that can
be used with "git log".
* pb/complete-log-more:
completion: complete missing 'git log' options
completion: complete --encoding
completion: complete --patch-with-raw
completion: complete missing rev-list options
The call to index_file_exists() in the loop in expand_to_path() passes
the wrong string length. Let's fix that.
The loop in expand_to_path() searches the name-hash for each
sub-directory prefix in the provided pathname. That is, it searches
for "dir1/", then "dir1/dir2/", then "dir1/dir2/dir3/", and so on until
it finds a cache-entry representing a sparse directory.
The code creates "strbuf path_mutable" to contain the working pathname
and modifies the buffer in-place by temporarily replacing the character
following each successive "/" with NUL for the duration of the call to
index_file_exists().
It does not, however, update strbuf.len during this substitution.
Pass the length of the patched prefix path instead.
Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Building with NO_CURL is currently broken since imap-send.c uses things
defined in "strbuf.h" wihtout including it.
The inclusion of that header was removed in eea0e59ffb (treewide: remove
unnecessary includes in source files, 2023-12-23), which failed to
notice that "strbuf.h" was transitively included in imap-send.c via
"http.h", but only if USE_CURL_FOR_IMAP_SEND is defined. Add back the
missing include. Note that it was explicitly added in 3307f7dde2
(imap-send: include strbuf.h, 2023-05-17) after a similar breakage in
ba3d1c73da (treewide: remove unnecessary cache.h includes, 2023-02-24) -
see the thread starting at [1].
It can be verified by inspection that this is the only case where a
header we include is dependent on a Makefile knob in the files modified
in eea0e59ffb.
[1] https://lore.kernel.org/git/20230517070632.71884-1-list@eworm.de/
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
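The breakage can be reproduced with a curl-less build, e.g. something
along these lines (using the usual value for the knob):

    make NO_CURL=YesPlease imap-send.o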
git-index-pack has a --strict option that can take an optional argument
providing a list of fsck issues whose severity should be changed.
--fsck-objects does not have such a facility, which would be useful if
one would like to be more lenient or strict about data integrity in a
repository.
Like --strict, allow --fsck-objects to also take a list of fsck msgs to
change the severity.
Remove the "For internal use only" note for --fsck-objects, and document
the option. This won't often be used by the normal end user, but it
turns out it is useful for Git forges like GitLab.
Reviewed-by: Christian Couder <christian.couder@gmail.com>
Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
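A hypothetical invocation, assuming a comma-separated list in the same
form as --strict takes (message IDs, severities, and the pack name are
illustrative):

    git index-pack --fsck-objects=missingEmail=ignore,badTagName=error incoming.pack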
5d477a334a (fsck (receive-pack): allow demoting errors to warnings,
2015-06-22) allowed a list of fsck msgs to downgrade to be passed to
--strict. However, this is a hidden argument that was neither documented
nor tested. Though it is true that most users would not call this option
directly (nor use index-pack for that matter), it is still useful to
document and test this feature.
Reviewed-by: Christian Couder <christian.couder@gmail.com>
Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a platform lacks the support to specify the dynamic library path,
there is no suitable value to give to the CC_LD_DYNPATH variable.
Allow it to be set to an empty string to signal that there is no need
to add the usual -Wl,-rpath, or -R or whatever option followed
by a directory name. This way,
$(call libpath_template,$(SOMELIBDIR))
would expand to just a single mention of that directory, i.e.
-L$(SOMELIBDIR)
when CC_LD_DYNPATH is set to an empty string (or a "-L", which
would have repeated the same "-L$(SOMELIBDIR)" twice without any
ill effect).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
FreeBSD 12 is EOL and no longer available, causing errors in this job.
Upgrade to 13.2, which is the oldest release that is still supported and
should keep this job working for at least another 4 months.
This will be upgraded again once 13.3 is released, to avoid further
surprises.
The original report [*] of this problem mentions an error message
"Not enough compute credits to prioritize tasks!". It seems to be
just a reminder that the credit allocated for the Free Tier by Cirrus
is all used up, which might result in additional delays in getting a
result.
[*] https://lore.kernel.org/git/d2d7da84-e2a3-a7b2-3f95-c8d53ad4dd5f@gmx.de/
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We decide on the set of unit tests to run by asking make to expand the
wildcard "t/unit-tests/bin/*". One unfortunate outcome of this is that
we'll run anything in that directory, even if it is leftover cruft from
a previous build. This isn't _quite_ as bad as it sounds, since in
theory the unit test executables are self-contained (so if they passed
before, they'll pass even though they now have nothing to do with the
checked out version of Git). But at the very least it's wasteful, and if
they _do_ fail it can be quite confusing to understand why they are
being run at all.
This wildcarding presumably came from our handling of the regular
shell-script tests, which use $(wildcard t[0-9][0-9][0-9][0-9]-*.sh).
But the difference there is that those are actual tracked files. So if
you checkout a different commit, they'll go away. Whereas the contents
of unit-tests/bin are ignored (so not only do they stick around, but you
are not even warned of the stale files via "git status").
This patch fixes the situation by looking for the actual unit-test
source files and then massaging those names into the final executable
names. This has two additional benefits:
1. It will notice if we failed to build one or more unit-tests for
some reason (whereas the current code just runs whatever made it to
the bin/ directory).
2. The wildcard should avoid other build cruft, like the pdb files we
worked around in 0df903d402 (unit-tests: do not mistake `.pdb`
files for being executable, 2023-09-25).
Our new wildcard does make an assumption that unit tests are built from
C sources. It would be a bit cleaner if we consulted UNIT_TEST_PROGRAMS
from the top-level Makefile. But doing so is tricky unless we reorganize
that Makefile to split the source file lists into include-able subfiles.
That might be worth doing in general, but in the meantime, the
assumption made by the wildcard here seems reasonable.
Note that we do need to include config.mak.uname either way, though, as
we need the value of $(X) to compute the correct executable names (which
would be true even if we had access to the top-level's UNIT_TEST_PROGRAMS
variable).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we take a library package we depend on (e.g., LIBPCRE) from a
directory other than the default location of the system, we add the
same directory twice on the linker command line, like so:
EXTLIBS += -L$(LIBPCREDIR)/$(lib) $(CC_LD_DYNPATH)$(LIBPCREDIR)/$(lib)
Introduce a template "libpath_template" that takes the path to the
directory, which can be used like so:
EXTLIBS += $(call libpath_template,$(LIBPCREDIR)/$(lib))
and expand it into the "-L$(DIR) $(CC_LD_DYNPATH)$(DIR)" form.
Hopefully we can reduce the chance of typos this way.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since c6d3cce6f3 (pipe_command(): handle ENOSPC when writing to a
pipe, 2022-08-17), one `write()` call that results in an `errno` value
`ENOSPC` (which typically indicates out of disk space, which makes
little sense in the context of a pipe) is treated the same as `EAGAIN`.
However, contrary to expectations, as diagnosed in
https://github.com/python/cpython/issues/101881#issuecomment-1428667015,
when writing to a non-blocking pipe on Windows, an `errno` value of
`ENOSPC` means something else: the write _fails_. Completely. Because
more data was provided than the internal pipe buffer can handle.
Somewhat surprisingly, considering that `write()` is allowed to write less
than the specified amount, e.g. by writing only as much as fits in that
buffer. But it doesn't; it writes no bytes at all in that instance.
Let's handle this by manually detecting when an `ENOSPC` indicates that
a pipe's buffer is smaller than what needs to be written, and re-try
using the pipe's buffer size as `size` parameter.
It would be plausible to try writing the entire buffer in a loop,
feeding pipe buffer-sized chunks, but experiments show that trying to
write more than one buffer-sized chunk right after that will immediately
fail because the buffer is unlikely to be drained as fast as `write()`
could write again. And the whole point of a non-blocking pipe is to be
non-blocking.
Which means that the logic that determines the pipe's buffer size
unfortunately has to be run potentially many times when writing large
amounts of data to a non-blocking pipe, as there is no elegant way to
cache that information between `write()` calls. It's the best we can do,
though, so it has to be good enough.
This fix is required to let t3701.60 (handle very large filtered diff)
pass with the MSYS2 runtime provided by the MSYS2 project: Without this
patch, the failed write would result in an infinite loop. This patch is
not required with Git for Windows' variant of the MSYS2 runtime only
because Git for Windows added an ugly work-around specifically to avoid
a hang in that test case.
The diff is slightly chatty because it extends an already-existing
conditional that special-cases a _different_ `errno` value for pipes,
and because this patch needs to account for the fact that
`_get_osfhandle()` potentially overwrites `errno`.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "disable repository discovery of a bare repository" check,
triggered by setting the safe.bareRepository configuration variable to
'explicit', has been loosened to exclude the ".git/" directory inside
a non-bare repository from the check. So you can now do "cd .git &&
git cmd" to run a Git command that works on a bare repository without
explicitly specifying $GIT_DIR.
* kl/allow-working-in-dot-git-in-non-bare-repository:
setup: allow cwd=.git w/ bareRepository=explicit
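For example (the repository path is illustrative):

    git config --global safe.bareRepository explicit
    cd ~/src/my-project/.git
    git rev-parse HEAD    # previously refused, now allowed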
"git archive --remote=<remote>" learned to talk over the smart
http (aka stateless) transport.
* jx/remote-archive-over-smart-http:
transport-helper: call do_take_over() in process_connect
transport-helper: call do_take_over() in connect_helper
http-backend: new rpc-service for git-upload-archive
transport-helper: protocol v2 supports upload-archive
remote-curl: supports git-upload-archive service
transport-helper: no connection restriction in connect_helper
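A minimal sketch of what this enables, assuming the server side has the
new upload-archive service enabled (the URL is illustrative):

    git archive --remote=https://example.com/project.git HEAD | tar -xf -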