From 5632baf238c09b2556c92897bcf0d1fc236499ba Mon Sep 17 00:00:00 2001
From: Jeff King
Date: Tue, 30 Oct 2018 19:18:51 -0400
Subject: [PATCH 1/3] t1450: check large blob in trailing-garbage test

Commit cce044df7f (fsck: detect trailing garbage in all object types,
2017-01-13) added two tests of trailing garbage in a loose object file:
one with a commit and one with a blob. The point of having two is that
blobs would follow a different code path that streamed the contents,
instead of loading it into a buffer as usual.

At the time, merely being a blob was enough to trigger the streaming
code path. But since 7ac4f3a007 (fsck: actually fsck blob data,
2018-05-02), we now only stream blobs that are actually large. So since
then, the streaming code path is not tested at all for this case.

We can restore the original intent of the test by tweaking
core.bigFileThreshold to make our small blob seem large. There's no
easy way to externally verify that we followed the streaming code path,
but I did check before/after using a temporary debug statement.

Signed-off-by: Jeff King
Signed-off-by: Junio C Hamano
---
 t/t1450-fsck.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/t/t1450-fsck.sh b/t/t1450-fsck.sh
index 33a51c9a67..770d68e44e 100755
--- a/t/t1450-fsck.sh
+++ b/t/t1450-fsck.sh
@@ -636,13 +636,13 @@ test_expect_success 'fsck detects trailing loose garbage (commit)' '
 	test_i18ngrep "garbage.*$commit" out
 '
 
-test_expect_success 'fsck detects trailing loose garbage (blob)' '
+test_expect_success 'fsck detects trailing loose garbage (large blob)' '
 	blob=$(echo trailing | git hash-object -w --stdin) &&
 	file=$(sha1_file $blob) &&
 	test_when_finished "remove_object $blob" &&
 	chmod +w "$file" &&
 	echo garbage >>"$file" &&
-	test_must_fail git fsck 2>out &&
+	test_must_fail git -c core.bigfilethreshold=5 fsck 2>out &&
 	test_i18ngrep "garbage.*$blob" out
 '

From ccdc4819d56202d49e35fbfec2960e221576f862 Mon Sep 17 00:00:00 2001
From: Jeff King
Date: Tue, 30 Oct 2018 19:23:12 -0400
Subject: [PATCH 2/3] check_stream_sha1(): handle input underflow
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This commit fixes an infinite loop when fscking large truncated loose
objects.

The check_stream_sha1() function takes an mmap'd loose object buffer
and streams 4k of output at a time, checking its sha1. The loop quits
when we've output enough bytes (we know the size from the object
header), or when zlib tells us anything except Z_OK or Z_BUF_ERROR.
The latter is expected because zlib may run out of room in our 4k
buffer, and that is how it tells us to process the output and loop
again.

But Z_BUF_ERROR also covers another case: one in which zlib cannot make
forward progress because it needs more _input_. This should never
happen in this loop, because though we're streaming the output, we have
the entire deflated input available in the mmap'd buffer. But since we
don't check this case, we'll just loop infinitely if we do see a
truncated object, thinking that zlib is asking for more output space.

It's tempting to fix this by checking stream->avail_in as part of the
loop condition (and quitting if all of our bytes have been consumed).
But that assumes that once zlib has consumed the input, there is
nothing left to do. That's not necessarily the case: it may have read
our input into its internal state, but still have bytes to output.
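
To make the two Z_BUF_ERROR cases concrete, here is a minimal,
standalone sketch of such a streaming inflate loop. It is illustrative
only: it uses raw zlib rather than git's git_zstream wrapper, and the
function and variable names are invented for the example. It accepts
Z_BUF_ERROR only when the previous round really did fill the output
buffer, which is the approach described below:

    /*
     * Illustration only: inflate a complete in-memory zlib stream
     * while emitting output 4k at a time, the way a streaming
     * checker would.
     */
    #include <string.h>
    #include <zlib.h>

    static int stream_inflate(unsigned char *in, size_t in_len,
                              size_t expect)
    {
        unsigned char buf[4096];
        z_stream zs;
        size_t total = 0;
        int status;

        memset(&zs, 0, sizeof(zs));
        zs.next_in = in;
        zs.avail_in = in_len;
        if (inflateInit(&zs) != Z_OK)
            return -1;

        do {
            zs.next_out = buf;
            zs.avail_out = sizeof(buf);
            status = inflate(&zs, Z_NO_FLUSH);
            total += sizeof(buf) - zs.avail_out;
            /* ...hash or otherwise consume buf here... */
        } while (total < expect &&
                 (status == Z_OK ||
                  /*
                   * Z_BUF_ERROR is benign only if we really ran out
                   * of output space; if avail_out is still nonzero,
                   * zlib wants more _input_, which a truncated
                   * object will never provide.
                   */
                  (status == Z_BUF_ERROR && !zs.avail_out)));

        inflateEnd(&zs);
        return (status == Z_STREAM_END && total == expect) ? 0 : -1;
    }

With a truncated input, inflate() eventually returns Z_BUF_ERROR
without having filled the output buffer, so the !zs.avail_out test ends
the loop instead of spinning forever.
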
Instead, let's continue on Z_BUF_ERROR only when we see the case we're
expecting: the previous round filled our output buffer completely. If
it didn't (and we still saw Z_BUF_ERROR), we know something is wrong
and should break out of the loop.

The bug comes from commit f6371f9210 (sha1_file: add read_loose_object()
function, 2017-01-13), which reimplemented some of the existing loose
object functions. So it's worth checking if this bug was inherited from
any of those. The answer seems to be no. The two obvious candidates are
both OK:

  1. unpack_sha1_rest(); this doesn't need to loop on Z_BUF_ERROR at
     all, since it allocates the expected output buffer in advance
     (which we can't do since we're explicitly streaming here)

  2. check_object_signature(); the streaming path relies on the
     istream interface, which uses read_istream_loose() for this case.
     That function uses a similar "is our output buffer full" check
     with Z_BUF_ERROR (which is where I stole it from for this patch!)

Reported-by: Ævar Arnfjörð Bjarmason
Helped-by: Junio C Hamano
Signed-off-by: Jeff King
Signed-off-by: Junio C Hamano
---
 sha1_file.c     |  3 ++-
 t/t1450-fsck.sh | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/sha1_file.c b/sha1_file.c
index d77b915db6..7d016184a3 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -3847,7 +3847,8 @@ static int check_stream_sha1(git_zstream *stream,
 	 * see the comment in unpack_sha1_rest for details.
 	 */
 	while (total_read <= size &&
-	       (status == Z_OK || status == Z_BUF_ERROR)) {
+	       (status == Z_OK ||
+		(status == Z_BUF_ERROR && !stream->avail_out))) {
 		stream->next_out = buf;
 		stream->avail_out = sizeof(buf);
 		if (size - total_read < stream->avail_out)
diff --git a/t/t1450-fsck.sh b/t/t1450-fsck.sh
index 770d68e44e..1f245bbc5f 100755
--- a/t/t1450-fsck.sh
+++ b/t/t1450-fsck.sh
@@ -646,6 +646,25 @@ test_expect_success 'fsck detects trailing loose garbage (large blob)' '
 	test_i18ngrep "garbage.*$blob" out
 '
 
+test_expect_success 'fsck detects truncated loose object' '
+	# make it big enough that we know we will truncate in the data
+	# portion, not the header
+	test-genrandom truncate 4096 >file &&
+	blob=$(git hash-object -w file) &&
+	file=$(sha1_file $blob) &&
+	test_when_finished "remove_object $blob" &&
+	test_copy_bytes 1024 <"$file" >tmp &&
+	rm "$file" &&
+	mv -f tmp "$file" &&
+
+	# check both regular and streaming code paths
+	test_must_fail git fsck 2>out &&
+	test_i18ngrep corrupt.*$blob out &&
+
+	test_must_fail git -c core.bigfilethreshold=128 fsck 2>out &&
+	test_i18ngrep corrupt.*$blob out
+'
+
 # for each of type, we have one version which is referenced by another object
 # (and so while unreachable, not dangling), and another variant which really is
 # dangling.

From 98f425b453870cdb20b4bf8daa29f52af8e59866 Mon Sep 17 00:00:00 2001
From: Jeff King
Date: Tue, 30 Oct 2018 19:23:38 -0400
Subject: [PATCH 3/3] cat-file: handle streaming failures consistently

There are three ways to convince cat-file to stream a blob:

  - cat-file -p $blob
  - cat-file blob $blob
  - echo $batch | cat-file --batch

In the first two, we simply exit with the error code of
stream_blob_to_fd(). That means that an error will cause us to exit
with "-1" (which we try to avoid) without printing any kind of error
message (which is confusing to the user).

Instead, let's match the third case, which calls die() on an error.
Unfortunately we cannot be more specific, as stream_blob_to_fd() does
not tell us whether the problem was on reading (e.g., a corrupt object)
or on writing (e.g., ENOSPC).
That might be an opportunity for future work, but for now we will at
least exit with a sane message and exit code.

Signed-off-by: Jeff King
Signed-off-by: Junio C Hamano
---
 builtin/cat-file.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/builtin/cat-file.c b/builtin/cat-file.c
index 30383e9eb4..f117328637 100644
--- a/builtin/cat-file.c
+++ b/builtin/cat-file.c
@@ -45,6 +45,13 @@ static int filter_object(const char *path, unsigned mode,
 	return 0;
 }
 
+static int stream_blob(const struct object_id *oid)
+{
+	if (stream_blob_to_fd(1, oid, NULL, 0))
+		die("unable to stream %s to stdout", oid_to_hex(oid));
+	return 0;
+}
+
 static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
 			int unknown_type)
 {
@@ -124,7 +131,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
 	}
 
 	if (type == OBJ_BLOB)
-		return stream_blob_to_fd(1, &oid, NULL, 0);
+		return stream_blob(&oid);
 	buf = read_sha1_file(oid.hash, &type, &size);
 	if (!buf)
 		die("Cannot read object %s", obj_name);
@@ -146,7 +153,7 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
 			oidcpy(&blob_oid, &oid);
 
 		if (sha1_object_info(blob_oid.hash, NULL) == OBJ_BLOB)
-			return stream_blob_to_fd(1, &blob_oid, NULL, 0);
+			return stream_blob(&blob_oid);
 		/*
 		 * we attempted to dereference a tag to a blob
 		 * and failed; there may be new dereference
@@ -306,8 +313,9 @@ static void print_object_or_die(struct batch_options *opt, struct expand_data *d
 			die("BUG: invalid cmdmode: %c", opt->cmdmode);
 		batch_write(opt, contents, size);
 		free(contents);
-	} else if (stream_blob_to_fd(1, oid, NULL, 0) < 0)
-		die("unable to stream %s to stdout", oid_to_hex(oid));
+	} else {
+		stream_blob(oid);
+	}
 	} else {
 		enum object_type type;
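
As an aside, the effect of the new stream_blob() wrapper is easy to
demonstrate in isolation. The sketch below is not git code: die() and
stream_object() here are simplified stand-ins, and the object name is
made up. It only shows the pattern the patch applies: funnel the
fallible call through one helper so every caller prints a message and
exits with die()'s usual code of 128, instead of letting a bare -1
reach exit() and become status 255 with no message at all.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for git's die(): print a message, exit with code 128. */
    static void die(const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        fputs("fatal: ", stderr);
        vfprintf(stderr, fmt, ap);
        fputc('\n', stderr);
        va_end(ap);
        exit(128);
    }

    /* Stand-in for stream_blob_to_fd(); pretend the object is bad. */
    static int stream_object(const char *name)
    {
        (void)name;
        return -1;
    }

    /*
     * The pattern from the patch: one wrapper reports the failure, so
     * callers never propagate a bare -1 as the program's exit code.
     */
    static int stream_or_die(const char *name)
    {
        if (stream_object(name))
            die("unable to stream %s to stdout", name);
        return 0;
    }

    int main(void)
    {
        return stream_or_die("1234abcd");
    }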