From 63cfdd987b1dfbf97486f0f884380faee0ae25d0 Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Wed, 4 Sep 2019 11:27:30 +0530
Subject: [PATCH 416/449] tests: fix spurious failure of
 bug-1402841.t-mt-dir-scan-race.t

Upstream patch: https://review.gluster.org/23352

Problem:
Since commit 600ba94183333c4af9b4a09616690994fd528478, shd starts
healing as soon as it is toggled from disabled to enabled. This was
causing the following line in the .t to fail on a 'fast' machine
(always on my laptop and sometimes on the Jenkins slaves):

EXPECT_NOT "^0$" get_pending_heal_count $V0

because, by the time shd was disabled, the heal had already completed.
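
For context, get_pending_heal_count is a helper from tests/volume.rc
that sums the per-brick "Number of entries" values reported by
'gluster volume heal <vol> info'. A minimal sketch of such a helper
(paraphrased; the exact awk field index is an assumption, not a quote
of volume.rc):

    function get_pending_heal_count {
        local vol=$1
        # Sum "Number of entries: N" across all bricks of the volume.
        gluster volume heal $vol info | grep "Number of entries" | \
            awk '{sum += $4} END {print sum}'
    }

The EXPECT_NOT "^0$" check therefore passes only while at least one
entry is still waiting to be healed.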

Fix:
Increase the number of files to be healed and make it a variable,
FILE_COUNT, so that it can be bumped up further should the machines
become even faster. Also create pending metadata heals to increase the
time taken to heal each file.
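
Note on the chmod: with one brick down, the write dirties each file's
data and the mode change dirties its metadata, so both must be healed
once the brick returns. The heal-info entry count stays at FILE_COUNT,
but healing each file now takes longer. A minimal sketch of the pattern
used by the test below (brick ${V0}1 is the one killed):

    # Brick ${V0}1 is down; every change is marked pending against it.
    for i in `seq 1 $FILE_COUNT`; do
        echo hello > $M0/file$i    # data change -> pending data heal
        chmod -x $M0/file$i        # mode change -> pending metadata heal
    done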

>fixes: bz#1748744
>Change-Id: I5a26b08e45b8c19bce3c01ce67bdcc28ed48198d
Signed-off-by: Ravishankar N <ravishankar@redhat.com>

BUG: 1844359
Change-Id: Ie3676c6c2c27e7574b958d2eaac23801dfaed3a9
Reviewed-on: https://code.engineering.redhat.com/gerrit/202481
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
 tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t b/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
index 6351ba2..a1b9a85 100755
--- a/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
+++ b/tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t
@@ -3,6 +3,8 @@
 . $(dirname $0)/../../volume.rc
 cleanup;
 
+FILE_COUNT=500
+
 TEST glusterd
 TEST pidof glusterd
 TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
@@ -11,15 +13,14 @@ TEST $CLI volume set $V0 cluster.shd-wait-qlength 100
 TEST $CLI volume start $V0
 
 TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
-touch $M0/file{1..200}
-
+for i in `seq 1 $FILE_COUNT`; do touch $M0/file$i; done
 TEST kill_brick $V0 $H0 $B0/${V0}1
-for i in {1..200}; do echo hello>$M0/file$i; done
+for i in `seq 1 $FILE_COUNT`; do echo hello>$M0/file$i; chmod -x $M0/file$i; done
 TEST $CLI volume start $V0 force
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}1
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 1
 
-EXPECT "200" get_pending_heal_count $V0
+EXPECT "$FILE_COUNT" get_pending_heal_count $V0
 TEST $CLI volume set $V0 self-heal-daemon on
 
 EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
-- 
1.8.3.1
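
To verify the fix locally: glusterfs .t tests are TAP scripts, so with
a built source tree and root privileges the single test can typically
be run via prove from the repository root:

    prove -vf tests/bugs/core/bug-1402841.t-mt-dir-scan-race.t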