In directory rename detection (when a directory is removed on one side of history and the other side adds new files to that directory), we work to find where the greatest number of files within that directory were renamed to, so that the new files can be moved with the majority of the files.

Naively, we can do this by detecting renames for *all* files within the removed/renamed directory, looking at all the destination directories where files from that directory were moved, and, if there is more than one such directory, taking the one with the greatest number of files as the directory the old directory was renamed to. However, sometimes there are enough renames from exact rename detection or basename-guided rename detection that we already have enough information to determine the majority winner. Add a function meant to compute whether particular renames are still needed based on this majority-rules check. The next several commits will then add the necessary infrastructure to get the information we need to compute which additional rename sources we can skip.

An important side note for future further optimization:

There is a possible improvement to this optimization that I have not yet attempted and that will not be included in this series of patches: we could first check whether exact renames provide enough information for us to determine directory renames, and avoid doing basename-guided rename detection on some or all of the RELEVANT_LOCATION files within those directories. In effect, this variant would mean calling handle_early_known_dir_renames() both after exact rename detection and again after basename-guided rename detection, though it would also mean decrementing the number of "unknown" renames for each rename we found from basename-guided rename detection.

Adding this additional check for skippable renames right after exact rename detection might turn out to be valuable, especially for partial clones where it might allow us to avoid downloading certain source files entirely. However, this particular optimization was actually the last one I did in the original implementation order, and by the time I implemented this idea, every testcase I had was sufficiently fast that further optimization was unwarranted. If future testcases arise that tax rename detection more heavily (or perhaps partial clones can benefit from avoiding loading more objects), it may be worth implementing this more involved variant.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
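
To make the majority-rules idea above concrete, here is a minimal C sketch. It is not the function added by this patch; the struct and function names (dest_count, majority_already_decided) are invented for illustration and assume we have already tallied, for one removed directory, how many renames landed in each candidate destination and how many renames are still undetected:

    /* All names below are hypothetical; they are not part of this patch. */
    #include <stddef.h>

    struct dest_count {
            const char *dest_dir;    /* candidate destination directory */
            unsigned int renames;    /* renames already found into it */
    };

    /*
     * Return 1 if the leading destination directory already has so many
     * renames that even sending every not-yet-detected ("unknown") rename
     * to the runner-up could not change the winner; in that case the
     * remaining rename sources need not be examined for this purpose.
     */
    static int majority_already_decided(const struct dest_count *counts,
                                        size_t nr,
                                        unsigned int unknown_renames)
    {
            unsigned int best = 0, second = 0;
            size_t i;

            for (i = 0; i < nr; i++) {
                    if (counts[i].renames > best) {
                            second = best;
                            best = counts[i].renames;
                    } else if (counts[i].renames > second) {
                            second = counts[i].renames;
                    }
            }

            return best > second + unknown_renames;
    }

Whatever shape the real function takes, the key property is the final comparison: once the gap between the leading destination and any rival exceeds the number of renames still unknown, detecting further renames cannot change which directory wins.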

