21 Commits (0f8a02c640397ce22080e1c2eaa5029388638589)

Nicolas Pitre | 30ae47b4cc | 15 years ago

remove ARM and Mozilla SHA1 implementations

They are both slower than the new BLK_SHA1 implementation, so it is pointless to keep them around.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Nicolas Pitre | e9c5dcd131 | 15 years ago

block-sha1: guard gcc extensions with __GNUC__

With this, the code should now be portable to any C compiler.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
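
As an illustration of the pattern this commit describes, here is a minimal sketch (not the actual block-sha1 source) of keeping a gcc-specific construct behind `__GNUC__` with a plain ISO C fallback; the helper name `rol32` and the asm fast path are assumptions for the example.

```c
/* Illustrative only: a rotate-left helper with a gcc-specific inline-asm
 * fast path, guarded so any other C compiler builds the portable version. */
#include <stdint.h>
#include <stdio.h>

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
static inline uint32_t rol32(uint32_t x, unsigned n)
{
	__asm__("roll %%cl,%0" : "=r" (x) : "0" (x), "c" (n));
	return x;
}
#else
/* Portable fallback: any C compiler can build this. */
static inline uint32_t rol32(uint32_t x, unsigned n)
{
	return (x << n) | (x >> ((32 - n) & 31));
}
#endif

int main(void)
{
	printf("%08x\n", rol32(0x12345678u, 5));
	return 0;
}
```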

Nicolas Pitre | 51ea55190b | 15 years ago

make sure byte swapping is optimal for git

We rely on ntohl() and htonl() to perform byte swapping in many places. However, some platforms have libraries providing really poor implementations of those which might cause significant performance issues, especially with the block-sha1 code.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
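
The concern is that a library ntohl()/htonl() may be slow. A rough sketch of the remedy, with illustrative names rather than Git's actual compat/bswap.h contents, is to supply a known-good swap and let fast targets map it onto a single instruction:

```c
/* Sketch of the idea: provide our own byte swap and override a possibly
 * slow library ntohl()/htonl(). Helper names are illustrative. */
#include <stdint.h>
#include <stdio.h>

static inline uint32_t my_swab32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}

#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
/* On a known little-endian target, a single bswap (via the compiler
 * builtin) beats a poor library ntohl(). */
#undef ntohl
#undef htonl
#define ntohl(x) __builtin_bswap32((uint32_t)(x))
#define htonl(x) __builtin_bswap32((uint32_t)(x))
#endif

int main(void)
{
	printf("%08x\n", my_swab32(0x11223344u)); /* prints 44332211 */
	return 0;
}
```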

Nicolas Pitre | d5f6a96fa4 | 15 years ago

block-sha1: make the size member first in the context struct

This is a 64-bit value, hence having it first provides a better alignment.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
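
A small sketch of the alignment point, using a hypothetical context layout rather than the real blk_SHA_CTX: placing the 64-bit size first keeps it naturally aligned instead of forcing padding in front of it.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct ctx_size_last {
	uint32_t H[5];   /* 20 bytes */
	uint64_t size;   /* usually preceded by 4 bytes of padding */
};

struct ctx_size_first {
	uint64_t size;   /* 8-byte member first: naturally aligned at offset 0 */
	uint32_t H[5];
};

int main(void)
{
	/* On a typical LP64 ABI this prints 24 and 0: putting the 64-bit
	 * member first avoids the padding hole in front of it. */
	printf("offset of size when last:  %zu\n", offsetof(struct ctx_size_last, size));
	printf("offset of size when first: %zu\n", offsetof(struct ctx_size_first, size));
	return 0;
}
```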

Brandon Casey | a12218572f | 15 years ago

block-sha1/sha1.c: silence compiler complaints by casting void * to char *

Some compilers produce errors when arithmetic is attempted on pointers to void. We want computations done on byte addresses, so cast them to char * to work them around.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
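
A minimal sketch of the portability issue being silenced: arithmetic on `void *` is a gcc extension, so byte math goes through `char *`.

```c
#include <stdio.h>

static void *advance(void *data, unsigned long nbytes)
{
	/* (char *)data + nbytes is portable; data + nbytes is not ISO C. */
	return (char *)data + nbytes;
}

int main(void)
{
	char buf[64];
	printf("%p -> %p\n", (void *)buf, advance(buf, 16));
	return 0;
}
```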

Nicolas Pitre | ee7dc310af | 15 years ago

block-sha1: more good unaligned memory access candidates

In addition to X86, PowerPC and S390 are capable of unaligned memory accesses.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Nicolas Pitre | 660231aa97 | 15 years ago

block-sha1: support for architectures with memory alignment restrictions

This is needed on architectures with poor or non-existent unaligned memory support and/or no fast byte swap instruction (such as ARM) by using byte accesses to memory and shifting the result together. This also makes the code portable, therefore the byte access methods are the defaults. Any architecture that properly supports unaligned word accesses in hardware simply has to enable the alternative methods.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
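
The two access strategies described above, sketched with illustrative names (HASH_WORD_ACCESS is a hypothetical opt-in macro, not necessarily the one used in block-sha1/sha1.c):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Default, fully portable path: assemble the big-endian word one byte
 * at a time, so alignment never matters and no byte swap is needed. */
static inline uint32_t get_be32_bytewise(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) |
	       ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] <<  8) |
	       ((uint32_t)p[3]);
}

#ifdef HASH_WORD_ACCESS  /* hypothetical opt-in for unaligned-friendly CPUs */
/* Alternative path: one (possibly unaligned) 32-bit load plus a swap. */
static inline uint32_t get_be32_word(const unsigned char *p)
{
	uint32_t v;
	memcpy(&v, p, sizeof(v));     /* compiles to a plain load where legal */
	return __builtin_bswap32(v);  /* assumes little-endian + gcc/clang */
}
#endif

int main(void)
{
	const unsigned char buf[] = { 0xde, 0xad, 0xbe, 0xef, 0x00 };
	printf("%08x\n", get_be32_bytewise(buf + 1)); /* misaligned, still fine */
	return 0;
}
```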

Nicolas Pitre | dc52fd2973 | 15 years ago

block-sha1: split the different "hacks" to be individually selected

This is to make it easier for them to be selected individually depending on the architecture instead of the other way around i.e. having each architecture select a list of hacks up front. That makes for clearer documentation as well.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
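
One way the "individually selected hacks" structure can look, sketched with invented macro names purely for illustration:

```c
/* Each hack gets its own #if block, so a new architecture only has to
 * appear in the conditions that actually apply to it. */

/* Hack 1: the CPU handles unaligned 32-bit loads well. */
#if defined(__i386__) || defined(__x86_64__) || defined(__ppc__) || defined(__s390__)
#define HASH_UNALIGNED_OK 1
#endif

/* Hack 2: a cheap rotate is reachable via gcc inline asm. */
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
#define HASH_FAST_ROTATE 1
#endif

/* Hack 3: few general-purpose registers, keep the work array in memory. */
#if defined(__i386__) || defined(__x86_64__)
#define HASH_SMALL_REGISTER_SET 1
#endif

#include <stdio.h>
int main(void)
{
#ifdef HASH_UNALIGNED_OK
	puts("unaligned word access enabled");
#endif
#ifdef HASH_FAST_ROTATE
	puts("asm rotate enabled");
#endif
#ifdef HASH_SMALL_REGISTER_SET
	puts("small register set workaround enabled");
#endif
	return 0;
}
```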

Nicolas Pitre | 30ba0de726 | 15 years ago

block-sha1: move code around

Move the code around so specific architecture hacks are defined first. Also make one line comments actually one line. No code change.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Linus Torvalds | 926172c5e4 | 15 years ago

block-sha1: improve code on large-register-set machines

For x86 performance (especially in 32-bit mode) I added that hack to write the SHA1 internal temporary hash using a volatile pointer, in order to get gcc to not try to cache the array contents. Because gcc will do all the wrong things, and then spill things in insane random ways.

But on architectures like PPC, where you have 32 registers, it's actually perfectly reasonable to put the whole temporary array[] into the register set, and gcc can do so.

So make the 'volatile unsigned int *' cast be dependent on a SMALL_REGISTER_SET preprocessor symbol, and enable it (currently) on just x86 and x86-64.

With that, the routine is fairly reasonable even when compared to the hand-scheduled PPC version. Ben Herrenschmidt reports on a G5:

 * Paulus asm version: about 3.67s
 * Yours with no change: about 5.74s
 * Yours without "volatile": about 3.78s

so with this the C version is within about 3% of the asm one.

And add a lot of commentary on what the heck is going on.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
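
A sketch of the SMALL_REGISTER_SET mechanism described above; `setW` and the exact guards here are illustrative, not the verbatim source:

```c
/* On register-starved targets, force the working array to live in memory
 * by storing through a volatile pointer; elsewhere let the compiler keep
 * it in registers. */
#include <stdint.h>

#if defined(__i386__) || defined(__x86_64__)
#define SMALL_REGISTER_SET 1
#endif

#ifdef SMALL_REGISTER_SET
/* The volatile store stops gcc from caching array slots in registers
 * and then spilling them at random points. */
#define setW(arr, i, val) (((volatile uint32_t *)(arr))[(i) & 15] = (val))
#else
#define setW(arr, i, val) ((arr)[(i) & 15] = (val))
#endif

int main(void)
{
	uint32_t W[16] = { 0 };
	setW(W, 17, 0xdeadbeefu);   /* lands in W[1] */
	return (int)(W[1] != 0xdeadbeefu);
}
```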

Linus Torvalds | 66c9c6c0fb | 15 years ago

block-sha1: improved SHA1 hashing

I think I have found a way to avoid the gcc crazyness.

Lookie here:

    #            TIME[s] SPEED[MB/s]
    rfc3174        5.094       119.8
    rfc3174        5.098       119.7
    linus          1.462       417.5
    linusas        2.008         304
    linusas2       1.878         325
    mozilla        5.566       109.6
    mozillaas      5.866       104.1
    openssl        1.609       379.3
    spelvin        1.675       364.5
    spelvina       1.601       381.3
    nettle         1.591       383.6

notice? I outperform all the hand-tuned asm on 32-bit too. By quite a margin, in fact. Now, I didn't try a P4, and it's possible that it won't do that there, but the 32-bit code generation sure looks impressive on my Nehalem box. The magic? I force the stores to the 512-bit hash bucket to be done in order. That seems to help a lot.

The diff is trivial (on top of the "rename registers with cpp" patch), as appended. And it does seem to fix the P4 issues too, although I can obviously (once again) only test Prescott, and only in 64-bit mode:

    #            TIME[s] SPEED[MB/s]
    rfc3174        1.662       36.73
    rfc3174         1.64       37.22
    linus         0.2523       241.9
    linusas       0.4367       139.8
    linusas2      0.4487         136
    mozilla       0.9704        62.9
    mozillaas     0.9399       64.94

that's some really impressive improvement. All from just saying "do the stores in the order I told you to, dammit!" to the compiler.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Linus Torvalds | 30d12d4c16 | 15 years ago

block-sha1: perform register rotation using cpp

Instead of letting the compiler to figure out the optimal way to rotate register usage, explicitly rotate the register names with cpp.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
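
A compact sketch of register rotation with cpp: each unrolled round names the five state words in a rotated order, so no values ever move between variables. The `ROUND` macro below is illustrative (round-1 flavour), not the file's actual macro:

```c
#include <stdint.h>
#include <stdio.h>

#define SHA_ROL(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

/* One SHA-1 round: E += rol(A,5) + f(B,C,D) + W + K; B = rol(B,30). */
#define ROUND(A, B, C, D, E, W) do {                                        \
	(E) += SHA_ROL(A, 5) + ((((C) ^ (D)) & (B)) ^ (D)) + (W) + 0x5a827999u; \
	(B) = SHA_ROL(B, 30);                                               \
} while (0)

int main(void)
{
	uint32_t A = 0x67452301u, B = 0xefcdab89u, C = 0x98badcfeu,
	         D = 0x10325476u, E = 0xc3d2e1f0u;
	uint32_t W0 = 0, W1 = 0, W2 = 0;

	/* The "rotation": each call shifts which variable plays which role. */
	ROUND(A, B, C, D, E, W0);   /* E is the accumulator this round   */
	ROUND(E, A, B, C, D, W1);   /* next round, D takes that role ... */
	ROUND(D, E, A, B, C, W2);   /* ... and so on, with no data moves */

	printf("%08x %08x %08x %08x %08x\n", A, B, C, D, E);
	return 0;
}
```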

Linus Torvalds | 5d5210c35a | 15 years ago

block-sha1: get rid of redundant 'lenW' context

.. and simplify the ctx->size logic. We now count the size in bytes, which means that 'lenW' was always just the low 6 bits of the total size, so we don't carry it around separately any more. And we do the 'size in bits' shift at the end.

Suggested by Nicolas Pitre and linux@horizon.com.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
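
A small sketch of the bookkeeping change, with hypothetical names: only the byte count is stored, the block offset is its low 6 bits, and the bit length needed for SHA-1 padding is computed once at the end.

```c
#include <stdint.h>
#include <stdio.h>

struct demo_ctx {
	uint64_t size;          /* total bytes hashed so far */
	unsigned char buf[64];  /* current partial block */
};

static void demo_update(struct demo_ctx *ctx, uint64_t len)
{
	unsigned int lenW = ctx->size & 63;  /* was a separate field before */
	ctx->size += len;
	(void)lenW;  /* real code uses it to find the fill point in buf[] */
}

int main(void)
{
	struct demo_ctx ctx = { 0 };
	demo_update(&ctx, 100);
	demo_update(&ctx, 30);
	printf("offset in block: %u\n", (unsigned)(ctx.size & 63));  /* 130 & 63 = 2 */
	printf("length in bits:  %llu\n", (unsigned long long)(ctx.size << 3));
	return 0;
}
```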

Linus Torvalds | e869e113c8 | 15 years ago

block-sha1: Use '(B&C)+(D&(B^C))' instead of '(B&C)|(D&(B|C))' in round 3

It's an equivalent expression, but the '+' gives us some freedom in instruction selection (for example, we can use 'lea' rather than 'add'), and associates with the other additions around it to give some minor scheduling freedom.

Suggested-by: linux@horizon.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
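
The equivalence is easy to check mechanically: per bit, when B and C differ the result is D, otherwise it is B, and the two operands of the '+' can never both be set, so no carries occur. A quick exhaustive check:

```c
#include <stdio.h>

int main(void)
{
	unsigned b, c, d, ok = 1;

	for (b = 0; b <= 1; b++)
		for (c = 0; c <= 1; c++)
			for (d = 0; d <= 1; d++) {
				unsigned or_form  = (b & c) | (d & (b | c));
				unsigned add_form = (b & c) + (d & (b ^ c));
				unsigned overlap  = (b & c) & (d & (b ^ c));
				if (or_form != add_form || overlap)
					ok = 0;
			}

	puts(ok ? "equivalent, carry-free" : "mismatch");
	return !ok;
}
```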

Linus Torvalds | ab14c823df | 15 years ago

block-sha1: macroize the rounds a bit further

Avoid repeating the shared parts of the different rounds by adding a macro layer or two. It was already more cpp than C.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Linus Torvalds | 7b5075fcfb | 15 years ago

block-sha1: re-use the temporary array as we calculate the SHA1

The mozilla-SHA1 code did this 80-word array for the 80 iterations. But the SHA1 state is really just 512 bits, and you can actually keep it in a kind of "circular queue" of just 16 words instead. This requires us to do the xor updates as we go along (rather than as a pre-phase), but that's really what we want to do anyway.

This gets me really close to the OpenSSL performance on my Nehalem. Look ma, all C code (ok, there's the rol/ror hack, but that one doesn't strictly even matter on my Nehalem, it's just a local optimization).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
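
A sketch of the 16-word circular queue for the message schedule (standalone illustration, not the file's actual W()/setW() macros):

```c
/* Instead of pre-expanding the schedule into an 80-entry array, keep only
 * 16 words and compute each expanded word on the fly, overwriting the slot
 * it was derived from (index taken modulo 16). */
#include <stdint.h>
#include <stdio.h>

#define ROL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))
#define W(t) (wq[(t) & 15])

int main(void)
{
	uint32_t wq[16];
	int t;

	/* Pretend the first 16 schedule words have been loaded already. */
	for (t = 0; t < 16; t++)
		wq[t] = (uint32_t)(t * 0x01010101u);

	/* Rounds 16..79: each new word replaces the slot of W(t-16). */
	for (t = 16; t < 80; t++)
		W(t) = ROL32(W(t - 3) ^ W(t - 8) ^ W(t - 14) ^ W(t - 16), 1);

	printf("W(79) = %08x\n", W(79));
	return 0;
}
```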

Linus Torvalds | 139e3456ec | 15 years ago

block-sha1: make the 'ntohl()' part of the first SHA1 loop

This helps a teeny bit. But what I -really- want to do is to avoid the whole 80-array loop, and do the xor updates as I go along..

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Junio C Hamano | fd536d3439 | 15 years ago

block-sha1: minor fixups

Bert Wesarg noticed non-x86 version of SHA_ROT() had a typo. Also spell in-line assembly as __asm__(), otherwise I seem to get error: implicit declaration of function 'asm' from my compiler.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

Linus Torvalds | b8e48a89b8 | 15 years ago

block-sha1: try to use rol/ror appropriately

Use the one with the smaller constant. It _can_ generate slightly smaller code (a constant of 1 is special), but perhaps more importantly it's possibly faster on any uarch that does a rotate with a loop.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
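
A small sketch of the point about constants: a left rotate by 30 equals a right rotate by 2, so the form with the smaller constant is preferred. Macro names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define SHA_ROL(x, n) (((x) << (n)) | ((x) >> (32 - (n))))
#define SHA_ROR(x, n) (((x) >> (n)) | ((x) << (32 - (n))))

int main(void)
{
	uint32_t v = 0x12345678u;

	/* rol(v, 30) and ror(v, 2) produce the same value; the rotate-by-2
	 * form is the one a compiler or uarch handles more cheaply. */
	printf("rol 30: %08x\n", SHA_ROL(v, 30));
	printf("ror  2: %08x\n", SHA_ROR(v, 2));
	return SHA_ROL(v, 30) != SHA_ROR(v, 2);
}
```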

Junio C Hamano | b26a9d5089 | 15 years ago

block-sha1: undo ctx->size change

Undo the change I picked up from the mailing list discussion suggested by Nico, not because it is wrong, but it will be done at the end of the follow-up series.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

Linus Torvalds | d7c208a92e | 15 years ago

Add new optimized C 'block-sha1' routines

Based on the mozilla SHA1 routine, but doing the input data accesses a word at a time and with 'htonl()' instead of loading bytes and shifting. It requires an architecture that is ok with unaligned 32-bit loads and a fast htonl().

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
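
A sketch of the word-at-a-time input load this foundational commit describes. The commit itself relied on a direct pointer cast plus htonl() on hardware that tolerates unaligned 32-bit loads; the sketch below spells the load with memcpy for strict portability.

```c
#include <arpa/inet.h>   /* ntohl(); POSIX */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char block[64];
	uint32_t W[16];
	int t;

	memset(block, 0xab, sizeof(block));
	block[0] = 0x80;   /* just so the output is not uniform */

	/* Read the 64-byte block as sixteen 32-bit words, byte-swapping each
	 * with ntohl(), rather than assembling words from individual bytes. */
	for (t = 0; t < 16; t++) {
		uint32_t raw;
		memcpy(&raw, block + 4 * t, 4);  /* word-at-a-time access */
		W[t] = ntohl(raw);               /* big-endian message word */
	}

	printf("W[0] = %08x, W[15] = %08x\n", W[0], W[15]);
	return 0;
}
```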