Tree: 40d9632514
171 Commits (40d96325143ae04cb3f645ac96f6413d1f641b9b)
50d3413740 http: make redirects more obvious
We instruct curl to always follow HTTP redirects. This is
convenient, but it creates opportunities for malicious
servers to create confusing situations. For instance,
imagine Alice is a git user with access to a private
repository on Bob's server. Mallory runs her own server and
wants to access objects from Bob's repository.
Mallory may try a few tricks that involve asking Alice to
clone from her, build on top, and then push the result:
1. Mallory may simply redirect all fetch requests to Bob's
server. Git will transparently follow those redirects
and fetch Bob's history, which Alice may believe she
got from Mallory. The subsequent push seems like it is
just feeding Mallory back her own objects, but is
actually leaking Bob's objects. There is nothing in
git's output to indicate that Bob's repository was
involved at all.
The downside (for Mallory) of this attack is that Alice
will have received Bob's entire repository, and is
likely to notice that when building on top of it.
2. If Mallory happens to know the sha1 of some object X in
Bob's repository, she can instead build her own history
that references that object. She then runs a dumb http
server, and Alice's client will fetch each object
individually. When it asks for X, Mallory redirects her
to Bob's server. The end result is that Alice obtains
objects from Bob, but they may be buried deep in
history. Alice is less likely to notice.
Both of these attacks are fairly hard to pull off. There's a
social component in getting Mallory to convince Alice to
work with her. Alice may be prompted for credentials in
accessing Bob's repository (but not always, if she is using
a credential helper that caches). Attack (1) requires a
certain amount of obliviousness on Alice's part while making
a new commit. Attack (2) requires that Mallory knows a sha1
in Bob's repository, that Bob's server supports dumb http,
and that the object in question is loose on Bob's server.
But we can probably make things a bit more obvious without
any loss of functionality. This patch does two things to
that end.
First, when we encounter a whole-repo redirect during the
initial ref discovery, we now inform the user on stderr,
making attack (1) much more obvious.
Second, the decision to follow redirects is now
configurable. The truly paranoid can set the new
http.followRedirects to false to avoid any redirection
entirely. But for a more practical default, we will disallow
redirects only after the initial ref discovery. This is
enough to thwart attacks similar to (2), while still
allowing the common use of redirects at the repository
level. Since
8 years ago
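The behaviour described above is controlled by ordinary git configuration. A minimal usage sketch (the repository URL is a placeholder); `http.followRedirects` accepts `true`, `false`, or `initial`:

```sh
# Refuse to follow any HTTP(S) redirect for this repository.
git config http.followRedirects false

# Or allow a redirect only during the initial ref discovery,
# the more practical default described in the message above.
git -c http.followRedirects=initial fetch https://example.com/repo.git
```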

fcaa6e64b3 remote-curl: rename shadowed options variable
The discover_refs() function has a local "options" variable to hold the http_get_options we pass to http_get_strbuf(). But this shadows the global "struct options" that holds our program-level options, which cannot be accessed from this function. Let's give the local one a more descriptive name so we can tell the two apart. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
8 years ago

296b847c0d remote-curl: don't hang when a server dies before any output
In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
8 years ago

5ce5f5fa5a common-main: call git_setup_gettext()
This should be part of every program, as otherwise users do not get translated error messages. However, some external commands forgot to do so (e.g., git-credential-store). This fixes them, and eliminates the repeated code in programs that did remember to use it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago

650c449250 common-main: call git_extract_argv0_path()
Every program which links against libgit.a must call this function, or risk hitting an assert() in system_path() that checks whether we have configured argv0_path (though only when RUNTIME_PREFIX is defined, so essentially only on Windows). Looking at the diff, you can see that putting it into the common main() saves us having to do it individually in each of the external commands. But what you can't see are the cases where we _should_ have been doing so, but weren't (e.g., git-credential-store, and all of the t/helper test programs). This has been an accident-waiting-to-happen for a long time, but wasn't triggered until recently because it involves one of those programs actually calling system_path(). That happened with git-credential-store in v2.8.0 with |
9 years ago

3f2e2297b9 add an extra level of indirection to main()
There are certain startup tasks that we expect every git process to do. In some cases this is just to improve the quality of the program (e.g., setting up gettext()). In others it is a requirement for using certain functions in libgit.a (e.g., system_path() expects that you have called git_extract_argv0_path()). Most commands are builtins and are covered by the git.c version of main(). However, there are still a few external commands that use their own main(). Each of these has to remember to include the correct startup sequence, and we are not always consistent. Rather than just fix the inconsistencies, let's make this harder to get wrong by providing a common main() that can run this standard startup. We basically have two options to do this: - the compat/mingw.h file already does something like this by adding a #define that replaces the definition of main with a wrapper that calls mingw_startup(). The upside is that the code in each program doesn't need to be changed at all; it's rewritten on the fly by the preprocessor. The downside is that it may make debugging of the startup sequence a bit more confusing, as the preprocessor is quietly inserting new code. - the builtin functions are all of the form cmd_foo(), and git.c's main() calls them. This is much more explicit, which may make things more obvious to somebody reading the code. It's also more flexible (because of course we have to figure out _which_ cmd_foo() to call). The downside is that each of the builtins must define cmd_foo(), instead of just main(). This patch chooses the latter option, preferring the more explicit approach, even though it is more invasive. We introduce a new file common-main.c, with the "real" main. It expects to call cmd_main() from whatever other objects it is linked against. We link common-main.o against anything that links against libgit.a, since we know that such programs will need to do this setup. Note that common-main.o can't actually go inside libgit.a, as the linker would not pick up its main() function automatically (it has no callers). The rest of the patch is just adjusting all of the various external programs (mostly in t/helper) to use cmd_main(). I've provided a global declaration for cmd_main(), which means that all of the programs also need to match its signature. In particular, many functions need to switch to "const char **" instead of "char **" for argv. This effect ripples out to a few other variables and functions, as well. This makes the patch even more invasive, but the end result is much better. We should be treating argv strings as const anyway, and now all programs conform to the same signature (which also matches the way builtins are defined). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago

cccf74e2da fetch, upload-pack: --deepen=N extends shallow boundary by N commits
In git-fetch, --depth argument is always relative with the latest remote refs. This makes it a bit difficult to cover this use case, where the user wants to make the shallow history, say 3 levels deeper. It would work if remote refs have not moved yet, but nobody can guarantee that, especially when that use case is performed a couple months after the last clone or "git fetch --depth". Also, modifying shallow boundary using --depth does not work well with clones created by --since or --not. This patch fixes that. A new argument --deepen=<N> will add <N> more (*) parent commits to the current history regardless of where remote refs are. Have/Want negotiation is still respected. So if remote refs move, the server will send two chunks: one between "have" and "want" and another to extend shallow history. In theory, the client could send no "want"s in order to get the second chunk only. But the protocol does not allow that. Either you send no want lines, which means ls-remote; or you have to send at least one want line that carries deep-relative to the server.. The main work was done by Dongcan Jiang. I fixed it up here and there. And of course all the bugs belong to me. (*) We could even support --deepen=<N> where <N> is negative. In that case we can cut some history from the shallow clone. This operation (and --depth=<shorter depth>) does not require interaction with remote side (and more complicated to implement as a result). Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Eric Sunshine <sunshine@sunshineco.com> Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago
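A rough usage sketch (remote and depth values are placeholders): unlike `--depth`, the new option deepens an existing shallow clone relative to its current boundary rather than relative to the remote tips:

```sh
# Start with a shallow clone.
git clone --depth=1 https://example.com/repo.git
cd repo

# Later, pull in 10 more parent commits beyond the current shallow
# boundary, no matter how far the remote branches have moved since.
git fetch --deepen=10 origin
```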

a45a260086 fetch: define shallow boundary with --shallow-exclude
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago

508ea88226 fetch: define shallow boundary with --shallow-since
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago
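Together with --shallow-exclude above, this lets the shallow boundary be described by a date or by an excluded revision instead of a commit count; a hedged sketch with placeholder values:

```sh
# Keep only history newer than a given date.
git fetch --shallow-since="2 years ago" origin

# Keep history, but exclude everything reachable from the v1.0 tag.
git clone --shallow-exclude=v1.0 https://example.com/repo.git
```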

b5f62ebea5 remote-curl.c: convert fetch_git() to use argv_array
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago

8cb01e2fd3 http: support sending custom HTTP headers
We introduce a way to send custom HTTP headers with all requests. This allows us, for example, to send an extra token from build agents for temporary access to private repositories. (This is the use case that triggered this patch.)

This feature can be used like this:

    git -c http.extraheader='Secret: sssh!' fetch $URL $REF

Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only a single list, overriding any previous call. This means we have to collect _all_ of the headers we want to use into a single list, and feed it to cURL in one shot. Since we already unconditionally set a "pragma" header when initializing the curl handles, we can add our new headers to that list. For callers which override the default header list (like probe_rpc), we provide `http_copy_default_headers()` so they can do the same trick.

Big thanks to Jeff King and Junio Hamano for their outstanding help and patient reviews.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago
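Besides the one-shot `-c` form shown in the message, the header can also be set persistently through configuration; a sketch assuming a hypothetical CI token variable:

```sh
# Send an extra header on every HTTP request made from this repository.
git config --add http.extraHeader "Authorization: Bearer $CI_JOB_TOKEN"
git fetch origin
```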

b32fa95fd8 convert trivial cases to ALLOC_ARRAY
Each of these cases can be converted to use ALLOC_ARRAY or REALLOC_ARRAY, which has two advantages:

1. It automatically checks the array-size multiplication for overflow.

2. It always uses sizeof(*array) for the element-size, so that it can never go out of sync with the declared type of the array.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago

850d2fec53 convert manual allocations to argv_array
There are many manual argv allocations that predate the argv_array API. Switching to that API brings a few advantages:

1. We no longer have to manually compute the correct final array size (so it's one less thing we can screw up).

2. In many cases we had to make a separate pass to count, then allocate, then fill in the array. Now we can do it in one pass, making the code shorter and easier to follow.

3. argv_array handles memory ownership for us, making it more obvious when things should be free()d and when not.

Most of these cases are pretty straightforward. In some, we switch from "run_command_v" to "run_command" which lets us directly use the argv_array embedded in "struct child_process".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago

00540458a8 remote-curl: include curl_errorstr on SSL setup failures
For curl error 35 (CURLE_SSL_CONNECT_ERROR) users need the additional text stored in CURLOPT_ERRORBUFFER to debug why the connection did not start. This is curl_errorstr inside of http.c, so include that in the message if it is non-empty. Sometimes HTTP response codes aren't yet available, such as when the SSL setup fails. Don't include HTTP 0 in the message. Signed-off-by: Shawn Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago

c915f11eb4 connect & http: support -4 and -6 switches for remote operations
Sometimes it is necessary to force IPv4-only or IPv6-only operation on networks where name lookups may return a non-routable address and stall remote operations.

The ssh(1) command has equivalent switches which we may pass when we run them. There may be old ssh(1) implementations out there which do not support these switches; they should report the appropriate error in that case.

rsync support is untouched for now since it is deprecated and scheduled to be removed.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
9 years ago
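A brief sketch of the new switches (remote name and URL are placeholders); they pin the address family used for the connection:

```sh
# Force IPv4, e.g. when AAAA lookups return unroutable addresses.
git fetch -4 origin

# Conversely, force IPv6 for a clone over HTTPS.
git clone -6 https://example.com/repo.git
```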

8f309aeb82 strbuf: introduce strbuf_getline_{lf,nul}()
The strbuf_getline() interface allows a byte other than LF or NUL as the line terminator, but this is only because I wrote these codepaths anticipating that there might be a value other than NUL and LF that could be useful when I introduced line_termination long time ago. No useful caller that uses other value has emerged. By now, it is clear that the interface is overly broad without a good reason. Many codepaths have hardcoded preference to read either LF terminated or NUL terminated records from their input, and then call strbuf_getline() with LF or NUL as the third parameter. This step introduces two thin wrappers around strbuf_getline(), namely, strbuf_getline_lf() and strbuf_getline_nul(), and mechanically rewrites these call sites to call either one of them. The changes contained in this patch are: * introduction of these two functions in strbuf.[ch] * mechanical conversion of all callers to strbuf_getline() with either '\n' or '\0' as the third parameter to instead call the respective thin wrapper. After this step, output from "git grep 'strbuf_getline('" would become a lot smaller. An interim goal of this series is to make this an empty set, so that we can have strbuf_getline_crlf() take over the shorter name strbuf_getline(). Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago

8338c911d1 parse_fetch: convert to use struct object_id
Convert the parse_fetch function to use struct object_id. Remove the strlen check as get_oid_hex will fail safely on receiving a too-short NUL-terminated string. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net> |
9 years ago

f4e54d02b8 Convert struct ref to use object_id.
Use struct object_id in three fields in struct ref and convert all the necessary places that use it. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net> |
9 years ago

6f687c21c0 use alloc_ref rather than hand-allocating "struct ref"
This saves us some manual computation, and eliminates a call to strcpy. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
9 years ago

30261094b1 push: support signing pushes iff the server supports it
Add a new flag --sign=true (or --sign=false), which means the same thing as the original --signed (or --no-signed). Give it a third value --sign=if-asked to tell push and send-pack to send a push certificate if and only if the server advertised a push cert nonce. If not, warn the user that their push may not be as secure as they thought. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
10 years ago
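A usage sketch (remote and branch names are placeholders); note that current git documentation spells the option `--signed`, and `if-asked` sends a push certificate only when the server advertises support for it:

```sh
# Sign the push only if the server advertises a push certificate nonce;
# otherwise push normally and warn that the push was not signed.
git push --signed=if-asked origin main
```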

9a6f1287fb zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}
Clear the git_zstream variable at the start of git_deflate_init() etc. so that callers don't have to do that. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
10 years ago

94ee8e2c98 do not check truth value of flex arrays
There is no point in checking "!ref->name" when ref is a "struct ref". The name field is a flex-array, and therefore always has a non-zero address. This is almost certainly not hurting anything, but it does cause clang-3.6 to complain.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
10 years ago

f18604bbf2 http: add Accept-Language header if possible
Add an Accept-Language header which indicates the user's preferred languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.

Examples:

    LANGUAGE=                     -> ""
    LANGUAGE=ko:en                -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
    LANGUAGE=ko LANG=en_US.UTF-8  -> "Accept-Language: ko, *;q=0.1"
    LANGUAGE= LANG=en_US.UTF-8    -> "Accept-Language: en-US, *;q=0.1"

This gives git servers a chance to display remote error messages in the user's preferred language.

Limit the number of languages to 1,000 because q-value must not be smaller than 0.001, and limit the length of Accept-Language header to 4,000 bytes for some HTTP servers which cannot accept such long header.

Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
10 years ago
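One way to observe the header is to run a remote operation with curl tracing enabled; a rough sketch (the URL is a placeholder and the exact trace format depends on the curl version in use):

```sh
# Prefer Korean, then English, and watch the request header git sends.
LANGUAGE=ko:en GIT_CURL_VERBOSE=1 \
    git ls-remote https://example.com/repo.git 2>&1 | grep -i 'accept-language'
```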

0ea47f9d33 signed push: teach smart-HTTP to pass "git push --signed" around
The "--signed" option received by "git push" is first passed to the transport layer, which the native transport directly uses to notice that a push certificate needs to be sent. When the transport-helper is involved, however, the option needs to be told to the helper with set_helper_option(), and the helper needs to take necessary action. For the smart-HTTP helper, the "necessary action" involves spawning the "git send-pack" subprocess with the "--signed" option. Once the above all gets wired in, the smart-HTTP transport now can use the push certificate mechanism to authenticate its pushes. Add a test that is modeled after tests for the native transport in t5534-push-signed.sh to t5541-http-push-smart.sh. Update the test Apache configuration to pass GNUPGHOME environment variable through. As PassEnv would trigger warnings for an environment variable that is not set, export it from test-lib.sh set to a harmless value when GnuPG is not being used in the tests. Note that the added test is deliberately loose and does not check the nonce in this step. This is because the stateless RPC mode is inevitably flaky and a nonce that comes back in the actual push processing is one issued by a different process; if the two interactions with the server crossed a second boundary, the nonces will not match and such a check will fail. A later patch in the series will work around this shortcoming. Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

24d36f1472 stylefix: asterisks stick to the variable, not the type
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago

26be19ba8d send-pack: take refspecs over stdin
Pushing a large number of refs works over most transports,
because we implement send-pack as an internal function.
However, it can sometimes fail when pushing over http,
because we have to spawn "git send-pack --stateless-rpc" to
do the heavy lifting, and we pass each refspec on the
command line. This can cause us to overflow the OS limits on
the size of the command line for a large push.
We can solve this by giving send-pack a --stdin option and
using it from remote-curl. We already dealt with this on
the fetch-pack side in
11 years ago

d318027932 run-command: introduce CHILD_PROCESS_INIT
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

cdaa4e98ca remote-curl: mark helper-protocol errors more clearly
When we encounter an error in remote-curl, we generally just report it to stderr. There is no need for the user to care that the "could not connect to server" error was generated by git-remote-https rather than a function in the parent git-fetch process. However, when the error is in the protocol between git and the helper, it makes sense to clearly identify which side is complaining. These cases shouldn't ever happen, but when they do, we can make them less confusing by being more verbose. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

b725b270d1 remote-curl: use error instead of fprintf(stderr)
We usually prefix our error messages with "error: ", but many error messages from remote-curl are simply printed with fprintf. This can make the output a little harder to read (especially because such message may be intermingled with errors from the parent git process). There is no reason to avoid error(), as we are already calling it many places (in addition to libgit.a functions which use it). While we're adjusting the messages, we can also drop the capitalization which makes them unlike other git error messages. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

37943e4c38 remote-curl: do not complain on EOF from parent git
The parent git process is supposed to send us an empty line to indicate that the conversation is over. However, the parent process may die() if there is a problem with the operation (e.g., we try to fetch a ref that does not exist). In this case, it produces a useful message, but then remote-curl _also_ produces an unhelpful message:

    $ git pull origin matser
    fatal: couldn't find remote ref matser
    Unexpected end of command stream

The "right" way to fix this is to teach the parent git to always cleanly close the connection to the helper, letting it know that we are done. Implementing that is rather clunky, though, as it would involve either replacing die() operations with returning errors up the stack (until we disconnect the transport), or adding an atexit handler to clean up any transport helpers left open.

It's much simpler to just suppress the EOF message in remote-curl. It was not added to address any real-world situation in the first place, but rather a "we should probably report unexpected things" suggestion[1]. It is the parent git which drives the operation, and whose exit value actually matters. If the parent dies, then the helper has no need to complain (except as a debugging aid). In the off chance that the pipe is closed without the parent dying, it can still notice the non-zero exit code.

[1] http://article.gmane.org/gmane.comp.version-control.git/176036

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago

95b567c7c3 use skip_prefix to avoid repeating strings
It's a common idiom to match a prefix and then skip past it with strlen, like:

    if (starts_with(foo, "bar"))
            foo += strlen("bar");

This avoids magic numbers, but means we have to repeat the string (and there is no compiler check that we didn't make a typo in one of the strings). We can use skip_prefix to handle this case without repeating ourselves.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago

fc1b774c72 remote-curl: reencode http error messages
We currently recognize an error message with a content-type "text/plain; charset=utf-16" as text, but we ignore the charset parameter entirely. Let's encode it to log_output_encoding, which is presumably something the user's terminal can handle. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

bf197fd7ee http: extract type/subtype portion of content-type
When we get a content-type from curl, we get the whole header line, including any parameters, and without any normalization (like downcasing or whitespace) applied. If we later try to match it with strcmp() or even strcasecmp(), we may get false negatives. This could cause two visible behaviors: 1. We might fail to recognize a smart-http server by its content-type. 2. We might fail to relay text/plain error messages to users (especially if they contain a charset parameter). This patch teaches the http code to extract and normalize just the type/subtype portion of the string. This is technically passing out less information to the callers, who can no longer see the parameters. But none of the current callers cares, and a future patch will add back an easier-to-use method for accessing those parameters. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

beed336c3e http: never use curl_easy_perform
We currently don't reuse http connections when fetching via the smart-http protocol. This is bad because the TCP handshake introduces latency, and especially because SSL connection setup may be non-trivial. We can fix it by consistently using curl's "multi" interface. The reason is rather complicated: Our http code has two ways of being used: queuing many "slots" to be fetched in parallel, or fetching a single request in a blocking manner. The parallel code is built on curl's "multi" interface. Most of the single-request code uses http_request, which is built on top of the parallel code (we just feed it one slot, and wait until it finishes). However, one could also accomplish the single-request scheme by avoiding curl's multi interface entirely and just using curl_easy_perform. This is simpler, and is used by post_rpc in the smart-http protocol. It does work to use the same curl handle in both contexts, as long as it is not at the same time. However, internally curl may not share all of the cached resources between both contexts. In particular, a connection formed using the "multi" code will go into a reuse pool connected to the "multi" object. Further requests using the "easy" interface will not be able to reuse that connection. The smart http protocol does ref discovery via http_request, which uses the "multi" interface, and then follows up with the "easy" interface for its rpc calls. As a result, we make two HTTP connections rather than reusing a single one. We could teach the ref discovery to use the "easy" interface. But it is only once we have done this discovery that we know whether the protocol will be smart or dumb. If it is dumb, then our further requests, which want to fetch objects in parallel, will not be able to reuse the same connection. Instead, this patch switches post_rpc to build on the parallel interface, which means that we use it consistently everywhere. It's a little more complicated to use, but since we have the infrastructure already, it doesn't add any code; we can just factor out the relevant bits from http_request. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

16094885ca smart-http: support shallow fetch/clone
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago
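With this change a shallow clone also works over the smart HTTP transport; a minimal sketch with a placeholder URL:

```sh
# Shallow clone over HTTP(S): only the most recent commit is fetched.
git clone --depth=1 https://example.com/repo.git
```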

58f2ed051f remote-curl: pass ref SHA-1 to fetch-pack as well
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago

b06dcd7d68 connect.c: teach get_remote_heads to parse "shallow" lines
No callers pass a non-empty pointer as shallow_points at this stage. As a result, all clients still refuse to talk to shallow repository on the other end. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> |
11 years ago

5955654823 replace {pre,suf}fixcmp() with {starts,ends}_with()
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
11 years ago

c80d96ca0c remote-curl: fix large pushes with GSSAPI
Due to an interaction between the way libcurl handles GSSAPI authentication over HTTP and the way git uses libcurl, large pushes (those over http.postBuffer bytes) would fail due to an authentication failure requiring a rewind of the curl buffer. Such a rewind was not possible because the data did not fit into the entire buffer. Enable the use of the Expect: 100-continue header for large requests where the server offers GSSAPI authentication to avoid this issue, since the request would otherwise fail. This allows git to get the authentication data right before sending the pack contents. Existing cases where pushes would succeed, including small requests using GSSAPI, still disable the use of 100 Continue, as it causes problems for some remote HTTP implementations (servers and proxies). Signed-off-by: Brian M. Carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net> |
11 years ago

3a347ed707 remote-curl: pass curl slot_results back through run_slot
Some callers may want to know more than just the integer error code we return. Let them optionally pass a slot_results struct to fill in (or NULL if they do not care). In either case we continue to return the integer code. We can also give probe_rpc the same treatment (since it builds directly on run_slot). Signed-off-by: Jeff King <peff@peff.net> |
11 years ago

050ef3655c remote-curl: rewrite base url from info/refs redirects
For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request.

This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place.

The tests bear some explanation. We introduce two new hierarchies into the httpd test config:

1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected).

2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location.

Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
11 years ago
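The test setup mentioned above leans on an ordinary configuration knob; a sketch of how a user would opt into per-path credentials (host and paths are placeholders):

```sh
# Make credential helpers key entries on the repository path as well as
# the host, so foo.git and bar.git on the same server stay separate.
git config --global credential.useHttpPath true
```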

b227bbc43a remote-curl: store url as a strbuf
We use a strbuf to generate the string containing the remote URL, but then detach it to a bare pointer. This makes it harder to later manipulate the URL, as we have forgotten the length (and the allocation semantics are not as clear). Let's instead keep the strbuf around. As a bonus, this eliminates a confusing double-use of the "buf" strbuf in main(). Prior to this, it was used both for constructing the url, and for reading commands from stdin. The downside is that we have to update each call site to refer to "url.buf" rather than just "url" when they want the C string. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> |
11 years ago

c65d5692cd remote-curl: make refs_url a strbuf
In the discover_refs function, we use a strbuf named "buffer" for multiple purposes. First we build the info/refs URL in it, and then detach that to a bare pointer. Then, we use the same strbuf to store the result of fetching the refs. Let's instead keep a separate refs_url strbuf. This is less confusing, as the "buffer" strbuf is now used for only one thing. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> |
11 years ago

2501aff8b7 http: hoist credential request out of handle_curl_result
When we are handling a curl response code in http_request or in the remote-curl RPC code, we use the handle_curl_result helper to translate curl's response into an easy-to-use code. When we see an HTTP 401, we do one of two things: 1. If we already had a filled-in credential, we mark it as rejected, and then return HTTP_NOAUTH to indicate to the caller that we failed. 2. If we didn't, then we ask for a new credential and tell the caller HTTP_REAUTH to indicate that they may want to try again. Rejecting in the first case makes sense; it is the natural result of the request we just made. However, prompting for more credentials in the second step does not always make sense. We do not know for sure that the caller is going to make a second request, and nor are we sure that it will be to the same URL. Logically, the prompt belongs not to the request we just finished, but to the request we are (maybe) about to make. In practice, it is very hard to trigger any bad behavior. Currently, if we make a second request, it will always be to the same URL (even in the face of redirects, because curl handles the redirects internally). And we almost always retry on HTTP_REAUTH these days. The one exception is if we are streaming a large RPC request to the server (e.g., a pushed packfile), in which case we cannot restart. It's extremely unlikely to see a 401 response at this stage, though, as we would typically have seen it when we sent a probe request, before streaming the data. This patch drops the automatic prompt out of case 2, and instead requires the caller to do it. This is a few extra lines of code, and the bug it fixes is unlikely to come up in practice. But it is conceptually cleaner, and paves the way for better handling of credentials across redirects. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> |
11 years ago

1bbcc224cc http: refactor options to http_get_*
Over time, the http_get_strbuf function has grown several optional parameters. We now have a bitfield with multiple boolean options, as well as an optional strbuf for returning the content-type of the response. And a future patch in this series is going to add another strbuf option. Treating these as separate arguments has a few downsides: 1. Most call sites need to add extra NULLs and 0s for the options they aren't interested in. 2. The http_get_* functions are actually wrappers around 2 layers of low-level implementation functions. We have to pass these options through individually. 3. The http_get_strbuf wrapper learned these options, but nobody bothered to do so for http_get_file, even though it is backed by the same function that does understand the options. Let's consolidate the options into a single struct. For the common case of the default options, we'll allow callers to simply pass a NULL for the options struct. The resulting code is often a few lines longer, but it ends up being easier to read (and to change as we add new options, since we do not need to update each call site). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> |
12 years ago

05c1eb1034 push: teach --force-with-lease to smart-http transport
We have been passing enough information to enable the compare-and-swap logic down to the transport layer, but the transport helper was not passing it to smart-http transport.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
12 years ago
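With the option now forwarded by the smart-HTTP helper, a compare-and-swap push works over HTTPS as well; a sketch with placeholder remote, branch, and object names:

```sh
# Refuse the push if origin/main has moved since we last fetched.
git push --force-with-lease origin main

# Or state the expected old value explicitly (placeholder object name).
git push --force-with-lease=main:1bbcc224cc origin main
```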

9ba380481c smart http: use the same connectivity check on cloning
This is an extension of
12 years ago

222b1212c1 remote-http: use argv-array
Instead of using a hand-managed argument array, use argv-array API to manage dynamically formulated command line.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
12 years ago

de89f0b25a remote-curl: die directly with http error messages
When we encounter an unknown http error (e.g., a 403), we hand the error code to http_error, which then prints it with error(). After that we die with the redundant message "HTTP request failed". Instead, let's just drop http_error entirely, which does nothing but pass arguments to error(), and instead die directly with a useful message.

So before:

    $ git clone https://example.com/repo.git
    Cloning into 'repo'...
    error: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden
    fatal: HTTP request failed

and after:

    $ git clone https://example.com/repo.git
    Cloning into 'repo'...
    fatal: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
12 years ago

67d2a7b5c5 http: simplify http_error helper function
This helper function should really be a one-liner that prints an error message, but it has ended up unnecessarily complicated:

1. We call error() directly when we fail to start the curl request, so we must later avoid printing a duplicate error in http_error(). It would be much simpler in this case to just stuff the error message into our usual curl_errorstr buffer rather than printing it ourselves. This means that http_error does not even have to care about curl's exit value (the interesting part is in the errorstr buffer already).

2. We return the "ret" value passed in to us, but none of the callers actually cares about our return value. We can just drop this entirely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
12 years ago