What would be the most straightforward way of making a GET request to a url over HTTPS, and getting the raw, unparsed response?
Could this be achieved with curl? If so, what options would you need to use?
If you want to use curl, this should work:
curl -D - https://www.google.com/
Note, however, that this is not exactly the raw response: chunked transfer encoding, for instance, is decoded before it is shown. Adding --raw disables that decoding, -i prints the response headers before the body, and verbose mode (-v) is also useful:
curl -iv --raw https://www.google.com/
If you want to page through the result with something like less, you also need to silence the progress meter (-s):
curl -ivs --raw https://www.google.com/ | less
Depending on what you are doing, the decoded output may or may not be a problem; either way you get all of the HTTP response headers plus the document at the requested URL.
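If you would rather capture the headers and the body separately instead of interleaved on stdout, curl can dump each to its own file. A minimal sketch (the file names headers.txt and body.html are just placeholders):

```shell
# -D writes the response headers to headers.txt;
# -o writes the (decoded) response body to body.html;
# -s silences the progress meter.
curl -s -D headers.txt -o body.html https://www.google.com/
```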
Here's a simple way that comes to mind
echo 'GET / HTTP/1.1
Host: google.com
Connection: close
' | openssl s_client -quiet -connect google.com:443 2>/dev/null
The Connection: close header matters here: -quiet implies -ign_eof, so without it s_client keeps the connection open after the response and the command hangs.
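Strictly speaking, HTTP header lines end in CRLF, which echo does not produce (most servers tolerate bare LF, but not all). A sketch using printf to emit proper line endings, reusing google.com as the example host:

```shell
# printf lets us spell out the CRLF (\r\n) line endings the HTTP spec
# requires; Connection: close makes the server hang up when it is done.
printf 'GET / HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n' |
  openssl s_client -quiet -connect google.com:443 2>/dev/null
```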
It's not curl, but it should be available on almost all Unices:
wget -S --spider https://encrypted.site
If the status messages bother you:
wget -S --spider https://encrypted.site 2>&1 | awk '/^ /'
If you want CRLF line endings:
wget -S --spider https://encrypted.site 2>&1 | awk '/^ / { sub(/$/,"\r"); print }'
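--spider only fetches the headers. If you also want the body, wget can prepend the response headers to the saved document with --save-headers; a sketch, keeping the placeholder host from above and writing everything to stdout:

```shell
# -q suppresses wget's status messages; --save-headers prepends the
# HTTP response headers to the document; -O - sends it all to stdout.
wget -q --save-headers -O - https://encrypted.site
```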
Another option is the GET command:
$ GET -e https://www.google.com
On Debian/Ubuntu distros it belongs to the package lwp-request.