I’m performing something like:
(let [s3-response (s3/get s3-component file_location)]
  (-> (:response ctx)
      (assoc :status 200
             :body (:content s3-response))
      (update :headers merge
              {"Content-Disposition" "attachment; filename=\"my-report.pdf\""
               "Content-Type" "application/pdf"
               "Content-Length" (-> s3-response :metadata :content-length)
               :content-length (-> s3-response :metadata :content-length)
               "X-Custom-Header" "Just to see what happens."})))
and seeing something like this at the command line from curl:
> GET /report/f7570b24-fd66-4486-ba68-f4ca7f67a1e9 HTTP/1.1
> Host: 127.0.0.1:1337
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Disposition: attachment; filename="my-report.pdf"
< Content-Type: application/pdf
< X-Custom-Header: Just to see what happens.
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Type: application/pdf
< Server: Aleph/0.4.1
< Connection: Keep-Alive
< Date: Fri, 03 Nov 2017 13:16:15 GMT
< transfer-encoding: chunked
<
weirdly I have the Content-Type twice, but no Content-Length
I think I might see why: the transfer-encoding is chunked :thinking_face:
I suspect that the double Content-Type is because I have defined :produces "application/pdf" for the route as well as passing it explicitly
ah - are chunked transfer and Content-Length mutually exclusive?
https://www.httpwatch.com/httpgallery/chunked/ A light read of this suggests so.
I might be wrong.
but it would seem logical to me that they're incompatible.
sounds reasonable
I’m trying to stream the data straight from S3 back to the client, but S3 tells me the content-length upfront, so it is knowable
https://en.wikipedia.org/wiki/Chunked_transfer_encoding#Applicability you can't put Content-Length into the trailing headers either.
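For anyone following along, this is roughly how the two framings look on the wire (a hand-written illustration, not captured output). With a declared length the client knows when the body ends; with chunked framing each chunk is prefixed by its size in hex and a zero-length chunk marks the end, so Content-Length is redundant and must not be sent:

```
# Fixed-length framing: size is known up front.
HTTP/1.1 200 OK
Content-Length: 11

hello world

# Chunked framing: hex chunk sizes, terminated by a zero-length chunk.
HTTP/1.1 200 OK
Transfer-Encoding: chunked

6
hello 
5
world
0

```

That's also why the browser can't show a proper progress bar for a chunked download: it never knows the total size.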
it’s not a biggie. I just wanted people downloading documents to get a proper progress bar in their browser
I don't think there's any reason to do a chunked transfer if you know the content length?
I don’t necessarily have all the data in memory when the response begins.
I don't think that matters though
I also don’t know why it is doing chunked transfer - I don’t think I’ve explicitly asked for that anywhere
(At least, not from a fundamental perspective anyway!)
no - it can’t really - I just write the data to the network when I have it
Yep. It's not like you need to load it all into memory and send it as one big chunk, you just write it to the socket. The difference being that the client knows when you've finished.
Someone more knowledgeable than me would need to explain how to turn off chunked transfer. It's a safe default.
I’m handing a type of InputStream to yada/aleph, so I can see why it might assume that was sensible.
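If it helps, here's a minimal self-contained sketch of the workaround, with a fake in-memory stream standing in for the S3 response (the real `s3/get` shape is taken from the snippet above, and I'm assuming aleph will use fixed-length framing rather than chunked when an explicit Content-Length header is present, even for an InputStream body):

```clojure
(import '(java.io ByteArrayInputStream))

;; Stand-in for the S3 response shape from the snippet above:
;; {:content <InputStream>, :metadata {:content-length <long>}}
(def fake-s3-response
  (let [bs (.getBytes "pretend PDF bytes" "UTF-8")]
    {:content  (ByteArrayInputStream. bs)
     :metadata {:content-length (count bs)}}))

(defn pdf-response [s3-response]
  (let [length (-> s3-response :metadata :content-length)]
    {:status  200
     ;; Ring header values should be strings, hence (str length).
     ;; An explicit Content-Length should let the server frame the
     ;; body with a fixed length instead of chunked transfer.
     :headers {"Content-Type"        "application/pdf"
               "Content-Disposition" "attachment; filename=\"my-report.pdf\""
               "Content-Length"      (str length)}
     :body    (:content s3-response)}))
```

The keyword :content-length key from the original snippet is dropped here; Ring headers are string-keyed, so a keyword key just rides along without doing anything.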