yada

2017-11-03T14:32:57.000215Z

I’m performing something like:

(let [s3-response (s3/get s3-component file_location)]
  (-> (:response ctx)
      (assoc :status 200
             :body (:content s3-response))
      (update :headers merge {"Content-Disposition" "attachment; filename=\"my-report.pdf\""
                              "Content-Type"        "application/pdf"
                              "Content-Length"      (-> s3-response :metadata :content-length)
                              :content-length       (-> s3-response :metadata :content-length)
                              "X-Custom-Header"     "Just to see what happens."})))

2017-11-03T14:34:25.000467Z

and seeing something like this at the command line from curl:

> GET /report/f7570b24-fd66-4486-ba68-f4ca7f67a1e9 HTTP/1.1
> Host: 127.0.0.1:1337
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Disposition: attachment; filename="my-report.pdf"
< Content-Type: application/pdf
< X-Custom-Header: Just to see what happens.
< X-Frame-Options: SAMEORIGIN
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Type: application/pdf
< Server: Aleph/0.4.1
< Connection: Keep-Alive
< Date: Fri, 03 Nov 2017 13:16:15 GMT
< transfer-encoding: chunked
<

2017-11-03T14:35:01.000145Z

weirdly I have Content-Type twice, but no Content-Length

dominicm 2017-11-03T14:36:04.000656Z

I think I might see why

dominicm 2017-11-03T14:36:10.000827Z

the transfer-encoding is chunked :thinking_face:

2017-11-03T14:36:26.000128Z

I suspect that the double Content-Type is because I have defined :produces "application/pdf" for the route as well as passing it explicitly
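i.e. something like this sketch (yada's resource model as I understand it, reusing s3-component and file_location from my snippet above), letting :produces own the Content-Type instead of merging it in by hand:

(require '[yada.yada :as yada])

(def report-resource
  (yada/resource
   {:methods
    {:get
     {:produces "application/pdf"   ; yada derives Content-Type from this
      :response
      (fn [ctx]
        (let [s3-response (s3/get s3-component file_location)]
          (-> (:response ctx)
              (assoc :status 200
                     :body (:content s3-response))
              ;; no explicit Content-Type here, so it should only appear once
              (update :headers merge
                      {"Content-Disposition"
                       "attachment; filename=\"my-report.pdf\""}))))}}}))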

2017-11-03T14:36:48.000148Z

ah - is that XOR with content-length?

dominicm 2017-11-03T14:37:05.000238Z

https://www.httpwatch.com/httpgallery/chunked/ A light read of this suggests so.

dominicm 2017-11-03T14:37:17.000172Z

I might be wrong.

dominicm 2017-11-03T14:37:28.000255Z

but it would seem logical to me that they're incompatible.
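RFC 7230 is explicit about it: a sender must not send Content-Length in a message that has a Transfer-Encoding header. With chunked, the length travels per chunk instead, roughly like this on the wire (illustrative only, chunk sizes are hex):

HTTP/1.1 200 OK
Content-Type: application/pdf
Transfer-Encoding: chunked

4000
<0x4000 bytes of PDF data>
1f2e
<0x1f2e more bytes>
0

The zero-length chunk at the end is how the client knows the body is complete.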

2017-11-03T14:37:30.000516Z

sounds reasonable

2017-11-03T14:38:05.000309Z

I’m trying to stream the data straight from S3 back to the client, but S3 tells me the content-length upfront, so it is knowable

dominicm 2017-11-03T14:38:27.000452Z

https://en.wikipedia.org/wiki/Chunked_transfer_encoding#Applicability you can't put them into the trailing headers either.

2017-11-03T14:40:22.000769Z

it’s not a biggie. I just wanted people downloading documents to get a proper progress bar in their browser

dominicm 2017-11-03T14:40:49.000283Z

I don't think there's any reason to do a chunked transfer if you know the content length?

2017-11-03T14:41:28.000155Z

I don’t necessarily have all the data in memory when the response begins.

dominicm 2017-11-03T14:41:37.000197Z

I don't think that matters though

2017-11-03T14:41:56.000554Z

I also don’t know why it is doing chunked transfer - I don’t think I’ve explicitly asked for that anywhere

dominicm 2017-11-03T14:42:02.000372Z

(At least, not from a fundamental perspective anyway!)

2017-11-03T14:42:16.000222Z

no - it can’t really - I just write the data to the network when I have it

dominicm 2017-11-03T14:42:59.000323Z

Yep. It's not like you need to load it all into memory and send it as one big chunk; you just write it to the socket. The difference is that the client knows when you've finished.

dominicm 2017-11-03T14:43:37.000417Z

Someone more knowledgeable than me would need to explain how to turn off chunked transfer. It's a safe default.

2017-11-03T14:45:07.000724Z

I’m handing a type of InputStream to yada/aleph, so I can see why it might assume that was sensible.
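If I ever need the progress bar badly enough, one workaround sketch (untested, and it gives up the streaming) would be to buffer the report into a byte array, since then the length is knowable and the server shouldn't need to chunk:

(require '[clojure.java.io :as io])

(let [s3-response (s3/get s3-component file_location)
      ;; drain the S3 stream into memory; only sensible for modest file sizes
      body-bytes (with-open [in (:content s3-response)]
                   (let [out (java.io.ByteArrayOutputStream.)]
                     (io/copy in out)
                     (.toByteArray out)))]
  (-> (:response ctx)
      (assoc :status 200
             :body body-bytes)
      (update :headers merge
              {"Content-Disposition" "attachment; filename=\"my-report.pdf\""})))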