hi,
I am trying to upload a big file (150 MB) to S3 using an input stream,
but it seems the stream gets consumed into a byte array by cognitect.aws.util/input-stream->byte-array.
Is there a way to actually stream it?
(with-open [stream (io/input-stream tempfile)]
  (let [response (aws/invoke s3 {:op :PutObject
                                 :request {:Bucket (:bucket (config/aws))
                                           :Key bucket-key
                                           :ContentType content-type
                                           :ContentLength size
                                           :ACL "public-read"
                                           :Body stream}})]
    response))
Should I provide a ByteBuffer to the S3 client?
I’ve tried with a MappedByteBuffer, but that doesn’t work either:
(with-open [stream (java.io.FileInputStream. tempfile)]
  (let [channel  (.getChannel stream)
        buffer   (.map channel java.nio.channels.FileChannel$MapMode/READ_ONLY 0 (.size channel))
        response (aws/invoke s3 {:op :PutObject
                                 :request {:Bucket (:bucket (config/aws))
                                           :Key bucket-key
                                           :ContentType content-type
                                           :ContentLength size
                                           :ACL "public-read"
                                           :Body buffer}})]
    response))
AWS S3 expects big uploads to happen in chunks,
not as one long stream (put yourself in their shoes: you wouldn't want to hold open long, possibly slow uploads either).
See the multipart upload docs; I think that's the approach you want here, something like the sketch below ...
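A minimal sketch of how that could look with aws-api, assuming the same s3 client, bucket, key, and content-type as in your snippets; the :CreateMultipartUpload, :UploadPart, and :CompleteMultipartUpload ops are the standard S3 operations, while multipart-upload! and the 8 MB part size are just names/values I made up for illustration. Each part still ends up as a byte array, but only one part at a time, so the whole 150 MB is never in memory at once:

(require '[clojure.java.io :as io]
         '[cognitect.aws.client.api :as aws])

(defn multipart-upload!   ;; hypothetical helper, not part of aws-api
  [s3 bucket bucket-key tempfile content-type]
  (let [part-size (* 8 1024 1024) ;; 8 MB parts; S3 requires >= 5 MB for every part except the last
        upload-id (:UploadId (aws/invoke s3 {:op :CreateMultipartUpload
                                             :request {:Bucket bucket
                                                       :Key bucket-key
                                                       :ContentType content-type
                                                       :ACL "public-read"}}))]
    (with-open [in (io/input-stream tempfile)]
      (loop [part-number 1
             parts []]
        ;; readNBytes (Java 11+) blocks until part-size bytes are read or EOF is reached
        (let [chunk (.readNBytes in part-size)]
          (if (pos? (alength chunk))
            (let [etag (:ETag (aws/invoke s3 {:op :UploadPart
                                              :request {:Bucket bucket
                                                        :Key bucket-key
                                                        :UploadId upload-id
                                                        :PartNumber part-number
                                                        :Body chunk}}))]
              (recur (inc part-number)
                     (conj parts {:ETag etag :PartNumber part-number})))
            ;; all parts sent; ask S3 to stitch them together in order
            (aws/invoke s3 {:op :CompleteMultipartUpload
                            :request {:Bucket bucket
                                      :Key bucket-key
                                      :UploadId upload-id
                                      :MultipartUpload {:Parts parts}}})))))))

If a part fails partway through, you'd normally invoke :AbortMultipartUpload with the same UploadId so the orphaned parts don't keep accruing storage.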