Probably a decently common problem
I have some images in s3
and I want to send them to the user
right now I have a basic handler that finds the right file
and gets the contents as an input stream
does anyone know why this is so slow and how can I improve it?
Why are you proxying the image? Wouldn't it be easier to just send the user a URL to the resource?
https://stackoverflow.com/a/33605888 that should be helpful.
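The core idea behind presigned URLs: the server signs the object key plus an expiry with a secret it never shares, hands the resulting URL to the client, and the client fetches the object directly until the link expires. This is a minimal stdlib sketch of that idea only, not AWS's actual SigV4 signing, and all names here are made up:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # stays on the server, never sent to clients

def presign(key: str, expires_in: int = 3600) -> str:
    """Build a URL carrying the key, an expiry timestamp, and an HMAC over both."""
    expires = int(time.time()) + expires_in
    msg = f"{key}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"/images/{key}?expires={expires}&sig={sig}"

def verify(key: str, expires: int, sig: str) -> bool:
    """Reject links that are expired or whose signature doesn't match."""
    if time.time() > expires:
        return False
    msg = f"{key}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Real S3 presigned URLs come from the SDK (`generate_presigned_url` in boto3, `S3Presigner` in the Java SDK v2), but the contract is the same: your handler returns a short-lived URL instead of streaming the bytes itself.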
I had no clue about the presigned url thing
It seems like a nightmarish tunnel of CORS so far
@emccue You could also serve the images through the CloudFront CDN backed by S3. It can include auth and everything, and then your webservers aren't burdened with all that IO / memory usage.
Sorry for hijacking the above discussion, but it is close to a problem I have been thinking about. How about the other way around, when you want the user to be able to save images to S3 and store information about that image in your DB (user abc112 has uploaded image cat-foobar.jpg)? One option is that the frontend client stores the image directly to S3 and then reports to the backend API that an image was stored with name X. But these two operations are not atomic: if the image upload succeeds and the backend API call fails, I have an image in S3 that has no DB record. Any thoughts on how this could be solved?
One way is to have them upload to an s3 bucket that gets autowiped after a day (or something) (There are lifecycle policies that enable this, IIRC). Then if you get the acknowledgement that they successfully uploaded, move that object to your real storage bucket.
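For the auto-wipe part, the lifecycle rule on the staging bucket would look something like this (rule ID and the one-day window are just examples; this is the shape S3's `put-bucket-lifecycle-configuration` accepts):

```json
{
  "Rules": [
    {
      "ID": "expire-staged-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```

Note that the "move" to the real bucket is a copy plus delete under the hood, since S3 has no native move operation.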
Hello everyone,
I'm using vector-based routes. How can I apply different (non-shared) interceptors to :get and :post for the same route?
for example: ["/company" {:get company/get-all :post [:create-company company/create!]} ^:interceptors [auth]]
Now how can I add a new interceptor just for :post, e.g. validate-new-company?
I assume the interceptors defined as metadata are common to :get and :post for /company, but I only want validate-new-company applied to :post.
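I might be misremembering Pedestal's terse route syntax, but I believe you can attach `^:interceptors` metadata inside the verb's own vector, so that auth stays common to the route while validate-new-company only wraps :post, roughly like:

```clojure
["/company" {:get  company/get-all
             :post [:create-company
                    ^:interceptors [validate-new-company]
                    company/create!]}
 ^:interceptors [auth]]
```

Worth double-checking against the Pedestal routing docs; the terse format is picky about where metadata goes.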
I am gonna be doing this too
My current plan to handle this is to somehow have the user's upload be keyed under a UUID
and then who cares
and I can run a background job to clean it up later, if my bill goes too high or something
(also maybe a good trigger point to have this not be empty)
(considering it's the third result on Google, right after example repos)
You can use an s3 lambda trigger to call your API / put a message onto SQS that adds that record to your database.
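The event the Lambda receives from S3 has a well-known shape (`Records[].s3.bucket.name` / `Records[].s3.object.key`, with the key URL-encoded). A sketch of the record-extraction step, with the actual DB/API call left as a hypothetical stub:

```python
import urllib.parse

def handle_s3_event(event: dict) -> list[dict]:
    """Turn an S3 ObjectCreated notification into DB-ready rows.

    `event` follows the standard S3 event-notification shape; object
    keys arrive URL-encoded, so they are decoded here.
    """
    rows = []
    for rec in event.get("Records", []):
        rows.append({
            "bucket": rec["s3"]["bucket"]["name"],
            "key": urllib.parse.unquote_plus(rec["s3"]["object"]["key"]),
            "size": rec["s3"]["object"].get("size"),
        })
    return rows

# A Lambda entry point would then just be:
# def lambda_handler(event, context):
#     for row in handle_s3_event(event):
#         insert_image_row(row)  # hypothetical DB call / API request
```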
Good ideas, thanks!
I found something when using Pedestal. The first request (when I boot up the web server) always takes a long time (~1 second); after that it's 20-40ms 🙂 What could be the cause?
This is on localhost without TLS.
What is the cost of this approach vs S3?
I've never used cloudfront CDN before
How do you want to measure it?
Time? Dollars? Complexity? Performance?
somewhat confused why json params would be a map of string->string
shouldn't it be an "Any" just like edn?
since [1, 2, 3]
is valid json
and so is {"a": 10}
is this just a docs mistake?
It should be any? JSON also allows values like 42, true, false, "s"
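Right, per the spec (RFC 8259) a JSON document's top level can be any JSON value, not just an object, which is easy to confirm with Python's stdlib parser:

```python
import json

# Every one of these is a complete, valid JSON document,
# so a generic json-params schema should accept Any, not just a string map.
docs = ['[1, 2, 3]', '{"a": 10}', '42', 'true', 'false', '"s"', 'null']
parsed = [json.loads(d) for d in docs]
print(parsed)  # [[1, 2, 3], {'a': 10}, 42, True, False, 's', None]
```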