Hey folks 👋 We're getting an 'S3 request failed' error trying to deploy a snapshot version of a Crux module to Clojars. Other modules (and indeed the jar/pom files from this module) seem to be uploading fine - anyone know whether it's something we're doing wrong, or is it a case of waiting it out for a bit?
Sending juxt/crux-http-server/maven-metadata.xml (1k) to https://repo.clojars.org/
Could not transfer metadata juxt:crux-http-server/maven-metadata.xml from/to clojars (https://repo.clojars.org/): Access denied to: https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml, ReasonPhrase: Forbidden - S3 request failed.
Failed to deploy metadata: Could not transfer metadata juxt:crux-http-server/maven-metadata.xml from/to clojars (https://repo.clojars.org/): Access denied to: https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml, ReasonPhrase: Forbidden - S3 request failed
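For reference, our deploy config is roughly this (a sketch - the version string and env var names here are placeholders rather than our actual setup):

;; project.clj (sketch)
(defproject juxt/crux-http-server "x.y.z-SNAPSHOT"
  :deploy-repositories [["clojars" {:url "https://repo.clojars.org/"
                                    :username :env/clojars_username
                                    :password :env/clojars_password
                                    :sign-releases false}]])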
This might be an intermittent failure that was cached by Fastly - that error looks like your client is trying to read the file. What do you see if you visit https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml in a browser?
I see the correct file, but I would be hitting a different Fastly node.
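If you want to compare, something like this from a REPL will show which node served you - this assumes Fastly's usual X-Served-By/X-Cache headers, which is my assumption about the current CDN setup:

(let [conn (.openConnection (java.net.URL. "https://repo.clojars.org/juxt/crux-http-server/maven-metadata.xml"))]
  ;; Fastly typically identifies the serving cache node and hit/miss status here
  (println "X-Served-By:" (.getHeaderField conn "X-Served-By"))
  (println "X-Cache:" (.getHeaderField conn "X-Cache")))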
mm, I see what looks like a valid maven-metadata.xml
I'll give it another go, it's been a couple of hours
ok, let me know how it goes. I suspected this was a read issue because we only write to S3 at the very end, but I realize now that "the very end" is when you upload the maven-metadata.xml file - that's the signal to finalize the deploy. So the S3 failure could be on any artifact that is part of the deploy, not just the metadata file (not that it matters here).
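In rough pseudocode - the names here are illustrative, not the actual server code - the flow is something like:

(defn handle-upload [deploy-id path content]
  ;; artifacts are staged as they arrive during the deploy...
  (stage! deploy-id path content)
  ;; ...and the maven-metadata.xml upload finalizes it, pushing *every*
  ;; staged artifact to S3 - so an "S3 request failed" at this point can
  ;; come from any file in the deploy
  (when (clojure.string/ends-with? path "maven-metadata.xml")
    (doseq [[p c] (staged-artifacts deploy-id)]
      (s3-put! p c))))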
Ah, ok, thanks
I've just tried to redeploy a previous version (as the same snapshot) and that went through ok, but coming back to the current version still fails. One difference is that our JAR has got bigger - it's gone from around 2MB to around 6MB. Do you know if there's a file size limit? (I'll look into the issue of the larger JAR separately 🙂)
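(For the JAR digging, I'm listing the biggest entries with something like this - the path is just wherever our built jar lands, so illustrative:)

(with-open [zf (java.util.zip.ZipFile. "target/crux-http-server.jar")]
  (->> (enumeration-seq (.entries zf))
       (sort-by #(- (.getSize %)))   ;; biggest entries first
       (take 10)
       (mapv (juxt #(.getName %) #(.getSize %)))))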
There is a limit, but I'm not sure what it is at the moment. It's enforced by nginx though, so we should see a failure earlier in that case. I'll take a look at the logs to see if there's anything more useful there
Thanks 🙏
Well, nothing helpful there, just :message "S3 request failed" - no exception logged and no exception sent to Sentry :(
Ah, thanks for checking 🙏
I'm adding better error reporting now, should be just a few minutes
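Roughly, the change wraps the S3 call like this - function names are illustrative rather than the actual code, and log here assumes something like clojure.tools.logging:

(try
  (s3-put! path content)
  (catch Exception e
    ;; previously only the :message string made it into the logs; now the
    ;; exception itself gets logged and reported before rethrowing
    (log/error e "S3 request failed" {:path path})
    (report-to-sentry! e)
    (throw e)))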
File size is looking like the likely culprit at the moment - it's consistently fine with the smaller (2MB) JAR and consistently failing with the larger (6MB) JAR
I just deployed a change that should log the exception, so let me know if you still see the issue after figuring out the jar size and I'll take a look