aws

http://status.aws.amazon.com/ https://www.expeditedssl.com/aws-in-plain-english
taylor 2019-03-04T00:59:18.043900Z

@cfleming GraalVM added HTTPS support a few months ago, but it looks like there’s another issue here https://github.com/cognitect-labs/aws-api/blob/master/src/cognitect/aws/util.clj#L286 preventing native image generation:

Error: unbalanced monitors: mismatch at monitorexit, 96|LoadField#lockee__5699__auto__ != 3|LoadField#lockee__5699__auto__
Call path from entry point to cognitect.aws.util$dynaload$fn__637.invoke():
	at cognitect.aws.util$dynaload$fn__637.invoke(util.clj:265)
	at clojure.lang.AFn.applyToHelper(AFn.java:152)
	at clojure.lang.Ref.applyTo(Ref.java:366)
	at aws_api_cli.core.main(Unknown Source)
	at com.oracle.svm.core.JavaMainWrapper.run(JavaMainWrapper.java:152)
IIRC there was a ticket/patch for this in Clojure? I tried with 1.9 and 1.10 but got the same error (w/diff stack traces)
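
For context, the dynaload at that line looks roughly like this (a sketch from memory, not the exact aws-api source): a delay that requires the target namespace under a lock and then resolves the var. It's the monitorenter/monitorexit bytecode emitted by the locking form that the native-image analysis rejects as unbalanced (the CLJ-1472 ticket/patch referenced just below):

;; rough sketch of the shape of cognitect.aws.util/dynaload (not the exact source)
(defonce ^:private dynalock (Object.))

(defn dynaload
  [s]
  (delay
    (let [ns (namespace s)]
      (assert ns)
      ;; `locking` expands to monitorenter/monitorexit bytecode; this is what
      ;; the native-image analysis reports as "unbalanced monitors" (CLJ-1472)
      (locking dynalock
        (require (symbol ns)))
      (let [v (resolve s)]
        (if v
          @v
          (throw (RuntimeException. (str "Var " s " is not on the classpath"))))))))

The patched version mentioned further down presumably sidesteps this by avoiding the locking form, or by building against a Clojure with the CLJ-1472 patch applied.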

taylor 2019-03-04T01:07:00.045100Z

https://github.com/ghadishayban/clojure/commit/8acb995853761bc48b62190fe7005b70da692510 this is the patch I was thinking of, and this ticket isn’t the one I remember but seems relevant https://dev.clojure.org/jira/browse/CLJ-1472

steveb8n 2019-03-04T01:16:45.046100Z

@cfleming I presume CLJS is not an option? I have a couple of CLJS/Shadow lambdas that do AWS calls and cold start in a couple of seconds

steveb8n 2019-03-04T01:18:52.047500Z

they’re not in a VPC, which dramatically improves cold start. The VPC ENI setup alone can add up to 8 secs to a cold start. I suspect a Graal-based Lambda would suffer that as well if a VPC is required in your case

1😮1☝️
taylor 2019-03-04T01:35:54.047700Z

FWIW I was able to build a native image after replacing that dynaload with a patched version, but then it fails at run-time with Cannot find resource cognitect/aws/s3/service.edn. {} (because I used S3 as an example call). I might take a deeper look at this if I have time later this week :man-shrugging:
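
For anyone following along: on the JVM that descriptor resolves off the classpath from the service’s dependency jar, whereas in a native image the same lookup returns nil unless the resource was baked into the image at build time. A quick sketch of the lookup (not aws-api’s exact code):

(require '[clojure.java.io :as io])

;; on the JVM this finds the descriptor inside the s3 service dependency jar
(io/resource "cognitect/aws/s3/service.edn")
;; => #object[java.net.URL ...]

;; inside a native image the same call returns nil unless the resource was
;; included at image build time, which is what produces "Cannot find resource ..."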

cfleming 2019-03-04T01:43:15.047900Z

@taylor Thanks! I’d appreciate any info you can provide after digging a bit.

cfleming 2019-03-04T01:44:42.049400Z

@steveb8n Yes, I’m currently using CLJS for my lambdas. But I like the new Cognitect API and AFAIK that’s not available for CLJS yet. And I would dearly love to leave all the funky async bits behind me forever (promesa helps, but it’s still not as nice as blocking)

1👍
steveb8n 2019-03-04T01:46:28.051Z

I agree with that. The new AWS lib is much nicer. I’ll be interested to hear how fast you can make the cold start. Although there’s nothing we can do about ENI; for that we just keep waiting on AWS

cfleming 2019-03-04T01:46:58.051400Z

Yeah, I’d like to switch from Dynamo to RDS but the VPC requirement has put me off.

alexmiller 2019-03-04T02:16:51.051500Z

CLJ-1472 is the relevant ticket

alexmiller 2019-03-04T02:17:22.051700Z

that service.edn is from one of the other deps that needs to be included as a resource on the classpath (not sure what graal does about stuff like that)
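
Graal does have a mechanism for this: resources can be baked into the image at build time, e.g. via native-image’s resource inclusion flag. A hedged example (the resource regexp and jar name here are guesses for illustration, not a tested invocation):

# include every aws-api service descriptor in the image
# (resource regexp and jar name are hypothetical)
native-image \
  -H:IncludeResources='cognitect/aws/.*\.edn' \
  -jar aws-api-cli.jar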

steveb8n 2019-03-04T02:29:08.055200Z

Also interesting is that Graal native images run roughly 50% slower than the JVM once warmed up (this was mentioned at ClojuTre last year), so by using Graal we’re favouring cold starts over warmed-up invocations. I suppose 50% slower is OK if the total time is a few hundred ms; users won’t notice that

viesti 2019-03-04T10:06:15.055500Z

thinking that resources looked up at runtime are out of scope of the static analysis that Graal native image generation performs

viesti 2019-03-04T10:06:48.055700Z

so Graal has to be explicitly instructed to keep those

viesti 2019-03-04T10:09:59.055900Z

Not of immediate use for graal/aws-api, but @cgrand did something a bit similar with bytecode analysis in the #portkey project, where we had an option to specify keeps for resources/classes that were loaded dynamically. I think some forms can be captured by static analysis, say .getResource("path/to/resources"), where the resource path is a compile-time literal
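
To illustrate that distinction (a sketch, not portkey’s actual analysis or aws-api’s exact code):

(require '[clojure.java.io :as io])

;; a compile-time literal path like this is visible to bytecode/static analysis
(io/resource "cognitect/aws/s3/service.edn")

;; but a path computed at runtime from the service keyword, roughly like
;; aws-api does it, can't be resolved by static analysis alone
(defn service-descriptor [service]
  (io/resource (str "cognitect/aws/" (name service) "/service.edn")))

(service-descriptor :s3)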

viesti 2019-03-04T11:33:11.059Z

hum, isn’t jaotc running on C2/HotSpot? And GraalVM throughput is probably going to get better in the future

viesti 2019-03-04T11:35:27.059400Z

was remembering this 🙂 https://www.graalvm.org/docs/reference-manual/aot-compilation/
> What is the typical performance profile on the SVM?
> Right now peak performance is a bit worse than HotSpot, but we don’t want to advertise that (and we want to fix it of course).