I have a question for the group: what’s the smoothest track into production
best options for [ cmd line | web app | event based service | …. others ] ...
web app: heroku
what do you mean by cmd line / event based service?
cmd line … some batch or scheduled task
batch: heroku worker I guess.
event based service …. something sitting behind a queue / kafka | kinesis style ingestion system
aws lambda? 🙂
it runs clojure
I read the Lambda Clojure thing … seems awkward 😞
or do you have a better experience?
since we have so many components in our backend (queues, DBs, web API, batch) we are running everything in docker containers on google container engine
before we were running on VMs, with all the ceremony of ansible and supervisorctl to keep stuff running
but I wouldn't say it's the smoothest track into production 🙂
although it's a lot better than the VM setup
You have a lot going on ... but let's think through 'hello world' in each of these options, and then I think it's easy to see how the infra dominates the code
Assuming you want to say hello to everybody at once ;-)
How do you keep your docker resources up to date?
well, event based, if you don't want to use lambda, you're going to need some compute infrastructure that watches for changes
docker resources as in, the VMs, docker, kubernetes?
or as in 'you have nginx running, now upgrade to the next version'?
The latter (nginx, JVM, ...)
yes, that is still manual work of course
as for the JVM, every time a component is built it will use the latest version of a major release (now 1.8)
I guess the platform should look after you for the other stuff
other components will get updated manually and go through the dev > staging > prod path
So you have some base images with common infra?
obviously there still is configuration management
and we try to use the official images as much as possible
That everyone 'uses' in docker speak
Makes a ton of sense
so, e.g. our app docker container is
and then a few lines to copy the uberjar and
ENTRYPOINT exec java $JVM_OPTS -jar /opt/your.jar
docker and kubernetes will watch that it keeps running
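something like this, roughly — a minimal sketch; the exact base-image tag and jar paths here are made up:

```dockerfile
# assumed base image -- the official openjdk image, Java 8 era
FROM openjdk:8-jre

# copy in the uberjar built by `lein uberjar` (path is illustrative)
COPY target/your-standalone.jar /opt/your.jar

# shell form so $JVM_OPTS gets expanded; exec replaces the shell with the JVM
# so the JVM is PID 1 and receives signals from docker/kubernetes directly
ENTRYPOINT exec java $JVM_OPTS -jar /opt/your.jar
```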
Ok so you use openjdk - easiest by far too!
and then for some stuff like queues I intend to use Google pub/sub
I wish datomic ran on Google Cloud Datastore
no need to run your own cassandra and stuff
yeah - and it’s annoying that they don’t enable storage extension points
cos it seems quite simple
indeed, I don't think the requirements on the underlying storage are complex
you just need some transactionality to store the pointer to the indexes
I would love them to provide a protocol that could be implemented
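just to sketch the shape of it — this protocol is entirely hypothetical, Datomic doesn't expose anything like it, and all names here are made up:

```clojure
;; Hypothetical storage protocol -- Datomic does not actually provide this;
;; the names are invented to illustrate how small the requirement seems.
(defprotocol Storage
  (get-val [this k]
    "Fetch the immutable segment stored under key k.")
  (put-val [this k v]
    "Store segment v under key k; segments are write-once.")
  (cas-root [this k expected new]
    "Atomically swap the root/index pointer from expected to new.
     This compare-and-swap is the only place transactionality is needed."))
```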
anyway, off-topic 🙂
how do you determine the size of the compute resource on the Google engine?
we knew what our app was consuming in terms of resources on a VM
so, we took that, increased it a bit and spun up 4 hosts 🙂
you can increase the resources by updating the spec of the 'instance group'
so, you shouldn't be tied to what you have chosen
I think they are also working on a 'mixed' cluster with e.g. CPU heavy and MEM heavy VMs
and you can tie certain components of your app to a specific type of host
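for the mixed-cluster idea, the GKE mechanism is node pools — a hedged sketch, all names and sizes here are placeholders:

```shell
# add a second, memory-heavy node pool to an existing GKE cluster
# (cluster/pool names and machine type are illustrative)
gcloud container node-pools create highmem-pool \
  --cluster=my-cluster \
  --machine-type=n1-highmem-4 \
  --num-nodes=2
```

then you pin a component to that pool with a nodeSelector on node labels in its pod spec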
and I guess it attaches the equivalent of EBS to the instances so that compute is swappable?
well, the thing is, you should not use the default disks attached to the instances
kubernetes will manage volumes for you. you just specify which container needs which volume mounted where and it will do that on the right host where the container is running
you can use all kinds of volumes, but google persistent disk is a good choice for not losing data on cluster upgrades
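the spec looks roughly like this — a sketch, all names are placeholders:

```yaml
# pod spec says which volume goes where; kubernetes mounts it on
# whichever node ends up running the pod
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: gcr.io/my-project/my-app:latest
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      gcePersistentDisk:
        pdName: my-data-disk
        fsType: ext4
```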
side question - on something else I’m looking at - do you know how easy it is to resize the volumes? (it’s a pain on EBS)
I have found a decent example from the uswitch guys to implement AWS Lambda
But when you see their example you see the yuckiness of the AWS Lambda API
I mean holy crap, you have to return the status via this bullshit JSON
I think this example could be wrapped nicer (I will make a PR for that but it doesn’t hide the fact)
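for context, the raw JVM handler shape looks something like this — a sketch only, assuming the aws-lambda-java-core and org.clojure/data.json deps are on the classpath; the namespace and response fields are illustrative:

```clojure
;; Lambda hands you raw streams via RequestStreamHandler; you parse JSON
;; in and write JSON out (status fields included) by hand -- hence the yuck.
(ns example.lambda
  (:require [clojure.data.json :as json]
            [clojure.java.io :as io])
  (:gen-class
   :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler]))

(defn -handleRequest [_ in out _ctx]
  (let [event (json/read (io/reader in))]
    (with-open [w (io/writer out)]
      (json/write {:statusCode 200
                   :body (str "hello, " (get event "name" "world"))}
                  w))))
```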
resizing a disk is easy, but only possible when the resulting disk is bigger than the old one
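a hedged sketch of what that looks like — disk name and size are placeholders:

```shell
# grow (never shrink) a GCE persistent disk
gcloud compute disks resize my-data-disk --size=200GB

# the filesystem on it still has to be grown afterwards, e.g. for ext4:
# resize2fs /dev/sdb
```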