onyx

FYI: alternative Onyx :onyx: chat is at <https://gitter.im/onyx-platform/onyx> ; log can be found at <https://clojurians-log.clojureverse.org/onyx/index.html>
2018-06-12T07:42:31.000083Z

on that note, do people run the aeron driver in a separate docker container, or as a separate process within the same container?

2018-06-12T07:48:42.000483Z

https://github.com/onyx-platform/onyx-template/tree/0.13.x/src/leiningen/new/onyx_app runs it as a separate process within the same container, but i am not sure whether that's the best practice.. ?

2018-06-12T11:10:23.000400Z

we run it in the same container, i’ve tried running it in a separate container but ran into issues

2018-06-12T11:10:32.000332Z

i don’t remember the specifics though

mccraigmccraig 2018-06-12T12:18:39.000337Z

@lmergen separate process in the same container, using https://github.com/just-containers/s6-overlay

👍 2
2018-06-12T12:19:25.000354Z

@mccraigmccraig first time i'm seeing s6, looks great!

mccraigmccraig 2018-06-12T12:21:10.000075Z

from the comments in my onyx peer Dockerfile it was originally from @gardnervickers ... i'm guessing there is an onyx example Dockerfile somewhere ?

2018-06-12T12:27:06.000141Z

looks like the onyx-template also uses s6, but not the docker image https://github.com/onyx-platform/onyx-template/blob/0.13.x/src/leiningen/new/onyx_app/scripts/finish_media_driver.sh

Travis 2018-06-12T12:58:04.000421Z

I have done it both ways, in the same container and a separate container in a pod

gardnervickers 2018-06-12T14:14:37.000081Z

It’s recommended that you run a single process per docker container. For the Peer/MediaDriver case, you want to set up communication over shared memory (`/dev/shm`).

gardnervickers 2018-06-12T14:15:25.000131Z

S6 is just a process monitor, from what I recall, to make sure child processes are correctly reaped.

gardnervickers 2018-06-12T14:17:00.000629Z

Back when those scripts were created it wasn’t easy/possible to spawn two containers with shared memory between them, but now the examples would be much better served by using /dev/shm on the host and mounting it into two separate containers.
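a minimal sketch of that layout with plain `docker run` — the image names here are placeholders, not official Onyx images:

```shell
# Bind-mount the host's /dev/shm into both containers so the Aeron media
# driver and the Onyx peer exchange data over the same shared-memory files.
docker run -d --name aeron-driver -v /dev/shm:/dev/shm my-aeron-media-driver
docker run -d --name onyx-peer    -v /dev/shm:/dev/shm my-onyx-peer
```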

2018-06-12T14:20:43.000340Z

so that would probably mean creating a tmpfs volume in docker and sharing it between multiple containers?

2018-06-12T14:20:53.000198Z

(that's what i was thinking about)

2018-06-12T14:20:57.000148Z

and mounting it to /dev/shm

gardnervickers 2018-06-12T14:21:10.000618Z

Yea that would be even better

2018-06-12T14:22:03.000022Z

because that would also get rid of the whole `--shm-size` issue afaict
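for context: docker gives each container a private `/dev/shm` of only 64 MB by default, which is too small for Aeron, hence the flag. a sketch of the two approaches (image name is a placeholder):

```shell
# With the media driver inside the container, the private /dev/shm
# must be enlarged explicitly past the 64 MB default:
docker run --shm-size=2g my-onyx-peer

# Bind-mounting the host's /dev/shm instead makes the flag unnecessary,
# since the container then uses the host's (much larger) tmpfs:
docker run -v /dev/shm:/dev/shm my-onyx-peer
```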

gardnervickers 2018-06-12T14:22:11.000697Z

Exactly :simple_smile:

gardnervickers 2018-06-12T14:22:43.000688Z

It also more directly maps to docker orchestrator abstractions like Kubernetes "Memory" volumes
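a sketch of how that looks in Kubernetes — a memory-backed `emptyDir` shared by two containers in one pod (container images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: onyx-peer
spec:
  volumes:
    - name: aeron-shm
      emptyDir:
        medium: Memory        # tmpfs-backed volume, shared within the pod
  containers:
    - name: media-driver
      image: my-aeron-media-driver   # placeholder image
      volumeMounts:
        - name: aeron-shm
          mountPath: /dev/shm
    - name: peer
      image: my-onyx-peer            # placeholder image
      volumeMounts:
        - name: aeron-shm
          mountPath: /dev/shm
```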

2018-06-12T14:23:00.000411Z

yes

2018-06-12T14:23:11.000824Z

what about performance? i assume it doesn't suffer?

2018-06-12T14:23:38.000956Z

intuitively you would think there's a performance hit, but i can't think of a reason for one

gardnervickers 2018-06-12T14:24:04.000233Z

If any, I doubt it’s enough to notice

2018-06-12T14:25:24.000024Z

i just read in the docs that you cannot share a tmpfs volume between docker containers

gardnervickers 2018-06-12T14:26:18.000674Z

Alright, yea, then just mounting the host's /dev/shm should be fine
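the same conclusion in docker-compose form — since `tmpfs:` mounts are private to each container, the shared path is a bind mount of the host's /dev/shm (image names are placeholders):

```yaml
services:
  media-driver:
    image: my-aeron-media-driver
    volumes:
      - /dev/shm:/dev/shm
  peer:
    image: my-onyx-peer
    volumes:
      - /dev/shm:/dev/shm
```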

2018-06-12T14:26:44.000043Z

right