question, I’m prototyping out some ‘forward thinking’ on an architecture. I’ve played around with Onyx for some more traditional ‘move data around’ stuff. But I just noticed the http ‘adapter’ or whatever. I’ve been building out a command/event-based arch where I have microservices that take the commands, do their thing, and persist reified transactions in Datomic with the appropriate event tag and metadata, and Onyx pulls them out of the back end. So now I’m wondering if I could just have Onyx accept the commands via http directly from browsers, etc.
@eoliphant Yup, we've seen this done a few times. Can ask @robert-stuttaford about it in particular.
I'm running into a problem running multiple nodes in a cluster. The current behavior is as follows: I'll have the job running in a Docker instance with some tenancy ID, using a hosted ZK cluster, and the media driver in another process. The job will be running correctly (I can monitor its output), so I'll go to start another container with the job. I use the same config and startup (same ID), and once it starts it locks up (I only know this because the output source stops receiving items). Eventually, after a few minutes, the job on both containers dies
Any ideas?
I’ll ping him, @michaeldrogalis, thanks. Yeah, I read his older blog post about what they were doing, but I got the impression that Onyx was pulling stuff out of the ‘back’ lol of Datomic
this is the problem with the clojure ecosystem… The Crisis of Too Much Cool Stuff lol
@innit29 do you have a copy of the onyx.log? We generally redirect output to stdout when running inside Docker
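One way to get that stdout redirection is to link the log file to the container's stdout from an entrypoint script before starting the peer, so `docker logs` picks it up. This is just a sketch; the log path and launch command below are assumptions, not Onyx defaults:

```shell
#!/bin/sh
# Assumed entrypoint script for the peer container.
# Link the log file (path is an assumption) to the container's stdout
# so `docker logs` shows whatever gets written to it.
ln -sf /dev/stdout /app/onyx.log

# Hypothetical peer launch command; replace with your own uberjar/main.
exec java -cp /app/peer.jar mycorp.launch_peers
```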
I do have a very verbose log from the containers. What's the best way to share that out?
private gist works, but whatever you want to do
I've got the log level set to INFO, but it doesn't log anything useful. However, setting the log level to debug creates hundreds of MBs. Do you want the debug logs?
debug logs won’t likely be helpful
I’m surprised there isn’t anything interesting in info. I’d expect to at least see peers timing out
The peers eventually time out, which can be seen in the debug log, but nothing shows up in the info log. Maybe I've got logging misconfigured? Here are the logs: https://gist.github.com/dcrouch26/e9246600b1a01ad32dee19c2193fd2b5 https://gist.github.com/dcrouch26/89becf285f6b94082524fedc1cfac308
Shared memory gets pretty low, so that might be a thing
@daniel-tcgplayer k, there could be a few problems here:
1. you will probably want to increase the shm-size of the container.
2. it seems like the job is being killed for some reason, see https://gist.github.com/dcrouch26/e9246600b1a01ad32dee19c2193fd2b5#file-gistfile1-txt-L40
(unless it’s completing, but it doesn’t seem like that)
3. it doesn’t look like you’re getting the onyx logging at all. Seems like logging is misconfigured - there should be a lot of other info level onyx logging.
3 will help you decide on 2. re: 2, you probably shouldn’t be using await-job-completion to decide when to shut down peers
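For point 1, a minimal sketch of what bumping the shared memory looks like when launching the container (the image name and size here are placeholders, not values from this thread; Docker's default /dev/shm is 64m, which is often too small for Aeron's term buffers):

```shell
# Give the container a larger /dev/shm for Aeron.
# "my-onyx-peer" is a hypothetical image name; tune the size to your
# Aeron buffer settings, peer count, and headroom for peer reboots.
docker run --shm-size=512m my-onyx-peer
```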
Thanks Lucas! I'll look into getting my onyx logging back enabled, something did seem fishy. And I'll get that shared memory up. I'll post back when I've got all that
the shared memory issue is probably due to peers rebooting and the shm space not being reclaimed quickly enough. You’ll need a little extra to handle those reboots.
the requirements do go up as you go multi-node, since the peers all need connections to each other