onyx

FYI: alternative Onyx :onyx: chat is at <https://gitter.im/onyx-platform/onyx> ; log can be found at <https://clojurians-log.clojureverse.org/onyx/index.html>
lucasbradstreet 2018-08-08T03:44:18.000105Z

Yup. Term buffer needs to be a small fraction of the amount of available shm space
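(For anyone sizing this later: a rough back-of-the-envelope sketch, assuming Aeron's usual layout of roughly three term buffers plus a small metadata trailer per publication/image log buffer; the function name and numbers below are illustrative, not an Onyx or Aeron API.)

```clojure
;; Rough /dev/shm sizing sketch. Assumption: each Aeron log buffer takes
;; about (3 * term-buffer-length) bytes plus a small metadata trailer,
;; and one such log buffer exists per publication/image.
(defn approx-shm-bytes
  [term-buffer-bytes n-log-buffers]
  (* n-log-buffers (+ (* 3 term-buffer-bytes) (* 4 1024))))

;; e.g. 2 MiB term buffers across 64 publications/images
(approx-shm-bytes (* 2 1024 1024) 64) ;; => ~384 MiB; keep --shm-size well above this
```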

lucasbradstreet 2018-08-08T03:44:50.000132Z

@parameme we will not be open sourcing pyroclast

1😞
parameme 2018-08-08T03:46:35.000175Z

Fair enough @lucasbradstreet, @michaeldrogalis - it was worth a polite question at least 😉

lucasbradstreet 2018-08-08T03:46:40.000051Z

@sreekanth thanks

parameme 2018-08-08T03:46:47.000067Z

(At least we know it CAN be done)

lucasbradstreet 2018-08-08T03:47:03.000028Z

That’s true :)

lucasbradstreet 2018-08-08T03:47:10.000054Z

There’s still Onyx! ;)

3😍
lucasbradstreet 2018-08-08T03:48:30.000079Z

@dave.dixon I’m guessing the log gc dropped the job from the killed-jobs vector, so it was probably just misleading us

lucasbradstreet 2018-08-08T03:50:20.000056Z

@dave.dixon I think the most likely situation is there’s a kill-job log entry being emitted as a result of an exception (not one that I’ve seen a log message for, since those have hit the other code path thus far), or maybe there’s a bug in the log gc
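(Hedged aside for context: the "log gc" being discussed is Onyx's coordination-log garbage collection, which compacts the log in ZooKeeper and can prune entries for older killed jobs; that would explain a job vanishing from the killed-jobs view. `peer-config` below stands in for your own peer configuration map.)

```clojure
;; Sketch of invoking Onyx's log garbage collection, which compresses
;; replicas and deletes old log entries from ZooKeeper.
(require '[onyx.api])

(defn gc-log!
  [peer-config]
  (onyx.api/gc peer-config))
```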

jasonbell 2018-08-08T07:11:29.000314Z

@kenny I covered the whole Aeron term buffer/shm-size thing in a talk at ClojureX last year. Looks like the website is a bit poorly right now. https://skillsmatter.com/skillscasts/10939-how-i-bled-all-over-onyx

rustam.gilaztdinov 2018-08-08T16:33:02.000013Z

Hello, @jasonbell! You pointed out in this video that huge (i.e., multi-MB) messages are not good for Onyx. I’ve been thinking about processing images with Onyx -- is that a bad idea?

jasonbell 2018-08-08T16:45:02.000299Z

I don't see any problem with processing images. And remember that video is old now and things have moved on. I'd try it first and then make a call.
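(A minimal sketch of one way to keep segments small when doing image work with Onyx: pass a reference, such as a path or URL, through the workflow and load the bytes inside the task function. The namespace, segment keys, and task name below are hypothetical; only the catalog map shape follows the standard Onyx format.)

```clojure
(ns image-pipeline.tasks
  (:require [clojure.java.io :as io]))

(defn process-image
  "Loads the image referenced by :image-path and emits a small summary segment."
  [{:keys [image-path] :as segment}]
  (with-open [in (io/input-stream image-path)]
    (let [data (.readAllBytes in)]
      ;; Replace the byte count with real image work (resize, classify, ...)
      (assoc (dissoc segment :image-path) :byte-count (alength data)))))

;; Catalog entry wiring the function into a workflow as a :function task.
(def process-image-task
  {:onyx/name :process-image
   :onyx/fn :image-pipeline.tasks/process-image
   :onyx/type :function
   :onyx/batch-size 10
   :onyx/doc "Loads an image by reference and emits a small summary segment"})
```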

sparkofreason 2018-08-08T16:57:23.000459Z

@lucasbradstreet Had the same thing, saved all the peer logs this time, let me know if there's something I should look for. I'll restart the cluster at some point and remove the job GC stuff.

sparkofreason 2018-08-08T17:06:08.000465Z

Actually, looking through the logs this time, it does appear that Onyx recovered from a transient S3 DNS issue after a flurry of exceptions. The shutdown was due to an Aeron timeout, so my health check isn't working right.