Hi, I have a pretty 101 question lol. I’ve been through learn-onyx, etc. (great material BTW), but one thing that’s not clear is the best, or just a simple, way to set things up for a ‘typical’ deployment. When deploying an uberjar one would conceivably just set everything up then kick off the job. The onyx-app template seems like more of a generic job runner, so I’m trying to hack it down to just start up and run the configured job. Is something like this ‘right’?
(defn -main [& args]
  (let [onyx-id     (java.util.UUID/randomUUID)
        ;; read config.edn once and pull both config maps out of it
        config      (-> "config.edn" io/resource slurp read-string)
        env-config  (assoc (:env-config config) :onyx/tenancy-id onyx-id)
        peer-config (assoc (:peer-config config) :onyx/tenancy-id onyx-id)
        env         (onyx.api/start-env env-config)
        peer-group  (onyx.api/start-peer-group peer-config)
        peers       (onyx.api/start-peers 5 peer-group)
        job-id      (:job-id
                     (onyx.api/submit-job peer-config
                                          (onyx.job/register-job "basic-job" nil)))]
    (println job-id)
    #_(onyx.api/await-job-completion peer-config job-id)
    #_(assoc component :env env :peer-group peer-group
             :peers peers :onyx-id onyx-id)))
The way it used to work was by using CLI params. One set of params would essentially just spin up your peers; if you ran it with a job name it would submit the job to the peers.
this main looks like it starts your peers and then immediately runs your job
yeah, I’m trying to figure out the best approach if, say, this thing is running in a Docker container. In my case this is a ‘forever’ job, part of a CQRS-style system, so it should just come up and start processing commands
We ended up combining the startup of the peers and the job in the main function. Not perfect, but it handled the job at hand perfectly.
Wrapped in a Docker container and there's also a Marathon config template too so you can deploy via DCOS
It's a couple of years old now but you'll get the idea.
@eoliphant ^^
ah sweet, that looks exactly like what I need
thanks
np
There is a heartbeat server baked in there too so marathon could check the state of the container.
cool, this is going into kube, but will just add something comparable
The only gotcha if you submit when you start up the peers is that you should use a stable job id to make the job submission idempotent
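e.g. something like this, if I recall the API right — you can pin the job id through the job’s `:metadata` map, so resubmitting on a container restart doesn’t start a duplicate job (the UUID here is just a made-up example):

```clojure
;; Sketch: idempotent submission by fixing the job id up front.
;; If a job with this id was already submitted to the tenancy,
;; Onyx treats the resubmission as a no-op instead of a new job.
(def stable-job-id #uuid "00000000-0000-0000-0000-000000000001")

(onyx.api/submit-job peer-config
                     (assoc (onyx.job/register-job "basic-job" nil)
                            :metadata {:job-id stable-job-id}))
```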
👍
ok that makes sense
i have a cli tool myself, that has options to start peers, submit jobs, kill jobs and perform gc
it’s also exposed as a web interface
works pretty well tbh
i handle job progress monitoring on a higher level
sounds pretty cool
hi, do you normally set the consumer group.id via the consumer opts with onyx-kafka, or is it doing something internally?
@eoliphant Onyx is managing the Kafka consumer state on its own. You can set it if you want, but if I recall right, it’s not doing anything.
It’s mostly useful if you’re monitoring consumers on the kafka side
ah ok, so say separate jobs, that are reading from the same topic, onyx is taking care of offset mgmt, etc
Correct. You can set it with :kafka/group-id
in the catalog if you like.
It defaults to "onyx"
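so just to spell it out, something like this in the catalog (a sketch — the task and topic names are made up, and the other required keys depend on your onyx-kafka version):

```clojure
;; Sketch of an onyx-kafka input catalog entry with an explicit group id.
{:onyx/name :read-commands
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "commands"
 :kafka/group-id "my-service"   ; defaults to "onyx" if omitted
 :onyx/max-peers 1
 :onyx/batch-size 100}
```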
gotcha,