Hi! I have a job with an input task of type :onyx.plugin.datomic/read-log. I handle exceptions in my tasks and I also handle exceptions in the lifecycles. In theory no exception should go unhandled, but the fact remains: transactions are re-read from :in and processed again. Apart from exceptions, what else could cause a job to redo the entire processing tree?
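For reference, a minimal sketch of handling exceptions in a lifecycle, assuming the standard :lifecycle/handle-exception hook (the namespace and logging here are placeholders, not the poster's code) — the return value is what decides whether the task rewinds and segments get re-read from :in:

```clojure
(ns my.app.lifecycles)

;; Returning :restart reboots the task from its last checkpoint, which
;; re-reads segments from the input and reprocesses them. Returning :kill
;; stops the job instead, and :defer passes the decision down the chain.
(defn handle-exception [event lifecycle lifecycle-name throwable]
  (println "Exception in" lifecycle-name ":" (.getMessage throwable))
  :kill)

(def exception-calls
  {:lifecycle/handle-exception handle-exception})

;; Attached to every task via the job's :lifecycles vector.
(def lifecycles
  [{:lifecycle/task :all
    :lifecycle/calls :my.app.lifecycles/exception-calls}])
```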
Hi all - I'm just getting started with onyx and had a couple questions. What's the difference between shutdown-peers and shutdown-peer-group, and does it matter what order they are called in?
@markbastian shutdown-peers shuts down the individual peers; shutdown-peer-group shuts down all of the resources utilised by the peers. You should shut down the peers first, then the peer group.
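A minimal sketch of that order, assuming peer-config is defined elsewhere:

```clojure
(require '[onyx.api])

;; Start the shared peer-group resources first, then the virtual peers.
(def peer-group (onyx.api/start-peer-group peer-config))
(def v-peers (onyx.api/start-peers 10 peer-group))

;; Shutdown mirrors startup in reverse: peers first, then the peer group.
(onyx.api/shutdown-peers v-peers)
(onyx.api/shutdown-peer-group peer-group)
```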
@lellis I’ll get back to you on that later tonight.
Another question. I'm working with the onyx-starter project and I can't seem to re-run the sample job with the sample input segments. For context, the sample takes a few sentences, splits the strings into words, changes their case, and splits the results across two keys on the output. When I run the input through the system once it works. The second time it returns empty vectors. When I tail the log it says something along the lines of:
Job ID 03d5361d-46fb-f82b-8829-e3284bf135ae has been submitted with tenancy ID 46fa05f6-6e73-41a0-ba56-0e7d0e19d4ae, but received no virtual peers to start its execution.
Tasks each require at least one peer to be started, and may require more if :onyx/n-peers or :onyx/min-peers is set.
If you were expecting your job to start now, either start more virtual peers or choose a different job scheduler, or wait for existing jobs to finish.
I've done some searching on this and can't seem to figure out what to do. I tried increasing my virtual peers from 10 to 100 and it immediately gave me this error. Is there a way to signal the job as being complete? Related: if I have a bunch of segments that I'm feeding into the channel and it gives a result, and then later on I feed another round of segments into the same process, is it "right" in Onyx to make that one streaming job, or is it two different jobs?
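For reference, a rough sketch of the knobs that error message points at, assuming a peer-config along the lines of the starter template's (tenancy-id, workflow, and peer-group are placeholders defined elsewhere). One possibility is that a previously submitted job never completed: with :onyx.job-scheduler/greedy it would keep holding all the virtual peers, so a re-submitted job is told it received none.

```clojure
;; Each distinct task in the workflow needs at least one virtual peer;
;; :onyx/n-peers or :onyx/min-peers on a task raises that requirement.
(def n-required-peers (count (set (mapcat identity workflow))))

(def peer-config
  {:onyx/tenancy-id tenancy-id
   :zookeeper/address "127.0.0.1:2188"
   ;; :onyx.job-scheduler/greedy lets one running job take every peer;
   ;; :onyx.job-scheduler/balanced shares peers across submitted jobs.
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"})

;; peer-group comes from (onyx.api/start-peer-group peer-config).
(def v-peers (onyx.api/start-peers n-required-peers peer-group))
```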
I think we got it figured out. We just needed to close the channels.
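For the archives, a minimal sketch of that fix, assuming the core.async input plugin the starter template uses (input-chan, input-segments, peer-config, and job-id are placeholders): closing the input channel is what marks the input as exhausted, so the job can complete and release its peers for the next submission.

```clojure
(require '[clojure.core.async :as async]
         '[onyx.api])

;; Feed the segments in, then close the channel so the input task knows
;; it has seen everything and the job can finish.
(doseq [segment input-segments]
  (async/>!! input-chan segment))
(async/close! input-chan)

;; Optionally block until the job is done before submitting the next one.
(onyx.api/await-job-completion peer-config job-id)
```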