@mihaelkonjevic when i navigate to one of my counter detail pages (it's just another component), i see that the subscription first returns a nil value for my counter. then the datasource's processor method is called, and everything updates as expected. I seem to remember that one of the design goals of the dataloader was to avoid such corner cases. So i guess i'm again doing something wrong?
what i'm expecting is: the counter detail component is only shown when the data is ready
three tiered counter app going on here =) we don't want any shenanigans like a loading spinner !
@carkh for each datasource you have a -meta subscription, so if you registered the counter under the :counter datasource, you can subscribe to the :counter-meta datasource, which holds the information about the datasource - there you can check if the datasource is :pending or :loaded. As for the datasource's subscription being nil, that is a correct result at that point in time - since the data is not loaded yet, the datasource will be in the :pending state.
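A minimal sketch of that meta check in a component's render fn - assuming the usual keechma.ui-component/subscription access, and assuming the -meta map keeps its loading state under :status (the component's :subscription-deps wiring is omitted here):

```clojure
(ns my-app.ui.counter-detail
  (:require [keechma.ui-component :as ui]))

;; Render the counter detail only once the :counter datasource is :loaded.
;; The :status key inside the -meta subscription is an assumption.
(defn render [ctx]
  (let [counter      @(ui/subscription ctx :counter)
        counter-meta @(ui/subscription ctx :counter-meta)]
    (if (= :loaded (:status counter-meta))
      [:div.counter-detail (str "Counter: " counter)]
      nil))) ;; render nothing (or a placeholder) while still :pending
```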
ok so do this manually
it's ok
The dataloader loads data asynchronously even if you return the data synchronously from the loader fn - most datasources are going to be async anyway. But it should be returned within one Reagent render cycle
the reason for the async behavior is that the dataloader uses channels under the hood to orchestrate the promises
ok good thanks.... one more question
pipeline controllers do not have access to the app-db-atom ?
like regular controllers
just the value itself ?
i'm bombarding you with questions, feel free to send me away =)
don’t worry about it, I enjoy answering questions about keechma 🙂
so, normal pipeline functions don’t have access to the atom, but there is an escape hatch if you really need it. What is your use case?
i said earlier that i had made an app-db logging controller
it's a regular controller
i was trying to perfect this and debounce the logging because i get many logs of the app-db
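Roughly, that logger boils down to something like the sketch below - the add-watch lives in a regular controller's handler fn, which (unlike pipeline fns) receives the app-db atom itself. The helper fn names are hypothetical and goog.functions/debounce is just one way to do the debouncing:

```clojure
(ns my-app.controllers.app-db-logger
  (:require [goog.functions :as gfns]))

;; Debounced app-db logger: log at most once per 250ms burst of changes.
(defn start-logging! [app-db-atom]
  (let [log! (gfns/debounce #(js/console.log "app-db:" %) 250)]
    (add-watch app-db-atom ::logger
               (fn [_ _ _ new-state] (log! new-state)))))

;; Call this when the controller stops, so the watch doesn't leak.
(defn stop-logging! [app-db-atom]
  (remove-watch app-db-atom ::logger))
```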
ok, so there is something that you can use for that use case - tasks
it’s not documented yet but it’s pretty battle-tested
so i want to try using a pipeline controller to make use of the exclusive and delay-pipeline things
we are using it for animations and some other advanced needs, like event handlers inside the pipeline controllers. The idea behind tasks is that they can be inserted into the pipeline and (potentially) block the pipeline until they’re done. The task processor fn will run on every signal from the producer fn, and there is a built-in task that can be used to listen to app-db changes
(though i could do this from a regular controller i wanted to go "modern")
mhh that's in the toolbox ?
We still use regular controllers when there is a need, so pipeline controllers just solve one specific use case
Yeah, tasks are in the toolbox
i'll have to investigate this
Let me do a real quick implementation of the logger with tasks
well i'm doing this for training purposes more than actual need... don't go out of your way for that
i must say until now every worry or doubt about keechma has been squashed, but i'm glad i started with a silly app rather than the real thing
so the app-db-change-producer already debounces the changes, so it should be called less frequently than if you just used add-watch
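A very rough sketch of what such a tasks-based logger could look like. Only the names non-blocking-task! and app-db-change-producer come from the conversation; their namespace, the argument order, and the processor-fn signature below are assumptions, so check the keechma-toolbox source before relying on this:

```clojure
(ns my-app.controllers.app-db-logger
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]
            [keechma.toolbox.tasks :as t]))

(def app-db-logger
  (pp-controller/constructor
   (constantly true)
   {:start (pipeline! [value app-db]
             ;; non-blocking: the pipeline continues while the producer keeps
             ;; feeding (already debounced) app-db changes to the processor fn
             (t/non-blocking-task! t/app-db-change-producer ::log-app-db
                                   (fn [_ app-db]
                                     (js/console.log "app-db:" app-db)
                                     app-db)))}))
```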
reading it
We use tasks in cases where you want to have a subprocess running and potentially updating app-db multiple times before releasing and letting the pipeline continue. They were created because we needed a way to do state-based animations
for instance, the wait-dataloader-pipeline! function, which you can use inside a pipeline to wait until the dataloader is done loading all datasources, uses tasks under the hood
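As a sketch, that usage might look like the following - only the wait-dataloader-pipeline! name comes from the conversation; its namespace and the surrounding controller wiring are assumptions:

```clojure
(ns my-app.controllers.counter-detail
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]
            [keechma.toolbox.dataloader.controller :refer [wait-dataloader-pipeline!]]))

;; Block this controller's :start pipeline until the dataloader has
;; finished loading all datasources, then mark the data as ready.
(def counter-detail-controller
  (pp-controller/constructor
   (constantly true)
   {:start (pipeline! [value app-db]
             (wait-dataloader-pipeline!)
             (pp/commit! (assoc-in app-db [:kv :counter-ready?] true)))}))
```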
i need to digest this and play with it
also, blocking tasks are “managed” - which means they will be stopped when the controller is stopped. If you use the non-blocking-task! flavor, you must stop them manually
allright ! thanks again
hum i see the :on-start key, where are those listed ?
:on-start is the same as :start for pipeline functions, but we added another name when we allowed synchronous start and stop functions too. So instead of (constantly true) for params, the first argument can be an object that has :params, :start and :stop functions - these are the lifecycle functions that are called synchronously (just like in regular controllers) and must return app-db. So it made sense to have :on-start and :on-stop in pipeline functions to signal that they are async
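A sketch of the two flavors side by side (hypothetical controller; the argument lists of the synchronous fns are assumed to mirror regular controllers): the first constructor argument is a map with synchronous :params / :start / :stop that must return app-db, while :on-start / :on-stop in the pipelines map are their async pipeline counterparts.

```clojure
(ns my-app.controllers.example
  (:require [keechma.toolbox.pipeline.core :as pp :refer-macros [pipeline!]]
            [keechma.toolbox.pipeline.controller :as pp-controller]))

(def example-controller
  (pp-controller/constructor
   {:params (fn [route-params] (get-in route-params [:data :id]))
    :start  (fn [params app-db] (assoc-in app-db [:kv :current-id] params)) ;; sync, returns app-db
    :stop   (fn [params app-db] (update app-db :kv dissoc :current-id))}    ;; sync, returns app-db
   {:on-start (pipeline! [value app-db]
                (println "started async work for id" value))
    :on-stop  (pipeline! [value app-db]
                (println "cleaning up async work"))}))
```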
allright i understand
thanks again for your kind help, i'll be sure to bother you again soon =)
Please do ;)