If I remember correctly, there was some talk about backup/restore functionality for Datomic Cloud being in the works, but I can’t find any news about it. Is that still being worked on?
Am I misunderstanding the :limit option to index-pull?
(count
 (d/index-pull db {:index :avet
                   :selector [:db/id]
                   :start [:story/group group-id]
                   :limit 5}))
;; => 10
I would expect to get no more than 5 results back, but I get back 10 results (the total number of matching results) no matter what limit I specify. Is this feature implemented only for Datomic Cloud? It’s described in the on-prem index-pull documentation.
You can call (take 5 (d/index-pull ...))
Yes, I know, but I was planning to use :limit in conjunction with :offset to do pagination without realizing the full collection of results. (:offset does not appear to have any effect for me either.)
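(For illustration, a minimal sketch of that take/drop workaround on the peer API, with db and group-id bound as in the snippet above. The page helper is hypothetical, and whether drop actually avoids the work of the skipped pulls is exactly the question raised further down.)

(require '[datomic.api :as d])

;; Hypothetical helper: emulate :offset/:limit by dropping and taking
;; from the seq that index-pull returns on the peer.
(defn page [db group-id offset limit]
  (->> (d/index-pull db {:index    :avet
                         :selector [:db/id]
                         :start    [:story/group group-id]})
       (drop offset)
       (take limit)))

;; second page of 5:
;; (page db group-id 5 5)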
@enn Do you have a link to the docs you're reading?
@lanejo01 https://docs.datomic.com/on-prem/query/index-pull.html
Thanks
And you're using on-prem, correct?
Yes
peer api or client api?
This is on a peer
Hi @enn, https://docs.datomic.com/on-prem/clojure/index.html#datomic.api/index-pull does not include :limit. This is implemented in the client API, which is accessible in the latest client-pro release: https://docs.datomic.com/on-prem/overview/project-setup.html#client-setup
The reason for this is documented at the top level of the client API in https://docs.datomic.com/client-api/datomic.client.api.html#top:
Functions that support offset and limit take the following
additional optional keys:
:offset Number of results to omit from the beginning
of the returned data.
:limit Maximum total number of results to return.
Specify -1 for no limit. Defaults to -1 for q
and to 1000 for all other APIs.
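(A sketch of the same call through the client API, where :offset and :limit are honored. The client/connection setup below follows the standard peer-server configuration from the client-setup link above, with placeholder credentials, endpoint, and db-name; group-id is as in the earlier snippet.)

(require '[datomic.client.api :as d])

(def client (d/client {:server-type :peer-server
                       :access-key  "myaccesskey"   ;; placeholder
                       :secret      "mysecret"      ;; placeholder
                       :endpoint    "localhost:8998"
                       :validate-hostnames false}))
(def conn (d/connect client {:db-name "my-db"}))
(def db (d/db conn))

(d/index-pull db {:index    :avet
                  :selector [:db/id]
                  :start    [:story/group group-id]
                  :offset   5    ;; skip the first 5 results
                  :limit    5})  ;; return at most 5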
I can see how this is confusing in our docs, given that the example shows the usage of :limit without the added context above. I will update the docs to reflect that.
I also need to discuss with the team whether the peer API will ever support index-pull with :limit, but as Joe said, you can still (take 5 ...) etc.
Thanks @jaret this was a confusion point for my team as well
Is the pull realized by advancing the outer seq, or only by reading each entry? E.g. if we go (drop 100 result-of-index-pull), does that do the work of 100 pulls or 0?
(I’m trying to discern if drop is an exact workalike to :offset or potentially much more expensive in the peer api)
My understanding is it does the work of 100 pulls. But I need to validate that understanding and am running that by Stu.
@enn Ideally, when implementing a pagination API, you wouldn't use offset like that. Rather, you would grab the last element of the prior page and use that in the value position of :start, or in your case, group-id.
@favila ^^
“Cursor-based pagination” is the concept Joe Lane is referring to :)!
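(A minimal sketch of that cursor-based approach, assuming the :story/group attribute from the earlier snippet; the page helper is hypothetical. Including :story/group in the selector lets the caller extract the next cursor from the last item of each page. Because :start seeks inclusively, the first result of a page may repeat the previous page's last entry; the reply below covers that and related caveats.)

(require '[datomic.client.api :as d])

(defn page
  ;; cursor is the initial group-id for the first page, or the
  ;; :story/group value of the last item on the previous page
  [db cursor n]
  (d/index-pull db {:index    :avet
                    :selector [:db/id :story/group]
                    :start    [:story/group cursor]
                    :limit    n}))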
Sure, ideally. But obviously it’s important enough that the client API has :limit and :offset 🙂
I was pretty mind-blown when I first saw it in GraphQL land; pretty cool actually!
What you’re suggesting would be more complex than this in the general case. You would have to retain the last pull of the page, transform it back into the next :start vector (which may have grown longer if, e.g., a group spans multiple pages), serialize that as a cursor for the client, then rehydrate it when it comes back and know to skip the first result if it is an exact match for :start. I can definitely see not wanting to take all that on in an initial implementation. It also makes it difficult to have a non-opaque cursor: a client may indeed want to skip 100 items or pipeline multiple fetches and be OK with the potentially inconsistent read.
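(To make that bookkeeping concrete, a hedged sketch under two assumptions: that the :avet :start vector may include an entity id after the value, and that the cursor is an EDN map base64-encoded into an opaque token for the client. All helper names are hypothetical.)

(require '[datomic.client.api :as d]
         '[clojure.edn :as edn])
(import '(java.util Base64))

(defn encode-cursor [cursor]
  (.encodeToString (Base64/getUrlEncoder)
                   (.getBytes (pr-str cursor) "UTF-8")))

(defn decode-cursor [token]
  (edn/read-string (String. (.decode (Base64/getUrlDecoder) token) "UTF-8")))

(defn page-after [db token n]
  ;; the cursor carries both the last value and the last entity id, so a
  ;; group that spans multiple pages resumes at the right datom
  (let [{:keys [v e]} (decode-cursor token)
        results (d/index-pull db {:index    :avet
                                  :selector [:db/id :story/group]
                                  :start    [:story/group v e]
                                  :limit    (inc n)})
        ;; :start seeks inclusively: skip the first result when it is the
        ;; exact entry the previous page ended on
        results (if (= (:db/id (first results)) e)
                  (rest results)
                  results)]
    (take n results)))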
IOW, simple offset and limit still have their uses.
@jaret Thanks for the clarification, I appreciate it. If you hear anything back on whether this will be supported in the future, I’d love to know.