I'm trying to implement votes as you suggested. Seems pretty straightforward, but what would you say is the best way to update a :post/score
without recalculating the sum every time?
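One way to avoid re-summing is to read the current score and assert the incremented value. This is only a sketch: `conn`, `post-id`, and `delta` are illustrative names, and it assumes Datahike supports the `:db.fn/cas` transaction function known from Datomic/DataScript; if not, a plain `:db/add` of the new value works without the race-condition guard.

```clojure
;; Hypothetical helper, not from the thread. Assumes :post/score
;; holds a plain number and `post-id` is an existing entity id.
(defn upvote!
  [conn post-id delta]
  (let [score (or (:post/score (d/entity @conn post-id)) 0)]
    ;; :db.fn/cas (compare-and-swap) aborts the transaction if another
    ;; writer changed :post/score in the meantime, so the increment
    ;; never clobbers a concurrent update.
    (d/transact conn
      [[:db.fn/cas post-id :post/score score (+ score delta)]])))
```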
@mroerni How do you check whether the file gets updated?
It works as expected when I do it in my REPL. In my project I have the connection in a https://github.com/tolitius/mount defstate and I run the transactions in a https://github.com/wilkerlucio/pathom parallel parser. (Note that the first transaction, the schema, is not done in the parallel parser.) I'm not sure where the problem is. Since it is a hobby project I have only little time to investigate, but I'll be sure to come back to you two when I can say more.
Cool! We are developing pre-configured fulcro components here btw. https://github.com/replikativ/datahike-frontend
@whilo That's really nice.
I use the fulcro-template as well.
The project where I have the issue is:
https://github.com/MrEbbinghaus/Todoish — there is a branch datahike-issue
where I experiment with the issue.
Ok, cool! Unfortunately I do not have the time to look into it now. If you keep having issues, feel free to open them on GitHub. It would be good if this turns out not to be a problem, though.
I would not expect you to. 🙂
While I have your attention: are you planning to put every public function in datahike.api?
Asking because I use db? and conn? from datahike.core for specs, and therefore I sometimes require only core, only api, or both.
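The spec usage described here might look roughly like this (a sketch; the namespace and spec names are illustrative, the predicates `db?` and `conn?` are the ones from datahike.core mentioned above):

```clojure
(ns app.specs
  (:require [clojure.spec.alpha :as s]
            [datahike.core :as dc]))

;; db? and conn? live in datahike.core, so this namespace has to
;; require core even if the rest of the app only uses datahike.api.
(s/def ::db dc/db?)
(s/def ::conn dc/conn?)
```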
`ls -l` and `cat`
So the last write time also does not change?
Is your transaction maybe empty?
I am entering
(d/transact conn [#:todoish.models.todo{:id (UUID/randomUUID) :task "My Task!" :done? false}])
in the REPL and getting a tx-report in return…
When running:
(d/datoms @conn :eavt)
the expected datoms are present
But the file size and date didn’t change.
(:db-after tx-report)
doesn't print the datoms, only the key :schema.
I guessed this is to avoid accidentally printing the whole DB. Is this correct?
That's correct, since we ran into problems when dealing with larger data sets.
Does datahike have support for tuple binding such as
:where [(tuple ?a ?b) ?tup]
I see I can cheat using vector
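The vector workaround mentioned here might look like this (the attributes and query shape are illustrative, not from the thread): instead of Datomic's `tuple` binding, `clojure.core/vector` is called in the `:where` clause to build the pair.

```clojure
;; Instead of Datomic's [(tuple ?score ?created) ?tup],
;; bind the pair with plain clojure.core/vector:
(d/q '[:find ?tup
       :where
       [?p :post/score ?score]
       [?p :post/created ?created]
       [(vector ?score ?created) ?tup]]
     @conn)
```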
As of now, we don't support that. But we can add more distinctive data types if you would like to see that.
I guess Datomic implements tuples in a more lightweight manner. It isn't critical for me at the moment, since it isn't for production purposes.
My use case is implementing a custom aggregator
It would be cool if aggregators like max were pluggable, not only for the top n
but also for the comparator, instead of this custom version:
(defn decaying-score
  "Rank tuples of [`score` `creation-time` `current-time`]."
  [[score time now]]
  ;; decay-factor is assumed to be defined elsewhere
  (/ score (* (- now time) decay-factor)))

(defn compare-tups
  [t1 t2]
  (let [s1 (decaying-score t1)
        s2 (decaying-score t2)]
    (compare s1 s2)))

(defn decaying-max
  [n coll]
  (vec
   (reduce (fn [acc x]
             (cond
               ;; fewer than n kept so far: keep x, keep acc sorted
               (< (count acc) n)
               (sort compare-tups (conj acc x))

               ;; x beats the current minimum: replace it
               (pos? (compare-tups x (first acc)))
               (sort compare-tups (conj (next acc) x))

               :else acc))
           [] coll)))
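Called on plain data, decaying-max keeps the n tuples with the highest decayed score. A quick check, assuming decay-factor is bound (e.g. to 1; the sample tuples are made up):

```clojure
(def decay-factor 1) ; assumed binding, not specified in the thread

;; tuples of [score creation-time current-time]
(decaying-max 2 [[10  0 100]   ; decayed score 10/100 = 0.10
                 [ 5 90 100]   ; decayed score 5/10   = 0.50
                 [ 1 99 100]]) ; decayed score 1/1    = 1.00
;; keeps the two tuples with the highest decayed scores,
;; sorted ascending by decayed score
```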
decaying-max is just a variation on how max is implemented in Datahike.
The file size and date should change.
@konrad.kuehne Can you check that in some setup you have? I am busy atm.
Yes, I'll do that tomorrow.