Does datahike support composite tuples?
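For context, composite tuples in the Datomic style are declared in the schema roughly like this. This is a sketch only: the attribute names are invented, and whether Datahike accepts this exact syntax depends on the PR discussed in this thread.

```clojure
;; Hypothetical schema sketch, assuming Datahike mirrors Datomic's
;; composite-tuple attributes. All idents here are made up.
[{:db/ident       :reg/course+semester+student
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:reg/course :reg/semester :reg/student]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```

The tuple value is maintained by the database from the three source attributes, which is what makes it useful as a composite uniqueness constraint.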
Or you could take the SNAPSHOT.
Thanks, that is helping a lot.
@timok Are you sure it is already in a SNAPSHOT? There is still an unmerged PR: https://github.com/replikativ/datahike/pull/251
@mroerni sorry about that. Would you test it if I merged it to a SNAPSHOT release? That would be very helpful. Or do you want to compile it yourself?
I can test it. Sure.
```clojure
io.replikativ/datahike {:git/url "https://github.com/replikativ/datahike"
                        :sha     "06bfac9a3e10d4e41f27ba27adfcd99ad6fd021e"}
```
in my deps.edn leads to `ClassNotFoundException: datahike.java.IEntity`
but I suspect it is me who messed something up.
That needs a `mvn compile` then.
thanks for testing. very appreciated!
Are there any breaking changes since the last release? I am getting errors when transacting my schema:
```
2020-12-14T13:31:59.957Z Jeb INFO [decide.server-components.database:58] - Transacting schema...
2020-12-14T13:31:59.958Z Jeb DEBUG [datahike.connector:70] - Transacting with arguments:
2020-12-14T13:31:59.990Z Jeb ERROR [datahike.db:1217] - Schema with attribute :user/id does not exist {:error :retract/schema, :attribute :user/id}
```
By the way, the `debugf` print in `datahike.connector:70` should be a `debug`.
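That would also explain the empty "Transacting with arguments:" line in the log above: the `f`-suffixed logging macros treat the first argument as a format string, and `format` silently drops extra arguments when the string has no placeholders. A small sketch of the difference (the message text is from the log above; the map is illustrative):

```clojure
;; `debugf`-style logging runs the message through `format`, which
;; ignores arguments that have no matching %s placeholder:
(format "Transacting with arguments:" {:tx-data []})
;; the map is dropped from the output

;; with a placeholder (or with plain `debug`, which prints all its
;; arguments), the data actually appears:
(format "Transacting with arguments: %s" {:tx-data []})
```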
Hm.. But my schema is transacted and works just fine…
We've seen that problem already. We will investigate it, but it seems to work fine so far. Thanks for reporting it, though :thumbsup:
Yes, in the next release which should happen very soon.
Yes, I think it is heading there, but @grounded_sage would be able to tell you more.
@robert.mather.rmm we are working on this right now. Hoping to have an initial version soon. But there is still a bit of work to go for the full API before we would declare it stable.
@robert.mather.rmm It's a little more work than it sounds like, because with Cljs, if the storage is durable (IndexedDB or whatever, vs in-memory), then all of the IO operations have to be async, which means all of the query logic has to be adapted to accommodate this.
However, if you need durability but aren't expecting a ton of data, you could always just write to disk, but query the data from an in-memory db, till a proper IndexedDB implementation is ready.
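A minimal ClojureScript sketch of that workaround, assuming a DataScript-style in-memory db (all names here are illustrative, and the snapshot format is naive):

```clojure
(ns app.db
  (:require [datascript.core :as d]))

;; Query against an in-memory db, but snapshot it to localStorage
;; after each transaction so the data survives a page reload.
(def conn (d/create-conn {}))

(defn transact-durably! [tx-data]
  (d/transact! conn tx-data)
  ;; naive full-snapshot-per-write; fine while the data set is small
  (.setItem js/localStorage "db-snapshot" (pr-str @conn)))
```

Restoring on boot would read the snapshot back with the library's serialization helpers; an alternative is persisting individual transactions instead of snapshots, which trades write cost for slower startup.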
@grounded_sage Is that work going on somewhere public?
@metasoarous You mean like sync the whole db to LocalStorage after each op? Fortunately I found Mamulengo (https://github.com/wandersoncferreira/mamulengo), which seems to work and saves to LocalStorage. Not sure of the internals, maybe just doing what I would have done to hack it together. This should hold me for a while.
Yeah, that should get you by ok. I'll note though that since that project seems to store transactions individually, eventually you'll get so many transactions that it may take a while for it to boot up. Unless it's also periodically taking snapshots of the EAV index to boot from. Probably fine if you're just looking to tide yourself over until we get this full support in datahike.