@alekcz360 what do you expect to be in the SNAPSHOT release? We can push the current metadata support branch, but it is not finished yet. Most backends (except the filestore for nodejs) are ported now, and we are figuring out the automatic migration between versions. After that we will port our garbage collector (which should not be too much effort) and then a selection of the other backends. If people could help porting some of them, it would go faster.
@whilo the main thing at least from my perspective is the protocol. Without the protocol on clojars I can't put together any automated pipelines in a stable way.
What I'm thinking is that once the snapshot is there, I can update the template repo. Once I put together some robust automated tests, porting the other backends should be much faster.
Ok, that sounds good. We will provide a compliance test for backends with the next version.
Do you want to collaborate on that? I think https://github.com/replikativ/konserve/blob/feature_metadata_support/src/konserve/protocols.cljc is already pretty much done, but I can ask @ferdi.kuehne to push his recent changes tomorrow.
Yeah I'm keen to collaborate on that.
Hello, I updated the indexeddb ns to the new protocols.
Hey everyone! I'm running into a weird issue when inserting a `:db.type/instant` value. The relevant schema entry looks like this:
{:db/ident :onair/started
:db/valueType :db.type/instant
:db/cardinality :db.cardinality/one}
and I'm setting said attribute-value like this:
(d/transact conn [{:db/id id :onair/started (new java.util.Date)}])
Now the weird thing is that the value is written to the database, but I also get this exception: https://paste.debian.net/hidden/83735403/
Does anyone have a clue what I might be doing wrong? A near-identical call for an attribute :onair/ended (also :db.type/instant) works without exception.
ah! found a ticket about it with the solution: https://github.com/replikativ/datahike/issues/131
Yes, sorry for the inconvenience. This will be fixed automatically with the next release.
@alekcz360 and @ferdi.kuehne, my priority list of the stores is PostgreSQL, LevelDB, Redis, RocksDB, CouchDB, and then the nodejs filestore. We need to provide an automatic migration path from the previous version at least for the konserve filestore (@ferdi.kuehne is working on that), PostgreSQL, and LevelDB.
So the fastest way to the next release is to make sure that those are ported and the migration works.
@whilo I can give redis a shot
Never used RocksDB or CouchDB before.
I can give CouchDB a try once I'm done with redis.
Cool! For the filestore we decided to explicitly encode the schema version in the first byte of each value. That will make it easier to migrate values on the fly in the future.
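A minimal sketch of what that prefix could look like (the helper names and the constant here are hypothetical, not konserve's actual API):

(def ^:const store-version 1) ;; hypothetical constant, bumped on each schema change

(defn encode-value
  "Prefix the serialized bytes with a one-byte store version."
  ^bytes [^bytes serialized]
  (let [out (byte-array (inc (alength serialized)))]
    (aset-byte out 0 (byte store-version))
    (System/arraycopy serialized 0 out 1 (alength serialized))
    out))

(defn decode-value
  "Split a stored value into its version byte and its payload."
  [^bytes stored]
  {:version (aget stored 0)
   :payload (java.util.Arrays/copyOfRange stored 1 (alength stored))})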
The idea being that you look up the serialized version, look up the version of the code running, calculate the path of migrations, and then apply a series of migration functions.
We can factor out the migration code; the only thing we need to be able to do in each store is determine the version of a serialized value.
And provide migration functions for each of the schema changes obviously.
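As a rough sketch of that dispatch (all names hypothetical), with one migration function per schema step, keyed by the version it upgrades from:

(def migrations
  ;; hypothetical registry: version n -> fn upgrading a value from n to n+1
  {0 (fn upgrade-0->1 [value] (assoc value :format :new-layout))})

(defn migrate
  "Apply every migration step from the stored version up to the target."
  [value stored-version target-version]
  (reduce (fn [v from-version]
            (if-let [step (migrations from-version)]
              (step v)
              (throw (ex-info "No migration registered" {:from from-version}))))
          value
          (range stored-version target-version)))

If the stored and target versions already match, the range is empty and the value passes through unchanged.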
Hopefully we will not need to change the schema often; we should avoid it as much as possible, but we need to be able to when it becomes necessary.
@alekcz360 Does that make sense?
Yeah that makes sense conceptually. I'll have a look and let you know if I get stuck.
Ok, perfect 🙂.
@whilo not at all! Thank you for your hard and amazing work; I really enjoy working with datahike!