Yeah it needs a withEndpointConfiguration call to the builder... I'm not sure why they don't offer it, there are many other S3 providers.
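(For context, this is roughly what that builder call looks like in the AWS Java SDK v1 via Clojure interop; the endpoint URL and region below are placeholders, and nothing here is exposed by Datomic itself:)

```clojure
;; Sketch only: pointing an AWS SDK v1 S3 client at a non-AWS provider.
;; The endpoint and region are placeholders.
(import '(com.amazonaws.client.builder AwsClientBuilder$EndpointConfiguration)
        '(com.amazonaws.services.s3 AmazonS3ClientBuilder))

(def s3-client
  (-> (AmazonS3ClientBuilder/standard)
      (.withEndpointConfiguration
        (AwsClientBuilder$EndpointConfiguration.
          "https://s3.example-provider.com" "us-east-1"))
      (.withPathStyleAccessEnabled true) ; many S3-compatible providers need path-style access
      (.build)))
```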
I've understood that composite tuples can't be deregistered, is that right? I mean, if I add a composite tuple a+b, but later notice that I need to add a new tuple (or well, triple) a+b+c and the tuple a+b becomes useless, Datomic still keeps updating the a+b tuple and it can't be deregistered. I assume there's some performance penalty to having unused composite tuples. Because of this, we have avoided using composite tuples whenever there's any doubt that we might need to change them in the future. Is that a valid reason to avoid them? Or am I overestimating the performance penalty of updating composite tuples?
The performance penalty is à la carte: when you add a composite tuple it's up to you to "touch" all entities that don't have it yet to populate it.
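(Roughly what that backfill looks like with the client API; :inv/sku, :inv/size and the composite :inv/sku+size are made-up names, and my understanding is that re-asserting any one component value is enough to get the transactor to compute the composite:)

```clojure
(require '[datomic.client.api :as d])

;; Re-assert one component value on every entity that has it, in batches,
;; so the transactor fills in the new composite for existing entities.
(defn backfill-composite! [conn]
  (let [db    (d/db conn)
        pairs (d/q '[:find ?e ?sku
                     :where [?e :inv/sku ?sku]]
                   db)]
    (doseq [batch (partition-all 500 pairs)]
      (d/transact conn {:tx-data (mapv (fn [[e sku]]
                                         [:db/add e :inv/sku sku])
                                       batch)}))))
```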
The reason you can't change it probably just comes down to the inherent complexity of schema changes in a temporal db (what happens to the history view? To all old TX records?) and the philosophical stance Rich has against making the same name mean something different over time. His view: just use a new name and deprecate / remove the old one.
Notice that no schema changes which change type are allowed; tuples are not unique in that way.
I think you may be thinking of composite tuples as a purely ephemeral projection of "real" data, like an index in a relational db. That's not really how it's implemented in Datomic: it's more like actual data that the transactor automatically updates when it notices its components change.
It doesn't eagerly project it, it can't repopulate it for you, and the data is in the same covering indexes as all other data.
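(For concreteness, a composite is declared as ordinary schema and stored as real datoms; the attribute names here are made up:)

```clojure
;; A composite tuple is plain schema; the transactor maintains its value
;; from the component attrs. :db/unique is optional but common for composites.
[{:db/ident       :inv/sku+size
  :db/valueType   :db.type/tuple
  :db/tupleAttrs  [:inv/sku :inv/size]
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]
```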
Thanks for the answer! But I'm still wondering, isn't there a performance penalty in the "just use a new name and deprecate the old" strategy? I mean, if I have attributes :a, :b and :c, and a composite tuple a+b, which I then decide to deprecate in favor of a new composite tuple a+b+c, then whenever I change the attribute :a or :b, Datomic will still update the composite tuple a+b, even though it's deprecated.
> I think you may be thinking of composite tuples as a purely ephemeral projection of "real" data, like an index in a relational db. That's not really how it's implemented in Datomic: it's more like actual data that the transactor automatically updates when it notices its components change.
Right... so it's not really a performance penalty, but a penalty in storage?
Yes
Which you can mitigate by e.g. adding noHistory to the attr and removing any value indexes if you have them.
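(Something like this, assuming the unused composite is called :inv/sku+size; :db/noHistory is a supported schema alteration:)

```clojure
;; Stop keeping history for the unused composite. On-prem you could also
;; drop :db/index / :db/unique on it; Cloud indexes every attribute regardless.
(d/transact conn {:tx-data [{:db/ident     :inv/sku+size
                             :db/noHistory true}]})
```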
If you really want it gone you need to create new component attrs also
Would you say then @favila that the storage cost of old unnecessary composite tuples shouldn't really be much of a factor in deciding between composites vs the other types of tuples, when a composite would otherwise work?
I would say that it's rare that storage cost is a factor.
I also wish you could "turn off" a composite tuple, i.e. signal to the transaction processor that it should stop updating it.
composite tuples do something no other tuple can do: they know the effective value of the db at the moment right before committing the transaction datoms, so they can update composites to their correct value within that transaction even if the contents of the tx-data were uncoordinated.
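(Toy example of what "uncoordinated" means, with made-up attrs: the two assertions come from different code paths, and the composite still gets built from the post-transaction values:)

```clojure
;; Both component values arrive as separate, uncoordinated assertions on the
;; same tempid; the transactor still writes :inv/sku+size as ["SKU-42" "L"].
(d/transact conn
  {:tx-data [[:db/add "item-1" :inv/sku  "SKU-42"]
             [:db/add "item-1" :inv/size "L"]]})
```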
Got it! Yeah, I wish we could. Maybe that will come as a feature one day. It sounds like the type of schema change that could be allowed.
you can use a "normal" tuple and update it yourself, but you will have to be careful that you only prepare transaction data where you know what the final value will be when the tx-data arrives at the transactor, and that nothing else in the tx-data might alter that calculation.
but if storage cost is a concern, that's what you gotta do
It's not impossible: Datomic didn't have tuples of any kind for years; we were manually maintaining composite indexes as serialized strings.
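(What the manual route looks like, sketched with made-up names; you have to assert the tuple value yourself and keep it in sync with the components:)

```clojure
;; A plain (non-composite) tuple you maintain by hand.
(def schema
  [{:db/ident       :inv/sku+size-manual
    :db/valueType   :db.type/tuple
    :db/tupleTypes  [:db.type/string :db.type/string]
    :db/cardinality :db.cardinality/one}])

;; In any transaction that touches a component you must also assert the
;; tuple with the values you expect to hold after the transaction commits.
;; (Lookup ref assumes :inv/sku is :db.unique/identity.)
(d/transact conn
  {:tx-data [{:db/id               [:inv/sku "SKU-42"]
              :inv/size            "L"
              :inv/sku+size-manual ["SKU-42" "L"]}]})
```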
Storage isn't really that big of a concern in our case, I think. It was more like the bad aftertaste of having unused and unnecessary attributes getting asserted perpetually.
datomic doesn't let you remove the cognitive burden of past mistakes. I think that's the unspoken downside to the "no breaking changes" mantra.
Yeah, though there's a difference between old/deprecated attributes in the schema and having values for them asserted on entities.
my understanding is that the lambda produced when deploying an Ion is really just a proxy to code running on the compute or query group. does that mean that memory allocated to the lambda via the lambda configuration is less consequential than a typical lambda?
very interesting, thanks again
because of the history db and tx log, there's always an assertion somewhere
Yes!
thanks Joe! does the code that is proxied to run in its own memory space? in other words, if my long running http-direct process has some state, say via mount, then there's no reason to expect that the proxied-to function can access that state, right?
(we tested this for fun and ruled it out, but i thought i'd ask anyway)
Hi. Is there a Datomic Connector (Source and Sink) for Kafka?
Hi! In lieu of a more relevant response from someone else, you may be able to borrow and adapt some code from Crux: https://github.com/juxt/crux/tree/master/crux-kafka-connect
Hi, that's the plan (if "someone else" doesn't show up), haha
Awesome!