Ok, nevermind. Sorted it!
any recommendations for backup/restore in datomic cloud? the feature request hasn’t seen activity in a long time. we’ve been rolling our own application-level export/import, but that unfortunately changes every entity id
basically there isn’t a solution except to build your own. the (un)official answer seems to be that backups are not required because s3 is so reliable and, because there’s no excision, no data is ever lost
I can see the sense in that response but I think it doesn’t account for our customers who don’t understand our powerful new toolset. it forces us to take our customers out of their comfort zone which is not great for conservative (i.e. many enterprise) customers
the new local dev client supports importing cloud data to local if that’s one of your use-cases
but for migration, you have to roll your own
I’ll be happy to be corrected on any of these interpretations. FWIW it doesn’t change the fact that I really like the cloud managed service.
I have a uuid attr on every entity. If you have this, you don't care about entity IDs changing. Is that something you have tried?
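for anyone following along, the pattern above looks roughly like this (a sketch using the client API; the attribute name :entity/uuid is illustrative, not from this thread):

```clojure
;; Sketch of the uuid-attribute pattern. :entity/uuid is a
;; made-up name; use whatever your domain calls it.
(def uuid-schema
  [{:db/ident       :entity/uuid
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

;; install once:
;; (d/transact conn {:tx-data uuid-schema})

;; Because the attribute is :db.unique/identity, a lookup ref works
;; anywhere an entity id does, so cross-references survive an
;; export/import even though the :db/id values change:
;; (d/pull (d/db conn) '[*] [:entity/uuid some-uuid])
```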
Not sure if you have the luxury of time, but from what little I know it would be a good idea to do whatever it takes to tolerate entity ids changing. My understanding is that backup/restore will not guarantee entity ids remain unchanged.
i am in the exact same boat right now, and am finding it challenging to justify to our enterprise customers that we can't "simply" backup/restore a db from storage to meet their (i.e. not our) DR requirements. and unfortunately for us, dev-local is not yet an option because we have string values that exceed dev-local's max character limit. that being said, i did a small test with dev-local to replay the demo mbrainz db transaction log into a new db and it worked well, but the t values are of course different, which is a real shame.
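for reference, the tx-log replay i tried looks roughly like this (a sketch, not production code; assumes `src-conn`/`dest-conn` are open client connections and the destination already has the schema installed):

```clojure
(require '[datomic.client.api :as d])

;; Very rough sketch of replaying a transaction log into a new db.
;; Only :db/add datoms are shown; handling retractions, and refs
;; that cross transaction boundaries, needs an old-id -> new-id
;; mapping carried across transactions (via :tempids in each
;; transact result), which this sketch omits. The new db will also
;; get fresh t values, as noted above.
(defn replay-tx-log!
  [src-conn dest-conn]
  (let [src-db (d/db src-conn)]
    (doseq [{:keys [data]} (d/tx-range src-conn {:limit -1})]
      (let [tx-data
            (for [[e a v _tx added?] data
                  ;; resolve the numeric attribute id to its ident
                  :let [ident (:db/ident (d/pull src-db [:db/ident] a))]
                  ;; skip tx-entity metadata like :db/txInstant,
                  ;; and retractions (see caveat above)
                  :when (and added? (not= ident :db/txInstant))]
              [:db/add (str e) ident v])]
        (when (seq tx-data)
          (d/transact dest-conn {:tx-data tx-data}))))))
```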
@joshkh exactly! despite the fact that we might not need backup/recovery, in an enterprise sales cycle this can be a real problem. Even worse if it’s an RFP situation because the prospect writing the RFP might consider common DR techniques to be a must have. It doesn’t matter that we can explain it away, the internal politicians in the customer can simply use this as a battering ram to avoid choosing our product. It’s not always a technical question: I hope one day Cognitect will provide an answer for export so that Datomic Cloud can be used without this risk in the sales cycle. @marshall any comments on this?
what’s interesting is that Datomic provides DR features that other dbs cannot, e.g. you can recover a single tenant’s data to any point in time, even in a multi-tenant system. Update-in-place dbs cannot do this. So we are technically superior for DR. But that doesn’t always work in enterprise sales.
Curious, why does your application rely on entity ids?
DR is also there to protect against human error. customers want to have a backup file stored in a completely different AWS account’s S3 bucket… or even have it downloaded to their own infrastructure
and when you have backups you need to be able to restore from them… now we’ve rolled our own and fixed some mistakes we made by relying too much on :db/id values
that’s true. you could accidentally delete your Datomic s3 bucket and then you’d be finished! goodbye biz 😞
I wonder if some kind of s3 level backup would be supported by Datomic to guard against this?
cheers. i know that we're in good hands. 🙂
This is mega important, not just for enterprise, but for any Datomic Cloud user who wants to preserve their sanity. The sooner the better, Stuart. This is a huge deal.
mainly for disaster recovery or just migrating the database to a new environment
for the sake of testing query performance, is there a way to flush/bust the query cache in Datomic Cloud other than by renaming bound variables?
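(for anyone who hasn't seen the renaming workaround: Datomic caches queries by the query's value, so any textual change, even renaming a logic variable, produces a fresh cache entry. sketch with a placeholder attribute and db:)

```clojure
;; Same query, but cached separately because the variable differs:
(d/q '[:find ?e :where [?e :artist/name "Bob Dylan"]] db)
(d/q '[:find ?x :where [?x :artist/name "Bob Dylan"]] db)
```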