datomic

Ask questions on the official Q&A site at https://ask.datomic.com!
oxalorg (Mitesh) 2020-12-03T07:31:31.050500Z

Thank you so much pithyless, this was of great help! I'm going to try these out and see what works for me. 🙂

plexus 2020-12-03T08:21:27.051600Z

I don't suppose there's a way to make a peer server serve up a database that was created after the peer server started? (apart from restarting the peer server)

plexus 2020-12-04T10:48:29.061700Z

this is for a multi-tenant system where one tenant = one db. We are setting up analytics and still figuring out what our setup will look like. It's appealing to have a single peer server = single catalog, but then we would have to restart it when adding a tenant.

2020-12-03T16:45:07.052900Z

Can you clone a full Datomic setup by copying the underlying storage over? For example, when copying Postgres dumps & importing them afterwards

2020-12-04T10:54:50.061900Z

Ah, apparently we were shadowing our own databases at both the storage and the Datomic level. It turns out this is possible, btw 🙂

favila 2020-12-04T12:30:41.064900Z

It's possible if your storage backups are atomic, consistent snapshots (no read-uncommitted or other read anomalies). Not all storages can do that (DynamoDB) or do it by default (MySQL?), so just be careful.

1👍
jackson 2020-12-03T17:54:53.054600Z

Question about on-prem peer capacity: the docs indicate that 4 GB of memory is recommended for production. If we have a beefy server with plenty of RAM, is there a benefit to scaling everything up? Accounting for other processes being run, 64-bit Java, etc.

favila 2020-12-03T18:00:41.054700Z

Benefit exists in increasing the peer object cache up to the size of the peer's working set (or of the whole database); you can also run queries with larger intermediate result sets (which must always fit in memory). There is no benefit beyond these. The risk of a large heap is the usual one with Java's CMS or G1 collectors: longer GC pauses. If you're using one of the fancy new pauseless collectors, that should also be a non-issue.
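
To make the sizing advice concrete, here is a minimal sketch of peer JVM flags, assuming a peer launched directly with `java`. The heap size, cache size, GC choice, and the jar/main-class names are illustrative assumptions, not recommendations:

```shell
# Hypothetical peer startup: grow the heap and the object cache together.
# datomic.objectCacheMax sets the peer object cache (it otherwise defaults
# to a fraction of max heap); exact values depend on your working set.
java -server \
  -Xmx16g -Xms16g \
  -Ddatomic.objectCacheMax=8g \
  -XX:+UseG1GC -XX:MaxGCPauseMillis=50 \
  -cp "lib/*:datomic-peer.jar:myapp.jar" \
  my.app.Main   # placeholder main class
```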

jackson 2020-12-03T18:03:25.055Z

Our db is quite large and we've already started rewriting some of our heavier queries with the datoms API, but having a larger cache in the peer should mean fewer trips to storage to pull index segments (hopefully). Does the transactor's memory need to increase to match the peers'?

favila 2020-12-03T18:05:28.055200Z

No; you should size the transactor based on its own write and query load, not on the peers'.

1👍
favila 2020-12-03T18:05:34.055400Z

i.e., treat it like a peer

favila 2020-12-03T18:06:45.055700Z

Just to transact things it does have to perform some queries (e.g., to enforce uniqueness or cardinality constraints, or to run db/ensure predicates and transaction functions).

favila 2020-12-03T18:06:59.055900Z

but you can judge that load separately from the number of other peers

jackson 2020-12-03T18:07:28.056100Z

ok awesome, thanks for the help!

jaret 2020-12-03T18:13:53.056300Z

No, you would have to restart. What is the use case for doing this? Perhaps this is something we should consider adding a feature for. As an aside, you can pass multiple -d options to serve multiple databases, and the peer server can also serve in-memory dbs.
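
For reference, a sketch of what serving several tenant databases from one peer server could look like; the database names, storage URIs, and access key/secret below are placeholders:

```shell
# Hypothetical multi-database peer server launch.
# Each -d takes "<name>,<db-uri>"; adding a new tenant means restarting
# this process with an updated list of -d options.
bin/run -m datomic.peer-server \
  -h localhost -p 8998 \
  -a myaccesskey,mysecret \
  -d tenant-a,datomic:sql://tenant-a?jdbc:postgresql://db:5432/datomic \
  -d tenant-b,datomic:sql://tenant-b?jdbc:postgresql://db:5432/datomic \
  -d scratch,datomic:mem://scratch
```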

jaret 2020-12-03T18:21:21.056500Z

Are you talking about on-prem or cloud? In on-prem the supported method would be backup/restore. You can even use backup and restore to move between underlying storages: https://docs.datomic.com/on-prem/backup.html Please note that Datomic backup/restore is not intended as a tool for "forking" a DB: you can restore into a URI that already points to a different point-in-time of the same database, but you cannot restore into a URI that points to a different database.
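
As a sketch of that workflow, assuming on-prem and with all URIs below as placeholders, you back up from one storage and restore into another under the same database name:

```shell
# Hypothetical backup/restore round trip between storages.
# Back up a database to a file-based backup URI (S3 URIs also work):
bin/datomic backup-db \
  "datomic:sql://mydb?jdbc:postgresql://db:5432/datomic" \
  file:/backups/mydb

# Restore into a different underlying storage; the database name
# ("mydb") must stay the same -- restoring as a different database
# is not supported:
bin/datomic restore-db \
  file:/backups/mydb \
  datomic:ddb://us-east-1/my-table/mydb
```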