datomic

Ask questions on the official Q&A site at https://ask.datomic.com!
Stefan 2020-09-18T07:36:50.000600Z

@val_waeselynck Yeah we’re already experimenting with that, and maybe it’s good enough. But if those tests take 1 second each because of setup/teardown of Datomic databases, that’s too long for me. For unit tests, I prefer to keep things as lean as possible.

zilti 2020-09-18T09:37:52.002900Z

I just updated Datomic from 1.0.6165 to 1.0.6202, both the transactor, and the peer library in my program. Now, "nothing works anymore". Interestingly, Datomic Console can still connect fine. But my program cannot anymore, giving me `AMQ212037: Connection failure has been detected: AMQ119011: Did not receive data from server for org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection@3145e5fe[local= /127.0.0.1:36065, remote=localhost/127.0.0.1:4334 ] [code=CONNECTION_TIMEDOUT]. AMQ119010: Connection is destroyed` https://termbin.com/wnhzu

zilti 2020-09-18T09:39:00.003200Z

Any ideas what could be causing this?

zilti 2020-09-19T15:28:58.022300Z

Java 13

val_waeselynck 2020-09-18T13:11:39.003300Z

@stefan.van.den.oord forking solves that problem

val_waeselynck 2020-09-18T13:13:11.003500Z

Put a populated db value in some Var, and then create a Datomock connection from it in each test; there's virtually no overhead to this
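As a rough illustration of the Var + Datomock approach (names like `base-db`, `schema-and-fixtures`, and `make-test-conn` are made up here, and this assumes the `datomock.core/mock-conn` API):

```clojure
(require '[datomic.api :as d]
         '[datomock.core :as dm])

;; Illustrative schema/fixture data -- replace with your own.
(def schema-and-fixtures
  [{:db/ident       :user/name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}])

;; Pay the setup cost once: populate an in-memory db and keep the value in a Var.
(defonce base-db
  (let [uri "datomic:mem://test-base"]
    (d/create-database uri)
    (let [conn (d/connect uri)]
      @(d/transact conn schema-and-fixtures)
      (d/db conn))))

;; Each test forks its own isolated connection from that db value.
(defn make-test-conn []
  (dm/mock-conn base-db))
```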

Stefan 2020-09-18T13:23:33.003700Z

Sounds good, will definitely try, thanks! 🙂

xceno 2020-09-18T14:21:53.009500Z

Hi, I need some clarification about Datomic Cloud vs. On-Prem setup. Difference 1 in the cloud guide (https://docs.datomic.com/on-prem/moving-to-cloud.html#aws-integration) states: > Datomic apps that do not run on AWS must target On-Prem. Is this for technical reasons or a licensing thing? Our clients are all in on Azure, but we need a Datomic database. Should I convince them to bite the bullet and let us deploy on AWS, or do we have to deploy Datomic On-Prem in an Azure environment?

2020-09-18T14:25:14.010300Z

When Datomic reports an anomaly :cognitect.anomalies/busy with category :cluster.error/db-not-ready, what exactly is the problem Datomic is having (CPU load?), and how could I go about mitigating this in the short term? Or is it just that my peers are severely overloaded and I need to add more 😛?

1🦜
marshall 2020-09-18T14:25:42.010400Z

I suppose that is a bit draconian. You certainly could run Datomic Cloud in AWS and run your application in Azure

marshall 2020-09-18T14:25:52.010600Z

you'd have to handle the network stuff to make sure it was secure

marshall 2020-09-18T14:25:59.010800Z

and you'd be paying the cross-cloud latency

marshall 2020-09-18T14:26:57.011Z

the set of "active" databases on each node (query group or primary compute group instance) is dynamic. Datomic 'unloads' inactive databases after a period of time

xceno 2020-09-18T14:27:03.011200Z

That's what I thought. Just plug in the client config pointing to AWS, but the main app runs on Azure. So it must be a licensing issue then
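For context, the "client config pointing to AWS" would be an ordinary Datomic Cloud client created from the Azure-hosted app. A minimal sketch, with placeholder region/system/endpoint values, and ignoring the network setup (VPN, peering, or an exposed access gateway) that a real cross-cloud deployment would need:

```clojure
(require '[datomic.client.api :as d])

;; Placeholders only -- substitute your system's region, name, and endpoint.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-datomic-system"
             :endpoint    "http://entry.my-datomic-system.us-east-1.datomic.net:8182/"}))

(def conn (d/connect client {:db-name "my-db"}))
```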

marshall 2020-09-18T14:27:19.011400Z

if you issue a request to connect to a db that's not currently 'active', the serving node has to load that DB's current memory index/context/etc

marshall 2020-09-18T14:27:26.011600Z

that's what the anomaly you're seeing indicates
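A possible short-term mitigation on the calling side (a sketch, not from the thread; it assumes the anomaly surfaces as an `ex-info` whose `ex-data` carries `:cognitect.anomalies/category`):

```clojure
;; Retry calls that fail with a :cognitect.anomalies/busy anomaly, backing off
;; between attempts so the serving node has time to load the database.
(defn with-busy-retry
  [f & {:keys [max-retries sleep-ms] :or {max-retries 5 sleep-ms 200}}]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch clojure.lang.ExceptionInfo e
                     (if (and (< attempt max-retries)
                              (= :cognitect.anomalies/busy
                                 (:cognitect.anomalies/category (ex-data e))))
                       ::retry
                       (throw e))))]
      (if (= result ::retry)
        (do (Thread/sleep (* sleep-ms attempt))
            (recur (inc attempt)))
        (:ok result)))))

;; Usage: (with-busy-retry #(d/q query (d/db conn)))
```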

marshall 2020-09-18T14:27:58.011800Z

if you have only a few DBs in your system, you can use the preload db parameter in your compute group (or query group) cloudformation to automatically load those DBs on any node at startup

marshall 2020-09-18T14:28:11.012Z

it's not a licensing issue

marshall 2020-09-18T14:28:23.012200Z

it's a 'we need to write that sentence better' issue

marshall 2020-09-18T14:28:35.012600Z

you are definitely free to do that ^

xceno 2020-09-18T14:28:39.012800Z

Ahh okay got it, thank you 🙂

marshall 2020-09-18T14:28:41.013Z

there is no way to run Datomic Cloud in Azure

marshall 2020-09-18T14:29:01.013400Z

but if you're OK with the cross-cloud configs/tradeoffs, there is no reason you can't do that

xceno 2020-09-18T14:29:17.013600Z

Yeah, it would be like a Datomic On-Prem installation targeting a Postgres DB on Azure, but even typing this sounds a bit stupid

marshall 2020-09-18T14:29:50.013800Z

I mean, it's not that bad; I've definitely talked to several customers using Cloud and hosting their apps elsewhere

marshall 2020-09-18T14:29:53.014Z

GCP mostly

xceno 2020-09-18T14:30:27.014200Z

I see, fair enough. I'll talk to my client then. Thanks again!

marshall 2020-09-18T14:30:38.014400Z

sure

2020-09-18T14:33:24.014600Z

(This is an on-prem peer server, btw.) But is that unique databases, or also database values at different t?

marshall 2020-09-18T14:35:08.014900Z

unique databases

2020-09-18T14:39:43.015100Z

Is there a way we can deal with this? We have around 12 databases in total, with only 1 (production) db being hit heavily (and 2 small production dbs). Why would the set of active databases change at all?

marshall 2020-09-18T14:40:10.015300Z

does this only occur on starting up a new peer server?

2020-09-18T14:40:40.015600Z

No, it appears to occur randomly every few seconds or so

marshall 2020-09-18T14:41:00.015900Z

is it always db-not-ready?

marshall 2020-09-18T14:41:08.016100Z

how many peer servers do you have running?

2020-09-18T14:41:14.016300Z

We plugged in a second peer server today, so we have 2 now, load-balanced by HAProxy with no sticky sessions

2020-09-18T14:44:54.016600Z

Predominantly, yeah. We did see an ops limit reached exception before, but I can’t confirm right now when I last saw that

marshall 2020-09-18T14:48:21.016800Z

do you have cpu and memory metrics from the peer server?

2020-09-18T15:14:10.018700Z

Just for posterity/googlers: we ended up severing Datomic's connection to a badly provisioned memcached, which reduced these errors significantly. Can't say for sure that's the problem, though

Nassin 2020-09-18T15:55:25.019Z

What Java version?

favila 2020-09-18T16:35:16.021100Z

Running Datomic On-Prem + DynamoDB with a very large database (>6 billion datoms). I'm noticing large amounts of data (3-5 GB) written to the data directory that appear to be Lucene fulltext indexes. Is this scratch space for the transactor's fulltext indexing?

favila 2020-09-18T16:35:26.021200Z

I ask because I see three items with old timestamps and I’m wondering if I can delete them.

favila 2020-09-18T16:35:39.021400Z

Also, that seems really big; is this normal?

favila 2020-09-18T16:36:19.021700Z

should I be provisioning a separate or faster disk for this?

favila 2020-09-18T16:41:23.021900Z

To be clear, this is 3-5 GB per directory under fulltext, and I currently have 3 of them