Hi @neo2551 , we did implement multiple databases with mount in the datahike-server project. You can find a sample config here: https://github.com/replikativ/datahike-server/blob/admin-endpoints/resources/config.edn.sample Loading the configs with mount: https://github.com/replikativ/datahike-server/blob/admin-endpoints/src/clj/datahike_server/config.clj#L44 Specifying the db in a query: https://github.com/replikativ/datahike-server/blob/admin-endpoints/src/clj/datahike_server/handlers.clj#L37 Did I understand you correctly, and does this answer your question? Regarding the context switches... I am not aware of any cost to using multiple databases, but maybe I just don't know enough about the details.
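For anyone reading along, the underlying idea without the mount wiring is just holding several connections at once with the plain Datahike API; a minimal sketch (the configs, paths and names below are made up):

```clojure
(require '[datahike.api :as d])

;; two independent database configs (paths are hypothetical)
(def cfg-a {:store {:backend :file :path "/tmp/dh-a"} :schema-flexibility :read})
(def cfg-b {:store {:backend :file :path "/tmp/dh-b"} :schema-flexibility :read})

(d/create-database cfg-a)
(d/create-database cfg-b)

;; each connection is independent; a handler can pick one per request
(def conn-a (d/connect cfg-a))
(def conn-b (d/connect cfg-b))
```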
Maybe my question is the following: how does memory consumption work when you connect to multiple databases? Does Datahike create the indices on the fly, or are they stored inside the files? :) Sorry for asking these dumb questions.
Thanks a lot for your help!
Not a dumb question, I am sure someone with deeper knowledge will give a better answer to that soon.
@neo2551 The indices are created whenever you make calls to transact. When they are stored in durable storage backends, such as the filestore, they are still available after restarts of the JVM. But the creation of the indices (in the form of incremental updates) always happens in memory first and is then flushed to disk.
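A tiny sketch of what that means in practice with the file backend (the path is made up, and :schema-flexibility :read is only there to keep the example schema-less):

```clojure
(require '[datahike.api :as d])

(def cfg {:store {:backend :file :path "/tmp/dh-example"}
          :schema-flexibility :read})

(d/create-database cfg)
(def conn (d/connect cfg))

;; the index update happens in memory first and is then flushed to the file store
(d/transact conn [{:age 42}])

;; after a JVM restart, reconnecting reads the persisted indices back from disk
(def conn-again (d/connect cfg))
(d/q '[:find ?e ?a :where [?e :age ?a]] @conn-again)
```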
What about the queries?
Say you have db a and db b. If you perform (d/q some-query a) and then (d/q some-other-query b), how does memory consumption work? Is the memory used for db a released when calling the second query?
thanks a lot for your answers!
I am trying to use the JDBC backend [it works with all the advice you gave me, using the latest SNAPSHOT artefacts]. However, for JDBC and SQL Server, is there an integrated security option? Or can we pass the jdbcUrl directly?
It’s only released if you don’t use the db anymore.
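Roughly speaking (a sketch; conn-a/conn-b and the queries are the hypothetical ones from above):

```clojure
;; a db value is an immutable snapshot on the JVM heap
(let [db-a @conn-a]
  (d/q some-query db-a))
;; once nothing references db-a anymore, its in-memory segments are
;; eligible for normal GC (unless something else still holds on to them)

(let [db-b @conn-b]
  (d/q some-other-query db-b))
```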
Hey @neo2551, that's awesome! I am very interested in your experience with datahike-jdbc. Behind the scenes datahike-jdbc uses next.jdbc: https://github.com/seancorfield/next-jdbc/blob/develop/doc/all-the-options.md. There you should be able to find the information on the options you can provide. I have not yet tried providing options that are not mentioned, but next.jdbc should pass them on to the database. In case our backend fails to pass these through, I am happy to review a PR, or open an issue and I will fix it asap.
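If the pass-through works as described there, a config could look roughly like this (host and values are made up, and whether an extra key like :integratedSecurity actually survives datahike-jdbc's spec validation is exactly the question below):

```clojure
(def cfg
  {:store {:backend            :jdbc
           :dbtype             "sqlserver"
           :host               "db.example.com"
           :port               1433
           :dbname             "datahike"
           ;; extra, driver-specific key that next.jdbc would pass on as a Property
           :integratedSecurity true}})
```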
Yes, this is what I find interesting: I can connect with next.jdbc, but when I tried to pass the :jdbcUrl key it did not work. I checked the source code of datahike-jdbc, and it seems it is not within one of the specs.
https://github.com/replikativ/datahike-jdbc/blob/master/src/datahike_jdbc/core.clj
I don't see datahike.store.jdbc/jdbcUrl
I believe that if jdbcUrl is provided, it should supersede the other arguments?
probably, why do you prefer a url over a map? just curious...
because the URL can provide more options than the map
for example, integratedSecurity=true
(This is a policy of the company)
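i.e. a connection string along these lines (server and database names made up):
jdbc:sqlserver://db.example.com:1433;databaseName=datahike;integratedSecurity=true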
I actually think you can pass everything you need as keys in the config, at least that's how I understood the docs: "Any additional keys provided in the 'db spec' will be passed to the JDBC driver as Properties when each connection is made."
I thought next.jdbc just passes the additional options downstream. But if not I will of course implement the jdbcUrl with high prio.
I will test an additional key with a Postgres SSL connection
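Roughly like this, assuming the extra keys get forwarded to the Postgres driver as connection properties (:ssl and :sslmode are standard PostgreSQL driver properties; host and credentials are made up):

```clojure
(def cfg
  {:store {:backend  :jdbc
           :dbtype   "postgresql"
           :host     "db.example.com"
           :port     5432
           :user     "datahike"
           :password "secret"
           :dbname   "datahike"
           ;; extra keys that should end up as driver Properties
           :ssl      true
           :sslmode  "require"}})
```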
Thanks
But I might not understand everything (I honestly did not dig deeper into it).
Thanks a lot for your support!
According to this document, you could use the jdbcUrl key directly
https://cljdoc.org/d/seancorfield/next.jdbc/1.1.588/doc/getting-started
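For reference, with plain next.jdbc the jdbcUrl form from that guide looks roughly like this (connection string made up):

```clojure
(require '[next.jdbc :as jdbc])

;; next.jdbc accepts a db-spec map containing just :jdbcUrl
(def ds (jdbc/get-datasource
          {:jdbcUrl "jdbc:sqlserver://db.example.com:1433;databaseName=datahike;integratedSecurity=true"}))

(jdbc/execute! ds ["SELECT 1 AS ok"])
```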
Yeah, sorry, I have seen this already and it seems I did not test it thoroughly on my end. I will try to fix this asap.