clojure-europe

For people in Europe... or elsewhere... UGT https://indieweb.org/Universal_Greeting_Time
djm 2021-03-03T07:01:28.267500Z

👋

mccraigmccraig 2021-03-03T07:22:47.267700Z

¡månmån

dharrigan 2021-03-03T07:37:26.267900Z

Buongiorno!

kardan 2021-03-03T07:41:46.268100Z

Good morning

ordnungswidrig 2021-03-03T07:49:57.268300Z

moin moin

slipset 2021-03-03T07:52:47.268500Z

morning!

javahippie 2021-03-03T08:10:52.268700Z

Morrrrgen!

agigao 2021-03-03T08:51:23.268900Z

დილა მშვიდობის!

2021-03-03T09:10:31.269200Z

morning

ordnungswidrig 2021-03-03T09:11:32.270300Z

@chokheli I realize that the Georgian script (Mkhedruli, right?) is liked by a lot of people. I wonder if there are different variations like sans-serif or the like. To me it seems like there is only that single “rounded” one.

agigao 2021-03-04T09:22:37.340Z

No, only Georgian linguists know the old ones I think.

thomas 2021-03-03T09:20:51.270900Z

moin moin

jkxyz 2021-03-03T09:31:42.271600Z

'Neața!

javahippie 2021-03-03T09:42:19.271700Z

Interesting! Found this: https://fonts.ge/en/

javahippie 2021-03-03T09:43:12.272Z

Something I never really thought about when choosing fonts: that some alphabets are not supported.

RAMart 2021-03-03T11:27:12.272400Z

https://en.wikipedia.org/wiki/Noto_fonts

RAMart 2021-03-03T11:27:44.272900Z

And be careful when choosing, publishing and redistributing fonts, because fonts have licenses too.

👍 2
ordnungswidrig 2021-03-03T13:02:52.273800Z

@javahippie that’s an interesting collection. It seems to me that the western scripts have a wider variety of styles. But that might also be because computers and electronic fonts are clearly biased towards western scripts.

ordnungswidrig 2021-03-03T13:03:00.274Z

(Or even just ASCII)

javahippie 2021-03-03T13:07:00.274200Z

If you type some letters from the Georgian alphabet into Google Fonts, you have a really hard time finding any that work. I also guess that this is because most of IT is western-centric

javahippie 2021-03-03T13:07:22.274400Z

Even Noto Sans only works in bold:

javahippie 2021-03-03T13:13:56.277100Z

For a thing I am writing, I’d like to persist EDN to a database in a way that I can search for (and index) certain values. A time series DB is a plus. I’ve managed to avoid this topic until now; do you have any pointers or experiences? It needs to be open source, as the software I am writing will be, too. The first thing coming to mind for me is Datahike.
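
A minimal sketch of what that could look like with Datahike, assuming the file store; the path, attribute names and schema below are hypothetical, purely for illustration:

```
(require '[datahike.api :as d])

;; Hypothetical file-backed store.
(def cfg {:store {:backend :file :path "/tmp/edn-store"}})

(d/create-database cfg)
(def conn (d/connect cfg))

;; Schema for the values we want to index and search on.
(d/transact conn [{:db/ident       :reading/id
                   :db/valueType   :db.type/string
                   :db/cardinality :db.cardinality/one
                   :db/unique      :db.unique/identity}
                  {:db/ident       :reading/timestamp
                   :db/valueType   :db.type/instant
                   :db/cardinality :db.cardinality/one}
                  {:db/ident       :reading/value
                   :db/valueType   :db.type/double
                   :db/cardinality :db.cardinality/one}])

;; Transact a value, then query it back by attribute.
(d/transact conn [{:reading/id        "sensor-1"
                   :reading/timestamp (java.util.Date.)
                   :reading/value     42.0}])

(d/q '[:find ?id ?v
       :where
       [?e :reading/id ?id]
       [?e :reading/value ?v]]
     @conn)
```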

borkdude 2021-03-03T13:15:19.277400Z

@javahippie I would go with datalevin probably

borkdude 2021-03-03T13:15:50.278Z

But that doesn't support history, if you need that, datahike probably works. Datalevin works with GraalVM which is a plus for me :)
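
For comparison, a minimal Datalevin sketch under the same assumptions (hypothetical path and schema); its API closely follows DataScript:

```
(require '[datalevin.core :as d])

;; Hypothetical schema: index readings by id.
(def schema {:reading/id {:db/valueType :db.type/string
                          :db/unique    :db.unique/identity}})

(def conn (d/get-conn "/tmp/datalevin/readings" schema))

(d/transact! conn [{:reading/id "sensor-1" :reading/value 42.0}])

(d/q '[:find ?id ?v
       :where
       [?e :reading/id ?id]
       [?e :reading/value ?v]]
     (d/db conn))

(d/close conn)
```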

javahippie 2021-03-03T13:20:12.279900Z

Interesting, I didn’t know Datalevin. Also, I see loom graph protocols on the roadmap; that’s great, as I use loom. From first skimming their page, it seems like it needs LMDB as a backend technology, right? That’s not too common

javahippie 2021-03-03T13:21:40.280800Z

Also still under development, and having a clusterable database underneath would be important. But still interesting, will definitely take a look at it!

borkdude 2021-03-03T13:28:16.281200Z

Yes, right now you have to install lmdb separately but this will be addressed

borkdude 2021-03-03T13:28:33.281600Z

(using automatic bundling of the driver, just like xerial sqlite jdbc does)

borkdude 2021-03-03T13:28:43.281800Z

Soon coming as babashka pod too btw

😁 1
borkdude 2021-03-03T13:36:52.282300Z

The sqlite that Clojure always wanted, that's how I see it ;)

javahippie 2021-03-03T13:37:16.282500Z

Definitely a cool project!

mccraigmccraig 2021-03-03T13:45:44.285600Z

i had my hopes up for a moment there that i could add a datalog db to our mobile app... i don't think the clojurey bit will work though

mkvlr 2021-03-03T13:46:21.286Z

will datalevin & asami eventually support similar use cases? Datalevin is getting loom graph protocols https://github.com/juji-io/datalevin#earth_americas-roadmap and asami durable storage https://github.com/threatgrid/asami/compare/storage

borkdude 2021-03-03T13:46:49.286400Z

@mkvlr datalevin already supports durable storage

mkvlr 2021-03-03T13:47:15.286900Z

@borkdude yep, so coming at it from different directions

borkdude 2021-03-03T13:48:01.287300Z

ah I see, I've never looked into the loom graph protocol stuff

ordnungswidrig 2021-03-03T14:03:52.287700Z

what about Crux on a local RocksDB (or LMDB)?

simongray 2021-03-03T14:06:06.290800Z

The ongoing friendly competition between Asami, Datalevin and Datahike really excites me, but I wish I could fast-forward about 1 year to know which one to pick for new projects.

simongray 2021-03-04T08:05:31.336300Z

I’ve had to learn a lot about RDF and the Semantic Web at work and could see some clear parallels with the Datalog scene in Clojure, so I documented what I could find here: https://github.com/simongray/clojure-graph-resources#datalog

simongray 2021-03-04T08:05:48.336600Z

BTW Søren, you might be interested in this https://www.youtube.com/watch?v=zoOXCaZ3M2Y

reefersleep 2021-03-04T08:49:21.338Z

I am 🙂 Saw it on reddit. Haven’t had time to see it yet

reefersleep 2021-03-04T08:51:04.338200Z

Hard to distribute what little time I have among interesting things, there are so many. “Do I really need to see another video about rules engines? I already saw one and grasped the overall concept. Will this one bring anything new to the table?” I’m constantly making decisions like this, and mostly choosing the boring, uninformative and most importantly, time-saving option. Parent life, eh!

simongray 2021-03-04T09:14:43.339800Z

It’s only 30 minutes 😉

reefersleep 2021-03-04T09:29:13.340400Z

I see that you have an easy child 😛

javahippie 2021-03-03T14:06:28.291400Z

Crux is also a good candidate! To be honest, the architecture diagrams on their site scare me 😅

borkdude 2021-03-03T14:07:06.292800Z

FWIW Datalevin is used in production within Juji (company) and Asami is used within Cisco

simongray 2021-03-03T14:07:13.293100Z

@mkvlr not sure the directions are so different. AFAIK Asami, Datalevin and Datahike are all forks of Datascript.

mpenet 2021-03-03T14:07:38.294Z

they are all slightly different too

mpenet 2021-03-03T14:08:03.294800Z

they are not strictly competitors from what I understand

joelkuiper 2021-03-03T14:08:17.295300Z

I've been using datahike for a while now, and it seems to work as promised (history support was important to me). The datascript/datalog query language is really quite interesting!

mpenet 2021-03-03T14:08:23.295600Z

some support as-of, some don't; some have persistence, some don't, etc.

borkdude 2021-03-03T14:09:23.297100Z

That language is directly based on Datomic's query language which has been around at least since 2013 (which is when I first used it) and that is based on Datalog in turn.

joelkuiper 2021-03-03T14:09:45.298Z

alternatively you can also use datascript itself and read/write the EDN in a serialization format of your choosing, with a watcher as a form of persistence, but whether that makes sense really depends on the size of the data
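
A rough sketch of that idea, assuming a transaction listener that snapshots the datoms to an EDN file (file name and schema are hypothetical):

```
(require '[datascript.core :as d]
         '[clojure.edn :as edn])

(def schema {:task/id {:db/unique :db.unique/identity}})  ; hypothetical schema
(def conn (d/create-conn schema))

;; After every transaction, write all datoms out as plain vectors.
(d/listen! conn :persist
           (fn [{:keys [db-after]}]
             (->> (d/datoms db-after :eavt)
                  (mapv (juxt :e :a :v :tx))
                  pr-str
                  (spit "snapshot.edn"))))

;; Rebuild a connection from the snapshot on startup.
(defn restore []
  (let [datoms (map #(apply d/datom %)
                    (edn/read-string (slurp "snapshot.edn")))]
    (d/conn-from-datoms datoms schema)))
```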

borkdude 2021-03-03T14:10:27.299100Z

You could also use postgres + jsonb :P
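
Half joking or not, that option could look roughly like this with next.jdbc and a jsonb column (table name and connection details are made up):

```
(require '[next.jdbc :as jdbc]
         '[clojure.data.json :as json])

;; Made-up connection details.
(def ds (jdbc/get-datasource {:dbtype "postgresql" :dbname "edn_store"}))

;; One jsonb column, plus a GIN index so containment queries stay fast.
(jdbc/execute! ds ["create table if not exists docs (id serial primary key, doc jsonb)"])
(jdbc/execute! ds ["create index if not exists docs_doc_idx on docs using gin (doc)"])

;; Data goes in as JSON...
(jdbc/execute! ds ["insert into docs (doc) values (?::jsonb)"
                   (json/write-str {:id "sensor-1" :value 42.0})])

;; ...and comes back out via a jsonb containment query.
(jdbc/execute! ds ["select * from docs where doc @> ?::jsonb"
                   (json/write-str {:id "sensor-1"})])
```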

ordnungswidrig 2021-03-04T08:00:12.335600Z

I don’t care so much about depth as about connectedness (how interconnected a graph is)

reefersleep 2021-03-04T08:52:47.338700Z

Just asking because we’ve got some fairly deep structures at work, and they seem a hassle to deal with in the query language - and they’re heavy to query as well, so ideally, you’d put indices in a lot of places in those structures, I guess. I was thinking that maybe it’s a dream for lighter structures.

borkdude 2021-03-04T09:41:23.340600Z

I guess that's one of the benefits of triples: no nested structures, easy to query

reefersleep 2021-03-04T10:45:10.341800Z

Indeed! I’ve had map fatigue, it’s very real, and I really liked the O’Doyle presentation that @simongray linked to in the Clojure Reddit recently for opening my eyes to triples. (I wasn’t mature enough to see the light when I used Datomic early in my Clojure career 🙂 )

simongray 2021-03-04T11:44:11.344200Z

I think Domain Modeling with Datalog (https://www.youtube.com/watch?v=oo-7mN9WXTw) is what really sparked my interest.

simongray 2021-03-04T11:48:21.344500Z

I think the simplicity of modeling using tuples as well as the ability to apply both graph theory and logic programming is what is drawing me to it. SQL is fine and a known quantity, but it’s not without its flaws.

simongray 2021-03-04T11:50:35.344700Z

It’s hard to move SQL out of the backend database, whereas the Clojure Datalog paradigm is much more universally applicable and portable.

ordnungswidrig 2021-03-04T13:35:26.346600Z

I like the idea of datalog in the frontend.... Where it makes sense.

ordnungswidrig 2021-03-04T13:36:18.346800Z

Also, change-based communication with the backend allows for a nice synchronisation story and even concurrent editing.

ordnungswidrig 2021-03-04T13:37:27.347200Z

I mean, when you source events from the server, you can build your local projected data structures in whatever shape is best for querying them. Like CQRS does on the server side.

simongray 2021-03-04T13:54:39.349700Z

Yup, something like that is the dream. I feel that there is a lot of momentum in the Clojure ecosystem towards creating that kind of thing.

simongray 2021-03-04T13:56:20.349900Z

like the whole Fulcro framework, but also lots of smaller libraries. And if you can decompose your data into EAV tuples, you then get to recompose it with lots of things.
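
As a tiny illustration of that decomposition, a hypothetical nested map flattened into EAV tuples:

```
;; Flatten a nested map into [entity attribute value] tuples,
;; assigning a fresh entity id to every nested map.
(defn map->eav
  ([m] (map->eav m (atom 0)))
  ([m counter]
   (let [eid (swap! counter inc)]
     (reduce-kv (fn [tuples k v]
                  (if (map? v)
                    (let [child     (map->eav v counter)
                          child-eid (ffirst child)]
                      (into (conj tuples [eid k child-eid]) child))
                    (conj tuples [eid k v])))
                []
                m))))

(map->eav {:name "Ada" :address {:city "London" :zip "NW1"}})
;; => [[1 :name "Ada"] [1 :address 2] [2 :city "London"] [2 :zip "NW1"]]
```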

javahippie 2021-03-03T14:10:32.299300Z

What I like about Crux and Datahike is that you are able to plug in a JDBC datasource for persistence. The software is intended to be hosted on-site, and choosing your own persistence provider, even one that supports JDBC and is clusterable on its own, would be great
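
For the JDBC route with Datahike, the config is roughly along these lines via the datahike-jdbc backend (connection details are made up, and the exact keys may differ between versions):

```
;; Assumes the datahike-jdbc backend is on the classpath.
(require '[datahike.api :as d])

(def cfg {:store {:backend  :jdbc
                  :dbtype   "postgresql"
                  :host     "localhost"
                  :port     5432
                  :user     "datahike"
                  :password "secret"
                  :dbname   "edn_store"}})

(d/create-database cfg)
(def conn (d/connect cfg))
```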

borkdude 2021-03-03T14:10:50.300Z

I think datahike now also supports postgres as storage, if I'm not mistaken

joelkuiper 2021-03-03T14:11:23.301100Z

yeah it can run through konserve https://github.com/replikativ/konserve

mpenet 2021-03-03T14:11:31.301500Z

yes, JDBC; the whole storage story is abstracted, so in theory it's quite easy to use on any k-v store

mpenet 2021-03-03T14:12:04.301900Z

asami is about to have storage too

borkdude 2021-03-03T14:12:57.302700Z

I've heard some performance concerns about datahike vs crux btw, also good to keep in mind

joelkuiper 2021-03-03T14:13:02.302900Z

which is in turn a subset of prolog, which has been around for a very long time 😛

borkdude 2021-03-03T14:13:06.303100Z

(crux being the more performant one)

borkdude 2021-03-03T14:13:20.303200Z

> and that is based on Datalog in turn. that's what I said ;)

borkdude 2021-03-03T14:13:26.303400Z

ooh prolog

borkdude 2021-03-03T14:13:30.303700Z

sorry, bad reading

joelkuiper 2021-03-03T14:13:31.303900Z

😉

borkdude 2021-03-03T14:13:45.304400Z

yeah, and prolog is based on.... what's the source of everything?

mpenet 2021-03-03T14:14:07.305100Z

not sure datahike is optimised (for the write path at least) yet, that will come eventually

joelkuiper 2021-03-03T14:14:10.305300Z

I'll second that from experience; datahike transactions are very slow with both the leveldb and the file-based backend

joelkuiper 2021-03-03T14:14:19.305800Z

queries are generally very fast though

mpenet 2021-03-03T14:14:23.306Z

last time I checked (a while ago) there was no batching of writes

borkdude 2021-03-03T14:14:27.306200Z

I probably heard that from you then ;)

joelkuiper 2021-03-03T14:14:55.306700Z

Horn clauses 😛 ?

mpenet 2021-03-03T14:15:08.307100Z

with crux, for writes you essentially benchmark kafka, so yeah it's fast; read-after-write on both would be more interesting

ordnungswidrig 2021-03-03T14:16:37.307600Z

That works better than I want to confess actually.

ordnungswidrig 2021-03-03T14:17:28.308400Z

@joelkuiper nerd.

😅 1
ordnungswidrig 2021-03-03T14:17:32.308600Z

🙂

borkdude 2021-03-03T14:17:37.308800Z

yes, it's pretty good

mpenet 2021-03-03T14:17:40.309100Z

both are really cool projects, and again quite different. I am not sure raw performance is a good metric to compare them tbh

borkdude 2021-03-03T14:18:18.309200Z

👿

borkdude 2021-03-03T14:18:49.309700Z

it depends what kind of perf is important for your use case

mpenet 2021-03-03T14:19:13.310400Z

depends on what you need to do with the data

mpenet 2021-03-03T14:19:19.310700Z

for asami: https://github.com/threatgrid/asami/wiki/Storage-Whitepaper

joelkuiper 2021-03-03T14:19:33.311100Z

the nice thing is that they're pretty much drop-in replacements for each other from the query/transaction perspective, so as the projects mature you can try all of them with relatively little effort

👍 1
joelkuiper 2021-03-03T14:20:11.311700Z

well, the core features I guess, history/loom protocols/replication etc will probably remain project dependent

mpenet 2021-03-03T14:20:14.311900Z

crux is also likely way more mature

mpenet 2021-03-03T14:20:34.312200Z

(than datahike)

2021-03-03T14:21:35.312500Z

I thought datahike had been around longer

mpenet 2021-03-03T14:21:49.313200Z

crux has people working full time on it I think

ordnungswidrig 2021-03-03T14:21:52.313400Z

nice, this is from the datalevin readme:
> If you are interested in using the dialect of Datalog pioneered by Datomic®, here are your current options:
> If you need time travel and rich features backed by the authors of Clojure, you should use https://www.datomic.com.
> If you need an in-memory store that has almost the same API as Datomic®, https://github.com/tonsky/datascript is for you.
> If you need an in-memory graph database, https://github.com/threatgrid/asami is fast.
> If you need features such as bi-temporal graph queries, you may try https://github.com/juxt/crux.
> If you need a durable store with some storage choices, you may try https://github.com/replikativ/datahike.
> There was also https://github.com/Workiva/eva/, a distributed store, but it is no longer in active development.
> If you need a simple and fast durable store with a battle tested backend, give https://github.com/juji-io/datalevin a try.

👍 5
mpenet 2021-03-03T14:22:20.313700Z

asami also has as-of iirc

2021-03-03T14:24:06.314700Z

what I like about crux is that it solves a problem I have a lot in financial services and other things where the application date is more important than the transaction insertion date

mpenet 2021-03-03T14:27:03.315800Z

I wish datomic (on-prem) was oss, it's really impressive and I really think it would be a huge boost to clj.

👍 1
mpenet 2021-03-03T14:27:49.316400Z

I hope it's in the plans for nubank to do just that, but maybe I am a bit naive

2021-03-03T14:31:29.317200Z

yeah, I'd be happier to have it fl/oss and pay for support. Had too many proprietary dbs disappear

2021-03-03T14:32:11.318Z

looking at hitchhiker-tree, it looks like a useful thing for me. I do a lot of “sort these things by date under an id and then reduce the events”

2021-03-03T14:33:01.318700Z

tho some of the work with arrow, clojure and memory mapping being done by the http://tech.ml crew looks interesting too for different sizes of data

javahippie 2021-03-03T15:07:36.319200Z

So I have many things to look into now, thanks 😅

2021-03-03T15:15:07.319500Z

mostly nowadays I use excel and csv for io

2021-03-03T15:15:20.319800Z

but my data is pretty small

2021-03-03T15:16:09.320600Z

I do store the raw simulations as compressed transit. I think I might go over to arrow and http://tech.ml.dataset for it in the future though https://github.com/techascent/tech.ml.dataset
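
A rough sketch of the compressed-transit part, assuming gzip over Transit JSON (file name and data shape are hypothetical):

```
(require '[cognitect.transit :as transit])
(import '[java.io FileOutputStream FileInputStream]
        '[java.util.zip GZIPOutputStream GZIPInputStream])

(defn write-transit-gz [path data]
  (with-open [out (GZIPOutputStream. (FileOutputStream. path))]
    (transit/write (transit/writer out :json) data)))

(defn read-transit-gz [path]
  (with-open [in (GZIPInputStream. (FileInputStream. path))]
    (transit/read (transit/reader in :json))))

(write-transit-gz "sim-run-001.transit.gz" {:run 1 :results [1.0 2.0 3.0]})
(read-transit-gz "sim-run-001.transit.gz")
```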

ordnungswidrig 2021-03-03T15:24:11.321400Z

@otfrom excel local files or excel stored on sharepoint/onedrive/that-office-online-thing?

javahippie 2021-03-03T15:26:30.322Z

Wonder if this works with Excel on S3 Buckets https://www.cdata.com/drivers/excel/jdbc/

mccraigmccraig 2021-03-03T15:33:54.323700Z

excel makes me 😿 - it has been the source of so many data issues with its dubious habit of silently changing values in CSV files

ordnungswidrig 2021-03-03T15:54:00.324400Z

@mccraigmccraig isn’t the problem more likely with the under-standardized CSV format?

2021-03-03T15:56:11.325600Z

well, they start off on my local drive. They end up all sorts of places.

2021-03-03T15:56:44.326700Z

@mccraigmccraig I've started working more w/excel directly to avoid some of the csv export issues

mccraigmccraig 2021-03-03T15:59:28.329Z

yeah, if csv specified type information, then the issues would go away - but they would also go away if excel didn't make type assumptions and convert values into an assumed type. either way it's painful

2021-03-03T16:00:44.329500Z

yeah, all the source data I deal with is in Excel, so mostly I'm trying to limit the number of changes they are making

agigao 2021-03-03T17:02:29.329700Z

@ordnungswidrig Mkhedruli is exactly the rounded one; there are some font variations, for sure, but that’s the standard. On the other hand, previous “versions” of the Georgian alphabet were quite different, more squared - https://ka.wikipedia.org/wiki/ქართული_დამწერლობა#ანბანი

agigao 2021-03-03T17:02:44.329900Z

First 2 rows

ordnungswidrig 2021-03-03T19:31:17.331800Z

Oh nice. The German page is missing that kind of comparison. But is this used like a font (“Times” vs. “Helvetica”)?

ordnungswidrig 2021-03-03T19:32:17.332900Z

Isn’t there something in the context of scientific data, R or the like, which is a robust data format more flexible than CSV but not an “application format” like Excel? (No, not JSON)

javahippie 2021-03-04T14:57:49.350500Z

My wife works as a statistician, and most of what they are using is CSV / other text formats

ordnungswidrig 2021-03-04T16:04:03.351200Z

Poor souls. :)

ordnungswidrig 2021-03-03T19:33:02.333100Z

CDF maybe?

reefersleep 2021-03-03T19:55:18.333200Z

Dang, I wasn’t even aware of all of these cool things. Sucks that Datomic didn’t work out for us back then. I’m happy to be back in SQL land, and I wonder (after Cassandra) whether some basic (for me) SQL stuff is missing from these alternatives.

reefersleep 2021-03-03T19:55:25.333400Z

Like it was in Cassandra, ugh

reefersleep 2021-03-03T19:55:36.333600Z

I mean, their briefs are all good

reefersleep 2021-03-03T19:56:36.333800Z

How deeply nested are the data structures you have worked with (and been happy with)?