any thoughts on http://flur.ee anyone ? "Open source semantic graph database that guarantees data integrity, facilitates secure data sharing, and powers connected data insights." "Fluree is more than a database and more than a blockchain. It is a data management platform that merges the analytic power of a modern graph..." it's written in clojure
Never heard of it, but it looks interesting
don't want to be that guy again but it doesn't appeal to me
• the look is a very generic template/style
• the page is mostly filled with buzzwords
• there are statements all over the place that raise serious questions about security, scalability, and maintenance costs that are not answered immediately
• json based query language? why do they have their own if they support others? Not saying there is anything bad there, but I am curious about the reasons
• seems like a lot of stuff that is reimplemented for no obvious reasons https://docs.flur.ee/docs/1.0.0/schema/functions
thanks for bringing fluree up @thegobinath I'm a dev advocate there, so i can try to answer any questions, if needed.
@ashnur we have a json based query language to facilitate easy interop with other languages and via query/transaction calls via http, and yes our marketing site is somewhat buzzwordy, but we've got some pretty good stuff under the hood.
Well that's just it, 'easy interop' sounds like something to sell with not to build upon 😞 Really easy interop is when you don't even need to learn a new DSL, no?
@trevor Interesting! I’m curious about the origin of the db. It’s a technical product, but I can’t find any technical founders, is this correct? I’m looking at this page https://flur.ee/about-us/
The fact that it supports RDF/SparQL is a big plus for us, as we use this format internally and it's a standard
@jeroenvandijk Brian Platz is the technical founder and CEO
@ashnur that's why we also support GraphQL, SQL, a subset of SPARQL, and you can call directly via Clojure. But if you don't know Clojure and you're familiar with Javascript or Python, writing JSON is something you are more than likely familiar with, and it gets you some benefits the other query languages don't support, namely time-based queries. If that is something needed by your app, then using FlureeQL in JSON or Clojure is necessary.
Looks pretty nice from the front page
I spent over a year deep diving into blockchain tech and in the end my conclusion was that 1. it's a useful technology in certain situations, e.g. when multiple transport companies want to use a single deposit: putting a blockchain on the system gives you an audit trail, and a new company only needs to set up the tech and can integrate immediately without any further costs. 2. it still requires integration with the law and everything else, like everything else. What blockchain is not good for, for physical and philosophical reasons that I am very happy to delve into if anyone is interested, is implementing general solutions (e.g. a programming language or a database).
• yes, ethereum is based on this idea, but it's more like a public research project than something that's commercially viable; check the list of biggest apps: https://www.stateofthedapps.com/rankings/platform/ethereum
• In my honest opinion, yes, they hold lots of value, but not necessarily in what they promise. I think most of these projects are scams to bring in investor money and then run with it. The developers building most of these projects are there because the work is interesting and the pay is good. I am speaking from experience: we have implemented quite a few POCs for ethereum and were involved in a couple of more serious projects as well. Open-ended projects tend to go on longer.
Hello! I'm looking for a good Clojure/Java developer in Russia. Do they exist? Please contact me directly for details.
Can I destructure keywords ignoring the namespace? For example, I have a generic function that accepts a name key and will handle them all the same regardless of the namespace
you might want to ask in https://t.me/clojure_ru
thanks!
I could just strip the namespace but maybe there is a feature in destructuring for this
no, destructure needs namespace for fully qualified keywords
Ah no worries. Thank you
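There's no destructuring syntax that ignores the namespace, but one option is to strip namespaces from the keys before destructuring. A minimal sketch; `strip-ns` and `greet` are made-up names, not core functions:

```clojure
;; strip-ns is a hypothetical helper: rebuilds the map with
;; the namespace dropped from every keyword key.
(defn strip-ns [m]
  (into {} (map (fn [[k v]] [(keyword (name k)) v])) m))

;; A generic function that handles a `name` key from any namespace.
(defn greet [m]
  (let [{:keys [name]} (strip-ns m)]
    (str "Hello, " name)))

(greet {:user/name "Rich"})   ;; => "Hello, Rich"
(greet {:order/name "Acme"})  ;; => "Hello, Acme"
```

Note that colliding keys (e.g. both `:user/name` and `:order/name` in one map) would clobber each other, so this only makes sense when the namespace truly doesn't matter.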
Hello. If I need to send EDN data on the wire (from my web app's backend to frontend), is a function like this a way to go?
(require '[cognitect.transit :as transit])
(import [java.io ByteArrayInputStream ByteArrayOutputStream])
(defn to-edn-str [data]
(let [out (ByteArrayOutputStream. 4096)
writer (transit/writer out :json)]
(transit/write writer data)
(.toString out)))
(to-edn-str [:abc 1 2])
that's transit, not EDN? for EDN just pr-str
yes. Still not getting 100% of the difference (and the reasoning)
transit is better for sending stuff over the wire, so that is fine. but calling it to-edn-str is rather confusing since what you get is a transit JSON string
ah true
better in which sense btw? is that because JSON can be gzipped (or something) so it's more optimal to send than EDN, which for the browser is mere text/plain?
no, both are just text strings. transit is just a little faster to parse and a little smaller overall
gzip works for all, no difference there
ah, yes, so that's the browser's/server's parsing algorithm
no, as far as the browser is concerned its just a string. it has no notion of transit or EDN
I mean, when it tackles that transit JSON and later walks the tree (or sth) to turn it into proper CLJS objects
as opposed to parsing the EDN string
"it" doesn't do that. YOUR code does that. either via the transit reader or the EDN reader.
ok. agree. thanks
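For the plain-EDN path mentioned above ("just pr-str"), the round trip is a one-liner each way. A minimal sketch using only what ships with Clojure:

```clojure
(require '[clojure.edn :as edn])

;; Serialize: pr-str prints any Clojure value as readable EDN text.
(def payload {:id 7 :tags #{:a :b} :items [1 2 3]})
(def s (pr-str payload))

;; Deserialize: clojure.edn/read-string is the safe reader for
;; untrusted input (unlike clojure.core/read-string, which can
;; evaluate code via reader macros).
(= payload (edn/read-string s)) ;; => true
```

So the trade-off is simplicity (EDN needs no extra library) versus the parsing speed and size wins transit gives you on the wire.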
Might be a vague question. I am going to implement a system with several modules. Each module communicates with the others through core.async channels. I haven't touched this part before. Is there any example code/project for reference? I am mostly interested in the coordination and message passing (pub/sub) between these modules.
Maybe @ognivo?
The ns is stripped by destructuring bc local bindings are always unnamespaced
@admin055 ❤️
@i is a module something abstract? i.e. will they still be spawned by a single process?
(like lein run)
Anyone here has experience with Jackson serializers? I'm trying to get it to use an IterableSerializer instead of a CollectionSerializer for a LazySeq with jsonista
yup. still spawned by a single process.
if they were in separate jvms then core.async wouldn't help at all. also please don't use lein as a prod process launcher, lein is a build tool and the run task is a convenience for developer iteration
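A minimal in-process sketch of that pub/sub coordination, assuming the modules share one JVM and org.clojure/core.async is on the classpath (the "module" roles here are made up for illustration):

```clojure
(require '[clojure.core.async :as a])

;; One shared bus; a/pub routes messages by their :topic key.
(def bus (a/chan))
(def publication (a/pub bus :topic))

;; "Orders module" subscribes to the :orders topic.
(def orders-ch (a/chan))
(a/sub publication :orders orders-ch)

;; "Intake module" publishes onto the shared bus.
(a/>!! bus {:topic :orders :id 42})

;; Only subscribers of :orders receive the message.
(def received (a/<!! orders-ch))
;; received => {:topic :orders :id 42}
```

Each module only needs a reference to the publication, not to the other modules, which keeps them decoupled.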
for a while I've been avoiding jackson because of the brittle version-sensitive deps and using clojure.data.json instead. ymmv, but json encoding never turned out to be my perf bottleneck
+1 to all of that, and when I do use jackson, it's not the ObjectMapper ORM-ey stuff
@seancorfield I remember you are doing some removing Jackson work from a codebase. How’s that going?
My concern is, Jackson might be indirectly referenced by other libs, so it still gets used.
I updated from data.json 1.1.0 to 2.3.0 and am getting some very odd results back. I'm not sure exactly what this is but in Cursive, one of the decoded strings gets printed in the REPL as a series of NULs (see attached screenshot). I'm also not sure how to repro this since it appears to have something to do with how the inputstream is originating. I am calling a GCP API with the Java 11 HTTP client and getting back an inputstream. I'm then calling json/read on the result of that.
(def resp
(java-http-clj.core/send
my-req
{:client http-client
:as :input-stream}))
(with-open [rdr (io/reader (:body resp))]
(json/read rdr))
The last form is the one returning the oddly decoded JSON.
If I spit the inputstream to a file and run the same code with a reader created from a file, the decoded result is correct (no NUL).
(with-open [rdr (io/reader (io/file "test.json"))]
(json/read rdr))
Seems like this is an issue with the 2.x data.json version. I will revert back to 1.1.0 for now. Happy to provide more info if the maintainers are interested.
sure, but the problem with jackson is the version-change brittleness, so each time you remove a usage of jackson you are mitigating that problem
it's not a question of "use it anywhere" vs. "don't ever use it", it's a strategy of reducing the number of places it's used to reduce the brittleness that its usage introduces
@kenny would be great to learn more about what's up so we can fix if needed - we have a channel #data_json if you could isolate something
I have some use cases where a large chunk of my CPU is wasted in Jackson
Jsonista is faster so I'm trying to work with that
be careful with that analysis - for example, if jackson is consuming a lazy seq, the profiler will describe the work done realizing that seq as jackson's CPU usage
@i We got to the point where we pin the Jackson version for just one subproject now (to 2.8.11, because 2.9.0 introduced a breaking change around null handling, so at least we've tracked down why it causes failures). All the other projects just ignore the issue now and let deps bring in whatever version of Jackson they want (mostly 2.10.x as I recall).
Yeah, I know, and this whole thing started because I saw that lazy seqs are consumed twice because the CollectionSerializer calls .size() first
What I was hoping to do was avoid intermediate allocations as much as possible, it's a very large stream
This analysis still holds
lazy seqs are cached though - that would cause heap pressure but not CPU (except indirectly via more GC work)
Hi, I would like to point a new clojurian to this slack but I forgot where I got the invitation from.
i think http://clojurians.net will help in this case
@dpsutton Thanx! That worked!
It is an extremely garbage intensive piece of code
Has anyone here ever used a different arity than the 2-arity transit/write-handler? If so, could you explain to me why?
without committing to any official policy, is there a ballpark number of votes on http://ask.clojure.org that get tickets added to a roadmap or release candidate?
no, I look at them from top down though for pulling into consideration
most have ≤ 1, so more than that is noticeable :)
https://ask.clojure.org/index.php/questions/clojure?sort=votes is a starting point
haha. yeah. was just wondering if my fourth vote might hit some threshold 🙂
even then, this is just one of many things serving as fodder for attention
makes sense. thanks for the info
oh I thought it was 6 votes. I guess I can stop bribing folks!
Best autocomplete for Clojure?
How do I make the following function handle 'sequency' collections (i.e. sets, lists, vectors) properly? Feels like I have to deal with multiple specificities. For example, (conj nil x) returns a list, so the seq-init is not the right one because I'm doing a conj that adds the element at the start of the coll.
(defn deep-remove-fn
{:test
(fn []
(is (= ((deep-remove-fn empty?) {}) nil))
(is (= ((deep-remove-fn empty?) []) nil))
(is (= ((deep-remove-fn empty?) '()) nil))
(is (= ((deep-remove-fn empty?) #{}) nil))
(is (= ((deep-remove-fn nil? boolean? keyword?)
[:a {:c true} 9 10 nil {:k {:j 8 :m false}}])
[{} 9 10 {:k {:j 8}}]))
(is (= ((deep-remove-fn false? zero?)
{:a 90 :k false :c {:d 0 :e 89}})
{:a 90, :c {:e 89}}))
(is (= ((deep-remove-fn empty?)
{:a 90 :k {:m {}} :c {:d 0 :e #{}}})
{:a 90 :c {:d 0}}))
(is (= ((deep-remove-fn empty?)
[#{7 8 9} [11 12 13] '(15 14)])
[#{7 8 9} [11 12 13] '(15 14)]))
(is (= ((deep-remove-fn empty?)
{:a {:b {} :c [[]]} :k #{#{}}})
nil))
(is (= ((deep-remove-fn nil?)
{:a {:b {} :c [[]]}})
{:a {:b {} :c [[]]}}))
(is (= ((deep-remove-fn nil? empty?)
{:a {:b {} :c [[]] :k #{#{}}}})
nil)))}
[& remove-fns]
(let [remove-fns
(for [remove-fn remove-fns]
#(try
(remove-fn %)
(catch Exception _
nil)))
removable? (apply some-fn remove-fns)
map-init (if (removable? {}) nil {})
seq-init (if (removable? []) nil [])]
(fn remove [x]
(when-not (removable? x)
(cond
(map? x) (reduce-kv
(fn [m k v]
(if-let [new-v (remove v)]
(assoc m k new-v)
m))
map-init
x)
(seq? x) (reduce
(fn [acc curr]
(if-let [new-curr (remove curr)]
(conj acc new-curr)
acc))
seq-init
x)
:else x)))))
I think clojure.walk/postwalk would make this code much simpler
Yeah, should try it with postwalk
also you might consider a multimethod / some multimethods on type, rather than inline conditionals everywhere
that way, to understand what is done with a type I can look at its method(s) instead of finding the relevant line in each condition
Yup makes sense. That way I could easily extend it too
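A rough sketch of the postwalk suggestion, assuming the same behavior as the tests above: drop matching entries/elements at every level, and return nil when the pruned result itself matches. Not a drop-in replacement, just one way it could look:

```clojure
(require '[clojure.walk :as walk])

(defn deep-remove-fn [& preds]
  (let [;; guard predicates like the original does, since e.g. (empty? 9) throws
        safe       (fn [p] #(try (p %) (catch Exception _ false)))
        removable? (apply some-fn (map safe preds))
        prune      (fn [x]
                     (cond
                       ;; leave map entries alone so [k v] pairs survive the walk
                       (map-entry? x) x
                       (map? x)    (into {} (remove (comp removable? val)) x)
                       (vector? x) (into [] (remove removable?) x)
                       (set? x)    (into #{} (remove removable?) x)
                       ;; apply list preserves order (into would reverse it)
                       (seq? x)    (apply list (remove removable? x))
                       :else x))]
    (fn [x]
      (let [pruned (walk/postwalk prune x)]
        (when-not (removable? pruned) pruned)))))

((deep-remove-fn nil? boolean? keyword?)
 [:a {:c true} 9 10 nil {:k {:j 8 :m false}}])
;; => [{} 9 10 {:k {:j 8}}]
```

Because postwalk visits leaves first, the per-type pruning only has to handle one level at a time, which removes the seq-init/map-init bookkeeping entirely.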
Would love to hear some thoughts on the blocking aspects of blockchain with regard to general programming languages or databases - a lot of projects seem to try to do this, perhaps like the recently discussed Fluree DB on reddit https://github.com/fluree/db
Maybe this isn’t even close to the way I should be going about solving this problem, in which case, please suggest what you think might be a better approach
hi! i just spent several hours trying to track down a really strange bug in some of my code. i'm doing an end-to-end splitting of a file: FEC, encrypt, persist to db, write header, then roundtrip back the other way. while i read from a file initially, to reduce my initial code i instead read the whole thing into memory to do the compare (blake2b hash on source and end result). finally managed to track the culprit down after adding logging to my all-nighter mess of a personal codebase 😄
enki.buffers> (byte-array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43087 (form-init18270525509685357804.clj:12).
-1149239296
enki.buffers> (. clojure.lang.Numbers byte_array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43089 (form-init18270525509685357804.clj:15).
-1149239296
from:
https://github.com/clojure/clojure/blob/b1b88dd25373a86e41310a525a21b497799dbbf2/src/jvm/clojure/lang/Numbers.java#L1394
@WarnBoxedMath(false)
static public byte[] byte_array(Object sizeOrSeq){
if(sizeOrSeq instanceof Number)
return new byte[((Number) sizeOrSeq).intValue()];
obviously the issue is 3145728000 > integer max size, so it's overflowing.
(defn byte-array
"Creates an array of bytes"
{:inline (fn [& args] `(. clojure.lang.Numbers byte_array ~@args))
:inline-arities #{1 2}
:added "1.1"}
([size-or-seq] (. clojure.lang.Numbers byte_array size-or-seq))
([size init-val-or-seq] (. clojure.lang.Numbers byte_array size init-val-or-seq)))
there's nothing obvious in the docstring nor warnings on clojuredocs about max size for byte-arrays.
is this a JVM limitation? (i know it's extremely bad practice, but it was the quick-and-dirty way to test my functionality and i have plenty of RAM. i'll of course rewrite it to use some other method.)
maybe at least the docstring should be modified, or maybe Numbers.java can be extended, i dunno. what do you think? at least it is a hidden footgun.
it is a jvm limitation
e.g. arrays are indexed by integers
yep, just read up on it
obvious when one knows the limitations of the underlying platform, but it was a nightmare to discover (since i calculate the array size from a custom binary datastructure by summing block sizes, i assumed i had a mistake somewhere. of course, upon discovering it only blew up > max int, i narrowed the scope somewhat...) it's gone midnight here, but after some sleep i might see if i can add a note somewhere as a suggestion. (64-bit sbcl spoiled me.)
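The wraparound in the REPL errors above can be shown directly: Number.intValue (which Numbers/byte_array calls on the size argument) truncates to 32 bits, so the requested size silently goes negative. A small sketch reproducing the exact number from the stack trace:

```clojure
;; Java array sizes are 32-bit ints; intValue truncates, so the
;; size from the error above wraps to a negative number:
(unchecked-int 3145728000)  ;; => -1149239296
(.intValue 3145728000)      ;; => -1149239296

;; which is just the size minus 2^32:
(- 3145728000 4294967296)   ;; => -1149239296
```

This is why the NegativeArraySizeException message shows -1149239296 rather than the size that was actually requested.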