clojure-dev

Issues: https://clojure.atlassian.net/browse/CLJ | Guide: https://insideclojure.org/2015/05/01/contributing-clojure/
2019-08-04T02:45:25.145300Z

One source for the purpose of the volatiles is here: https://clojure.atlassian.net/browse/CLJ-1580 Those were a continuation of changes started with this ticket: https://clojure.atlassian.net/browse/CLJ-1498

schmee 2019-08-04T09:21:23.146500Z

has anyone done any experiments with the inline classes in Valhalla early access and Clojure’s persistent data structures? 🙂

alexmiller 2019-08-04T15:02:27.146800Z

Maybe @ghadi

alexmiller 2019-08-04T15:02:52.147300Z

He was talking about it at the JVM lang summit

ghadi 2019-08-04T17:52:03.147900Z

I haven't had a chance to do anything except think about how it could apply within PersistentHashMap

ghadi 2019-08-04T17:52:33.148400Z

Brian Goetz was strongly encouraging experimentation -- it's ready for that https://wiki.openjdk.java.net/display/valhalla/LW2

schmee 2019-08-04T17:54:16.149600Z

yeah, I saw both your talks! 😄

schmee 2019-08-04T17:54:25.150Z

really cool stuff going on with the JVM at the moment

schmee 2019-08-04T17:54:51.150800Z

I’m going to try to pair Clojure and the Vector API and see what happens

ghadi 2019-08-04T17:57:10.152800Z

the Vector API is very exciting for the JVM. I think we'll be able to use the Vector API, but I don't think it will be very performant unless we can write our functions in such a way that large areas of code have the proper Vector types exposed -- that way the JVM will optimize through it
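As a very rough, untested sketch of what "keeping the Vector types exposed" might look like from Clojure (assuming the `jdk.incubator.vector` incubator API shape with `FloatVector/SPECIES_256`, `fromArray`, and `intoArray`, and a JVM started with `--add-modules jdk.incubator.vector`; `saxpy!` is just an illustrative name):

```clojure
;; Untested sketch: keep locals typed as FloatVector across a whole loop body,
;; so nothing drops back to Object between the loads and the store.
(import '(jdk.incubator.vector FloatVector VectorSpecies))

(def ^VectorSpecies species FloatVector/SPECIES_256)

(defn saxpy!
  "ys <- a*xs + ys over float arrays, one 256-bit chunk at a time.
  Any tail shorter than the lane count is ignored for brevity."
  [a ^floats xs ^floats ys]
  (let [a  (float a)
        n  (alength xs)
        lc (.length species)]
    (loop [i 0]
      (when (<= (+ i lc) n)
        (let [vx (FloatVector/fromArray species xs i)
              vy (FloatVector/fromArray species ys i)]
          (.intoArray (.add vy (.mul vx a)) ys i)
          (recur (+ i lc)))))
    ys))
```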

ghadi 2019-08-04T17:57:46.153500Z

I'm not sure whether it would work as well with the intervening casts to/from Object that you get with IFn

ghadi 2019-08-04T17:58:09.154Z

but I'd be happy to find out @schmee

ghadi 2019-08-04T17:59:29.154700Z

there is something custom about the Vector inlining that only happens in C2

ghadi 2019-08-04T18:07:09.155400Z

I guess if you arrange a fat method body using a bunch of macros, where all the locals are typed Vectors, that might work
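An untested sketch of that macro idea (same `jdk.incubator.vector` assumptions as above; `vmap-floats!` is a made-up name). The point is that the per-lane expression is spliced into the expansion as code rather than passed as an IFn, so every local in the generated body stays a typed FloatVector:

```clojure
;; Untested sketch: splice the per-lane expression into one "fat" loop body.
;; Tail handling omitted for brevity.
(defmacro vmap-floats!
  "Binds `v` to successive FloatVector chunks of float array `src`, evaluates
  `expr` (which must produce a FloatVector), and writes the results into
  float array `dest`."
  [[v src dest] expr]
  `(let [^"[F" src#  ~src
         ^"[F" dest# ~dest
         species#    jdk.incubator.vector.FloatVector/SPECIES_256
         lc#         (.length species#)
         n#          (alength src#)]
     (loop [i# 0]
       (when (<= (+ i# lc#) n#)
         (let [~v (jdk.incubator.vector.FloatVector/fromArray species# src# i#)
               r# ~expr]
           (.intoArray ^jdk.incubator.vector.FloatVector r# dest# i#)
           (recur (+ i# lc#)))))
     dest#))

;; e.g. doubling every lane of float array `in` into `out`:
;; (vmap-floats! [v in out] (.mul v (float 2.0)))
```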

schmee 2019-08-04T18:12:02.156100Z

yeah, I think I will have to jump through some major hoops to make it work, but that’s the fun in it! 🙂

ghadi 2019-08-04T18:17:45.157Z

candidate inline classes within Clojure are not clear to me yet

ghadi 2019-08-04T18:23:08.158800Z

I wish we could express SIMD crypto routines with the Vector API, but it's not timing-attack safe to do it within a JIT, unless there was a way to tell hotspot not to do timing-unsafe xforms within a region of code

schmee 2019-08-04T18:26:08.159900Z

to be honest I don’t understand how it’s possible to write timing-safe code on the JVM at all ¯\_(ツ)_/¯

ghadi 2019-08-04T18:26:19.160200Z

you can't 🙂

schmee 2019-08-04T18:27:44.161200Z

and yet we have javax.crypto :thinking_face: 😄

ghadi 2019-08-04T18:28:46.161900Z

I wonder how that's made safe (haven't peeked under the covers)

schmee 2019-08-04T18:29:49.162900Z

this is a pretty cool talk by the guy who did the ECC implementation in javax.crypto where he talks about timing-dependence etc: https://www.youtube.com/watch?v=5kj_GT6qvYI

ghadi 2019-08-04T18:29:49.163Z

> This relates to my comment that we need a way for the Vector runtime to "crack" the lambdas passed to HOF API points like Vector.reduce. If we had the equivalent of C# expression trees, we could treat chains of vector ops as queries to be optimized, when executing a terminal operation (such as Vector.intoArray or hypothetical Vector.collect). A vector expression could be cooked into some kind of IR, and then instruction-selected into a AVX code.

ghadi 2019-08-04T18:30:03.163500Z

an old post from John Rose re: vector ops ^

ghadi 2019-08-04T18:30:07.163700Z

"lambda cracking"

ghadi 2019-08-04T18:30:48.164600Z

more organized ideas around that ^ "metabytecode" 🙂

schmee 2019-08-04T18:31:10.165Z

cool, I’ll check it out! :thumbsup:

ghadi 2019-08-04T18:31:37.165400Z

it's a Forth-y stack machine embedded into indy bootstrap method arguments

ghadi 2019-08-04T18:32:39.165900Z

@schmee that talk looks cool, definitely will watch

schmee 2019-08-04T18:33:24.166400Z

eventually there will be a second JVM embedded in indy bootstrap methods 😁

2019-08-04T21:46:03.168100Z

Even without using inline classes, I think it might be worth experimenting, at least for Clojure vectors, with trees that have no PersistentVector$Node objects, only Object arrays. It seems there are twice as many levels of indirection as there need to be.
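A rough, untested sketch of what a lookup could look like in that layout, purely for illustration (not how PersistentVector is written today): interior nodes are plain Object arrays, so the walk is one array-element load per level instead of dereferencing a Node and then its internal array:

```clojure
;; Untested sketch: interior nodes are plain Object arrays, so walking down the
;; tree is one array-element load per level. With the current
;; PersistentVector$Node layout the same walk touches the Node object and its
;; internal array at each level.
(defn lookup-plain
  "Index lookup in a vector tree whose interior nodes are Object arrays;
  `shift` is the usual 5 bits per level."
  [^objects root ^long shift ^long i]
  (loop [node root, level shift]
    (if (pos? level)
      (recur (aget ^objects node (bit-and (unsigned-bit-shift-right i level) 0x1f))
             (- level 5))
      (aget ^objects node (bit-and i 0x1f)))))
```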

2019-08-04T21:47:59.168300Z

would that make transients harder?

2019-08-04T21:48:56.169200Z

I do not believe so. You would still need the 'edit' fields, but they could be tucked away in an extra array element of the Object arrays, at a fixed index, e.g. index 32.
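A minimal, untested sketch of that layout (the current implementation stores 'edit' as an AtomicReference on each Node; the helper names here are made up):

```clojure
;; Untested sketch: a node is an Object[33], children in slots 0-31 and the
;; transient ownership marker ("edit") tucked into slot 32.
(def ^:const edit-idx 32)

(defn new-node
  "Allocates a 33-slot node owned by the transient session `edit`."
  ^objects [edit]
  (doto (object-array 33)
    (aset edit-idx edit)))

(defn editable?
  "True when `node` already belongs to this transient session, so it can be
  mutated in place instead of path-copied."
  [^objects node edit]
  (identical? (aget node edit-idx) edit))
```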

2019-08-04T21:49:49.170100Z

isn't the length-32 thing special for cache compatibility?

2019-08-04T21:49:56.170400Z

I may do an experiment with this starting from core.rrb-vector's implementation, to see whether it gains any performance.

2019-08-04T21:51:07.171500Z

12 to 16 bytes of Object header at the beginning, plus 32*4 bytes for the 32 Object array elements themselves (4-byte references assuming compressed oops), doesn't fit into any cache line size I have seen (32 or 64 bytes are common?)

2019-08-04T21:51:52.171700Z

¯\_(ツ)_/¯

2019-08-04T21:53:04.173Z

It's an experiment thing, just based on a hunch that following 2 arbitrary pointers per tree level is likely more expensive in the common case, vs. 1 with the changes I have in mind. It probably will not actually improve things by a 2-to-1 factor in the common case, e.g. small arrays.

2019-08-04T21:54:22.173200Z

The Fingerhut Conjecture

2019-08-04T21:55:32.173600Z

Exactly! I will resist the urge to store data in NaNs 🙂

2019-08-04T21:56:31.173700Z

For which I can't resist a play on words to suggest the name: stenanography

1 😄
2019-08-04T21:58:06.173900Z

that's terrible

2019-08-04T21:58:33.174100Z

I almost wish to apologize for infecting your brain with that word.

schmee 2019-08-04T21:59:14.174400Z

get out! 😂

2019-08-04T22:01:42.174600Z

From the Greek 'stenanos', meaning 'covered in IEEE 754 not-a-numbers'

2 👍
2019-08-04T22:02:19.174800Z

I don't endorse any of this

alexmiller 2019-08-04T22:10:39.175100Z

Oof

2019-08-04T22:54:38.176300Z

I am pretty sure 32 was a good tradeoff choice - larger would reduce lookup times, but at the cost of increasing assoc/add times.
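A back-of-the-envelope illustration of that tradeoff, ignoring constant factors and the details of the real copying strategy: depth drives lookup cost, while each assoc path-copies roughly depth * branching references:

```clojure
;; Untested back-of-the-envelope: tree depth vs. per-assoc copy work for a few
;; branching factors, at one million elements.
(defn depth [branching n]
  (long (Math/ceil (/ (Math/log n) (Math/log branching)))))

(defn assoc-copy-cost [branching n]
  (* (depth branching n) branching))

;; (map #(vector % (depth % 1e6) (assoc-copy-cost % 1e6)) [2 32 1024])
;; => ([2 20 40] [32 4 128] [1024 2 2048])
```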