@jjttjj you might consider circumventing that question entirely and go straight to a walk-kids
macro that works like (walk-kids deep-v [kid] ...)
it looks like merge-kids and add-children could be easily modified to do their iteration that way
(vs iterating through the value produced by vflatten)
parse-args looks a little trickier but it might be doable
actually yeah, definitely doable in the case of parse-args - i think you just need a helper function you'd call in two places, that takes the kids transient and args, and iterates/conj!es using the macro
maybe (dokids [kid v] ...)
is a better name/convention
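(for what it's worth, a minimal sketch of what such a dokids macro could look like; names and details here are hypothetical, not actual hoplon code:)

```clojure
;; hypothetical sketch: visit each leaf of a nested kids value
;; without building the intermediate vector that vflatten returns
(defmacro dokids
  "Binds `kid` to every non-sequential leaf of `kids`, depth-first,
  and evaluates `body` for side effects at each leaf."
  [[kid kids] & body]
  `(letfn [(walk# [k#]
             (if (sequential? k#)
               (doseq [k2# k#] (walk# k2#))
               (let [~kid k#] ~@body)))]
     (walk# ~kids)))

;; e.g. in a parse-args-style helper (names made up):
;; (dokids [kid args]
;;   (conj! kids-transient kid))
```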
I just noticed something. There goes the reactive neighborhood: https://github.com/Microsoft/satcheljs (they use MobX underneath)
Speaking of reactive, I just did a Reagent app, treating ratoms and track! like our cells. Feels very similar. Anyone else pick that up?
hmmm i've started attempting to do some profiling to actually figure out where the bottlenecks are.
am I correct that you're saying we should have something to prevent multiple iterations from occurring, i.e. once for vflatten, then again for whatever it is we need to do? Like, a quick and dirty way to check this would be to just add a function argument to vflatten
and apply that to each leaf during the flattening process, that's what you're saying right?
I tried what i described, adding a function argument to vflatten
but it doesn't seem to meaningfully affect the overall performance
add-children is definitely one of the things taking up significant time in the total page load, but the following change makes no significant performance difference:
(defn- add-children!
  [this [child-cell & _ :as kids]]
  ;; vflatten+map in a single pass
  (vflattenf #(when-let [x (->node %)]
                (-append-child! this x))
             kids)
  #_(doseq [x (vflatten kids)]
      (when-let [x (->node x)]
        (-append-child! this x))))
(take the "no significant performance gain" with a grain of salt, i just started attempting to profile hoplon. Also a lot of this depends on what the typical use case of hoplon is, i.e. what we're optimizing for)
for example i've found that when i use hoplon, most of the time elements just have 1-5 children, and any sort of nesting of vectors of children is fairly rare
was saying to flyboarder yesterday that I started looking at "average number of children per element" found in the wild with this js snippet in chrome dev tools:
var all = document.body.getElementsByTagName("*");
var n = 0;
for (var i = 0, max = all.length; i < max; i++) {
  n += all[i].children.length;
}
n / all.length;
and it's almost always extremely close to 1
I see, profile-directed is definitely the way to go
I'll stop suggesting pointless work for you 😄
haha
actually i have one more maybe-pointless item to suggest lol
are you familiar with compiler-macros?
like, the concept?
you mean like macros that are sort of little compilers?
well, macros are little compilers basically, so yeah, but this is a particular kind of macro application semantic that cljs supports (i believe; it did originally)
the idea with them is that for any function not passed by value or apply'd, its arguments are known at compile-time
for example with (+ 1 2 3), at compile time we know that + is being called with 3 arguments
so there's a mechanism to set up a function to be called by the compiler when it encounters a call like this
which makes + have a kind of dual nature: when it's called inline, the compiler macro associated with it (if any) has an opportunity to produce code in place of the expression
but it's also a function at runtime so you can e.g. (reduce + '(1 2 3))
so if you have a compiler macro for + and it gets an arglist like '(1 2 3), it might see the arguments are constant, do the addition at compile time, and return 6 as a substitute for (+ 1 2 3)
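(a sketch of that constant-folding idea, written as an ordinary macro for illustration; the name is made up and this glosses over how cljs actually wires compiler macros up:)

```clojure
;; hypothetical sketch of the dual nature: fold constant arguments
;; at compile time, otherwise emit the normal runtime call
(defmacro plus [& args]
  (if (every? number? args)
    (apply + args)           ; all constants: compute now, e.g. emit 6 for (plus 1 2 3)
    `(cljs.core/+ ~@args)))  ; anything dynamic: emit the function call
```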
the direction i'm going with this is, a lot of any given hoplon app's markup is static
and so in many cases it might not be necessary to represent children at runtime at all: at compile-time, turn child arguments into the code to appendChild them
theory being this would speed initial app load
oh yeah, that is an interesting idea
@alandipert 🤯
ok sweet, yeah, according to mfikes in #clojurescript the way to do it is to make a macro with the same name as the function
and the compiler automatically uses the macro when it can
so for us that would mean something like auto-generating a bunch of html ctor macros in hoplon.core on the clj side
so that would basically allow elements (when they are static) to just expand into big chains of .createElement and .appendChild calls
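(a hypothetical sketch of what one such ctor macro might expand for fully-static children; the static-detection is deliberately naive and the macro/function name clash is glossed over:)

```clojure
;; hypothetical: a `div` macro that, when every child is a string
;; literal, expands straight into DOM construction calls
(defmacro div [& kids]
  (if (every? string? kids)
    `(let [el# (.createElement js/document "div")]
       ~@(for [k kids]
           `(.appendChild el# (.createTextNode js/document ~k)))
       el#)
    ;; dynamic kids: fall back to the runtime element fn
    ;; (re-expansion avoidance omitted for brevity)
    `(hoplon.core/div ~@kids)))
```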
without any other in between stuff? or is there something more to it?
that's what i'm thinking, yeah
cool
@hiskennyness +1 ratoms, very cellular feel
(i've only fiddled with demos)
I think that will work well because we only need runtime elements for cells, ie templating
But that means we need a macro for every element tag?
But I wonder if we can have the defelem generate the macros for custom elements too
yeah i think so
well, maybe
ok i think i might have overestimated the work involved in doing the js array internals yesterday. I got a very rough version working and it seems 100x faster already 🙂 edit: no, not nearly that much faster
Cool, I'm stoked to see more improvements to the core
it seems closer to 5% faster with the stress test i've been using after using an array for the storage of the children
that's an additional 5% after the ~7% speed improvement over the master branch with the new child-vec and vflatten fns i have
if it can keep compounding like this it'll be nice 🙂
those are awesome gains, thanks @jjttjj