I'd jump at the chance. Btw, if you don't mind me asking, how were you offered a Clojure job?
Seeing different ways of solving problems (e.g. Lisp, functional, data-driven) should make you more valuable in any language. It just gives you more techniques to apply if you ever do have to return to js.
Your biggest problem will not be your CV, but rather that you won’t ever want to work in any other language again. At least that’s how I feel about it.
Would an "advent of software design" be feasible, or would software design exercises be inherently too large/subjective/etc?
@jjttjj I think the hard part is how you'd evaluate the designs?
I could see "advent of software integration", where you are given weirder and weirder architectural constraints to plug into a system
that's much more straightforward to evaluate, at least
Yeah it'd be hard to have an answer you could put into a form input and evaluate instantly
I mean, the advent of code is already (implicitly) testing design - it's just that the problems are small enough that you don't need much design per se
I guess another approach is to have increasingly bizarre requirements that escalate from problem to problem, but if you make good architectural decisions you can reuse code?
Oh yeah that could be interesting
it's still subjective though: "how many rewrites did I have to do? how hard were they?"
Yeah. I just can't help but feel, when trying these advent problems, that there's a different muscle that is way more important to exercise, at least for me personally
but I wonder if it's possible to package it in as fun a way
"advent of software design" sounds fun, but I'm also not sure how that might work. sounds like an interesting design challenge 🙂
Day 1: ✔️ 🙂
How about changing the deliverable to a one-page architecture diagram, something like that?
Or a one-page, long-form text description of a software system
day 1: write a design; day 2: evaluate someone else's design
I just finished writing a custom web-based application for an SMI; it took me about half a year. It all works well and the client is happy, but I wish I'd had way more time for UI/UX, because the data model is a bit intricate (bitemporal, with a reporting feature), so I had a lot of explaining to do and some of it isn't intuitive for a non-technical user.

But I just had a bit of an epiphany: I've been doing it all wrong. I followed the standard CRUD, REST-like MVC model I've been taught and have used for years, but it makes no sense at all here. Even though the data is relational and has some tricky concepts, it isn't that much data, maybe 10k rows, and the updates are infrequent (some maybe daily, but often just weekly/monthly). Why didn't I just send the whole dataset to the client and use simpler tools, with less communication overhead, to manipulate it? I could have forgone most of the complex queries and checks: just send a nice data-view, then show a subset on the client with a well-designed navigation and visualisation concept.

I feel like I've been following paradigms that just don't make sense for these smallish datasets, simply because I "believed" they should be used. By doing that I violated the YAGNI/premature-optimization principle and missed an opportunity to craft a unique and intuitive user experience instead. Even from a performance perspective it would have been better to make the writes slower (and heavily cache the whole data-view) and the reads fast, maybe denormalized. Am I completely missing something?
@denis.baudinot No, I think you're right. These days clients are usually powerful enough that you can send them a large blob of data and then slice and dice it and render it in interesting ways on the client.
Even for plain tabular data, it's reasonable to pour the whole lot into something like "tablesorter" so end users can choose how much data to look at, how to sort it, how to filter it. We do that at work for datasets that are in the thousands of rows (beyond that, the client UI can get a little sluggish, depending on how many columns are being rendered). And it's certainly true for graphical display of "thousands" of raw data points these days too.
Helps avoid a lot of pagination ugliness at the CRUD layer.
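For what it's worth, the server side of that approach can be tiny. A minimal sketch, assuming a Ring-style handler and Cheshire for JSON; `fetch-all-rows` is a hypothetical stand-in for the real query:
```clojure
;; Minimal "ship the whole data-view" endpoint. ~10k rows fits
;; comfortably in a single response.
(require '[cheshire.core :as json])

(defn fetch-all-rows []
  ;; hypothetical stand-in for the real database query
  [{:id 1 :name "Alice" :dept "ops"}
   {:id 2 :name "Bob"   :dept "dev"}])

(defn handler [_request]
  {:status  200
   :headers {"Content-Type"  "application/json"
             ;; writes are infrequent, so cache the whole view aggressively
             "Cache-Control" "public, max-age=3600"}
   :body    (json/generate-string (fetch-all-rows))})

;; slicing and sorting then happen client-side with plain sequence
;; functions instead of bespoke paginated queries:
(->> (fetch-all-rows)
     (filter #(= "dev" (:dept %)))
     (sort-by :name))
```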
@seancorfield thanks for the perspective. This is helpful because I work alone and in very small teams, so I need to make a lot of decisions, which is sometimes daunting. I appreciate feedback from other professionals.
@tonsky wrote a good essay outlining that PoV with a demo application in Datascript: https://tonsky.me/blog/the-web-after-tomorrow/
I think CRDTs also have a lot of potential to supersede what were previously considered 'battle-tested' client-server architectures, without even needing to send all the data all the time
@afoltzm like the people at https://replikativ.io/ are doing?
It sounds intriguing! But in retrospect I would have opted for an even simpler solution and focused on the UX. For these clients, the UX is the whole value: good UX saves their time, makes them happy, and lowers their mental drain.
maybe CRDTs are a good way to scale a "brute-force" application (like the one described above) once it's actually needed?
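Just to make the CRDT idea concrete, here's a minimal sketch of the simplest one, a grow-only counter (G-Counter), in plain Clojure. Each replica only bumps its own slot, and merge is element-wise max, so merging is commutative, associative, and idempotent:
```clojure
;; each replica increments only its own entry in the map
(defn increment [counter replica-id]
  (update counter replica-id (fnil inc 0)))

;; merge = per-replica max; safe to apply in any order, any number of times
(defn merge-counters [a b]
  (merge-with max a b))

(defn counter-value [counter]
  (reduce + (vals counter)))

;; two replicas diverge, then converge deterministically in either order:
(let [a (-> {} (increment :client-1) (increment :client-1))
      b (-> {} (increment :client-2))]
  (= (counter-value (merge-counters a b))
     (counter-value (merge-counters b a))
     3))
;;=> true
```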
https://git.sr.ht/~hiredman/dset/tree/master/src/com/manigfeald/dset.clj is a dcrdt triple store (uses the same sorted set for indices as datascript)
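Roughly the sorted-set trick, as a hedged sketch (not dset's or datascript's actual API): keep the same [e a v] triples in differently ordered sorted sets and answer queries with range scans.
```clojure
;; one sorted set per ordering; subseq gives cheap range scans
(defn index-triples [triples]
  {:eav (into (sorted-set) (map (juxt :e :a :v)) triples)
   :ave (into (sorted-set) (map (juxt :a :v :e)) triples)})

(let [{:keys [eav]} (index-triples [{:e 1 :a :dept :v "ops"}
                                    {:e 1 :a :name :v "Alice"}
                                    {:e 2 :a :name :v "Bob"}])]
  ;; all datoms for entity 1 (nil sorts before everything with compare)
  (subseq eav >= [1 nil nil] < [2 nil nil]))
;;=> ([1 :dept "ops"] [1 :name "Alice"])
```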
just pushing all the data out to the client works well until you want the client to be able to modify the data, and then you need to sync those modifications across other clients, etc
the problem with crdts is that while they can keep your data model in a consistent state, they can't un-ring a bell, so to speak
you almost always look at the data and then act on it, perform some side effect, even if the side effect is just displaying the data to a human. if you later merge states (using a crdt) and it turns out that data was superseded, clawing that action back can be a challenge
at least with client-server and a database you have a sort of single point where that can occur
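To make that concrete, a toy sketch with a last-writer-wins register (hypothetical names, not any real library's API):
```clojure
;; toy LWW register: merge keeps the entry with the higher timestamp,
;; so concurrent edits converge deterministically on every replica
(defn lww-merge [a b]
  (if (>= (:ts a) (:ts b)) a b))

(let [seen-by-client  {:ts 1 :val :order-open}
      concurrent-edit {:ts 2 :val :order-cancelled}]
  ;; the client reads :order-open and fires a side effect based on it,
  ;; e.g. emails the customer an invoice...
  (println "acting on" (:val seen-by-client))
  ;; ...then the merge arrives and the state it acted on is gone
  (lww-merge seen-by-client concurrent-edit))
;;=> {:ts 2, :val :order-cancelled}
;; the email can't be un-sent; with a central database there is one
;; place to serialize that decision before the side effect happens
```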