powderkeg

cgrand 2017-03-28T10:43:11.861081Z

ah ok, I’ve “fixed” it locally anyway

viesti 2017-03-28T10:45:12.877897Z

was thinking about with-resources: if a setup throws, the body isn’t run and the setup itself would need to clean up any incomplete state

viesti 2017-03-28T10:46:27.888579Z

try/finally in with-resources guards the body so that teardown is run
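
A minimal sketch of that shape (the macro name and the [name setup teardown] triple format are assumptions, not powderkeg’s actual fixture API): each setup nests inside the previous resource’s try, so a failing setup still tears down everything acquired before it, and the body is guarded so teardown always runs.

```clojure
(defmacro with-resources
  "bindings is a vector of [name setup-expr teardown-fn] triples (assumed shape)."
  [bindings & body]
  (if (seq bindings)
    (let [[name setup teardown] (take 3 bindings)]
      `(let [~name ~setup]                                  ; if this throws, outer finallys still run
         (try
           (with-resources ~(vec (drop 3 bindings)) ~@body)
           (finally (~teardown ~name)))))                   ; teardown runs whether the body threw or not
    `(do ~@body)))
```

Usage would look like `(with-resources [dir (make-tmp-dir!) delete-dir!] ...)`, with both helpers hypothetical.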

viesti 2017-03-28T10:46:37.889817Z

was thinking about #26

viesti 2017-03-28T10:48:15.903074Z

now that testing was mentioned 🙂

viesti 2017-03-28T10:49:21.912345Z

rearranged things a bit, but overall it would be neat to be able to run all tests against local and docker, on both Spark 1.5 and 2.1
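
One way such a matrix could be driven from project.clj (a hypothetical excerpt; the profile names, versions and selector keywords are assumptions): one Leiningen profile per Spark version plus test selectors to separate local from cluster/docker runs, e.g. `lein with-profile +spark-2.1 test :integration`.

```clojure
;; hypothetical project.clj excerpt
:profiles {:spark-1.5 {:dependencies [[org.apache.spark/spark-core_2.10 "1.5.2"]]}
           :spark-2.1 {:dependencies [[org.apache.spark/spark-core_2.11 "2.1.0"]
                                      [org.apache.spark/spark-sql_2.11 "2.1.0"]]}}
:test-selectors {:default (complement :integration)
                 :integration :integration}
```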

cgrand 2017-03-28T11:16:26.142590Z

I broke the build 😕

viesti 2017-03-28T11:26:01.224756Z

hmm, didn’t find a way to see the build log

viesti 2017-03-28T11:35:04.301727Z

ah, had the “my builds” button ticked so I didn’t see any at https://circleci.com/gh/HCADatalab/powderkeg

viesti 2017-03-28T11:35:28.305291Z

gah, these test fixtures

viesti 2017-03-28T11:37:10.319658Z

something like “lein run-tests-in-docker”, but when running locally in the REPL, use local-spark
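
A hedged sketch of that split (SPARK_MASTER and the namespace/fixture names are made up; keg/connect! is powderkeg’s connect function, and the disconnect! call is assumed): a clojure.test fixture that connects to whatever master an environment variable names, so a docker-backed lein alias only has to set the variable, while a bare REPL run falls back to local mode.

```clojure
(ns powderkeg.test-fixture-sketch
  (:require [powderkeg.core :as keg]))

(defn spark-fixture
  "clojure.test :once fixture: connect to the master named by SPARK_MASTER
  (hypothetical env var a docker-driven lein alias would set), else run local."
  [f]
  (keg/connect! (or (System/getenv "SPARK_MASTER") "local[2]"))
  (try
    (f)
    (finally
      ;; keg/disconnect! is assumed here; swap in the project's real teardown
      (keg/disconnect!))))
```

Wired in with `(clojure.test/use-fixtures :once spark-fixture)` in each Spark-touching test namespace.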

viesti 2017-03-28T13:04:04.227883Z

realizing that this would fail on a remote cluster: https://github.com/viesti/powderkeg/blob/sql/test/powderkeg/sql_test.clj. Either two deftests with different names (one with ^:integration meta and a different setup), or some other way of expressing the same thing

viesti 2017-03-28T13:10:23.314339Z

to actually run it remotely, that is
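
The split hinted at above, as a bare sketch (test names are made up and the bodies are placeholders): a plain deftest for the local in-process case, where the driver’s spec registry is visible, and an ^:integration twin that only runs when the cluster selector is chosen.

```clojure
(ns powderkeg.sql-test-sketch
  (:require [clojure.test :refer [deftest is]]))

(deftest dataset-roundtrip-local
  ;; placeholder: the spec-backed Dataset roundtrip against local[*]
  (is true))

(deftest ^:integration dataset-roundtrip-remote
  ;; placeholder: the same roundtrip against a real cluster, with specs either
  ;; registered on the workers or not relied on there at all
  (is true))
```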

cgrand 2017-03-28T13:12:18.341317Z

can you provide more context on why it would fail?

cgrand 2017-03-28T13:13:40.360640Z

this .collect is begging for into support 🙂
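
Not powderkeg’s API, just one reading of that wish (assumes Spark 2.x spark-sql on the classpath): extend CollReduce to Dataset so `(into [] ds)` works, even though this naive version still collects everything to the driver.

```clojure
(require '[clojure.core.protocols :as p])

;; naive sketch: makes Datasets reducible, so (into [] ds) replaces (seq (.collect ds))
(extend-protocol p/CollReduce
  org.apache.spark.sql.Dataset
  (coll-reduce
    ([ds f] (reduce f (.collect ds)))
    ([ds f init] (reduce f init (.collect ds)))))
```

After which `(into [] ds)` at the REPL yields a vector of Rows.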

viesti 2017-03-28T13:13:42.361045Z

the spec registry

viesti 2017-03-28T13:13:48.362577Z

yep

cgrand 2017-03-28T13:14:03.366020Z

ah stupid me

cgrand 2017-03-28T13:16:44.404165Z

Several suggestions: • keep an eye on all atoms transferred and, if they have changed by the next barrier, update them (WeakRef ftw). Is it overkill? Is it going to create more bugs than it fixes?

cgrand 2017-03-28T13:17:25.414143Z

• have a whitelist, initially populated with common atoms to migrate

cgrand 2017-03-28T13:17:43.418595Z

• no more ideas

viesti 2017-03-28T13:18:57.436378Z

last one :D

viesti 2017-03-28T13:19:53.449579Z

second one sounds reasonable
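
A purely illustrative sketch of the first two suggestions combined (none of these names are powderkeg’s): hold WeakReferences to whitelisted atoms, remember the value last shipped, and at each barrier pick out only the ones that actually changed, letting garbage-collected atoms drop out on their own.

```clojure
(import 'java.lang.ref.WeakReference)

;; WeakReference<atom> -> value last shipped to the workers
(def watched-atoms (atom {}))

(defn watch-atom!
  "Whitelist an atom for migration at barriers."
  [a]
  (swap! watched-atoms assoc (WeakReference. a) @a))

(defn changed-since-last-barrier
  "Atoms whose value changed since the last barrier (dead WeakRefs are skipped)."
  []
  (keep (fn [[^WeakReference wr shipped]]
          (when-let [a (.get wr)]
            (when (not= @a shipped) a)))
        @watched-atoms))
```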

cgrand 2017-03-28T13:20:00.451358Z

The first suggestion is my plan for multimethods

cgrand 2017-03-28T13:20:33.459320Z

usually a worker is not going to change a multimethod

cgrand 2017-03-28T13:20:38.460507Z

while it may change an atom

cgrand 2017-03-28T13:21:53.478201Z

and ruining caches stored in atoms at each barrier sounds mean (“hey, replace your nice cache that you worked hard to populate with this empty one from this lazy driver”)

viesti 2017-03-28T13:26:26.545438Z

true :)

viesti 2017-03-28T13:31:38.627691Z

keeping distributed execution obvious but simple would be neat

cgrand 2017-03-28T13:32:18.638323Z

huh? what do you have in mind?

viesti 2017-03-28T13:36:04.697636Z

just that I've made similar mistakes in Spark with Scala, without realizing where the code was being executed :)

viesti 2017-03-28T19:34:49.853661Z

hmm, actually the spec registry might not be a problem with Dataset, at least in the test that I made, since a Dataset is returned to the driver, so the specs themselves aren’t used by the workers

cgrand 2017-03-28T19:40:57.947309Z

True