I have a shadow-cljs based react-native project and I'd like to run some tests with kaocha. However, I'm getting this error:
$ clj -A:test --no-capture-output
[TypeError: Cannot read property 'error__GT_str' of undefined
at Socket.<anonymous> ([stdin]:89:38)
at Socket.emit (events.js:321:20)
at Socket.EventEmitter.emit (domain.js:485:12)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:214:10)
at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
E]
Any ideas what might cause that? I tested my configuration in a minimal project and it works. After the E] the process keeps running with no output.
UPDATE: this error only happens when having thheller/shadow-cljs {:mvn/version "2.8.90"} as a dependency in deps.edn.
SOLVED: using an additional tools.deps alias fixed the problem by keeping shadow-cljs off the test classpath.
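For reference, a minimal sketch of that fix, assuming shadow-cljs is moved into its own alias (the :cljs alias name is hypothetical; the versions are the ones mentioned in this thread):
{:paths ["src/main"]
 :aliases
 {:cljs {:extra-deps {thheller/shadow-cljs {:mvn/version "2.8.90"}}}
  :test {:extra-paths ["src/test"]
         :extra-deps  {lambdaisland/kaocha {:mvn/version "0.0-590"}
                       lambdaisland/kaocha-cljs {:mvn/version "0.0-71"}}
         :main-opts   ["-m" "kaocha.runner"]}}}
With this layout, clj -A:test never sees shadow-cljs, while clj -A:cljs still does.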
Thanks for reading and sorry for the noise 🙂
hey @dharrigan thanks for reporting this, I also assumed that defexpect would just work, but they are probably using custom event types which the reporter needs to know about. Not a lot of work, and something we can add support for without having to take on expectations as a direct dependency
I don't think #ref will be able to work inside of a #kaocha/v1 btw. Aero is unable to figure out the expected ordering.
Thanks @plexus that would be nice!
Not saying I'm gonna do it 🙂 got a lot of things competing for my time right now, but if you file an issue I'll pitch in with what needs to happen. Would be helpful if you could run with --reporter kaocha.report/debug, and look at what values the :type keys have
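Based on the invocation from earlier in the thread, that would be something like:
clj -A:test --reporter kaocha.report/debug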
no problemo 🙂 I'll get on to that too
@flowthing never looked into it, I agree those stacktraces aren't great. Kaocha's stacktraces in general could be much improved. Would be useful to file an issue on kaocha-cljs
to track this.
All right, good to know, thanks. I'll file that issue.
thanks!
@dominicm that makes sense. Note that #kaocha/v1
is really just a convenience, you should be able to do kaocha --print-config > tests.edn.new
and go from there (cleaning a few things up, notably the current cli arguments)
it's more verbose, many more namespaced keys, and all defaults need to be filled in explicitly, but it's not a terrible format to work with
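As a rough sketch of what the expanded form looks like (key names as I remember them from Kaocha's docs; the authoritative output comes from --print-config on your own project):
{:kaocha/tests
 [{:kaocha.testable/id   :unit
   :kaocha.testable/type :kaocha.type/clojure.test
   :kaocha/test-paths    ["test"]
   :kaocha/ns-patterns   ["-test$"]}]}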
Yeah. That's one solution. You might also want to make kaocha/v1 an Aero macro, that would give it control over how things are resolved inside of itself. Haven't thought about how you'd write that yet though.
it's already being handled by aero though, right? does it have a separate concept of "macros"?
Gonna expand for now, but might be one to ponder if this comes up again :)
Aero has reader "functions" (normal ones) and reader macros (like #profile and #ref). The macros can do special things like case, etc
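A small illustration of the two Aero tags mentioned above (plain Aero, not Kaocha-specific; the keys are made up for the example):
{:db-host "localhost"
 ;; #profile picks a branch based on the active profile
 :port    #profile {:dev  8000
                    :prod 80}
 ;; #ref resolves to the value at another path in the same config
 :db-url  #ref [:db-host]}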
@defa great to hear you got that working! shadow-cljs is not terribly well supported right now. If your code is compatible with standard clojurescript then you're all good, but if you use shadow-specific compiler extensions you may be out of luck
@dominicm I see, I guess it gets expanded at an earlier stage. Are there any downsides/risks to switching? anything that would behave differently? seems like a no-brainer to just go ahead with
@plexus no, there is no compiler magic going on, just shadow-cljs on the classpath and a minimal test (is false). To clarify, I'm not running the tests via shadow-cljs at all, but separately via clj.
You'd have to think a bit harder to have it work. Presumably users want their tags to expand without #kaocha/v1 having expanded. That's probably the easiest version. Essentially as if their map was the config file itself. The complex version involves trying to expand, then trying to add in the expanded keys if possible, then trying to continue.
Essentially, it's more code :)
ok, in that case it's for future consideration 🙂
Any ideas why there is no output captured even if I pass --no-capture-output to the test runner? My test fails and starts with:
(deftest foo
(println "Hello!")
...)
that's not enough context @defa... what does your tests.edn look like? can you create a minimal reproduction?
Sure…
;; tests.edn
#kaocha/v1
{:tests [{:id :unit-cljs
          :type :kaocha.type/cljs
          :test-paths ["src/test"]
          :ns-patterns ["-test$"]}]}
;; deps.edn
{:deps {}
 :paths ["src/main" "src/test"]
 :aliases
 {:test
  {:extra-deps {lambdaisland/kaocha {:mvn/version "0.0-590"}
                lambdaisland/kaocha-cljs {:mvn/version "0.0-71"}}
   :main-opts ["-m" "kaocha.runner"]}}}
(ns foo.foo-test
  (:require [clojure.test :refer [deftest testing is are]]))

(deftest fail
  (println "hello!")
  (testing "will fail"
    (is false "failed by intention")))
Hi @plexus I saw your comment. I think I can do that 🙂
@plexus I can upload a repo to GitHub if that helps? Otherwise the relevant parts are above.
ok, so this is for clojurescript tests using node? I would run with the debug options and see if you see any stdout messages coming back from the JS runtime. One possible explanation would be that the process exits before we receive the stdout. Do you see printlns outside the deftest?
Nope, println
outside of deftest
does not show up. Which debug options do you mean?
(js/console.log "...")
does produce some output, inside and outside of deftest
but not in an ASCII-art "box".
… and yes, this is ClojureScript and I suppose node is involved in the background. I'm testing code that usually gets compiled by shadow-cljs with a react-native target.
there are debug instructions in the kaocha-cljs README, which should produce some very verbose output.
> (js/console.log "...") does produce some output, inside and outside of deftest but not in an ASCII-art "box".
Maybe you just need an (enable-console-print!)
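For example, in the test namespace from the repro above (a sketch; one top-level call in any loaded namespace should do):
(ns foo.foo-test
  (:require [clojure.test :refer [deftest testing is]]))

;; route println/prn through js/console so the runner can see the output
(enable-console-print!)

(deftest fail
  (println "hello!")
  (testing "will fail"
    (is false "failed by intention")))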
@plexus okay, that helps :thumbsup:
However, it does not look like it does in the docs:
FAIL in sample-test/stdout-fail-test (sample_test.clj:10)
Expected:
:same
Actual:
-:same +:not-same
╭───── Test output ────────────────────────────────────────────────────────
│ Bu yi le hu?
╰──────────────────────────────────────────────────────────────────────────
2 tests, 2 assertions, 1 failures.
… but I'm okay with that. The "Test output" ASCII frame is missing.
is that with or without output capturing? you'll only get the frame when output capturing is on
@plexus is it reasonable to pull in expectations in the test alias (in deps.edn) for kaocha so that I can test the =?
macro?
I guess that's acceptable. How big is it?
What does =?
do?
6.5K Mar 3 13:44 clojure-test-1.2.1.jar
under 7KB 🙂
❯ unzip -l clojure-test-1.2.1.jar
Archive:  clojure-test-1.2.1.jar
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2019-12-09 09:41   expectations/
        0  2019-12-09 09:41   expectations/clojure/
    13888  2019-12-09 09:17   expectations/clojure/test.clj
        0  2019-12-09 09:41   META-INF/
       57  2019-12-09 09:41   META-INF/MANIFEST.MF
        0  2019-12-09 09:41   META-INF/maven/
        0  2019-12-09 09:41   META-INF/maven/expectations/
        0  2019-12-09 09:41   META-INF/maven/expectations/clojure-test/
      114  2019-12-09 09:41   META-INF/maven/expectations/clojure-test/pom.properties
     1895  2019-12-09 09:41   META-INF/maven/expectations/clojure-test/pom.xml
---------                     -------
    15954                     10 files
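For reference, the coordinate from that jar would be pulled in like this (a sketch against the :test alias shown earlier in the thread; the group/artifact id is taken from the META-INF path above):
:aliases
{:test {:extra-deps {lambdaisland/kaocha {:mvn/version "0.0-590"}
                     expectations/clojure-test {:mvn/version "1.2.1"}}}}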
yeah that's fine
np
ta
Would it make sense for kaocha.report/fail-summary to look at t/testing-contexts if there's none in the message? They're only in the message from :summary
And that's because kaocha.history puts it there
what's the use case/problem statement @dominicm ?
I run kaocha like this:
clj -A:test --watch --no-capture-output
… oh shit… I'm so stupid… sorry. I misunderstood the semantics of the command-line option 🙂
@plexus works just fine.
Kaocha isn't picking up that some of my tests have meta when using focus-meta. I'd like to debug why. The printed plan is way too big. What else can I look at?
I'm writing a custom reporter, and I want to include details about the failure when it happens, as opposed to during the summary. This report has to be this format due to CI constraints.
I've run the kaocha.load stuff that kaocha clojure test does, and I can see that kaocha has found the metadata on the tests, so I'm doubly confused :(
Maybe poke at it from a REPL? (kaocha.repl/test-plan)
, then filter as needed
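Something along these lines (a sketch; the :kaocha.test-plan/tests nesting and :kaocha.testable/meta key are from memory of Kaocha's test-plan format, and :integration is a hypothetical metadata key):
(require '[kaocha.repl :as k])

;; walk suites -> namespaces -> vars and list the ids that carry
;; the metadata you are focusing on
(->> (k/test-plan)
     :kaocha.test-plan/tests
     (mapcat :kaocha.test-plan/tests)
     (mapcat :kaocha.test-plan/tests)
     (filter #(:integration (:kaocha.testable/meta %)))
     (map :kaocha.testable/id))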
Sorry if I was wasting your time… too much work, too little sleep.
do you have other focus/skip config? maybe they're interacting in unexpected ways
All of my test groups have a focus/skip. One of them has skip set to all of the other metadatas
with test group do you mean test suite?
and do you have test suites that have the same test-paths but are differentiated by skip/focus?
Yes, probably :)
Yep, that's exactly right.
ok, that's an anti-pattern and will not work as you would expect
… even without (enable-console-print!)
Will read before I ask my next question
one problem with this is that the same test-id will occur multiple times in the test plan, we should really have a warning for that, as that will confuse plugins like the filtering plugin
That would solve my problem of wanting to make the groups explicitly, especially as I want to define an "inverse" (not :external, not :integration, not :devcards...)
The problem is that for me, unit tests are "the ones that aren't the others"
yeah, wouldn't be exceptionally hard to implement either... I'm guessing a ~100-line plugin
although the real recommendation would be to split your tests on the filesystem: test/unit / test/integration, etc.
Not sure about getting buy in for that right now. The task would be difficult.
I quite like the idea of having a mega suite, and just defining selectors on it. It's probably impractical, but has a good hierarchy in my brain.
… but not always. In my real project (enable-console-print!) is necessary even without --no-capture-output
This does explain why you run all tests if no focus meta matches. I was getting really annoyed with that :)
Hmm, that would still be useful for running a group of tests across suites. Eg run all unit tests, if there's none in cljs then it doesn't matter.
If I put together a PoC would you be able to get a little contribution going on the opencollective?
I did start floating the idea at the client about that. But they're still figuring out how to let people contribute bug fixes to oss.
just to be clear I'm talking about a monetary contribution, not a code contribution
I know. Sorry, I was trying to point out that they all haven't solved the problem of that. They are totally unprepared for funding.
I see... too bad
in that case I would consider adding a plugin inside the project and doing it yourself, you can use the filter plugin for inspiration
it's really just recursively walking the test-plan, and marking testables as :kaocha.testable/skip true
or not
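A rough sketch of such a plugin (the defplugin/post-load hook names are from memory of Kaocha's plugin API, and the :integration predicate is a hypothetical example):
(ns my.project.skip-plugin
  (:require [kaocha.plugin :as plugin]))

(defn mark-skip
  "Recursively walk the test plan and mark testables matching pred as skipped."
  [testable pred]
  (cond-> testable
    (pred testable)
    (assoc :kaocha.testable/skip true)

    (:kaocha.test-plan/tests testable)
    (update :kaocha.test-plan/tests
            #(mapv (fn [t] (mark-skip t pred)) %))))

(plugin/defplugin my.project/skip-plugin
  (post-load [test-plan]
    (mark-skip test-plan #(:integration (:kaocha.testable/meta %)))))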
Yeah. Makes sense. I might just inflict a very long cli on people in the meantime..
that also works
Just to check, is it okay for two suites (clj/cljs) to have overlapping paths?
clj / cljs is fine because the ids are different, the cljs testables are prefixed with cljs:
so it's fine to e.g. have a single directory with cljc
files and consider that to be two test suites, one clj one cljs
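i.e. something like this in tests.edn (a sketch reusing the format from earlier in the thread):
#kaocha/v1
{:tests [{:id :unit-clj
          :type :kaocha.type/clojure.test
          :test-paths ["test"]}
         {:id :unit-cljs
          :type :kaocha.type/cljs
          :test-paths ["test"]}]}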
> Sorry if I was wasting your time…
No problem at all! I like to make sure people are able to make the best of Kaocha
Ah, great. That's one worry gone.
They can print at the end too. But my error reporter needs to print context.
and so you're calling fail-summary
from your reporter on for instance a :fail
event?
Yep!
I would suggest putting :testing-contexts
in the map when you call it, rather than the other way around
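i.e. roughly like this (a sketch; Kaocha reporters are functions of the event map, and clojure.test/*testing-contexts* is the dynamic var in play — the ci-reporter name is hypothetical):
(ns my.reporter
  (:require [clojure.test :as t]
            [kaocha.report :as report]))

(defn ci-reporter [m]
  (when (= :fail (:type m))
    ;; enrich the event before handing it to fail-summary, rather than
    ;; having fail-summary reach into clojure.test's dynamic vars itself
    (report/fail-summary
     (assoc m :testing-contexts t/*testing-contexts*))))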
That's what I'm doing for now, although it did feel like reaching as implementation details.
building kaocha's reporting on top of clojure.test is one of the few things I regret. It really should be a separate adapter layer that knows about these clojure.test vars and makes the messages self-contained
so that's the direction in general I would prefer to move to
so think of it as kaocha's reporting API, where the messages are a generalization of clojure.test's messages
Heh, okay :) I guess everywhere else would be using the api wrong by not providing those keys?
Can I disable the assertion exceptions?
I have a particular test which doesn't make assertions yet.
we don't have a setting for that (yet), adding a dummy assertion is not an option?
the other option is removing this association from the hierarchy (derive! :kaocha.type.var/zero-assertions :kaocha/fail-type)
and redefining the report/fail-summary to do nothing
(defmethod report/fail-summary :kaocha.type.var/zero-assertions [{:keys [testing-contexts testing-vars] :as m}])
It is, just wanted to check before I checked 1=1.
Unrelated, the missing-assertion failure seems to be happening for a case which very obviously has assertions. My best guess is that the test is clojurescript async, and that isn't being handled. But I'm not certain at all. Anything ring a bell?
Sorry, I have lots of questions today. Our test suite is apparently not in the state kaocha expects.
Test isn't async, so not that!
ah it's a clojurescript test? although I guess the same advice applies, as it piggybacks on the event type and defmethod created by kaocha.type.var
questions very welcome! it's good to know what is or isn't working for people, or what isn't clear or obvious
and yeah, when bringing on a big legacy test suite you're going to run into some things like this, because kaocha is more strict about certain things (it tries to prevent you from accidentally doing things you didn't intend). We really should have flags to disable these checks to make porting to kaocha easier.
yeah not sure what else would be triggering ::zero-assertions
. Run the test with the debugging enabled and see what events you're getting (--focus on just that test). I would expect you are not seeing any assertions over the wire before the test finishes, in which case the question is why the test/runtime behave that way. If you are seeing assertions over the wire then the question is why kaocha.type.cljs
still thinks that (= pass error fail pending 0)
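For example (flags as discussed earlier in the thread; substitute the real test id):
clj -A:test --focus my.ns/my-test --reporter kaocha.report/debug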
Figured it out. We have a namespace which runs tests as a top level side effect. I'm guessing they were running in parallel
oh yeah that would explain it
@plexus I hope the commit and PR I did are okay. I tested it locally and I get nice output using expectations now 🙂
cool! I'll have a good look. Thank you for contributing to Kaocha!
you're welcome
I hope my questions aren't too burdensome. Fortunately my employer does allow contributions, so if something was major, I would be able to contribute. I'm hoping that my questions help improve kaocha more quickly under funding. It's a bit of a weak hope, so mostly I just hope I'm not too much trouble :). Open source & time is a hard thing, and I appreciate your time building cool things, and answering my questions.
@plexus thanks! I really like Kaocha, good work! Will be using it daily from now on.