Low volume with occasional bursts of API calls (estimate 10-100 calls an hour when people play with it). It can list recent issues, search via JQL, list components, versions, and users, comment on issues, log work on issues, update issues, and create issues. I expect we’d primarily use read-only features.
why would the bot comment or update on issues?
who would be "playing" with it?
I'm pretty wary of having a bot make edits that obscure the actual user
why do you need to use this against the Clojure jira? can't you test against a different project?
I didn’t mean that we would use it that way, just that it has the capability.
Does jira provide access controls that would limit it to read only?
I don't remember the details off the top of my head
Why this project: Because if/when I get it on Clojurians it would be fun and useful to query Clojure’s own issues in Slack
But yes I can test against a different project
https://confluence.atlassian.com/jirakb/jira-cloud-how-to-create-a-read-only-user-779160729.html
yeah, I'm not going to do all that
Haha ok
@devth I can head that off straight away as one of the Admins here: we're on a free plan and we have a general rule of not allowing any token requests or any new apps.
(Sean is speaking from the perspective of Clojurians Slack admin here, I'm pretty sure, not Clojure JIRA)
So, if you're building this with the hope of somehow integrating Clojure's Atlassian/JIRA account into this Slack, it's not going to happen. There's already a rock solid Jira Cloud integration for Slack and we've already had to reject several requests to add that to this Slack.
Yes, as Andy says, I'm speaking as part of the Admin team for Clojurians Slack (not Cognitect-associated and not speaking on behalf of the admins of the Clojure Atlassian setup).
So, Stuart Halloway jumped into a silly discussion @marc-omorain and I were having on Twitter with this remark:
>@atmarc @slipset I would love to test moved out to a separate lib.
where “test” refers to clojure.test.
Would a ticket for such an endeavour be appreciated?
it's already on our list, don't need a ticket
in particular, Sean Corfield and I have been talking about this for a year+ and he's got a list of items
nice!
general notes were archived here https://archive.clojure.org/design-wiki/display/design/clojure.test%2Bimprovements.html
FWIW one of the things I remember really appreciating when picking up Clojure was that it came with stuff like clojure.test and clojure.data.* included.
@slipset Link for that Twitter thread? Curious to read it.
Re: clojure.test -- I already volunteered to maintain it if it moves out of "core" for 1.11 🙂
note that it would still be included as a dependency, like spec is included, but via a separate lib that could be revved at a faster rate
so it's really about separating the release cycle more than the inclusion part
@seancorfield https://twitter.com/atmarc/status/1189139798566559744?s=20
"I got pinpoint accurate data" - glass half full
I'd be curious how that test is being run (under lein, what clojure version, etc). Not sure that's what you'd see from just clojure.test itself.
And that stack trace is coming from Orchestra, which is doing generative testing, so there's not even much clojure.test could do to help.
[circleci.vm_service.vms_test$fn__39445$fn__39446 invoke form-init2759853974423263704.clj 1428]
[com.gfredericks.test.chuck.clojure_test$_testing invokeStatic clojure_test.cljc 102]
[com.gfredericks.test.chuck.clojure_test$_testing invoke clojure_test.cljc 100]
[circleci.vm_service.vms_test$fn__39445 invokeStatic form-init2759853974423263704.clj 1428]
[circleci.vm_service.vms_test$fn__39445 invoke form-init2759853974423263704.clj 1427]
[clojure.test$test_var$fn__9737 invoke test.clj 717]
That's 80 lines into the stack trace and the first mention of clojure.test
[clj_honeycomb.core$with_event_fn invoke core.clj 465]
[clojure.lang.AFn applyToHelper AFn.java 160]
[clojure.lang.AFn applyTo AFn.java 144]
[orchestra.spec.test$spec_checking_fn$fn__29295 doInvoke test.cljc 127]
That's 25 lines into the stack trace. (And the error is happening inside a fixture, making the problem even harder to clean up.)
I would recommend not even doing generative testing under clojure.test at all tbh
Around line 300, clojure.test did report what actually failed:
expected: (not-exception? (:result result))
actual: (not (not-exception? #error {
:cause "Validation failed: writeKey must not be null or empty"
so I'm not sure that first 300 lines is even coming from clojure.test
TBH, what disadvantages do you see to doing generative testing under clojure.test? And if not initiating generative tests that way, would you recommend simply invoking generative testing functions from the command line?
I've been doing a few generative tests in the last couple months initiated from within clojure.test deftest forms (mostly passing), and haven't noticed any problems, but perhaps because of the 'mostly passing' part?
I think it is just more stuff on the stack (the generation machinery can be complicated) so you get longer stacktraces
generative tests have different characteristics than typical unit tests. wanting to force them both to always run at the same frequency and importance does not imo make sense (and this reason is why there is no clojure.test integration in spec)
In this case, it's not that clojure.test printed a stacktrace; it printed an exception which happens to include a huge call chain.
^^
and I assume not-exception? is a custom test-is predicate
But anyone who knows that can create different clojure.test namespaces and/or deftests to control relative frequency of running example-based vs. generative tests.
(and I am)
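(For illustration, that kind of split might look like this; namespace names are made up:)

(require '[clojure.test :as t])

;; fast example-based namespaces, run on every change
(t/run-tests 'myapp.core-test 'myapp.util-test)

;; slower generative namespaces, run on demand
;; (assumes the namespaces are already loaded)
(t/run-tests 'myapp.core-gen-test)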
the knobs are different too
spec generative tests are not unit tests. why pretend they are?
sure. I just don't see why clojure.test is restricted to unit tests, then.
Not that it necessarily gives any advantages for generative testing, either.
what is it buying you?
the main advantage it gives you is to make all your tests look the same
my point is that they are not all the same, so that's a false advantage
I guess the best I can say is that it isn't hurting me to run them that way 🙂
it is hurting you
How is a generative test not a unit test? In almost all cases they are testing a single unit of code, one function.
you are wasting huge amounts of time retesting things that aren't changing
but I don't run the generative tests when I rerun the unit tests. I initiate them differently from the command line.
well, then you're already split, which I'd say is good
we have a vision for an entirely different kind of test runner that remembers what you've tested and just retests what's affected by your change (in a smart way). I built a prototype of this two years ago but never really pulled it over the hump. Hoping to get back to it after spec 2 is out.
I'm not here on a soapbox suggesting people use clojure.test to run generative tests -- just prompted by what you said to see if there was a harm in doing so. Wasting time is bad, agreed, but there have been ways for years now to run different subsets of your clojure.test tests via namespace, deftest names, Leiningen doohickies whose names I forget, etc.
some of that is shared need, some of it is not
in clojure.test, there is organizational and runner infrastructure (some of which could be generic and really has little to do with testing per se), and there are tools for making assertions
Maybe I understand something wrong, but every time you re-run a generative test, it could find a new bug that the last run didn't catch, due to the randomness of the test.
yes
generally, I would say that you should run it once for a sufficiently long time that you are satisfied it is correct. and then not test it again if it doesn't change.
So in my mind, I'd want my CD pipeline to run them on every deployment
But the likelihood of finding a new bug after running particular generative test for hours goes way down.
as does running it repeatedly
generative tests are a statistical argument for correctness
In practice, one very long run is more cumbersome than multiple small runs over time
should it be?
what if it wasn't?
If there are bugs to find, then after running one generative test for 'sufficiently long' (not always obvious, but practical things like "lots of time I was willing to wait"), then the most likely way to find new bugs is to tweak how the generative tests generate random data, to make them more likely to find new scenarios the old generative test never did.
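(As a sketch of that kind of tweaking, with test.check you might bias a generator toward edge cases the plain one rarely hits; weights and values here are illustrative:)

(require '[clojure.test.check.generators :as gen])

;; bias a generator toward boundary values
(def biased-int
  (gen/frequency [[8 gen/large-integer]
                  [1 (gen/return 0)]
                  [1 (gen/return Long/MAX_VALUE)]]))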
what if I persisted a "database" (I'll use that very abstractly) that remembered what I had tested
Ya, I think what you have cooking would be better. Especially if it could do something like split samples over runs, so the next run starts from the last sample, since I believe the samples grow as they run (maybe I'm wrong though).
they do "grow"
but that growth is rarely useful in finding new issues beyond some bound
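(The "growth" being discussed is test.check's size parameter increasing across trials; a rough way to see it:)

(require '[clojure.test.check.generators :as gen])

;; later samples tend to be larger/more complex values
(gen/sample (gen/vector gen/nat) 10)
;; => e.g. ([] [1] [] [0 2 3] [1 4 0 2] ...)  ; actual output varies per run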
But since that awesome generative test runner isn't out yet 😋, we have to assume there's a ton of clojure.test tests out there doing generative testing. So for that I think it makes sense for clojure.test to accept that use case as well and improve on it.
well, we disagree :)
Or there has to be a clear alternative. Like what should I do in the meantime to run generative tests? On a team of 10, with multiple packages worth of code?
a test suite is just a program
write a different program
provide a way to run it
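(A minimal sketch of such a separate program, assuming test.check; the property and names are illustrative:)

(ns myapp.gen-runner
  (:require [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

(defn -main [& _]
  (let [result (tc/quick-check 100000
                 (prop/for-all [v (gen/vector gen/int)]
                   (= v (reverse (reverse v)))))]
    (println result)  ; the result map includes the :seed for reproduction
    (System/exit (if (true? (:result result)) 0 1))))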
What about my custom program would be better than using clojure.test though?
It would still need to run as part of build, and in CD, and it would therefore rerun just as often.
would it?
Unless I go all the way and build this awesome smart diff-detecting generative test runner, yes
why not run it less often and for longer runs?
this is a subtle and interesting question about the writing of randomized tests that could probably be the source of a thesis or two :)
If a defect was introduced by a code change, it needs to run prior to being released or risk breaking my service.
There are many commercial software dev teams that do nightly / weekly / only-before-release longer tests that are not run pre-commit.
The downside is of course that it isn't always obvious which change caused the test breakage, but the cost of running those tests on every commit is simply too high to do them more often.
And since we currently can't detect whether the code the generative tests exercise has or hasn't changed, any code commit could have changed it. So for now, I'd need them to run.
I'm not sure I'm following. I would never trade faster deployment for more brittle code
And we're not talking days of slowdown
You merge your code, and go to sleep, the next morning your generative tests are done running
I am sure. The best hardware ASIC test writer I ever met played ping pong the way he wrote tests: "never the same serve twice" 🙂
I am reacting to your "run as part of build". Is "build" a thing you think of doing on every commit, or less often?
If less often, then perhaps we are in violent agreement.
It happens on every merge to master. But we do continuous deployment, so every merge to master goes to production.
It won't run locally on your machine
Basically we have them scheduled same as our integration tests
Do you have a shorter set of automated tests you recommend team members run on every commit, even if that commit isn't merged into master? If so, likely those need to be pretty quick, to avoid slowing people down.
Ya, locally we only run our unit tests and some generative tests as well, but the sample size is restricted so they run faster
I run tests inside my editor, for code I'm actively developing. Then we have test suites for each subproject and a CI process that runs all tests for all subprojects any given build artifact depends on.
So we essentially have three "levels" of tests -- and we do have a few clojure.test "unit" tests that run small generative tests, but we also have RCFs that contain longer generative tests that we only want to run from time to time.
Cognitect's current test runner lets you include/exclude tests based on metadata but we don't use that right now (we probably should... I should create a JIRA ticket at work for that 🙂 )
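(Roughly what that metadata-based selection looks like; the alias name and keyword are assumptions:)

(ns myapp.slow-test
  (:require [clojure.test :refer [deftest is]]))

;; tag long-running generative tests so a runner can include/exclude them
(deftest ^:generative addition-commutes
  (is (= (+ 1 2) (+ 2 1))))

;; with cognitect-labs/test-runner, selection by metadata is roughly:
;;   clojure -M:test -e :generative   ; everyday run, skip generative tests
;;   clojure -M:test -i :generative   ; nightly run, only the generative tests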
I kinda just agree with both of you at the same time. A test runner smart enough to only rerun a test when the code under test has changed would be awesome -- even for unit tests, by the way. If the detection is accurate, like based on form instrumentation or something, then even fast tests don't need to be run again. So if the core team is working on that, that's really awesome. But in the meantime, I don't see why writing my own test runner would be any better than leveraging the clojure.test machinery. It's just more work, and clojure.test can be set up to run tests just as often or as little as you want.
Like on that note, I use clojure.test for integration tests, some of them take hours to run. Are integ tests not supposed to run on clojure.test as well?
depends on if that's a good match for them
I have certainly written integration test suites that did not use clojure.test
and ones that did
I'm mostly interested in breaking people out of "there is only one way to run tests and it is clojure.test"
broken 🙂
I'm a big believer in doing a little bit of many test approaches
Do you think clojure.test is causing more harm than good?
no
we should aspirationally want more than just that though
it is an old saw that Clojure people don't write tests, which is patently false
what I would want for Clojure as a community is that we are always interested in using an array of testing strategies that is maximally effective
and not doing any one thing dogmatically
it's like investment planning - do a mix of things in your investment mix
test.check has a defspec macro that pretty much leads people to write generative tests that run with clojure.test runners 🙂
well, I don't run that project :)
actually, no one runs it right now, so it's a great time to start if anyone wants to :)
It certainly makes it easy, but the library docs also show you how to run all the generative tests interactively, too.
Admittedly, it adds metadata so you could "easily" include/exclude such tests...
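(A minimal defspec sketch for reference; the property is illustrative. The trial count is just a value, so it can be made smaller for local runs:)

(ns myapp.sort-props-test
  (:require [clojure.test.check.clojure-test :refer [defspec]]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

;; defspec defines a clojure.test test var, so ordinary runners pick it up
(defspec sort-is-idempotent 200
  (prop/for-all [v (gen/vector gen/int)]
    (= (sort v) (sort (sort v)))))

;; the same property can also be run interactively, outside clojure.test:
;; (clojure.test.check/quick-check 200 (prop/for-all ...))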
I'm all supportive of that, but maybe I fail to see where clojure.test plays into it. I agree we should all embrace multiple strategies for software correctness: have CRs, unit tests, integration tests, generative tests, run QA where it makes sense, etc. Do you feel having clojure.test hurts that message?
Clojure itself has both unit and generative tests (actually written with test.generative) that can be run independently. test.generative actually makes it easy to run generative tests with a time budget (rather than a count budget), which is useful.
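(From memory, a test.generative spec looks roughly like this; see the library's README for the exact runner invocation and how the time budget is set:)

(ns myapp.gen-spec
  (:require [clojure.test.generative :refer [defspec]]))

;; the fn under test, tagged arg generators, and validator forms on %
(defspec integers-closed-over-addition
  (fn [a b] (+' a b))
  [^long a ^long b]
  (assert (integer? %)))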
Hopefully Gary Fredericks is still willing to give advice/tips on it?
he is
Cool. I was looking into it recently, and will ask him if there are any other sources to learn how the shrinking stuff works. The code is there, I know, but English descriptions are also nice.
I don't think clojure.test hurts anything by itself, this is more of a social question
the main sources for shrinking are the prior art in Erlang, Haskell, etc
surely there is something by John Hughes or someone that explains the idea
Yep, will ask Gary what he knows of, in case I've missed it.
Funnily enough, I was just reading this today: https://hypothesis.works/articles/integrated-shrinking/
👋
Howdy. Since you've been summoned, if you happen to know any English descriptions of how test.check and/or QuickCheck implement shrinking, happy to know of those. If they are already in the test.check repo and I've missed it, apologies for the bother.
Hmm
There might not be
But I love explaining things and nobody ever asks me about that, so you should just ask me
And probably a much easier question -- it seems that test.check's fmap generator must have a pure function for transforming the generator's output. At least, it shouldn't do its own pseudo-random generation, because that would circumvent the ability of test.check to reproduce test cases from the seed?
Yes, that's the expectation
It'll call the function on shrinking inputs to produce shrinking outputs
My "Building t.c Generators" talk emphasizes this point
I'll take a look at that talk, then.
I'll let you know if I find any other resources about how the shrinking is implemented, but I'm not actively digging into that particular part of things at the moment, so sorry to ask about it and then put you off. I will definitely ask again if I start digging into that topic in depth.
😛 FINE
🙂
reproducibility, growth, and shrinking are the main features you get by buying into the expected way of constructing generators
reproducibility was something I was familiar with from hardware design days long back, but automated shrinking seemed like magic the first time I heard of it.
It's not any more complicated than the combinators themselves
....sort of
I agree. I’d hate to see clojure.test broken out.