Does anyone know of any studies that measure the correlation between generative testing and software quality?
The case for general code coverage as a metric seems to be very weak. But I wonder if it would be a different story for generative tests.
IME the problem with code coverage is that people mean different things when talking about it. Some people and tools think that “if I call each function in my tests at least once, I have 100% test coverage”, which is quite different from “calling each function with all the possible parameter variants”.
Generative testing definitely helps you move towards the latter
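To make the distinction concrete, here’s a minimal sketch of the generative style in plain Python (a hand-rolled stand-in for a library like test.check or Hypothesis; `clamp` and its properties are made up for illustration):

```python
import random

def clamp(x, lo, hi):
    """Toy function under test: restrict x to the range [lo, hi]."""
    return max(lo, min(hi, x))

# Example-based: each function called once, with hand-picked inputs.
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0

# Generative: assert properties over many random inputs, exercising
# far more of the parameter space than hand-picked examples.
rng = random.Random(42)  # seeded for reproducibility
for _ in range(1000):
    lo, hi = sorted(rng.randint(-100, 100) for _ in range(2))
    x = rng.randint(-200, 200)
    result = clamp(x, lo, hi)
    assert lo <= result <= hi        # result is always within the range
    if lo <= x <= hi:
        assert result == x           # in-range values pass through unchanged
```

Both suites report the same line coverage for `clamp`, which is exactly the point: the metric can’t tell them apart.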
> IME problem with code coverage is that people mean different things when talking about it.
Related: coverage should be split depending on the test suite type
curl localhost can get you 65% coverage, which demonstrates the limited accuracy of aggregating test types under a single metric.
Assuming one categorizes tests by type (e.g. unit, integration, functional, etc.), then each category should generate its own coverage report.
I don't recall a tool built with that in mind, but haven't used a lot of them tbh
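FWIW, coverage.py gets close to this with its “contexts” feature; a rough CLI sketch from memory (paths are placeholders, and the flags are worth double-checking against the docs):

```shell
# record each suite under its own context label, into one combined data file
coverage run --context=unit -m pytest tests/unit
coverage run --context=integration --append -m pytest tests/integration

# the HTML report can then show which context executed each line
coverage html --show-contexts
```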
Is there any data I can present to my higher-ups besides my own opinion to show the value of coverage (even different kinds of coverage)?
It’s hard to find empirical evidence that any testing is beneficial, though anecdotally I would swear by it.
Testing is not always beneficial. Good and sensible testing usually is. I’m afraid it may be hard to find studies supporting that because what is ‘good’ and ‘sensible’ depends on the context.
Hillelogram on twitter has some great threads on this
And don’t get me wrong, I am a huge fan of testing. 🙂 It’s just… complicated. I’ve written, and I’ve seen other people write, very bad tests that just add waste to the project. OTOH I’ve worked on a project where we developed and maintained a huge web store with zero tests. It was one of the worst experiences I’ve had.
I think you should probably try to take a step back and ‘sell’ quality as a value to your higher-ups. Although that’s really hard if it’s not baked into the corporation’s core values somehow… And that’s a smell. 🙂
Well, how do you measure quality?
I’m not convinced measurability is important, but then again I’m not a business person.
How much time you spend fixing bugs instead of developing
quality is measured in incidents of defects
issues filed by users
quality is measured in how much work it takes to improve your system
(there are other metrics ofc but these are eaaaaasy to track)
I add Maintainability in there as well.
you can have a system that works 100% but can never be changed. I would consider it a bad quality system
Unless… It doesn’t need to be changed 😉
well, in my experience, the only thing I can count on is any system I build will need to change at some point
That’s probably 90% of the cases yes.
whether it needs to be moved from AWS to GCP, or needs to support a new feature, or bugs need to be fixed
I’m reading “Software Design X-Rays”, which tries to analyze code history and show which parts of the system carry the technical debt, mostly the parts that change often and are touched by multiple people. It’s a very interesting read so far.
OTOH, in my previous job I was doing software interactives for museums — write once, never change. That was an interesting valley — if it works and looks good (try to test that, on a tight budget), you could move on 🙂
IMO all measurements are proxy metrics for how easy it is to change
I remember seeing a tool that claimed to do something similar with git: analyzing history and seeing which files were always edited together and possibly tangled.
git log --format=format: --name-only | egrep -v '^$' | sort | uniq -c | sort -r | head -15
that’s straight from the book — gives you a sorted list of the most frequently changing files
remove the head at the end to pull them all out 🙂
Does the book have a spell for analyzing which files are always edited in the same commit?
Perhaps! I’m only in the beginning...
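The counting itself is simple enough to sketch in Python; this is my own rough take on the idea, not the book’s method, with the commit lists as toy data you’d otherwise parse from `git log --format=format: --name-only` (blank lines separate commits):

```python
from collections import Counter
from itertools import combinations

def change_coupling(commits):
    """Count how often each pair of files appears in the same commit.

    `commits` is an iterable of per-commit file-name lists.
    Returns a Counter mapping (file_a, file_b) pairs to co-change counts.
    """
    pairs = Counter()
    for files in commits:
        # sorted(set(...)) dedupes and gives each pair a canonical order
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy history: core.clj and handler.clj tend to change together.
history = [
    ["core.clj", "handler.clj"],
    ["core.clj", "handler.clj", "util.clj"],
    ["core.clj"],
    ["util.clj"],
]
for (a, b), n in change_coupling(history).most_common(3):
    print(f"{n:3}  {a} <-> {b}")
```

Pairs with a high count relative to how often each file changes on its own are the “possibly tangled” candidates.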
Any data points on how long a new hire takes to get on-boarded fully in a new environment that might have new languages, frameworks, etc?
I’ve seen some comments that new hires can learn enough Clojure to get productive in 2 weeks, but that seems like an extremely short period of time.
ah looks like the same author as https://pragprog.com/book/atcrime/your-code-as-a-crime-scene
Yeah, the X-Rays book expands on that. I haven’t read the crime scene one!
This is great, thanks!
I've seen 1-2 weeks be the norm (for java devs)
That presumes you have an existing codebase and relatively stable dev practices (some way to get the codebase up and running with a handful of commands)
it usually takes a month or two for idiomatic code to be the norm
at least from my observations of about a dozen devs
java + python/ruby experience tends to be the shortest path (outside of actual clojure or lisp experience) to productivity
I've only ever compared one experienced CL user, but that person had dabbled in clojure in the past
I kinda suspect that dynamic language workflow experience is slightly more crucial than java experience, but that also is probably project-dependent.
depends on the codebase
probably a few weeks to get some commits in, but then multiply some amount of familiarisation time for each drastically different part of the system that is encountered
I added Zulip Mirror Bot so the discussions here get archived to Zulip and become searchable. I was looking for that link to the long post about why companies can't always hire remote workers outside their country (or even outside their state). Does someone still have that link?
it was a twitter message from a hashicorp person
That HN post chimes with our early experience of attempting to employ remotely in other EU countries (we’re in the UK). We found we couldn’t work out how much it was going to cost us to employ someone in another EU country, and that it was memorably rather more than we expected. As a small company, with limited resources to investigate, it’s much lower risk to stay within familiar territories.