Not sure if this is a #beginners question:
There is something that bothers me about (my understanding of) spec.
Example: #{"foo" "bar"}
implies string?
.
(s/explain-data #{"foo" "bar"} 42)
(s/explain-data (s/and string? #{"foo" "bar"}) 42)
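To see the difference concretely, here is roughly what the two calls report (a sketch; the exact shape of explain-data's output can vary between spec versions):

```clojure
(require '[clojure.spec.alpha :as s])

;; With the bare set spec, the only failing predicate spec can report
;; is the set literal itself:
(s/explain-data #{"foo" "bar"} 42)
;; => a problems map whose :pred is #{"foo" "bar"} and whose :val is 42

;; With the compound spec, string? is checked first, so the report
;; points at the broader invariant:
(s/explain-data (s/and string? #{"foo" "bar"}) 42)
;; => a problems map whose :pred is clojure.core/string? and whose :val is 42
```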
What bothers/interests me here is that I have to write the second, but my first spec is a subset of string?
.
I know the semantics of the first might very well be heterogeneous over time. But at that specific point in time the above sentence is true. It seems useful or interesting to me that a more involved, concrete spec could fail on, and explain, broader invariants when they are not satisfied.
So here is my first question: How feasible would it be to infer a superset of a spec?
The problem I see with this is that the first spec can be expressed in multiple ways. For example with s/or
or even worse: with reduce
. Maybe the first spec function should have a function spec that checks :args
to be string?
, but that seems clunky.
A better question might be: is this a matter of design? Are these two specs really saying something meaningfully different, or is the second strictly superior?
"I have to write the second" - why?
My intuition is that 42 should fail at string?
before it fails at #{"foo" "bar"}
. Or more generally: to explain why an input fails, wouldn’t it be more useful to say that it doesn’t satisfy an implied superset before I explain that it needs to satisfy an arbitrarily concrete predicate?
But from your question I guess this is a matter of designing a spec.
it's important to shift from a type mindset to a "value set" mindset. specs are predicative descriptions of allowed values. Saying #{"foo" "bar"}
means "these are the only two values allowed" vs string?
which means "satisfies the string? predicate". those are semantically quite different (I would generally hesitate to use the first one in a function signature, as time is long and you'll probably come up with a 3rd value later). you might, for example, spec it as string?
but then validate a narrower spec inside the function.
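A minimal sketch of that pattern, with hypothetical spec names ::label and ::known-label:

```clojure
(require '[clojure.spec.alpha :as s])

;; The public, stable contract only promises a string...
(s/def ::label string?)

;; ...while internally we enforce the narrower, current set of values,
;; which may grow over time without changing the function's signature.
(s/def ::known-label #{"foo" "bar"})

(defn process-label [label]
  {:pre [(s/valid? ::label label)]}        ; wide contract at the boundary
  (when-not (s/valid? ::known-label label) ; narrower check inside
    (throw (ex-info "unknown label"
                    (s/explain-data ::known-label label))))
  (str "processing " label))
```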
it is really a matter of how you choose to design the spec for your data and what it should imply to a user
Thank you, this advice is very valuable (sic!), I have to think about it a bit more… I might be stuck in deduction world. I have this nagging feeling that deduction, where possible, gives you “free” stuff. But if I understand correctly this isn’t what spec is about. It’s supposed to be used concretely and specifically. AKA nothing additional should be inferred from a spec.
yes
it's about values, not types
an interesting question though is subsumption - "does this spec subsume (include) all possible values of this other spec?"
and that is something that we hope to potentially provide in the future
b/c it lets you ask questions about spec evolution in tandem with data evolution
So in my case string?
subsumes #{"foo" "bar"}
right?
But you’re not interested in providing that for a specific isolated spec (aka subtyping or similar); rather, you want to be able to ask questions about different specs across a timeline, or maybe across different systems.
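There is no built-in subsumption check today, but a naive, sampling-based approximation can at least refute subsumption (a sketch using spec's generator support; the name appears-subsumed? is made up here):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Does spec-b appear to subsume spec-a? We generate sample values of
;; spec-a and check that every one also satisfies spec-b. Sampling can
;; only refute subsumption, never prove it.
(defn appears-subsumed? [spec-a spec-b]
  (every? #(s/valid? spec-b %)
          (gen/sample (s/gen spec-a) 100)))

(appears-subsumed? #{"foo" "bar"} string?)
;; => true: every element of the set is a string

(appears-subsumed? string? #{"foo" "bar"})
;; almost certainly false: random strings rarely land in the set
```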
There seems to be tons of research surrounding the term “subsumption” in CS, robotics, AI, Lisp, Prolog and Datalog. And also in OWL https://en.wikipedia.org/wiki/Web_Ontology_Language.
Thanks again 🙂