@seancorfield @roberto So in your response to @tetriscodes , are you saying I would probably not ever test a database?
Could you review what I'm about to add to a current project, and give me a little guidance?:
----------------
I have a Postgres Component, and that just manages a connection, but then I have a "Boundary" (à la duct) to that component that establishes my internal "api" for accessing, e.g., my users db (`find-user`, `create-user`, ...)
So, the path I was going down today would've been to create a test-db, plus some fixture/setup/teardown code in my tests for testing this User Boundary. And I want the test-db setup to use the migrations + rollbacks I already have set up for my dev and prod databases.
What would be a good practice here?
Well, my basic question would be: where’s your business logic?
in an endpoint
What exactly do you want your tests to do for you?
and I would like to make sure that my SQL queries, and the yesql setup that brought them in, all work as expected against my current migration/schema
the endpoint is just a compojure route
testing your business logic should take care of that, right? If the sql queries are not working, then your business logic will also break?
So you’d be testing a stateful function? Could you break that down into a query, a pure function, and an update — and then test the pure function?
@roberto I could test just the business logic, but then that would depend on my sql queries, and I'd prefer to test it independently of my database, component, or boundary. (right?)
We use TDD/BDD to support design of APIs — making sure they’re a good experience to call, that they have the right error handling. None of that requires a DB behind it as you can exercise it with canned data. Then there’s the actual business logic, which again you can write tests for.
honestly, I don’t know what you mean by boundary, it seems to be a duct-specific thing. But a component should only have 2 responsibilities: starting and stopping
@seancorfield And yes, the state would be the postgres database, and so I could set something up to run a migration against an empty test-db, run my queries, and then roll it back
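Roughly this shape, I mean (just a sketch; the test-db spec and the `migrate!`/`rollback!` helpers are hypothetical stand-ins for whatever my migration library exposes):
(ns myapp.user-boundary-test
  (:require [clojure.test :refer [use-fixtures]]))

;; hypothetical test connection spec
(def test-db {:dbtype "postgresql" :dbname "myapp_test"})

(defn with-migrated-db [f]
  (migrate! test-db)          ; run the same migrations dev/prod use
  (try
    (f)
    (finally
      (rollback! test-db))))  ; leave the test-db empty again

(use-fixtures :each with-migrated-db)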
Consider this code:
(let [data   (run-some-query args)
      result (perform-the-business-logic data args)]
  (perform-any-updates result data args)
  result)
Now you can write tests for each part separately. More importantly, you can isolate tests of the business logic from any database.
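For example, the pure part can be tested with canned data (a sketch, assuming clojure.test's `deftest`/`is`; the data shapes here are hypothetical):
(deftest business-logic-with-canned-data
  ;; canned data standing in for what run-some-query would return
  (let [data   {:user {:id 1 :status :active}}
        args   {:action :deactivate}
        result (perform-the-business-logic data args)]
    (is (= :inactive (get-in result [:user :status])))))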
and a "Boundary" is an idea that duct
+ /u/weavejester champions, which is basically, you can never just do (db/query ...)
, but instead you make a boundary of the specific fns your allowed to call on that component
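in code, roughly this (my sketch; I'm showing clojure.java.jdbc calls just for illustration, though I'm actually using yesql):
;; assumes (:require [clojure.java.jdbc :as jdbc])
(defprotocol UserStore
  (find-user   [this id])
  (create-user [this user]))

;; the Postgres component satisfies the boundary protocol,
;; so callers never touch db/query directly
(defrecord Postgres [spec]
  UserStore
  (find-user [_ id]
    (jdbc/query spec ["select * from users where id = ?" id]))
  (create-user [_ user]
    (jdbc/insert! spec :users user)))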
so, it is just an “object” that wraps around a component?
@roberto yes, basically (to my understanding)
yeah, then you just need to test that “boundary”; the component gets exercised automatically when you use the functions in that boundary
@seancorfield I haven't been involved at this "systems" level of development before, so honestly, while your example code makes sense, should I even be testing sql queries against a db? I'm embarrassed to ask, but, do people do that?
(like, migrate in a fresh test-db and then break it down after, on each test)
In general, no, I wouldn’t.
i don’t test sql queries. I test the functions that use those queries
testing every query is double the work
and double the code
and makes it harder to then change something
That said, I will verify that a complex HoneySQL expression produces the right SQL statement — but I’ll tend to do that in the REPL.
checking by hand, but not automated in a test?
you can add tests that you will delete afterwards if you are doing TDD, to just guide you. But I wouldn’t leave tests for ‘queries’ in the code base.
I’ll also have a DBA eyeball a particularly complex query for performance issues, and maybe have them run it against production.
so, then, if I want to test the business logic, it sounds like I need to test it not as a pure function, but in the context of a db?
When we first started doing Clojure, we tended to mix SQL queries, SQL updates, and business logic all together — horrible to test. Now we’re being more careful to separate out the queries from the business logic and then run the updates at the end, depending on the results. That makes the business logic much easier to test — as a pure function.
you can test logic without the db. But I would also test the integration points: the places where the business logic receives the data from the database.
That integration point is where you would exercise the database component.
So if I have a fn that does logic, and an effectful call, I should split it out, and test the logic, and not test that the effectful call is behaving?
i think testing that integration point offers more value than writing individual tests for each query
(or in what @roberto just said, test pure, and test integrated?)
yeah, that is the general approach I take
i’m not much of a test purist, so I’m quite liberal as to what I test. I find that trying to test every function I write adds more cost to maintenance in the long run.
so if I have an integration test that, say, updates a user in the db, would you then check that the rows in the db look as expected? Or just check the return value of the fn?
the return value
because it will fail if something goes wrong, so you don’t need to do another query to the database.
if the sql statement is wrong, for example, it will throw an exception, etc...
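e.g., a single integration test like this (a sketch; the store construction and `update-user` are hypothetical names):
(deftest update-user-integration
  ;; exercises the boundary against the migrated test-db,
  ;; asserting only on the return value (no second query needed)
  (let [store  (->Postgres test-db)
        user   (create-user store {:name "Ada"})
        result (update-user store (assoc user :name "Ada Lovelace"))]
    (is (= "Ada Lovelace" (:name result)))))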
ok. so the gist I'm getting is, tests are expensive, so don't do too many, but also don't do too few 😉
hehehehe, all code is expensive 🙂
basically consider that every test you write, you have to also maintain
so I try to only keep those that offer the most value. At times I write tests that offer no value, but only serve as some sort of guidance while I’m testing things out. But those get deleted.
With Clojure I do that less and less because I have a good repl, so I try things out there
I used to write tests for every property I added to a data model when I was doing java or some other oo language. It quickly became too much.
@seancorfield @roberto thanks, you just cut out a chunk of my work today, and gave me some peace of mind 🙂
I tend not to test queries or updates on their own. I’ll write more extensive tests for the pure business logic (and if you can find properties and do generative testing, even better), and then I’ll write integration tests (of the whole endpoint) sometimes. Or automated “user acceptance” tests which exercise the API itself.
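For the generative side, something like this with test.check (a sketch; `normalize-user` is a hypothetical pure function under test):
;; assumes test.check on the classpath:
;; (:require [clojure.test.check.clojure-test :refer [defspec]]
;;           [clojure.test.check.generators :as gen]
;;           [clojure.test.check.properties :as prop])
(defspec normalize-user-is-idempotent 100
  (prop/for-all [nm gen/string-alphanumeric]
    (let [user (normalize-user {:name nm})]
      ;; normalizing twice should change nothing
      (= user (normalize-user user)))))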
For example, in our API test suite, we typically have tests like
(expect some-result (api-post "endpoint" (auth-token "username" "password") :some "args"))
And the `auth-token` function walks through the whole OAuth2 login / token exchange process.
So that test makes four POSTs altogether, exercising all sorts of stuff.
We can also use that to get a token and then test three or four API calls as a sequence, just like an end-user (client) application would.
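Something along these lines (a sketch; the endpoints and expected-result names are made up):
(let [token (auth-token "username" "password")]
  ;; reuse one token across a realistic sequence of calls
  (expect created-result  (api-post "users" token :name "Ada"))
  (expect updated-result  (api-post "users/1/activate" token))
  (expect notified-result (api-post "users/1/notify" token)))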
In addition, we’ll have more extensive example-based tests for pure business logic functions where possible, or integration-level tests for older code that still has the queries / updates mixed in.
that is beautiful, and that makes a lot of sense. I think I'll try to gravitate toward more general tests - #PickTheRightLevelOfAbstraction 😉 so for those tests, do you even run the API server and test over HTTP calls against your API?
We have tests we can run on the code directly, and tests we need the API server up for.
Technically the API, Authorization, and Login Servers — they’re three separate processes.
Interesting points above.
I’ve settled on testing my system instead of my components individually.
I started out writing many very small tests on the components, and then the light came on that I’d just be writing tests to simulate system access via http/mqtt.
i.e., I’m getting more confidence testing my components after dependency injection than from the sum of the parts/tests.
I am building the components up and testing them on the fly as I develop with the REPL.
(def config-component
  (com.stuartsierra.component/start
   (spacon.components.config/make-config-component)))
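and then wiring them together looks roughly like this (a sketch; everything beyond the config component is hypothetical):
(def system
  (com.stuartsierra.component/system-map
   :config (spacon.components.config/make-config-component)
   :db     (com.stuartsierra.component/using
            (make-db-component)   ; hypothetical second component
            [:config])))          ; :config is injected before :db starts

(def started-system (com.stuartsierra.component/start system))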
Perhaps what’s different for us is that we have test suites that can run independently for a number of our “subsystems”, so we have a low-level “system” component and a test suite for that stuff, and an “environment” component and a test suite for that stuff, and so on.