architecture

2019-01-15T17:00:30.007300Z

sounds to me like you’re trying to monitor low-level data (server logs) at too high a level (black-box testing)

2019-01-15T17:00:53.007800Z

you will probably want to change your testing strategy for this a bit

mathpunk 2019-01-15T22:06:56.010600Z

I've learned that we have a service which escalates certain kinds of events by their severity

mathpunk 2019-01-15T22:07:13.011Z

but I don't know that I have other higher-level data, out of the current box

mathpunk 2019-01-15T22:07:53.011600Z

note: this is my first testing job and you will not offend me if you ask me any obvious questions or have obvious comments, because they won't be obvious to me πŸ™‚

mathpunk 2019-01-15T22:08:48.012700Z

I do know that I am spending a lot of time at the e2e tippy top of "The Testing Pyramid" but, until our API stabilizes more, I don't think I will be writing tests at a lower layer

jaihindhreddy 2019-01-15T22:13:51.014100Z

@mathpunk just a thought but, because log lines have timestamps (which I'm assuming you can parse and get at), you can just capture all the data the system under test slurps or spits, and do the analysis later, obviating the need to tail or watch files.
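[editor's note: the capture-then-analyze idea could be sketched like this. The log format, field layout, and timestamp format here are hypothetical — the real ones depend on the system under test:]

```python
import datetime
import re

# Hypothetical log format: "2019-01-15T22:13:51Z LEVEL message..."
LINE_RE = re.compile(r"^(\S+)\s+(\w+)\s+(.*)$")

def parse_line(line):
    """Parse one captured log line into (timestamp, level, message), or None."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    ts = datetime.datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%SZ")
    return ts, m.group(2), m.group(3)

def analyze(captured_lines):
    """Offline analysis: parse everything captured, then order by timestamp.

    Because each line carries a timestamp, capture order doesn't matter and
    no tailing/watching of files is needed while the test runs.
    """
    events = [e for e in (parse_line(l) for l in captured_lines) if e]
    events.sort(key=lambda e: e[0])
    return events

lines = [
    "2019-01-15T22:14:00Z ERROR disk full",
    "2019-01-15T22:13:51Z INFO request handled",
]
for ts, level, msg in analyze(lines):
    print(ts.isoformat(), level, msg)
```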

jaihindhreddy 2019-01-15T22:15:35.014700Z

Sort of like how simulant does it.

mathpunk 2019-01-15T22:15:40.014900Z

that is a good point

jaihindhreddy 2019-01-15T22:17:17.016300Z

You can model the actions a user can do, and the reactions of the system, and refine these models as you generatively test/simulate them against the system.
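[editor's note: a minimal sketch of model-versus-system testing, assuming a toy "system under test" (a counter with a deliberately planted bug) and a separate model of how it should react; random action sequences are run against both and the reactions compared:]

```python
import random

class RealCounter:
    """Stand-in for the system under test (planted bug: reset leaves 1 behind)."""
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
    def reset(self):
        self.n = 1  # bug: should be 0

class ModelCounter:
    """Our model of how the system *should* react to each action."""
    def __init__(self):
        self.n = 0
    def incr(self):
        self.n += 1
    def reset(self):
        self.n = 0

def find_failing_sequence(trials=200, max_len=10, seed=42):
    """Generate random action sequences; return the first prefix whose
    real-system reaction diverges from the model, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        real, model = RealCounter(), ModelCounter()
        executed = []
        for _ in range(rng.randint(1, max_len)):
            action = rng.choice(["incr", "reset"])
            executed.append(action)
            getattr(real, action)()
            getattr(model, action)()
            if real.n != model.n:  # reaction diverged from the model
                return executed
    return None

print(find_failing_sequence())
```

[the divergence is what tells you to refine either the model or the system; real tools like simulant or test.check also shrink the failing sequence to a minimal one, which this sketch skips]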

jaihindhreddy 2019-01-15T22:17:38.016700Z

I found this talk useful https://www.youtube.com/watch?v=zjbcayvTcKQ

mathpunk 2019-01-15T22:17:46.017Z

thank you!

mathpunk 2019-01-15T22:18:22.017700Z

simulation testing is the goal.... but our app is so big that there are a lot of plain example-based tests to be written, just to make sure that I understand what user actions are available

jaihindhreddy 2019-01-15T22:18:54.018100Z

^ I feel your pain πŸ˜„

jaihindhreddy 2019-01-15T22:22:03.020800Z

It seems to me like doing it generatively is going to save you a lot of effort, because it's very easy to make assumptions about how a system will behave by reasoning inductively, something that generative tests relentlessly point out. So once you've invested in setting up a quick feedback loop doing this, you'll get to a solid model in no time (fingers crossed πŸ™‚)

seancorfield 2019-01-15T23:05:03.021900Z

We have a couple of interactive applications with some fairly complex semantics, and we wrote specs for the possible valid sequences of actions, and then we generatively test that when we get the app into a given state, the expected properties hold true...
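[editor's note: an illustration of that shape — not the actual code, which used specs — with a toy editor app and hypothetical actions. Valid sequences are defined as allowed transitions, only conforming sequences are generated, and a property of the resulting state is asserted:]

```python
import random

# A crude "spec" for valid action sequences: which action may follow which.
VALID_NEXT = {
    "start": {"open"},
    "open": {"edit", "save", "close"},
    "edit": {"edit", "save"},
    "save": {"edit", "close"},
    "close": set(),
}

def gen_valid_sequence(rng, max_len=8):
    """Generate one action sequence that conforms to VALID_NEXT."""
    seq, state = [], "start"
    for _ in range(max_len):
        choices = VALID_NEXT[state]
        if not choices:
            break
        state = rng.choice(sorted(choices))
        seq.append(state)
    return seq

def run(seq):
    """Drive a toy app through the actions, returning its final state."""
    app = {"open": False, "dirty": False, "saved": 0}
    for action in seq:
        if action == "open":
            app["open"] = True
        elif action == "edit":
            app["dirty"] = True
        elif action == "save":
            app["dirty"] = False
            app["saved"] += 1
        elif action == "close":
            app["open"] = False
    return app

# Property: after any valid sequence ending in "save", nothing is left dirty.
rng = random.Random(0)
for _ in range(100):
    seq = gen_valid_sequence(rng)
    app = run(seq)
    if seq and seq[-1] == "save":
        assert app["dirty"] is False
print("100 generated sequences checked")
```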

seancorfield 2019-01-15T23:10:10.022700Z

It's tricky to do but can be pretty powerful. Mind you, as @mathpunk says, first you need to know a) what actions are available and b) how actions can be combined!
