I believe in what Guillermo Rauch said: "Write tests. Not too many. Mostly integration." The problem that's not mentioned is that the higher-level the tests you write, the higher the likelihood they'll be flaky. Testing is all about confidence that your code works, and flakiness undermines that. Having tools to counteract this will allow me to ship faster and with better confidence. Very nice.
if teams start generating a significant portion of their codebase, testing will become so much more difficult... I wonder how flaky Copilot code & tests might be
What's the point of this over just hitting rerun, and maybe hitting rerun again?
I mean, I feel like you shouldn't have a bunch of flaky tests anyway if you're writing them properly.
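For what it's worth, most of the flakes I've had to fix weren't badly written assertions so much as timing assumptions. A toy sketch (the `job` fixture and the timings are made up, purely to illustrate the pattern): the first test sleeps for a fixed interval and fails whenever the background work happens to run slow; the second polls for the actual condition with an explicit deadline.

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# `job` stands in for a hypothetical fixture that kicks off some async work.

# Flaky version: assumes the background job always finishes within 100 ms.
def test_job_completes_flaky(job):
    job.start()
    time.sleep(0.1)
    assert job.is_done()

# Steadier version: waits for the actual condition, with a hard upper bound.
def test_job_completes(job):
    job.start()
    assert wait_for(job.is_done), "job did not finish within the timeout"
```

Same thing being tested, just without racing the scheduler.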
IMO, if you're rerunning gating tests, there's no point in having the tests in the first place. The whole value of a test is that its failure tells you something, so when you have flakes and rerun multiple times until they pass, your tests have lost their value and you have no trust in your codebase.
also... rerunning is annoying when you have to figure out if the problem is the SUT or a brittle test (one upside for me is having spare time in the afternoons to make coffee while waiting for CI lol)
On small teams or projects, this attitude is probably fine.
One of the most frustrating experiences as an engineer is running up against a deadline and just staring at my CI with my finger on the re-run button because my coworkers can't be bothered to believe me when I tell them their tests are flaky.
Fixing your flaky tests is like putting your cart away - a little effort on your part makes a lot of people's lives a little better.