Juan Lang wrote:
Just looking at the pretty colors may not make this very obvious, but the state of the tests is APPALLING.
Agreed. I wonder how much of it has to do with not noticing that the tests have failed?
I may just be transforming an easy problem (we shouldn't be lazy about checking the test results) into a hard one, but: what about automatically running a regression test to find the patch that broke a test, and filing a bug for it?
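For the automated regression hunt, `git bisect run` already does most of the work: it just needs a script that exits 0 when the test passes, 1 when it fails, and 125 when the tree doesn't build. Here's a minimal sketch of that exit-code mapping as a helper function; the build and test commands are passed in as parameters, since the real invocations (configure flags, which test binary, etc.) would depend on the tree:

```shell
#!/bin/sh
# Hypothetical helper for `git bisect run`: takes a build command and a
# test command, and maps their results onto the exit codes git bisect
# understands:
#   0   -> test passed at this commit (good)
#   1   -> test failed at this commit (bad)
#   125 -> tree doesn't build here, tell bisect to skip the commit
bisect_step() {
    build_cmd=$1
    test_cmd=$2
    sh -c "$build_cmd" >/dev/null 2>&1 || return 125
    sh -c "$test_cmd"  >/dev/null 2>&1 || return 1
    return 0
}
```

A wrapper like `git bisect start HEAD <last-known-good> && git bisect run ./bisect-test.sh` would then walk the history unattended and name the first bad commit, which is exactly what you'd want to paste into the bug report.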
Amen!!! I have been meaning to do this, but I haven't been able to find the time.
I suspect the biggest problem is keeping the winetest executable up to date on the test systems: if a test system can't compile the tests itself, it can't easily run a regression hunt. What's the biggest obstacle there?
We could do what the Bazaar developers do: they have a mailing list robot that snatches patches from their dev list and commits them.
Our robot could build them (on a Linux system) and run the resulting winetest.exe on a Windows virtual machine.
Then the patch could be black-flagged _before_ it is committed by Alexandre.
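The robot's decision logic is simple enough to sketch. The function below is a hypothetical outline only (the stage commands are placeholders; the real bot would use something like `git am` for the apply step, a mingw cross-build for winetest.exe, and an ssh or VM-control command to run it on Windows). Each stage is passed in as a command so the flow can be exercised without a full build environment:

```shell
#!/bin/sh
# Hypothetical patch-vetting flow for the proposed mailing-list robot.
# Stages: apply the patch, cross-build winetest.exe, run it on a
# Windows VM. The first failing stage black-flags the patch.
vet_patch() {
    apply_cmd=$1
    build_cmd=$2
    run_cmd=$3
    sh -c "$apply_cmd" >/dev/null 2>&1 || { echo "REJECT: does not apply"; return 1; }
    sh -c "$build_cmd" >/dev/null 2>&1 || { echo "REJECT: build failure"; return 1; }
    sh -c "$run_cmd"   >/dev/null 2>&1 || { echo "REJECT: test regression"; return 1; }
    echo "OK: safe to commit"
}
```

The "OK" verdicts could go to Alexandre as a green light, and the "REJECT" ones back to the patch author on the list, so broken patches never reach the tree in the first place.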
regards, Jakob