Description
Some of the existing tests were passing by happenstance. In the shift cases, the tests were false positives and not indicative of a functioning algorithm. The results for the CL and orientation algorithm checks appear to be similarly coincidental. In those cases I believe the algorithm is working, but the problem params were cooked up to make things appear to work better than they do in general. The test expectations will fail for setups with different sizes, more images (which should be easier...), different CL shift search requests, etc. That makes altering them a bit of a pain/timesink.
I resolved some of this by temporarily increasing the number of images, loosening the shift-solving expectation, and marking most of the test suite as expensive. The shift tests will pass with the CLSyncVoting defaults, but I've kept the tuned-up params in the test to keep the existing CL and rotation tests passing. (They do not pass with the defaults at 500 images, an easier request, for example.)
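
Roughly what I mean by loosening the shift expectation (a minimal sketch only; the helper name and the 1-pixel tolerance are illustrative, not the actual test code):

```python
# Illustrative only: compare estimated shifts to ground truth within a tolerance
# instead of against hard-coded values that only hold for one cooked-up setup.
import numpy as np


def assert_shifts_close(est_shifts, true_shifts, atol=1.0):
    """Assert the mean Euclidean shift error is below `atol` pixels."""
    err = np.linalg.norm(np.asarray(est_shifts) - np.asarray(true_shifts), axis=1)
    assert err.mean() < atol, f"mean shift error {err.mean():.3f} exceeds {atol} px"
```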
The expectations and parametrization of this test probably need to be reconsidered. Ultimately I'd like a basic test that can always run, just to quickly ensure we haven't totally wrecked something, and then expensive tests that are more realistic/exhaustive so they are actually indicative. I'd also like to see whether our defaults can be run for one of those two groups of tests.
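
For the basic/expensive split, one common pattern is a pytest marker plus an opt-in flag. The sketch below is only an assumption about how that could be wired up, not the suite's actual conftest or marker configuration:

```python
# conftest.py (sketch): register an "expensive" marker and skip those tests
# unless --run-expensive is passed, so the basic group always runs quickly.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--run-expensive",
        action="store_true",
        default=False,
        help="run tests marked as expensive",
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "expensive: slow, exhaustive tests")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-expensive"):
        return
    skip_expensive = pytest.mark.skip(reason="needs --run-expensive")
    for item in items:
        if "expensive" in item.keywords:
            item.add_marker(skip_expensive)
```

With something like this, the defaults could be exercised in the quick basic group, and the tuned-up params could live in the expensive group (or the other way around, whichever turns out to be realistic).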