While in theory "turtles all the way down" systems should more or less test themselves through basic usage, I found this not to be true in practice (I don't know whether that's because of the project at hand or for some deeper reason).
For many refactorings I've done, things have broken in rather subtle ways and I found out quite late.
Hence, over the months (and two versions) I've built a test system to checkpoint any change, big or small. Morphic being a direct and live system, I thought I'd make a direct and live test system, where the user simply records actions and the test system compares "before" vs. "after" screenshots taken during those recorded actions. The playback can be left totally unattended: as long as no visual changes occur, the system keeps testing itself on its own. In case visual changes do occur, they are highlighted and can be accepted quickly.
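To give an idea of the shape of this (a minimal sketch only, the names below are illustrative and not the actual test-system API): a test is a list of recorded events plus screenshot checkpoints, and playback re-dispatches the events onto the world canvas and compares each checkpoint against its stored reference image.

```js
// Illustrative only: hypothetical structure of a recorded test.
var exampleTest = {
  name: "open-and-dismiss-menu",
  steps: [
    { type: "event", domEvent: { kind: "mousedown", x: 200, y: 150 } },
    { type: "event", domEvent: { kind: "mouseup",   x: 200, y: 150 } },
    { type: "screenshot", referenceDataURL: "data:image/png;base64,AAAA" }
  ]
};

// Replay the recorded steps; "dispatchEvent" and "compareScreenshots" are
// assumed hooks into the live system, not actual Zombie Kernel functions.
function playTest(test, worldCanvas, dispatchEvent, compareScreenshots) {
  var failures = [];
  test.steps.forEach(function (step) {
    if (step.type === "event") {
      dispatchEvent(step.domEvent);              // replay the user action
    } else if (step.type === "screenshot") {
      var current = worldCanvas.toDataURL("image/png");
      if (!compareScreenshots(current, step.referenceDataURL)) {
        failures.push(step);                     // collect for later review
      }
    }
  });
  return failures;                               // empty array means the test passed
}
```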
The test harness looks like this right now:
There are currently around 40 tests ( https://github.com/davidedc/Zombie-Kernel-tests/tree/master/tests ), for around 40 minutes of total test time, exercising both basic and more subtle behaviours.
The test time can be reduced: an included script launches 9 windows that split the tests among them. (I'll also look into "speedups" of the playback, as for most tests the system should react well to "rapid fire" events.)
Should all the tests allocated to a window pass, a green signal is shown.
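Something along these lines, to sketch how a launcher could partition the tests across windows (illustrative only; the actual script shipped with the project may assign tests and signal results differently):

```js
// Round-robin the test names across a number of windows; each window reads
// its own list from the query string and turns green only if every assigned
// test passes. The URL parameter name here is an assumption.
function launchTestWindows(allTestNames, numberOfWindows, baseURL) {
  for (var i = 0; i < numberOfWindows; i++) {
    var slice = allTestNames.filter(function (name, index) {
      return index % numberOfWindows === i;      // roughly equal load per window
    });
    window.open(baseURL + "?tests=" + encodeURIComponent(slice.join(",")));
  }
}
```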
The actions can, to some degree, abstract away from things that change easily, such as morph size and location, insertion/deletion of items in menus, and the presence of other spurious morphs in the background, so test maintenance across code changes is kept within reason. (Such abstraction is provided via a couple of ways of identifying the morphs and recording the pointer location within them.)
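The gist of that abstraction, sketched with hypothetical helpers (the real morph identification is richer than a single ID lookup): record which morph was hit and the pointer position as a fraction of that morph's bounds, so the recording survives the morph being moved or resized.

```js
// Record a pointer position relative to the hit morph's bounds.
// "uniqueIDString" and the "bounds" shape are assumptions for this sketch.
function recordPointerPosition(morph, absoluteX, absoluteY) {
  return {
    morphIdentifier: morph.uniqueIDString(),
    relativeX: (absoluteX - morph.bounds.left) / morph.bounds.width,
    relativeY: (absoluteY - morph.bounds.top) / morph.bounds.height
  };
}

// Turn the recorded relative position back into absolute coordinates
// against wherever the morph happens to be during playback.
function replayPointerPosition(recorded, findMorphByIdentifier) {
  var morph = findMorphByIdentifier(recorded.morphIdentifier);
  return {
    x: morph.bounds.left + recorded.relativeX * morph.bounds.width,
    y: morph.bounds.top + recorded.relativeY * morph.bounds.height
  };
}
```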
On the other hand, the screenshots would highlight even the most minute Canvas rendering changes across minor browser releases, but luckily these changes can be accepted very quickly - it takes seconds to tell "malign" changes apart from "circumstantial" ones, as differences are highlighted in red like this one:
The system can also keep several screenshots obtained in different browsers / browser versions and will "pass" the screenshot test if any of the reference screenshots is matched.
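Roughly, the comparison works like this (a sketch, not the project's exact code): the current canvas is checked against each stored reference, the test passes as soon as any reference matches exactly, and otherwise a diff image with the changed pixels painted red is kept for review.

```js
// Compare the current frame against a set of reference screenshots.
// Both arguments are ImageData objects of the same dimensions.
function diffAgainstReferences(currentImageData, referenceImageDatas) {
  var lastDiff = null;
  for (var r = 0; r < referenceImageDatas.length; r++) {
    var reference = referenceImageDatas[r];
    var diff = new ImageData(reference.width, reference.height);
    var changedPixels = 0;
    for (var i = 0; i < reference.data.length; i += 4) {
      var samePixel =
        currentImageData.data[i]     === reference.data[i] &&
        currentImageData.data[i + 1] === reference.data[i + 1] &&
        currentImageData.data[i + 2] === reference.data[i + 2];
      if (samePixel) {
        // copy the matching pixel unchanged
        diff.data[i]     = reference.data[i];
        diff.data[i + 1] = reference.data[i + 1];
        diff.data[i + 2] = reference.data[i + 2];
      } else {
        // paint the differing pixel red so it stands out for review
        diff.data[i] = 255;
        diff.data[i + 1] = 0;
        diff.data[i + 2] = 0;
        changedPixels++;
      }
      diff.data[i + 3] = 255;             // fully opaque
    }
    if (changedPixels === 0) {
      return { passed: true };            // any matching reference is enough
    }
    lastDiff = diff;                      // keep a diff image to show on failure
  }
  return { passed: false, diffImage: lastDiff };
}
```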