The calculations in my code are well-tested, but because there is so much GUI code, my overall code coverage is lower than I'd like. Are there any guidelines on unit-testing GUI code?
Designs like MVP and MVC typically try to abstract as much logic out of the actual GUI as possible. One very popular article about this is "The Humble Dialog Box" by Michael Feathers. Personally I've had mixed experiences with trying to move logic out of the UI - sometimes it's worked very well, and at other times it's been more trouble than it's worth. It's somewhat outside my area of expertise though.
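To make the "Humble Dialog" idea concrete, here is a minimal sketch in Python. The names (`LoginView`, `LoginPresenter`, `FakeView`) are purely illustrative, not from any framework: the dialog is reduced to a thin view interface, the decision logic lives in a presenter, and the test exercises the presenter against a fake view with no GUI toolkit involved.

```python
# "Humble Dialog" sketch: the real dialog only forwards events and displays
# values; all logic lives in a presenter that is tested with a fake view.

class LoginView:
    """Interface the real dialog implements; tests substitute a fake."""
    def show_error(self, message: str) -> None: ...
    def close(self) -> None: ...

class LoginPresenter:
    def __init__(self, view: LoginView):
        self.view = view

    def submit(self, username: str, password: str) -> None:
        # The decision logic we want under test, kept out of the GUI.
        if not username or not password:
            self.view.show_error("Username and password are required")
        else:
            self.view.close()

class FakeView(LoginView):
    """Records what the presenter asked the view to do."""
    def __init__(self):
        self.errors, self.closed = [], False
    def show_error(self, message): self.errors.append(message)
    def close(self): self.closed = True

view = FakeView()
LoginPresenter(view).submit("", "secret")
assert view.errors == ["Username and password are required"]
assert not view.closed
```

The real dialog would implement `LoginView` and stay "humble": no branching, just delegation to the presenter.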
My approach to GUI testing is evolving, as is the industry consensus. But I think a few key techniques are beginning to emerge.
I use one or more of these techniques, depending on the situation (e.g. what kind of GUI it is, how quickly it needs to be built, who the end-user will be, etc.).
Manual testing. You keep the GUI running while you work on the code, manually testing and re-testing the part you're changing as you go, switching between the code and the running application. Every time you complete a significant piece of work, you give the whole screen or area of the application an overall test to ensure there are no regressions.
Unit testing. You write tests for functions or small units of GUI behaviour. For example, your graphs may need to calculate different shades of a colour based on a 'base' colour. You can extract this calculation to a function and write a unit test for it. You can search for logic like this in the GUI (especially re-usable logic) and extract it into discrete functions, which can be more easily unit tested. Even complex behaviour can be extracted and tested in this manner – for example, a sequence of steps in a wizard can be extracted to a function and a unit test can verify that, given an input, the correct step is returned.
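The colour-shading example above might look like this once extracted. The `lighten` function is a hypothetical stand-in for whatever calculation your graphs actually need; the point is that, as a pure function, it can be asserted on directly with no GUI running.

```python
# Hypothetical display logic pulled out of the GUI: derive a lighter shade
# of a base colour. Pure function, so it is trivially unit-testable.

def lighten(rgb, factor):
    """Blend an (r, g, b) colour toward white by `factor` in [0, 1]."""
    return tuple(round(c + (255 - c) * factor) for c in rgb)

assert lighten((100, 150, 200), 0.5) == (178, 202, 228)
assert lighten((0, 0, 0), 1.0) == (255, 255, 255)
```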
Component explorer. You create an 'explorer' screen whose only role is to showcase each of the re-usable components that make up your GUI. This screen gives you a quick and easy way to visually verify that every component has the correct look & feel. The component explorer is more efficient than manually going through your whole application, because A) you only have to verify each component once, and B) you don't have to navigate deep into the application to see the component, you can just view and verify it immediately.
Automation testing. You write a test that interacts with the screen or component, simulating mouse clicks, data entry, etc., asserting that the application functions correctly given these manipulations. This can be useful as an extra backup test, to capture potential bugs that your other tests might miss. I tend to reserve automation testing for the parts of the GUI that are most prone to breaking and/or are highly critical, where I want to know as early as possible if something has broken. This could include highly complex interactive components that are vulnerable to breaking, or important main screens.
Diff/snapshot testing. You write a test that simply captures the output as a screenshot or as HTML code and compares it with the previous output. That way, you're alerted whenever the output changes. Diff tests may be useful if the visual aspect of your GUI is complex and/or subject to change, in which case you want quick, visual feedback on what impact a given change has on the GUI as a whole.
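The store-and-compare mechanism behind snapshot testing can be sketched in a few lines. Real snapshot tools automate recording, diffing, and updating; this toy version just shows the cycle, with `render_button` as a stand-in for real rendering:

```python
# Toy snapshot test: record the rendered output on the first run, then
# fail (return False) whenever a later run produces different output.

import os
import tempfile

def render_button(label):
    # Stand-in for real GUI rendering to HTML or an image.
    return f'<button class="primary">{label}</button>'

def check_snapshot(name, output, snapshot_dir):
    """Return True if output matches (or first creates) the stored snapshot."""
    path = os.path.join(snapshot_dir, name + ".snap")
    if not os.path.exists(path):            # first run: record the output
        with open(path, "w") as f:
            f.write(output)
        return True
    with open(path) as f:                   # later runs: diff against it
        return f.read() == output

snap_dir = tempfile.mkdtemp()
assert check_snapshot("save-button", render_button("Save"), snap_dir)
assert check_snapshot("save-button", render_button("Save"), snap_dir)
assert not check_snapshot("save-button", render_button("Submit"), snap_dir)
```

When a snapshot test fails, a human still has to decide whether the change is a regression or an intended update, in which case the stored snapshot is replaced.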
Rather than heavy-handedly using every possible kind of test, I prefer to pick and choose the testing technique based on the kind of thing I'm working on. So in one case I'll extract a simple function and unit-test it, but in another case I'll add a component to the component explorer, etc. It depends on the situation.
I haven't found code coverage to be a very useful metric, but others may have found a use for it.
I think the first measure is the number and severity of bugs. Your first priority is probably to have an application that functions correctly. If the application functions correctly, there should be few or no bugs. If there are many or severe bugs, then presumably you are either not testing, or your tests are not effective.
Other than reducing bugs, there are other measures such as performance, usability, accessibility, maintainability, extensibility, etc. These will differ, depending on what kind of application you're building, the business, the end-user, etc.
This is all based on my personal experience and research as well as a great write-up on UI Tests by Ham Vocke.
Window Licker for Swing & Ajax
Of course, the answer is to use MVC and move as much logic out of the GUI as possible.
That being said, I heard from a coworker a long time ago that when SGI was porting OpenGL to new hardware, they had a bunch of unit tests that would draw a set of primitives to the screen and then compute an MD5 sum of the frame buffer. This value could then be compared to known good hash values to quickly determine whether the API is pixel-accurate.
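A rough reconstruction of that idea, under the assumption that rendering is deterministic: draw into a buffer, hash the raw pixels, and compare against a golden digest recorded from a known-good run. Here the "framebuffer" is just a bytearray we draw a rectangle into; any single-pixel difference changes the hash.

```python
# Hash-the-framebuffer sketch: a deterministic render must always produce
# the same MD5 digest, so one string comparison verifies every pixel.

import hashlib

W, H = 64, 64

def draw_rect(buf, x0, y0, x1, y1, value=255):
    for y in range(y0, y1):
        for x in range(x0, x1):
            buf[y * W + x] = value

def framebuffer_hash():
    buf = bytearray(W * H)          # blank 8-bit "framebuffer"
    draw_rect(buf, 16, 16, 48, 48)  # draw one primitive
    return hashlib.md5(bytes(buf)).hexdigest()

golden = framebuffer_hash()         # in practice, stored from a known-good run
assert framebuffer_hash() == golden
```

The trade-off is that the hash tells you *that* the output changed, not *where*; a pixel-level diff is needed to locate the regression.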
There is Selenium RC, which will automate testing a web based UI. It will record actions and replay them. You'll still need to walk through the interactions with your UI, so this will not help with coverage, but it can be used for automated builds.
It's not your job to test the GUI library. So you can skip verifying what is actually drawn on screen and instead check the widgets' properties, trusting the library to render them faithfully.
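In practice that means asserting on widget *state* rather than pixels. The sketch below uses a hypothetical `FakeLabel` standing in for a real toolkit widget; with Tkinter or Qt you would query the real widget's properties (its text, visibility, enabled state) the same way, and trust the toolkit to draw them.

```python
# Test widget state, not pixels: after the logic runs, assert on the
# properties the toolkit will draw from. FakeLabel is illustrative only.

class FakeLabel:
    def __init__(self):
        self.text, self.visible = "", False

def show_validation_result(label, ok):
    # The application logic under test: decides what the label should show.
    label.text = "Saved" if ok else "Invalid input"
    label.visible = True

label = FakeLabel()
show_validation_result(label, ok=False)
assert label.text == "Invalid input"   # trust the toolkit to draw this text
assert label.visible
```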