April 1, 2015

Faking PhantomJS's userAgent in Nightwatch.js

While working on some odd behavior in a PhantomJS-driven Nightwatch.js test, I needed to change the userAgent that PhantomJS was reporting. It turns out there is a simple way to do this in the configuration file:

"desiredCapabilities": {
    "browserName": "phantomjs",
    "javascriptEnabled": true,
    "acceptSslCerts": true,
    "phantomjs.page.settings.userAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36"
}

I suspect a similar technique can be used for many of the PhantomJS settings.
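For example, GhostDriver maps "phantomjs.page.settings.*" capabilities onto the corresponding PhantomJS page settings, so something along these lines should work too (a hedged sketch; I've only verified the userAgent one myself):

"desiredCapabilities": {
    "browserName": "phantomjs",
    "phantomjs.page.settings.loadImages": false,
    "phantomjs.page.settings.resourceTimeout": 10000
}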

Tags: nightwatch phantomjs programming testing

May 23, 2010

Static Const of the reference not the content

In the middle of helping a co-worker debug a unit test, we had a test method that ran fine by itself but failed when run with the rest of the tests in the same class. Looking over the code, there was no obvious reason why the second test method failed only when both were run.

Turned out to be a duh moment. At the top of the unit test a bunch of XML fragments were declared with private static const. The problem was that one of the two test methods was modifying the children of the XML. XML is a complex object, unlike an int or a String, so the const keyword doesn't prevent changes to the structure within the XML; it only prevents the reference from changing (a sketch of the failure mode follows the list below). A couple of simple fixes:

  1. Make the XML fragments class instance variables, since each unit test creates a new instance of the test class
  2. Before using the XML, work on a copy made with its copy() function
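
Here's that sketch (class and fragment names invented); the second test passes on its own but fails after the first has mutated the shared fragment:

private static const FRAGMENT:XML = <root><child/></root>;

public function testAppendsChild():void {
    // appendChild() mutates the XML's internal structure; const only
    // locks the FRAGMENT reference, so this change outlives the test
    FRAGMENT.appendChild(<extra/>);
    assertEquals(2, FRAGMENT.children().length());
}

public function testOriginalShape():void {
    // passes when run alone, fails after testAppendsChild() has run
    assertEquals(1, FRAGMENT.children().length());
}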

Tags: e4x flex testing xml

March 24, 2009

JUnit, Jetty, HtmlUnit, and Automated Testing

I'm a firm believer in automated testing. The more testing you can automate, the less work you need to do to be sure you didn't break stuff each time you make a change. While most people think of unit testing when talking about automated testing, the higher up the interaction chain you can go without making your tests brittle, the better off you will be. On my current project unit test coverage is great, but given that ultimately all of the services get exposed as a web application, being able to automate testing at that level would be beneficial. Below is the approach I took to automate testing our web application.

The key ingredients in the approach are JUnit (testing framework), HtmlUnit (web request handler), and Jetty (embedded application server). Each automated test is written as a JUnit test with a call to a centralized function to start up the web application. To fully isolate each test method you could switch this to an @Before annotation, but I've found that these automated tests are most accurately described as integration tests. Having a single instance running for all tests better simulates the ultimate environment, so I only ever start one server. Each test method is responsible for setting up the environment (i.e. test data) to match what it needs, so that the order of test methods never matters.

public class SampleTest {
    @BeforeClass
    public static void startServer() throws Exception {
        ObjectMother.startServer();
    }
}

The startServer() call in ObjectMother is responsible for starting the server if it isn't already running. I start the server on a random port to verify that the configuration isn't port specific. The base URL for the web application is exposed so that tests making requests know what URL to start from. One issue I ran into is that you might need to play with the physical directory passed to WebAppContext, as running from Eclipse versus Ant might start you in different working directories. To get around this there is a little directory helper that searches for the target directory in the current directory, its parent, and a couple of other locations where it could be found (a sketch follows the ObjectMother listing below).

import org.mortbay.jetty.Server;
import org.mortbay.jetty.webapp.WebAppContext;
public class ObjectMother {
    public static String baseUrl;
    private static Server server;

    public static void startServer() throws Exception {
        if (server == null) {
            // port 0 tells Jetty to pick any free port, which keeps
            // the configuration from becoming port specific
            server = new Server(0);
            server.addHandler(new WebAppContext("build/testapp", "/sample-server"));
            server.start();

            // ask the connector which port it actually bound to
            int actualPort = server.getConnectors()[0].getLocalPort();
            baseUrl = "http://localhost:" + actualPort + "/sample-server";
        }
    }
}
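
The directory helper itself isn't shown here, but a minimal sketch of the idea (class and method names are mine, not from the real project) could look like:

import java.io.File;

public class DirectoryHelper {
    // Search a few candidate base directories for the staged web app,
    // since Eclipse and Ant launch the tests from different working
    // directories.
    public static String find(String target) {
        String[] bases = { ".", "..", "../.." };
        for (String base : bases) {
            File dir = new File(base, target);
            if (dir.isDirectory()) {
                return dir.getPath();
            }
        }
        throw new IllegalStateException("Could not locate " + target);
    }
}

ObjectMother would then pass DirectoryHelper.find("build/testapp") to the WebAppContext constructor instead of the hard-coded path.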

The creation of the "build/testapp" directory referenced above is done by Ant. When running within Eclipse, an external tool builder is called to stage the web application just as if it were being packaged into a war. In particular, the staging done for testing adjusts any deployment parameters so that they are tuned for the test environment.

Now that the server is running, requests can be made against it. This is where HtmlUnit comes in. A typical test looks something like this:

@Test
public void testBasic() throws FailingHttpStatusCodeException, IOException
{
    // build up the request parameters the command expects
    List<NameValuePair> parameters = new ArrayList<NameValuePair>();
    parameters.add(new NameValuePair(SampleCommand.CUSTOMER, ObjectMother.CUSTOMER));
    parameters.add(new NameValuePair(SampleCommand.URI, ObjectMother.URI));
    WebRequestSettings webRequestSettings = new WebRequestSettings(new URL(ObjectMother.baseUrl + "/sample/"));
    webRequestSettings.setRequestParameters(parameters);
    WebClient webClient = new WebClient();
    // don't throw on a non-200 response so the status code can be asserted
    webClient.setThrowExceptionOnFailingStatusCode(false);
    Page page = webClient.getPage(webRequestSettings);

    assertEquals(HttpServletResponse.SC_OK, page.getWebResponse().getStatusCode());
    // additional request specific assertions
}

Frequently the response is XML-based, so in those cases the XmlPage wrapper is used instead:

assertEquals(ServerUtil.XML_CONTENT_TYPE, page.getWebResponse().getContentType());

XmlPage xmlPage = (XmlPage) page;
NodeList nodeList = xmlPage.getXmlDocument().getElementsByTagName(ObjectMother.FAILURE_ELEMENT_NAME);
assertTrue(nodeList.getLength() > 0);

Since Jetty starts up within the JVM, instead of as a separate process, Eclipse's "Debug As > JUnit Test" just works. The web application startup catches Spring configuration errors quickly and ensures that all paths are what they should be.

Tags: htmlunit java jetty junit testing

January 31, 2009

Flex Automation Framework (BFUG Jan 2009)

Eric Hilfer from Tom Snyder Productions is speaking about the Flex Automation Framework and automated testing.

Automation Framework: records and plays back UI events (mouse and keyboard), identifies controls by context to make tests less brittle, and can read object properties. Delegate objects separate the automation logic from the core component (allowing runtime or compile-time inclusion).

How it works: Flex includes SWC libraries compiled in as [Mixin] (during compilation the code gets pulled in). An Agent then interacts with the AutomationManager (and can be 3rd party); the testing tool communicates with the Agent and manages tests, suites, etc. QA uses the tool; the developer has to expose components to the Agent.

Good: smart UI element identification, decouples automation from code, no bloat.
Bad: not fully AIR-aware (no good support for multiple top-level windows), assumes everything is a UIComponent.
Can programmatically detect the presence of the AutomationManager and behave differently, but this is not recommended.

Using it: HP QuickTest Pro (expensive, must run in the IE browser), RIATest (inexpensive agent, demoed below), or build your own. FlexMonkey is another alternative.

When you compile your project, include all of the automation SWCs. Alternatively, if your application can be loaded as just a SWF, load it into a wrapper that has the automation libraries.

Anything derived from UIComponent will just work. Custom extensions must be registered with an XML file: automation name, real class name, events to watch, and properties to expose. Delegates register themselves with the AutomationManager for the class they handle.

Making automation easier: give each button a unique automation name so that automation can find it even if it is moved, as in the example below. Don't use UIDs.
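
For example, in MXML (a minimal sketch):

<mx:Button id="saveButton" automationName="saveButton" label="Save"/>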

RIATest demo: can be run interactively to record tests and set up verifications. It's a native Windows application but can have instances running on other machines reporting to it. Has a command line run mode. Scripts support loops and limited reading of external data files.

Customization of "get automationName()". Names can be context unique, doesn't need to be application unique. Need to wrap custom events with automation events (should be placed in delegate object). Need special handling for native window.

The XRay tool is very useful for debugging.

Tags: automation flex testing ui

December 28, 2008

Unit Testing Flex Application with Fluint (Flex Camp Boston 2008)

Unit Testing Flex Application with Fluint
Mike Nimer, Digital Primates (presenting for Jeff Tapper, who was unable to make it)

Testing is like source control. You don't know how much you need until you start using it.

ColdFusion started out without much testing.

Every bug has to have a unit test that exercises the bug before fixing it. Regression bugs discovered as part of testing are show stoppers.

Testing methods
Manual testing: humans create and run; low cost to start, high cost to maintain; testing happens less often
Automated testing: humans create and computers run; higher cost to develop but lower cost to run; testing happens more often

Testing as part of development
Code until the red goes away
Developers create code and write tests, focus on test driven development
Automated builds check out, build, and run the tests; various tools exist to do that

Test types
Unit tests: individual method testing
Integration tests: testing combinations of components

Tools for testing
FlexUnit: original testing framework, based on JUnit
Fluint: better asynchronous testing, better UI testing support, and sequences

Integration testing is hard: Visual FlexUnit does bitmap comparison; Fluint checks sequences and events

Test methods, test cases, test suites, and a test runner. Various types of assertions. Comes with a runner and has a file watcher to pick up modules and run tests on change.

Asynchronous Testing
Events can occur at any time, so you need to handle them some time later. Fluint uses asyncHandler(), which wraps a function that you pass in as the event listener. You often also want to pass data through to the handler; it arrives as the second argument to the event handler. An asyncHandler() can also be set up as part of the setup step.
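
From memory (check the Fluint docs for the exact signature), a test using asyncHandler() looks roughly like this:

public function testLoadCompletes():void {
    var loader:URLLoader = new URLLoader();
    // asyncHandler() wraps the listener; the test fails if the event
    // doesn't arrive within the timeout, and the passThroughData object
    // shows up as the handler's second argument
    loader.addEventListener(Event.COMPLETE,
        asyncHandler(verifyLoaded, 1000, {minBytes: 1}));
    loader.load(new URLRequest("data.xml"));
}

private function verifyLoaded(event:Event, passThroughData:Object):void {
    assertTrue(URLLoader(event.target).bytesLoaded >= passThroughData.minBytes);
}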

UIComponent Testing
TestCase is a UIComponent, so it can have components added to it as children. You still need to know what events to listen for.
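
A rough sketch of the pattern (again from memory, so treat the details as approximate):

public function testButtonRenders():void {
    var button:Button = new Button();
    // wait for the Flex creation life cycle to finish before asserting
    button.addEventListener(FlexEvent.CREATION_COMPLETE,
        asyncHandler(verifyButton, 1000), false, 0, true);
    button.label = "Save";
    addChild(button); // legal because the TestCase is itself a UIComponent
}

private function verifyButton(event:FlexEvent, passThroughData:Object):void {
    assertEquals("Save", Button(event.target).label);
}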

Sequences
String together many property sets and event waits.
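
I believe the sequence API looks roughly like this (hedged; verify against the Fluint source):

var sequence:SequenceRunner = new SequenceRunner(this);
// set a property, wait for the resulting event, then assert
sequence.addStep(new SequenceSetter(textInput, {text: "abc"}));
sequence.addStep(new SequenceWaiter(textInput, FlexEvent.VALUE_COMMIT, 100));
// verifyText(event, passThroughData) would assert on textInput.text
sequence.addAssertHandler(verifyText, null);
sequence.run();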

Test meta-data flag
Meta-data is copied over and available in the reports. Also produces an XML file in a JUnit-like format.

Test filtering and sorting
Only run certain tests; there is a headless test runner

Configuring Fluint and test setup: Writing libraries can have another test application. Flex projects can't link so

Migrating from FlexUnit to Fluint? Best to ask on the forums.

Mocking data? Nothing is tied into Fluint, but the framework can support it.

Code coverage: use FlexCover.

AIR test runner: watches for directory changes and runs the tests.

Tags: flex flexcampboston2008 flexunit fluint testing

May 28, 2008

Adobe's David Coletta on Buzzword's Automated Testing Framework

This past Tuesday the Boston Flex User Group held its second meeting. David Coletta from Adobe's Buzzword team spoke about Automated Testing Framework. This was a deeper dive into the topic he presented at Flex Camp Boston 2007. I counted about 50 people at the meeting. It was a very interactive session with a bunch of good questions coming from the audience. Below are my rough notes from the presentation.

David has a long background in development but hadn't programmed in Flash. He sees work done in Flash and wonders how it was possible, and gets the same reaction to work he's done in Flex.

Rough Buzzword timeline: Jan 2006, started coding; Oct 2006, first public demo; Fall 2007, opened up for real use.

He gave a quick demo of Buzzword. On startup it flags issues with new unsupported browsers; they have support for the latest Safari but that support isn't released yet. A big advantage of an online word processor is that, like Google Docs, you can get at it from anywhere. It's better than Google Docs because you aren't confined to email-like editing capabilities.

The document browser supports sorting by last changed, last viewed, role on document, date, and number of pages. Google can't do pagination. He showed picture import, resize, and text wrapping around the picture; layout is all done on the fly. It has a nice ruler that no one knows about.

Three models are used (in the MVC sense): document (persistent user content, hierarchical; moving text is just a tree operation), layout (transient position of content, updated after every keystroke), and display (Flash/Flex objects on the display list, mostly tracking what is part of the Flash player).

The primary loop is: hit a character, it's inserted into the document, events fire, and that triggers layout, visuals, and redraw.

Document internals: Containment (document) versus Inheritance (run).

Buzzword has a built-in inspector to view and set properties manually and expose the raw data structure. Typically when using a tree component you hook it up to XML; with ITreeDataDescriptor you can plug directly into your own custom data structures. Your application should have debugging tools built into it. Flex's easy-to-use out-of-the-box components, even without much styling, are a good enough interface to create these tools quickly.

Question about why someone should trust Buzzword with documents: basically you just need to "trust us". The SWFs are not served over SSL but all documents are. Adobe practices good data security. Ask "how bad would a security breach be for Adobe?" and understand that they don't want that to happen.

They created a way to test the real application through a system called "LiveTest". They needed to test the correctness of the document model after every command. It's not a replacement for QTP, as it is focused on the state of the internal models. Internally everything is done via the command pattern.

To test a feature it records the document model after changes to create a known good state. Windows/IE is used as the platform, with an ActiveX control writing the state out into files. The tests can later be rerun to verify that the same results are produced each time.

Question about how much effort it took to build the test harness? Not sure, maybe 10% of total time.

A comment was made about the inability to type in full screen mode. It's a security concern, which is why it's disallowed.

Commands run during a live test know how to output an ActionScript version of themselves that can be pasted into a .as file to create the test file. They could send it to a server to compile and get back a SWF, but they wanted a lighter-weight solution. Pieces of a test are broken into anonymous functions, one for each part. This is done to work around the issue that the Flash player freezes if ActionScript runs for too long: it's cooperative multitasking, so the separate functions can run and return control back to the UI.

Question about generating ActionScript code versus interpreting XML: the tests go through an API on top of Buzzword to make testing possible, and it was built this way to ensure you end up with a good API. Being able to set breakpoints in code, versus XML, is very handy.

New code needs to be referenced, so they have a file that pulls in all of the tests to ensure they are compiled in.

Tests are trained manually: XML snapshots of the document and layout are taken and written to disk. The XML documents have representations of the model objects. The tool shows a green thumbs up or a red thumbs down. There's good discipline about running all tests before doing a check-in.

Test success is based on the final XML matching the checked-in XML. On a test failure it outputs error files, and they do a text compare of expected and actual. Almost no test is visually analyzed; they mostly look at the raw XML.

Question about sharing the tests? Yes, every developer has all tests, checked into source control.

Question about if you started over today would you change anything? Wouldn't use IE to write files, probably use AIR instead. Might consider an XML representation for capturing test steps. Do use FlexUnit for lower level API testing.

Question about why not QTP? Why doesn't Adobe provide more testing frameworks? Testing tool capturing at a much lower level than QTP. Could refactor testing tool to be more generic.

Question about how something gets retrained? A new kind of break causes the document model to change, which may cause new XML to be written out and tests to fail. Then you need to generate new XML.

Question about how much of the code is covered by the tests? Look to FlexCover.

Question about handling documents from older versions? Can upgrade documents when opened. Have tests that exercise that process. May have fringe cases that cause problems but hasn't been an issue, yet.

Document organizer testing is harder, need to get into correct state, initialize test user with known set of documents.

Browser tests are only running in IE. Haven't really looked at memory profile under multiple browsers.

Tags: buzzword flex testing

December 31, 2007

Automated Visual Testing of Components

I'm happy to relay the announcement of the first public release of a new visual testing framework built on FlexUnit and AIR, known as Visual FlexUnit. Some of you may have heard me reference this framework at my recent talks on Continuous Integration with Flex, FlexUnit, and Ant. Douglas McCarroll, a recent intern at Allurent (the company I work for), developed the release. Here is what he had to say:

Visual FlexUnit (VFU) is a FlexUnit extension that allows you to do automated testing of components’ visual appearance using “visual assertions”. In a nutshell, a visual assertion asserts that a component’s appearance is identical to a stored baseline image file. These visual tests can be run in either a GUI or an Ant-based build process.

Another great tool to help with automating your testing.

Tags: flexunit testing vfu

March 11, 2007

Asynchronous Testing with FlexUnit

When testing components that have asynchronous behavior you need to use FlexUnit's addAsync() method in order to correctly handle the events that fire. The reason is that unless you tell FlexUnit you are expecting an asynchronous event, once your test method finishes FlexUnit will assume the test is done and there were no errors. This can lead to false positives and annoying popup error dialogs when an assert fails. Below are some examples of how to use addAsync().

I'll start off with what not to do so you can get an idea of why you need addAsync(). Let's write a simple test that verifies that the flash.utils.Timer class fires events the way we think it should. The first attempt might look like this:

package com.example {
    import flexunit.framework.TestCase;
    import flash.utils.Timer;
    import flash.events.TimerEvent;

    public class TimerTest extends TestCase {
        private var _timerCount:int;

        override public function setUp():void {
            _timerCount = 0;
        }

        public function testTimer():void {
            var timer:Timer = new Timer(3000, 1);
            timer.addEventListener(TimerEvent.TIMER, incrementCount);
            // the next line should not be written like this, it can produce false positives
            timer.addEventListener(TimerEvent.TIMER_COMPLETE, verifyCount);
            timer.start();
        }

        private function incrementCount(timerEvent:TimerEvent):void {
            _timerCount++;
        }

        private function verifyCount(timerEvent:TimerEvent):void {
            assertEquals(1, _timerCount);
        }
    }
}

Nothing fancy here: we declare a single test method, create the Timer, and then start it. It should run once and then stop. If you add this test to your test suite and run it, you get a nice green bar. This is a false positive! The assertEquals() in the verifyCount() method didn't really contribute to the test passing. Try changing it to:

assertEquals(2, _timerCount);

When you rerun the test you'll get another green bar. Then a few seconds later an error dialog box pops up with the following message:

Error: expected:<2> but was:<1>
    at flexunit.framework::Assert$/flexunit.framework:Assert::failWithUserMessage()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:209]
    at flexunit.framework::Assert$/flexunit.framework:Assert::failNotEquals()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:62]
    at flexunit.framework::Assert$/assertEquals()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:54]
    at com.example::TimerTest/com.example:TimerTest::verifyCount()[C:\Work\Eclipse3.2\Fresh\AsyncTest\src\com\example\TimerTest.as:26]
    at flash.events::EventDispatcher/flash.events:EventDispatcher::dispatchEventFunction()
    at flash.events::EventDispatcher/dispatchEvent()
    at flash.utils::Timer/flash.utils:Timer::tick()

But, but, the test bar was green! FlexUnit didn't know you were waiting for an event; as a result, when the testTimer() method finished, the test was considered clean. The error dialog pops up because FlexUnit isn't around to catch the error and turn it into a pretty message. This also shows that even though the test finished, our object was still running in the background. We can fix this first issue by adding a tearDown() method and using addAsync() to wrap the function that should be called, along with specifying the maximum time to wait for that function to be called. The function returned by addAsync() is used in place of your original function. In the example above, the second listener function passed into the addEventListener() call is where addAsync() comes into play. The changes to TimerTest look like this:

private var _timer:Timer;

override public function tearDown():void {
    _timer.stop();
    _timer = null;
}

public function testTimer():void {
    _timer = new Timer(3000, 1);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 3500));
    _timer.start();
}

I've taken the original function, wrapped it in addAsync(), and added the maximum time that the function can take to be called. Since my Timer is set to run in 3000ms I gave myself a little buffer. With this change, and the assert testing for a count of 2, I get a red bar when the test runs. Additionally, I added a tearDown() method so that regardless of what happens in the test the Timer will stop running. When doing asynchronous testing it is important to clean up like this, otherwise you can get objects hanging around in memory doing things you don't want. On that note, I'd also recommend that the event listeners added to the Timer use weak references. That way, when the tearDown() function nulls out the Timer, the object can be garbage collected. The updated code looks like this:

_timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
_timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 1500), false, 0, true);

Now that we've added that timeout to the verifyCount() call, you'll also get a red bar if the function isn't called within the timeout specified. Drop the timeout to 1500ms and rerun the test. You should now get the following error:

Error: Asynchronous function did not fire after 1500 ms
    at flexunit.framework::Assert$/fail()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:199]
    at flexunit.framework::AsyncTestHelper/runNext()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\AsyncTestHelper.as:96]
    at flexunit.framework::TestCase/flexunit.framework:TestCase::runTestOrAsync()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestCase.as:271]
    at flexunit.framework::TestCase/runMiddle()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestCase.as:192]
    at flexunit.framework::ProtectedMiddleTestCase/protect()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\ProtectedMiddleTestCase.as:54]
    at flexunit.framework::TestResult/flexunit.framework:TestResult::doProtected()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:237]
    at flexunit.framework::TestResult/flexunit.framework:TestResult::doContinue()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:109]
    at flexunit.framework::TestResult/continueRun()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:79]
    at flexunit.framework::AsyncTestHelper/timerHandler()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\AsyncTestHelper.as:121]
    at flash.utils::Timer/flash.utils:Timer::_timerDispatch()
    at flash.utils::Timer/flash.utils:Timer::tick()

That's the quick introduction to addAsync(). Now on to some of the additional features of addAsync() that can be helpful. Say we wanted to test that a Timer correctly fires twice within a specified period of time. The incrementCount() function can easily be reused, but the verifyCount() method has a hard-coded assertEquals() in it. The optional third argument to addAsync() is called passThroughData, and it can be anything. In this case I'm going to pass in what I expect the count to be when the Timer finishes, allowing me to reuse the same verify function. With these changes the functions now look like this:

public function testTimer():void {
    _timer = new Timer(1000, 1);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 1500, 1), false, 0, true);
    _timer.start();
}

public function testTimer2():void {
    _timer = new Timer(500, 2);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 1500, 2), false, 0, true);
    _timer.start();
}

private function verifyCount(timerEvent:TimerEvent, expectedCount:int):void {
    assertEquals(expectedCount, _timerCount);
}

In both cases I'm passing along the expected count and have created a generic verify method. The object that you pass can be anything, which makes this a very powerful way to write generic asynchronous event verifiers. But wait, there's more!

The last optional argument to addAsync() is a way to verify that the event didn't happen. Instead of getting a red bar and the "Error: Asynchronous function did not fire after 1500 ms" message, you can verify that, based on the event not firing, everything is as it should be. For example, I could verify that after 1500ms a 1000ms Timer fired only one event and the complete event never fired. The last version of the test class below does that. I also modified things a little since, if you include passThroughData, it gets passed to both the regular function and the failFunction. This isn't a bad thing, as it provides a way to write a generic failFunction handler just like the generic verify handler. Note that the failFunction handler doesn't get an event passed in the way the normal listener function does, since the event never fired.

package com.example {
    import flexunit.framework.TestCase;
    import flash.utils.Timer;
    import flash.events.TimerEvent;

    public class TimerTest extends TestCase {
        private var _timerCount:int;
        private var _timer:Timer;

        override public function setUp():void {
            _timerCount = 0;
        }

        override public function tearDown():void {
            // cleanup to make sure the Timer doesn't keep running
            _timer.stop();
            _timer = null;
        }

        public function testTimer():void {
            runTimer(1000, 1, 1500, 1);
        }

        public function testTimer2():void {
            runTimer(500, 2, 1500, 2);
        }

        public function testNotDone():void {
            runTimer(1000, 2, 1500, -1, 1);
        }

        private function runTimer(delay:int, count:int, timeout:int, goodCount:int, badCount:int = -1):void {
            _timer = new Timer(delay, count);
            // use weak references to make sure the Timer gets GC when we are done
            _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
            // pass both a good and a bad handler along with counts so the
            // verify methods can detect what happened
            _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyGoodCount, timeout, {goodCount: goodCount, badCount: badCount}, verifyBadCount), false, 0, true);
            _timer.start();
        }

        private function incrementCount(timerEvent:TimerEvent):void {
            _timerCount++;
        }

        private function verifyGoodCount(timerEvent:TimerEvent, extraData:Object):void {
            if (extraData.goodCount == -1)
            {
                fail("Should have failed");
            }
            assertEquals(extraData.goodCount, _timerCount);
        }

        private function verifyBadCount(extraData:Object):void {
            if (extraData.badCount == -1)
            {
                fail("Should not have failed");
            }
            assertEquals(extraData.badCount, _timerCount);
        }
    }
}

Feel free to play with this final version to test how different asynchronous scenarios are handled in FlexUnit.

Some additional notes:
Make sure the listener function name doesn't start with "test", otherwise FlexUnit will think it is a test method and try to run it.

I've noticed some odd behavior with multiple addAsync()s set at the same time. In general you only want to have one outstanding addAsync() at a time.

If you specify passThroughData and a failFunction, the passThroughData will get passed to both the function and the failFunction.

Make sure that you clean up any objects that could still fire events in your tearDown() method, and either remove event listeners or make them weak. There is more to this topic but I'll cover that in another post.

Tags: as3 flex flexunit testing

June 30, 2006

AS3 Code Coverage

Note: A code coverage tool is now available at http://code.google.com/p/flexcover/.

This may be a little crazy but I'm going to throw it out there. I've been thinking about a way to get code coverage reports for ActionScript 3. We make heavy use of FlexUnit at work, but I'm always curious just how much of our code is really covered by the tests. I've not found anything about code coverage tools in the few Google searches that I've run, hence my search for other solutions.

My current idea is to leverage the Flex debugger (fdb). Something along the lines of creating a breakpoint on every line in every file that you want to check coverage of. As the breakpoints get hit, remove them. Once all of the test code is done running, see which breakpoints haven't been hit, as sketched below.
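
Conceptually the fdb session would look something like this (file name and line numbers made up; I'd need to verify exactly how fdb reports which breakpoints have been hit):

(fdb) break Calculator.as:10
(fdb) break Calculator.as:11
(fdb) continue
(fdb) info breakpoints

Any line whose breakpoint was never hit by the end of the test run is uncovered.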

This isn't elegant by any means, but it should be possible to test the theory with a few hours' work. If someone knows of a tool that already does this I'd love to hear about it. Otherwise I'll see how my hacking goes.

Tags: as3 fdb flex testing