December 28, 2008

Unit Testing Flex Application with Fluint (Flex Camp Boston 2008)

Unit Testing Flex Application with Fluint
Mike Nimer, Digital Primates (presented for Jeff Tapper who was unable to make it)

Testing is like source control. You don't know how much you need until you start using it.

ColdFusion started out without much testing.

Every bug has to have a unit test that exercises the bug before fixing it. Regression bugs discovered as part of testing are show stoppers.

Testing methods
Manual testing: humans create and run the tests; low cost to start, high cost to maintain; testing happens less often
Automated testing: humans create and computers run the tests; higher cost to develop, lower cost to run; testing happens more often

Testing as part of development
Code until the red goes away
Developers create code and write tests; focus on test-driven development
Automated checkout, build, and test running; various tools exist to do that

Test types
Unit tests: individual method testing
Integration tests: testing combinations of components

Tools for testing
FlexUnit: Original testing framework, based on JUnit
Fluint: Better asynchronous testing, better UI testing support, and sequences

Integration testing is hard: Visual FlexUnit does bitmap comparison;
Fluint checks sequences and events

Test methods, test cases, test suites, and test runner. Various types of assertions. Comes with a runner and has a file watcher to pick up modules and run tests on change.

Asynchronous Testing
Events can occur at any time, so you need to handle them sometime later. Fluint uses asyncHandler(), which returns a function that you pass as the event listener. You can also pass data through to the handler; it arrives as the second argument to the event handler. An asyncHandler can also be set up as part of the setUp step.
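
Here is a minimal sketch of what that might look like in a Fluint test case. The package name (net.digitalprimates.fluint.tests.TestCase) and the asyncHandler() signature are from my recollection of Fluint's documentation, so treat them as assumptions rather than verified API; setTimeout() just stands in for whatever would really fire the event later.

package com.example.test {
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.utils.setTimeout;

    // Assumed Fluint package/class name; check the Fluint docs for the real one.
    import net.digitalprimates.fluint.tests.TestCase;

    public class AsyncFluintTest extends TestCase {
        private var dispatcher:EventDispatcher;

        public function testComplete():void {
            dispatcher = new EventDispatcher();
            // asyncHandler() wraps the listener and tells the framework to wait
            // up to the timeout (in ms); the third argument is data that gets
            // passed through to the listener as its second parameter.
            dispatcher.addEventListener(Event.COMPLETE,
                asyncHandler(handleComplete, 500, {expected: "done"}));
            // Fire the event asynchronously, 100 ms from now.
            setTimeout(dispatcher.dispatchEvent, 100, new Event(Event.COMPLETE));
        }

        private function handleComplete(event:Event, passThroughData:Object):void {
            assertEquals("done", passThroughData.expected);
        }
    }
}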

UIComponent Testing
TestCase is a UIComponent so it can have components added to it as a child. Still need to know what events to listen for.

Sequences
String together many property sets and event waits.
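
A rough sketch of a sequence test follows. The class names (SequenceRunner, SequenceSetter, SequenceWaiter) and the assert handler signature are recalled from Fluint's documentation and should be treated as assumptions; assume _textInput is a TextInput that was added as a child of the test case in setUp() and has finished creation.

public function testSetText():void {
    var sequence:SequenceRunner = new SequenceRunner(this);

    // Step 1: set the text property; step 2: wait (up to 100 ms) for the
    // valueCommit event that the property change triggers.
    sequence.addStep(new SequenceSetter(_textInput, {text: "hello"}));
    sequence.addStep(new SequenceWaiter(_textInput, FlexEvent.VALUE_COMMIT, 100));

    // Once every step has run, the assert handler is called with the
    // pass-through data as its second argument.
    sequence.addAssertHandler(handleSetText, _textInput);
    sequence.run();
}

private function handleSetText(event:Event, passThroughData:Object):void {
    assertEquals("hello", TextInput(passThroughData).text);
}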

Test meta-data flag
Meta-data is copied over and available in the reports. Also produces an XML file in a JUnit-like format.

Test filter and sorting
Only run certain tests, headless test runner

Configuring Fluint and test setup: when writing libraries you can have a separate test application. Flex projects can't link so

Migrating from FlexUnit to Fluint? Best to ask on the forums.

Mocking data? Nothing tied into Fluint, but the framework can support it.

Code coverage: use FlexCover.

AIR test runner: watches for directory changes and runs the tests.

Tags: flex flexcampboston2008 flexunit fluint testing

August 27, 2008

Odd addAsync() behavior when handling asynchronous events

There is some subtle behavior in FlexUnit's addAsync() that can lead to false positives and false negatives. In each case code that one expects to be called fails to run, which means that your tests may not be covering everything that you hoped they would. To kick off this discussion I'll present some code that should fail, but doesn't.

package com.neophi.test {
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flexunit.framework.TestCase;
    import mx.core.Application;

    public class AsyncTest extends TestCase {
        private var count:int;
        private var eventDispatcher:EventDispatcher;

        public function testAsyncLater():void {
            eventDispatcher = new EventDispatcher();
            eventDispatcher.addEventListener(Event.COMPLETE, addAsync(stage1, 100));
            Application.application.callLater(eventDispatcher.dispatchEvent, [new Event(Event.COMPLETE)]);
        }

        private function stage1(event:Event):void {
            count++;
            if (count == 1) {
                // This code will not cause the listener function to be called again
                Application.application.callLater(eventDispatcher.dispatchEvent, [new Event(Event.COMPLETE)]);
            } else {
                fail("Fail");
            }
        }
    }
}

This example creates an addAsync()-wrapped function to listen for Event.COMPLETE. The application's callLater() is used to simulate the asynchronous firing of that event at some point in the future. The first time stage1() is called, the count increments and another event is fired. When this second event is handled the test should fail, but it doesn't. The reason is that another addAsync() wasn't registered with FlexUnit. When the second event fires, FlexUnit quietly ignores it, leading to a false positive. To correct this situation we need to register a new addAsync() that listens for the second event. Luckily FlexUnit's addAsync() always returns the same function reference, so there is no need to remove the first listener; calling addEventListener() again just updates the existing listener. The rewritten stage1() function looks like this:

private function stage1(event:Event):void {
    count++;
    if (count == 1) {
        // This code works correctly
        eventDispatcher.addEventListener(Event.COMPLETE, addAsync(stage1, 200));
        Application.application.callLater(eventDispatcher.dispatchEvent, [new Event(Event.COMPLETE)]);
    } else {
        fail("Fail");
    }
}

This test now correctly fails when run. The false negative happens if we forget to use callLater() to fire the second event. An unexpected failure message results if stage1() is changed as follows:

private function stage1(event:Event):void {
    count++;
    if (count == 1) {
        eventDispatcher.addEventListener(Event.COMPLETE, addAsync(stage1, 200));
        // This code will cause an asynchronous function timeout error
        eventDispatcher.dispatchEvent(new Event(Event.COMPLETE));
    } else {
        fail("Fail");
    }
}

The test fails with the message: "Error: Asynchronous function did not fire after 200 ms". Internally FlexUnit checks whether it should be waiting for an event; if it isn't expecting one, the new event is discarded. In this case I find the failure message misleading. It wasn't that the asynchronous function wasn't called, just that it wasn't called asynchronously. This scenario can easily crop up if you are using mock objects and forget to delay an operation that normally fires events asynchronously, like simulating an HTTP GET request. Using the application's callLater() is a quick way to introduce the needed asynchronous behavior.
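
For example, a mock delegate might look something like the sketch below. MockDataDelegate and its load() method are hypothetical names used only for illustration; the callLater() trick is the same one used in the tests above.

package com.neophi.test {
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import mx.core.Application;

    // Hypothetical mock standing in for a delegate that would normally issue
    // an HTTP GET and dispatch COMPLETE when the response arrives.
    public class MockDataDelegate extends EventDispatcher {
        public var data:String;

        public function load():void {
            data = "canned result";
            // Dispatch on a later frame instead of synchronously, so a test
            // listening with addAsync() sees a genuinely asynchronous event.
            Application.application.callLater(dispatchEvent, [new Event(Event.COMPLETE)]);
        }
    }
}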

Tags: addasync asynchronous flexunit

August 27, 2008

New Event Testing Framework in FlexUnit

Part of the new FlexUnit release is a new TestCase subclass called EventfulTestCase. It provides convenience methods for testing objects that emit events. This new class can streamline your testing code and help catch unexpected events. Below is some sample code that demonstrates this new framework.

To use the new event testing framework subclass EventfulTestCase instead of TestCase.

package com.neophi.test {
    import flexunit.framework.EventfulTestCase;

    public class EventTest extends EventfulTestCase {
        // Test code here
    }
}

In a particular test method, once you have the object whose events you want to make assertions about, you call a new helper function called listenForEvent(). It takes three arguments:

  • source: the object that is to be listened on for the dispatched event
  • type: the type of event that the source object might dispatch
  • expected: whether the event is expected to be dispatched or not; defaults to EVENT_EXPECTED, the other choice is EVENT_UNEXPECTED (both defined in EventfulTestCase)

After your code has exercised the source to emit events, you make a call to assertEvents() with an optional message to make sure the events happened as planned. A quick example of a positive test for an event would look like this:

public function testEventExpectedPass():void {
    var eventEmittingObject:EventDispatcher = new EventDispatcher();

    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_EXPECTED);
    eventEmittingObject.dispatchEvent(new Event(Event.COMPLETE));

    assertEvents("Expecting complete event");
}

After the event source is set up, we notify the testing framework what event we are expecting, emit the event, and then assert that it happened. In a more typical example the dispatchEvent() call would be part of the code under test. Also, you aren't limited to listening for just a single event at a time; this example is just demonstrating the test framework API. Since the third argument to listenForEvent() defaults to EVENT_EXPECTED, it could have been omitted here.

If an expected event doesn't get dispatched, the test harness will report an error as this next test demonstrates.

public function testEventExpectedFail():void {
    var eventEmittingObject:EventDispatcher = new EventDispatcher();

    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_EXPECTED);

    assertEvents("Expecting complete event");
}

Testing for unexpected events works the same way but the fail and pass conditions are reversed.

public function testEventUnexpectedPass():void {
    var eventEmittingObject:EventDispatcher = new EventDispatcher();

    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_UNEXPECTED);

    assertEvents("Not expecting complete event");
}

In this case the COMPLETE event is unexpected so by not firing the event the assertion passes, which is what we want. Just to flesh out the example, if an event is unexpected but does fire the assertEvents() call will fail as in this example:

public function testEventUnexpectedFail():void {
    var eventEmittingObject:EventDispatcher = new EventDispatcher();

    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_UNEXPECTED);
    eventEmittingObject.dispatchEvent(new Event(Event.COMPLETE));

    assertEvents("Not expecting complete event");
}

In this case, because an unexpected event fired, the assertion fails.

Besides listenForEvent() and assertEvents(), two properties are exposed for examining the events that are listened for: lastDispatchedExpectedEvent holds the last event captured by listenForEvent() and dispatchedExpectedEvents holds all of the events captured. This last example shows the ability to listen for multiple events in different states, how events not registered with the listener are ignored, and uses these two convenience properties.

public function testMultipleEventsPass():void {
    var eventEmittingObject:EventDispatcher = new EventDispatcher();

    listenForEvent(eventEmittingObject, Event.INIT, EVENT_EXPECTED);
    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_EXPECTED);
    listenForEvent(eventEmittingObject, ErrorEvent.ERROR, EVENT_UNEXPECTED);

    eventEmittingObject.dispatchEvent(new Event(Event.INIT));
    eventEmittingObject.dispatchEvent(new Event(Event.CHANGE));
    eventEmittingObject.dispatchEvent(new Event(Event.COMPLETE));

    assertEvents("Multiple events");

    assertEquals(Event.COMPLETE, lastDispatchedExpectedEvent.type);
    assertEquals(2, dispatchedExpectedEvents.length);
}

Some closing notes:


  • The order that the events are added by listenForEvent() doesn't matter. The asserts just check that events were or were not fired.

  • The event assertion framework will work with asynchronous events. You'll just want to wait to call assertEvents() in the final handler for your test (see the sketch after this list).

  • The number of times that an event is dispatched doesn't matter.

  • assertEvents() can be called multiple times.
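
As promised above, here is a sketch of how the asynchronous case might look: it combines listenForEvent() with addAsync() from the base TestCase and only calls assertEvents() once the final handler runs. The callLater() call stands in for whatever would really make the dispatch asynchronous.

private var eventEmittingObject:EventDispatcher;

public function testAsyncEvents():void {
    eventEmittingObject = new EventDispatcher();
    listenForEvent(eventEmittingObject, Event.COMPLETE, EVENT_EXPECTED);

    // addAsync() (inherited from TestCase) keeps the test alive until the
    // COMPLETE listener runs; callLater() makes the dispatch asynchronous.
    eventEmittingObject.addEventListener(Event.COMPLETE, addAsync(handleComplete, 500));
    Application.application.callLater(eventEmittingObject.dispatchEvent, [new Event(Event.COMPLETE)]);
}

private function handleComplete(event:Event):void {
    // The final handler is where assertEvents() belongs.
    assertEvents("Expecting complete event");
}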

Tags: as3 events flexunit

August 27, 2008

FlexUnit Moved to Adobe's Open Source Site

Adobe has decided to move FlexUnit to their Open Source site. You can now download and find FlexUnit at http://opensource.adobe.com/wiki/display/flexunit/Flexunit. As part of this move, they have also released a new version. The change history is a little sparse so if I find anything of note I'll write up a post.

Tags: flexunit

August 27, 2008

New FlexUnit UI

The newest version of FlexUnit released on Adobe's Open Source site includes a spiffy new UI. It's much easier to see what test failed, where, and why. It also includes a feature to help identify tests that don't make any assertions! Below are a couple of screen shots showing the new UI in action.

Identify tests that don't make any assertions:

More easily see how a test failed and where:

Tags: flexunit

January 9, 2008

New FlexUnit Flex Cookbook Entries

I've added some more advanced FlexUnit cookbook recipes that cover testing visual components and automated TestSuite generation. These can be found at Testing Visual Components with FlexUnit and Automated TestSuite Generation respectively.

Tags: antennae flexcookbook flexunit

December 31, 2007

Headless Linux Automated FlexUnit Testing Idea

When using Antennae or some other tool to run automated FlexUnit tests on Linux, it should be possible to use xvfb to create the display that the standalone Flash Player needs to run. With this approach you avoid the need to have a full X instance running while still being able to use the standalone Flash Player to run the test SWF. I won't claim that this is any great insight, but I found a piece of paper on my desk with xvfb scribbled on it and I figured this was as good a place as any to record it :)

Tags: flex flexunit linux

December 31, 2007

Flex Camp Boston 2007: FlexUnit Testing with Cairngorm

Rough draft notes

Thomas Burleson is talking about FlexUnit Testing with Cairngorm

Looking at StormWatch and TestStorm used to test that information.

Showing demonstration of StormWatch. Loading XML data of stock market information.

Unit Testing: Test all the functionality of a single unit.
Continuous Testing: Test all units each cycle (each commit). Locally.
Continuous Integration: Test all units created by other developers.

Test the API the server exposes, transformations in Flex, business logic, event processing, and model updates. Not testing the UI (hard to do).

Custom extensions to FlexUnit and Cairngorm to aid testing. Should be live on Google code early next year. Universal Mind (UM) Cairngorm.

The traditional testing approach requires extracting code from the application. Bad because: tests are performed outside MVC, it requires many test shells, and boundary conditions are not tested well. Focus on the human issue (functional tests) versus the tool issue (unit tests).

Uses parallel but different base package naming com..commands.* versus tests..commands.*. 1-to-1 for package, classes, functions.

UM Cairngorm adds: view responders, command event dispatching, delegate queues, batch events, command logic aggregation, and FlexUnit support.

Different unit testing frameworks include: ASUnit, FlexUnit, dpUInt, and UM Wrappers (builds on FlexUnit).

Cairngorm Explorer is a handy application.

Showing how the unit tests are constructed and run. Importance of testing with events to mimic what the UI is doing as much as possible.

They have a Synchronizer that watches for code changes and creates stubs automatically. It may not be open sourced, but the ideas are good to think about.

Tags: cairngorm flex flexcampboston2007 flexunit

December 31, 2007

Automated Visual Testing of Components

I'm happy to relay the announcement of the first public release of a new visual testing framework built on FlexUnit and AIR known as Visual FlexUnit. Some of you may have heard me make reference to this framework at my recent talks on Continuous Integration with Flex, FlexUnit, and Ant. Douglas McCarroll, a recent intern at Allurent (the company I work for), developed the release. Here is what he had to say:

Visual FlexUnit (VFU) is a FlexUnit extension that allows you to do automated testing of components’ visual appearance using “visual assertions”. In a nutshell, a visual assertion asserts that a component’s appearance is identical to a stored baseline image file. These visual tests can be run in either a GUI or an Ant-based build process.

Another great tool to help with automating your testing.

Tags: flexunit testing vfu

November 30, 2007

FlexUnit Flex Cookbook Entries

I've posted round one of some Flex Cookbook entries that focus on FlexUnit. I welcome any feedback you have about them.

Tags: flexcookbook flexunit

November 30, 2007

More FlexUnit Flex Cookbook Entries

I added a couple more introductory FlexUnit recipes to the Flex Cookbook. They are titled Understanding FlexUnit Output and Run code before and/or after every test. Next up will be some intermediate recipes.

Tags: flexcookbook flexunit

October 24, 2007

Continuous Integration with Flex, FlexUnit, and Ant

I've posted a version of my slides (0.5MB PDF) that were used for my Flashforward talk and MAX talk under the titles "Automation of Flex Building and Unit Testing" and "Continuous Integration with Flex, FlexUnit, and Ant" respectively.

The demonstration files used in the talk are mostly equivalent to the tutorial and template directories that are included in Antennae.

The resources mentioned in the talk are listed below:

Tags: ant flashforward2007boston flex flexunit max2007chicago

September 26, 2007

Flashforward Boston Talk Time Change: Automation of Flex Building and Unit Testing

Due to a scheduling conflict the time of my talk has been moved up a slot. The new details: Grand Ballroom, Salon E: Friday, September 21, 2007 1:30 PM-2:45 PM.

Tags: ant antennae flashforward2007boston flex flexunit

March 11, 2007

Asynchronous Testing with FlexUnit

When testing components that have asynchronous behavior you need to use FlexUnit's addAsync() method in order to correctly handle the events that fire. The reason for this is that unless you tell FlexUnit that you are expecting an asynchronous event, once your test method finishes FlexUnit will assume that the test is done and there were no errors. This can lead to false positives and annoying popup error dialogs when an assert fails. Below are some examples of how to use addAsync().

I'll start off with what not to do so you can get an idea of why you need addAsync(). Let's write a simple test that verifies that the flash.utils.Timer class fires events the way we think it should. The first attempt might look like this:

package com.example {
    import flexunit.framework.TestCase;
    import flash.utils.Timer;
    import flash.events.TimerEvent;

    public class TimerTest extends TestCase {
        private var _timerCount:int;

        override public function setUp():void {
            _timerCount = 0;
        }

        public function testTimer():void {
            var timer:Timer = new Timer(3000, 1);
            timer.addEventListener(TimerEvent.TIMER, incrementCount);
            // the next line should not be written like this, it can produce false positives
            timer.addEventListener(TimerEvent.TIMER_COMPLETE, verifyCount);
            timer.start();
        }

        private function incrementCount(timerEvent:TimerEvent):void {
            _timerCount++;
        }

        private function verifyCount(timerEvent:TimerEvent):void {
            assertEquals(1, _timerCount);
        }
    }
}

Nothing fancy here, we declare a single test method, create the Timer, and then start it. It should run once and then stop. If you add this test to your test suite and run it, you get a nice green bar. This is a false positive! The assertEquals() in the verifyCount() method didn't really contribute to the test passing. Try changing it to:

assertEquals(2, _timerCount);

When you rerun the test you'll get another green bar. Then a few seconds later an error dialog box pops up with the following message:

Error: expected:<2> but was:<1>
    at flexunit.framework::Assert$/flexunit.framework:Assert::failWithUserMessage()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:209]
    at flexunit.framework::Assert$/flexunit.framework:Assert::failNotEquals()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:62]
    at flexunit.framework::Assert$/assertEquals()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:54]
    at com.example::TimerTest/com.example:TimerTest::verifyCount()[C:\Work\Eclipse3.2\Fresh\AsyncTest\src\com\example\TimerTest.as:26]
    at flash.events::EventDispatcher/flash.events:EventDispatcher::dispatchEventFunction()
    at flash.events::EventDispatcher/dispatchEvent()
    at flash.utils::Timer/flash.utils:Timer::tick()

But, but, the test bar was green! FlexUnit didn't know you were waiting for an event; as a result, when the testTimer() method finished, that test was clean. The error dialog pops up because FlexUnit isn't around to catch the error and turn it into a pretty message. This also shows that even though the test finished, our object was still running in the background. We can fix this first issue by adding a tearDown() method and using addAsync() to wrap the function that should be called, along with specifying the maximum time to wait for that function to be called. The function returned by addAsync() is used in place of your original function. In the example above, the second listener function passed into the addEventListener() call is where addAsync() comes into play. The changes to TimerTest look like this:

private var _timer:Timer;

override public function tearDown():void {
    _timer.stop();
    _timer = null;
}

public function testTimer():void {
    _timer = new Timer(3000, 1);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 3500));
    _timer.start();
}

I've taken the original function, wrapped it in an addAsync(), and added the maximum time that the function can take to be called. Since my Timer is set to run in 3000ms I gave myself a little buffer. With this change and the assert testing for a count of 2, I'll get a red bar when the test runs. Additionally I added a tearDown() method so that regardless of what happens in the test the Timer will stop running. When doing asynchronous testing it is important to clean up like this; otherwise you can get objects hanging around in memory doing things you don't want. On that note I'd also recommend that the event listeners added to the timer use weak references. That way, when the tearDown() function nulls out the Timer, that object can be garbage collected. The updated code would look like this:

_timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
_timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 3500), false, 0, true);

Now that we added in that timeout to the verifyCount() call, you'll also get a red bar if the function isn't called in the timeout specified. Drop the timeout to 1500ms and rerun the test. You should now get the following error:

Error: Asynchronous function did not fire after 1500 ms
    at flexunit.framework::Assert$/fail()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\Assert.as:199]
    at flexunit.framework::AsyncTestHelper/runNext()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\AsyncTestHelper.as:96]
    at flexunit.framework::TestCase/flexunit.framework:TestCase::runTestOrAsync()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestCase.as:271]
    at flexunit.framework::TestCase/runMiddle()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestCase.as:192]
    at flexunit.framework::ProtectedMiddleTestCase/protect()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\ProtectedMiddleTestCase.as:54]
    at flexunit.framework::TestResult/flexunit.framework:TestResult::doProtected()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:237]
    at flexunit.framework::TestResult/flexunit.framework:TestResult::doContinue()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:109]
    at flexunit.framework::TestResult/continueRun()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\TestResult.as:79]
    at flexunit.framework::AsyncTestHelper/timerHandler()[C:\Documents and Settings\mchamber\My Documents\src\flashplatform\projects\flexunit\trunk\src\actionscript3\flexunit\framework\AsyncTestHelper.as:121]
    at flash.utils::Timer/flash.utils:Timer::_timerDispatch()
    at flash.utils::Timer/flash.utils:Timer::tick()

That's the quick introduction to addAsync(). Now onto some of the additional features of addAsync() that can be helpful. Say we wanted to test that a Timer is correctly called twice within a specified period of time. The incrementCount() function can easily be reused, but the verifyCount() method has a hard-coded assertEquals() in it. The optional third argument to addAsync() is called passThroughData, and it can be anything. In this case I'm going to pass what I expect the count to be when the Timer finishes, allowing me to reuse the same verify function. With these changes the functions now look like this:

public function testTimer():void {
    _timer = new Timer(1000, 1);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 1500, 1), false, 0, true);
    _timer.start();
}

public function testTimer2():void {
    _timer = new Timer(500, 2);
    _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyCount, 1500, 2), false, 0, true);
    _timer.start();
}

private function verifyCount(timerEvent:TimerEvent, expectedCount:int):void {
    assertEquals(expectedCount, _timerCount);
}

In both cases I'm passing along the expected count and have created a generic verify method. The object that you pass can be anything, which makes it a very powerful way to write generic asynchronous event verifiers. But wait, there's more!

The last optional argument to addAsync() is a way to verify that the event didn't happen. Instead of getting a red bar and the "Error: Asynchronous function did not fire after 1500 ms", you can verify that based on the event not firing, everything is as it should be. For example I could verify that after 1500ms a 1000ms Timer only fired one event and the complete event never fired. This last version of the test class does that. I also modified things a little since if you include passThroughData, it gets passed to both the regular function and the failFunction. This isn't a bad thing as it provides a way to write a generic failFunction handler like the way the generic verify handler was written. Note that the failFunction handler doesn't need to worry about getting an event passed in like the normal listener function since the event didn't fire.

package com.example {
    import flexunit.framework.TestCase;
    import flash.utils.Timer;
    import flash.events.TimerEvent;

    public class TimerTest extends TestCase {
        private var _timerCount:int;
        private var _timer:Timer;

        override public function setUp():void {
            _timerCount = 0;
        }

        override public function tearDown():void {
            // cleanup to make sure the Timer doesn't keep running
            _timer.stop();
            _timer = null;
        }

        public function testTimer():void {
            runTimer(1000, 1, 1500, 1);
        }

        public function testTimer2():void {
            runTimer(500, 2, 1500, 2);
        }

        public function testNotDone():void {
            runTimer(1000, 2, 1500, -1, 1);
        }

        private function runTimer(delay:int, count:int, timeout:int, goodCount:int, badCount:int = -1):void {
            _timer = new Timer(delay, count);
            // use weak references to make sure the Timer gets GC when we are done
            _timer.addEventListener(TimerEvent.TIMER, incrementCount, false, 0, true);
            // pass both a good and a bad handler along with counts so the verify methods
            // can detect what happened
            _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(verifyGoodCount, timeout, {goodCount: goodCount, badCount: badCount}, verifyBadCount), false, 0, true);
            _timer.start();
        }

        private function incrementCount(timerEvent:TimerEvent):void {
            _timerCount++;
        }

        private function verifyGoodCount(timerEvent:TimerEvent, extraData:Object):void {
            if (extraData.goodCount == -1)
            {
                fail("Should have failed");
            }
            assertEquals(extraData.goodCount, _timerCount);
        }

        private function verifyBadCount(extraData:Object):void {
            if (extraData.badCount == -1)
            {
                fail("Should not have failed");
            }
            assertEquals(extraData.badCount, _timerCount);
        }
    }
}

Feel free to play with this final version to test how different asynchronous scenarios are handled in FlexUnit.

Some additional notes:
Make sure the listener function name doesn't start with "test"; otherwise FlexUnit will think that it is a test method and try to run it.

I've noticed some odd behavior with multiple addAsync()s set at the same time. In general you only want to have one outstanding addAsync() at a time.
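
If a test does need to wait for two events in a row, one approach is to register the next addAsync() from inside the previous handler, so only one is ever outstanding. Here is a sketch that reuses the _timer field from the class above; the _stage counter is added just for this illustration.

private var _stage:int = 0;

public function testTwoStages():void {
    _timer = new Timer(500, 1);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(handleStage1, 1000), false, 0, true);
    _timer.start();
}

private function handleStage1(timerEvent:TimerEvent):void {
    _stage++;
    // Register the next addAsync() only after the first one has fired, so
    // there is never more than one outstanding at a time.
    _timer = new Timer(500, 1);
    _timer.addEventListener(TimerEvent.TIMER_COMPLETE, addAsync(handleStage2, 1000), false, 0, true);
    _timer.start();
}

private function handleStage2(timerEvent:TimerEvent):void {
    _stage++;
    assertEquals(2, _stage);
}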

If you specify passThroughData and a failFunction, the passThroughData will get passed to both the function and the failFunction.

Make sure that you clean up any objects that could still fire events in your tearDown() method and either remove event listeners or make them weak. There is more to this topic but I'll cover that in another post.
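
As a sketch of the explicit-removal alternative, tearDown() could detach the listeners it knows about; the _completeListener field here is assumed to hold the function that addAsync() returned when the listener was registered.

override public function tearDown():void {
    // Stop the Timer and detach its listeners so nothing keeps firing
    // (or keeps the Timer reachable) once the test is over.
    _timer.stop();
    _timer.removeEventListener(TimerEvent.TIMER, incrementCount);
    _timer.removeEventListener(TimerEvent.TIMER_COMPLETE, _completeListener);
    _timer = null;
}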

Tags: as3 flex flexunit testing