Wednesday, November 17, 2010

Burning up

I am growing increasingly fond of the Agile development method. It really has a transformational effect on how the teams operate and relate to their work.

Now working on a team of 26 people in four scrums spread across four geographies and four time zones, we have stepped up our reliance on Rational Team Concert dashboards to keep track of how each team is progressing. Each scrum team still has daily standup meetings, plus scrum-of-scrums meetings with the product owner, scrum masters, and stakeholders. Though the emphasis during the scrum of scrums is on the reporting from each scrum master, we have started to appreciate and use “burnup” charts as the backdrop for these meetings.

Disclaimer: I am an IBM employee with access to an internally hosted instance of Rational Team Concert. For small teams of up to 10 people, and with some functional restrictions, RTC is freely available at

There is ample literature on why “burnup” charts are superior to “burndown” charts, but RTC goes further, combining project tracking and team information to provide significantly enhanced visibility into project progress. I produced a couple of heavily data-altered versions of our actual “burnup” charts as examples.

RTC Burnup Data Series


  • “Total Capacity” indicates how much work the team can execute in a given period. It is always a flat, straight line. RTC uses information provided by each team member about their availability in terms of hours per day, vacation, and holidays.
  • “Planned Work” indicates how much work is actually planned for a given period. For any sane project, it should be under the “Total Capacity” line at all times; otherwise, low-priority work should be moved out of that period immediately. In the example chart above, this can be observed on 11/16.
  • “Expected Complete” indicates how much work should be completed over time so that all planned work is finished at the end of the period. It is always a straight line starting at 0 hours and ending at the total amount of planned work hours.
  • “Completed Work” indicates how much work the team is logging in the system against the tasks assigned to each individual. In very simple terms, “Completed Work” should be at or above the “Expected Complete” line, indicating the team is on schedule or ahead of schedule. In the example chart above, it is possible to see the team recovering from a slow start and exceeding expectations after 11/10.
  • “Linear Complete” is a linear regression of the data points in the “Completed Work” series. It projects how much work will be done at the end of the period. This line should be at or above the “Expected Complete” line; otherwise remediation is required, such as moving distracting low-priority work out of that period of time. In the example chart above, that line is slightly above the “Expected Complete” line, a good sign, indicating all work being completed one or two days ahead of schedule (where it crosses above the “Planned Work” line).
  • “Capacity Burnup” indicates how much capacity the team is “burning” during a given period. It goes up whether the team is working or not, because the time available to work in the period shrinks every minute. Ideally you want the “Completed Work” line to be at or above the “Capacity Burnup” line, indicating the team is burning its capacity on actually planned work. In the example chart above, notice how the first half of the period has less completed work than capacity being spent, a clear indication that people are being diverted from the planned work.
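To make the definitions concrete, here is a small Python sketch of how the “Expected Complete” and “Linear Complete” series can be derived from the daily “Completed Work” totals. This is not RTC code, and the numbers are invented; it only illustrates the arithmetic behind the two lines:

```python
def expected_complete(planned_hours, num_days):
    """Straight line from 0 to the total planned work."""
    return [planned_hours * day / float(num_days) for day in range(1, num_days + 1)]

def linear_complete(completed):
    """Least-squares fit through the cumulative 'Completed Work' points,
    evaluated over the same days."""
    n = len(completed)
    days = list(range(1, n + 1))
    mean_x = sum(days) / float(n)
    mean_y = sum(completed) / float(n)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, completed)) / \
            sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x
    return [slope * x + intercept for x in days]

# A team with 80 planned hours over 5 days, recovering from a slow start:
expected = expected_complete(80, 5)   # [16.0, 32.0, 48.0, 64.0, 80.0]
completed = [8, 20, 40, 62, 82]
projection = linear_complete(completed)
# The projection's last point tells you where the team is likely to land:
print(round(projection[-1], 1))  # → 80.4, slightly above the 80 planned hours
```

Because the projection ends above the planned 80 hours, this hypothetical team is on track despite its slow first two days, which is exactly the "recovering from a slow start" pattern described above.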

A non-ideal example

The first example was somewhat benign in that execution was well planned and completed ahead of time. Now let's look at another hypothetical example, with some planning and execution challenges:


The first warning sign is a number of data points *above* the “Total Capacity” line, especially on the “Planned Work” line. In a real project, this chart would be telling the product owner to move content out of the given period right at the beginning. Notice how the “Planned Work” line does go down on 11/3, only to climb steadily again. RTC also has an “Estimated vs Actual Work” chart that can clarify whether that increase in “Planned Work” is the result of new work being added to the period or of planned work taking longer than estimated. I like to have both side by side in the same dashboard.

The second warning sign is the “Completed Work” line running significantly above the “Capacity Burnup” line, an indication that the team worked a fair amount of overtime during the period. Though the “Linear Complete” projection is slightly above the “Expected Complete” line, the actual data points for “Completed Work” show the team running out of steam towards the last days of the interval (completed work goes from above expected to below expected around 11/10).

In summary

A quick glance at a “burnup” chart, assuming somewhat accurate reporting by the team, will immediately point to any action required by the scrum master and product owner. This is the cheat-sheet I share with others reading these charts, which takes a lot of the guesswork out of how teams are doing.

  1. The “Capacity Burnup” line should always be on top of all other lines; otherwise content must be moved out of the period in question.
  2. The “Linear Complete” and “Completed Work” lines must be at or above the “Expected Complete” line; otherwise they indicate the team will overrun its time budget.
  3. The “Linear Complete” line must roughly match the “Capacity Burnup” line. If it is significantly below that line, it indicates the team is working on something other than the planned content (e.g. spending 2 days reimaging failed hardware); if it is significantly above the capacity burnup, the team is working overtime and may run out of steam at the end of the period.
  4. Steady, rather than abrupt, increases in the “Planned Work” line typically indicate the team taking longer on tasks than originally planned.
  5. Abrupt, rather than steady, changes in the “Planned Work” line typically indicate work items being moved in and out of the time period. The product owner should always be aware of the causes of those changes.
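Rules 1–3 of this cheat-sheet lend themselves to automation. As a sketch (a hypothetical helper, not an RTC feature; the 10% overtime margin is an arbitrary example), a checker over the end-of-period values of each series might look like:

```python
def burnup_warnings(capacity_burnup, planned_work, expected_complete,
                    linear_complete, overtime_margin=0.10):
    """Each argument is the value of that series at the end of the period."""
    warnings = []
    # Rule 1: capacity burnup must stay on top of planned work
    if planned_work > capacity_burnup:
        warnings.append("Over capacity: move content out of the period.")
    # Rule 2: the projection must reach the expected completion
    if linear_complete < expected_complete:
        warnings.append("Behind schedule: the team will overrun its time budget.")
    # Rule 3: the projection should roughly match the capacity burnup
    if linear_complete > capacity_burnup * (1 + overtime_margin):
        warnings.append("Working overtime: the team may run out of steam.")
    elif linear_complete < capacity_burnup * (1 - overtime_margin):
        warnings.append("Capacity diverted to unplanned work.")
    return warnings

# The 'non-ideal' pattern: planned work above capacity, projection above capacity
print(burnup_warnings(capacity_burnup=100, planned_work=115,
                      expected_complete=90, linear_complete=112))
```

Rules 4 and 5 depend on the shape of the “Planned Work” line over time rather than a single value, so they still call for a human eyeball on the chart.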

Friday, June 25, 2010

STAX automation: Converting a property file into a Python dict() object

Coming back to test automation, I find myself reiterating my fondness for the STAX/STAF framework.

Trying to read the contents of a properties file containing “key=value” pairs, I found an article on how to write the entire thing in Python (the underlying scripting language for STAX scripts). After a few minutes trying to visualize how to embed it into a STAX “script” construct, I realized I did not have the same constraint of not being able to use Java, and could write a far simpler construct:


<function name="read_properties" scope="local">

    <function-prolog>
        Returns a Python dict object representing the keys found inside a properties file.
    </function-prolog>

    <function-list-args>
        <function-required-arg name="properties_file">
            Properties file to be read.
        </function-required-arg>
    </function-list-args>

    <sequence>
        <script>
            from java.util import Properties
            from import File
            from import FileInputStream
            from import IOException

            env_exception = None
            props_dict = {}
            props = Properties()
            fis = None
            try:
                try:
                    file = File(properties_file)
                    fis = FileInputStream(file)
                    props.load(fis)
                    prop_names = props.propertyNames()
                    for i in range(props.size()):
                        prop_name = prop_names.nextElement()
                        prop_value = props.getProperty(prop_name)
                        props_dict.setdefault(prop_name, prop_value)
                except IOException, e:
                    env_exception = e.getMessage()
            finally:
                if fis != None:
                    fis.close()
        </script>

        <if expr="env_exception != None">
            <sequence>
                <script>
                    err_msg = 'Attempt to load properties file %s resulted in exception %s' % (properties_file, env_exception)
                </script>
                <message log="STAXLogMessage">err_msg</message>
                <throw exception="'STAXException'">err_msg</throw>
            </sequence>
            <else>
                <message log="STAXLogMessage">
                    'Environment [%s] is %s' % (properties_file, props_dict)
                </message>
            </else>
        </if>

        <return>props_dict</return>
    </sequence>

</function>

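For comparison, the pure-Python approach that article describes would look roughly like the sketch below. It is simplified: unlike java.util.Properties, it ignores escape sequences and multi-line continuations, so treat it as an illustration rather than a drop-in replacement:

```python
def read_properties(lines):
    """Parse 'key=value' lines into a dict, skipping blanks and comments."""
    props = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#') or line.startswith('!'):
            continue  # blank lines and comment lines are ignored
        key, sep, value = line.partition('=')
        if sep:  # only keep lines that actually contain an '=' separator
            props[key.strip()] = value.strip()
    return props

sample = ["# environment under test", "host = myhost.example.com", "port=8080"]
print(read_properties(sample))  # → {'host': 'myhost.example.com', 'port': '8080'}
```

The Java-backed version above stays closer to the real properties-file format, which is exactly why delegating to java.util.Properties from Jython is attractive.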


Thursday, April 8, 2010

Handling process launches in STAF / STAX

I have been a big fan of the STAF test automation framework for many years. STAF is open source and runs on all mainstream platforms (including IBM System z and the Mac). Its key strengths are distributing and coordinating tasks across multiple machines.

Launching processes on distributed machines is really well implemented as a STAF service and also receives special treatment in the companion STAX execution engine. Process management is always a thorny discipline: dealing with process output, error codes, deciding whether or not to wait until processes are done, and many other concerns.

I compiled a personal list of the STAF/STAX options for launching new processes that always trip up newcomers to the tool.

Waiting for a process launched through STAF

Running a command on any given machine running the STAF process is as simple as executing “staf &lt;machine&gt; PROCESS START COMMAND &lt;command&gt;”.

What often derails people are the default launch settings, which instruct STAF to launch the process *asynchronously* and move on, often breaking follow-on commands that depend on the results of the first. If you must wait until the process completes its execution, you need to use the “WAIT” flag.


The next relevant flag is “SAMECONSOLE”, which prevents STAF from launching a new console (the default behavior on Windows). For most runs of unattended commands, SAMECONSOLE is your option of choice.


The last important flags are the ones indicating that you want STAF to collect the command results, namely RETURNSTDERR and RETURNSTDOUT, as follows:
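Putting the flags together, a full request looks something like “staf local PROCESS START COMMAND dir WAIT SAMECONSOLE RETURNSTDOUT RETURNSTDERR”. A small Python helper (hypothetical, not part of STAF; the flag names are the real STAF options) makes the combinations easier to see:

```python
def process_start_request(command, wait=True, sameconsole=True,
                          returnstdout=True, returnstderr=True):
    """Assemble a STAF PROCESS START request string."""
    parts = ['PROCESS START COMMAND', command]
    if wait:
        parts.append('WAIT')          # block until the process finishes
    if sameconsole:
        parts.append('SAMECONSOLE')   # do not open a new console on Windows
    if returnstdout:
        parts.append('RETURNSTDOUT')  # capture standard output in the result
    if returnstderr:
        parts.append('RETURNSTDERR')  # capture standard error in the result
    return ' '.join(parts)

print(process_start_request('dir'))
# → PROCESS START COMMAND dir WAIT SAMECONSOLE RETURNSTDOUT RETURNSTDERR
```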


Results from asynchronous launches

If you really must run a command asynchronously, you need to store the process handle returned by STAF and query its results later on.
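As a sketch (again with hypothetical helpers, but real STAF request syntax), the asynchronous flow boils down to two requests: a START without the WAIT flag, whose result is the process handle, and a later QUERY against that handle:

```python
def process_start_async(command):
    # No WAIT flag: STAF returns immediately, and the request's result
    # is the handle number of the new process
    return 'PROCESS START COMMAND %s' % command

def process_query(handle):
    # Later, ask the PROCESS service for that handle's status and results
    return 'PROCESS QUERY HANDLE %d' % handle

print(process_start_async('dir'))  # → PROCESS START COMMAND dir
print(process_query(42))           # → PROCESS QUERY HANDLE 42
```

The literal 42 stands in for whatever handle number STAF actually returns from the START request.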


Inside STAX

Launching the same process using the STAX engine requires more effort, which is understandable since STAX is a more sophisticated execution engine with support for programmatic constructs.

One of the most important observations about launching processes inside STAX is that the process element always waits for the command to complete and has no concept of consoles. In other words, the STAX process element always uses the equivalent of “WAIT SAMECONSOLE” in the STAF PROCESS parameters.





<process name="'dir'">
    <location>'local'</location>
    <command mode="'shell'">'dir \\tmp'</command>
    <stderr mode="'stdout'"/>
    <returnstdout/>
</process>

<message log="STAXLogMessage">
    "Result code: %i" % STAXResult[0][0]
</message>

<message log="STAXLogMessage">
    "Result data: %s" % STAXResult[0][1]
</message>

While observing the results in the STAX Job Monitor, we will see the result code and the result data logged as messages in the job log.


In conclusion

As usual, the excellent user guides shipped with both STAF and STAX are the most comprehensive material for learning about the intricacies of launching remote processes and manipulating their results. What I missed was a short guide like the one above, showing the STAF and STAX equivalents side by side.

Tuesday, March 2, 2010

Ant targets with different classpaths

I recently ran into a problem where an Ant task could not find a given Java class. There are a number of solutions listed in the Ant FAQ page, all of which pass through adding the corresponding JAR files to the Ant lib directory or to the classpath before invoking Ant.

It turns out the particular task from the Ant target I imported was invoking an XSL transformation containing a function defined in an external class, like this:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl=""
                xmlns:ext="xalan://com.package.ExternalXsltFunction">
    ...
</xsl:stylesheet>

The original ExternalTask task was defined in A.jar, but com.package.ExternalXsltFunction was defined in external.jar. Since com.package.ExternalXsltFunction is dynamically loaded only when the internal XSL transformation is invoked, Ant simply ignores and discards its containing JAR file while trying to load ExternalTask.

<taskdef name="externaltask" classname="com.package.ExternalTask">
    <classpath>
        <pathelement location="A.jar"/>
        <pathelement location="external.jar"/> <!-- Makes no difference: ExternalXsltFunction is not loaded when ExternalTask is loaded -->
    </classpath>
</taskdef>

The solution was to create an extension of the venerable “antcall” task, adding a “classpath” element to it. This new task, let's call it “antcallwithclassloader”, sets the classpath on the thread of execution, invokes the target, then resets the thread classpath back to its original value, like this (all comments stripped out):

package com.myproject.ant;

import org.apache.tools.ant.AntClassLoader;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.taskdefs.CallTarget;
import org.apache.tools.ant.types.Path;
import org.apache.tools.ant.types.Reference;

public class CallTargetWithClasspath extends CallTarget {

    private Path classpath;

    private Reference classpathRef;

    public void execute() throws BuildException {
        Path resolvedClassPath = null;
        if (classpathRef != null) {
            resolvedClassPath = (Path) classpathRef.getReferencedObject();
        } else {
            resolvedClassPath = classpath;
        }
        AntClassLoader acl = new AntClassLoader(getProject(), resolvedClassPath);
        acl.setThreadContextLoader();
        try {
            super.execute();
        } finally {
            acl.resetThreadContextLoader();
            acl.cleanup();
        }
    }

    public Path getClasspath() { return classpath; }

    public void setClasspath(Path targetClasspath) { this.classpath = targetClasspath; }

    public Path createClasspath() {
        classpath = new Path(getProject());
        return classpath;
    }

    public Reference getClasspathRef() { return classpathRef; }

    public void setClasspathRef(Reference classpathRef) { this.classpathRef = classpathRef; }
}


With this task, now added to a “myanttools.jar” file, I could successfully refactor my Ant script so that the problematic task was moved to its own target and was invoked with its required classpath, like this:

    <taskdef name="my.antcall" classname="com.myproject.ant.CallTargetWithClasspath">
        <classpath>
            <pathelement location="myanttools.jar"/>
        </classpath>
    </taskdef>

    <target name="" depends="init">
        <my.antcall target="">
            <classpath>
                <pathelement path="external.jar"/> <!-- Adds external.jar to the classpath before invoking the target -->
            </classpath>
        </my.antcall>
    </target>

    <target name="">
        <!-- The caller set external.jar in the classpath, -->
        <!-- so ExternalXsltFunction is visible to the classloader -->
        <externaltask …/>
    </target>

Thursday, January 14, 2010

Optimizing Atom feed parsing with Apache Abdera

I chose Apache Abdera as the Atom processor for a number of small projects. Skipping the processing of unwanted XML elements inside an Atom feed is the most basic optimization for these applications.

For one of these applications, a statistics aggregator of sorts, there was no need to look into the summary and raw contents of each entry. Enter the Apache Abdera built-in filter support, through which one can instruct the parser to only accept or ignore certain entry elements.

The samples in the Abdera wiki didn’t quite match the public Javadocs, so I ended up writing my own version of what the wiki described as a black list filter:

Abdera abdera = new Abdera();
Parser abderaParser = abdera.getParser();
ParserOptions defaultParserOptions = abderaParser.getDefaultParserOptions();

FavoriteParseFilter fpf = new FavoriteParseFilter();


where FavoriteParseFilter is defined like this:

public class FavoriteParseFilter implements org.apache.abdera.filter.ParseFilter {

    private static final QName CONTENT_QNAME =
        new QName("", "content");

    private static final QName SUMMARY_QNAME =
        new QName("", "summary");

    /*
     * (non-Javadoc)
     * @see org.apache.abdera.filter.ParseFilter#acceptable(javax.xml.namespace.QName)
     */
    public boolean acceptable(QName n) {
        boolean result = !(n.equals(CONTENT_QNAME) ||
                           n.equals(SUMMARY_QNAME));
        return result;
    }

    // ... remaining ParseFilter methods omitted for brevity ...
}

Results may vary, but I observed a gain of at least 25% in overall throughput with a simple application fetching a remote feed whose entries were about 2 KB in size.

Monday, January 11, 2010

Polygonal menus?

The jury is still out on the efficiency of these kinds of contextual menus compared to the regular tabular format.

For now, they are filed under the uncoveted “because we could” folder in my brain.
For now, they are filed under the uncoveted  “because we could” folder in my brain.