How to Rescue Distressed Projects and Teams

If you have worked in the software development world long enough, it has likely been your privilege (tongue firmly in cheek) to work on a project, and with a team, that has been taken to or even driven over the brink of failure. A project like this usually involves an unhappy client, frustrated management, and a very discouraged delivery team. It generally involves an “interrupt-driven” task and workflow prioritization process with a fixed delivery schedule, a once-fixed but ever-changing requirements set, and estimates and assumptions that failed to consider the full lifecycle of a feature, story, or task.

Often such projects are cancelled and teams dismantled. Sometimes they push through to a bitter end, delivering something that works but leaving everyone unhappy. It took too long. It cost too much. It works, but not well. Clients are lost. Teams suffer unnecessary attrition. Blame and resentment prevail. But there is a better way. Teams and projects can be rescued.

To rescue a distressed project and team is not as hard as one might think. Many have written about this. Some of us have even experienced it firsthand. One excellent case study was published two years ago by Steve Andrews on InfoQ. There are many other stories like the one he shares, and they all have several common aspects that can come to your rescue.

Analyze and Decide Using Facts
Working from facts and data, such as defect counts and other available metrics, can help to eliminate the emotional element and engage the team’s analytical talents.

Drive Quality with Acceptance Tests
Make quality and testing come first. Create acceptance tests for a given feature or story before you begin coding. Acceptance tests should clearly define “done” and support validation.
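
As a sketch of what test-first acceptance criteria can look like (the story, class names, and numbers here are hypothetical, not from any particular project), the test below would be written before the DiscountService it references exists; making it compile and pass becomes the definition of “done” for the story.

[TestMethod]
public void LoyalCustomersReceiveTenPercentDiscount()
{
   // hypothetical story: customers with 5 or more prior orders get 10% off
   var service = new DiscountService();
   var total = service.ApplyDiscount(orderTotal: 200.00m, priorOrders: 5);
   Assert.AreEqual(180.00m, total); // the story is done when this passes
}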

Eliminate Waste—Control Flow—Decrease Batch Size
These long-established principles of quality manufacturing can be applied to software development. Creating very large and complex requirements documents that will be invalidated shortly after development begins is waste. When managers push large sets of tasks and assign specific work items to specific team members, they create waste. “Fix all the bugs” creates a batch that may overwhelm any team. But when team members pull work from a queue (aka the backlog) and a team’s total work-in-progress (WIP) is limited, individual and team work flows efficiently.

Allow Teams to Self-Organize
Coach teams in Scrum and Kanban and let them choose which works best for them to control flow and achieve individual and team efficiency. Some teams may choose a combination. In any case, self-organized teams that pull their own work progress more efficiently than those who wait for management to assign tasks. Management is then free to focus on grooming the backlog.

Manage the Backlog
Change control and therefore control of the backlog is critical to the success of a project. A manager with one of any number of titles controls what gets added to the backlog and when. The manager gathers details from stakeholders and delivery team members for each item on the backlog to provide sufficient detail for an estimate to be made. Based on input from stakeholders, the manager prioritizes items. Delivery team members add estimates before items can be taken off the backlog and put into a ready state or work-in-progress state. Estimates can be in abstract “points” or “ideal days” or some other common unit of measure to allow tracing metrics as work proceeds. Once estimates are provided, the manager works with stakeholders to finalize backlog priorities.

Work as a Team
Even if you are not using Kanban, you still need to eliminate bottlenecks and prevent individuals from working too far ahead of the team. If analysts are unable to keep up with writing acceptance tests, re-task other team members to avoid starving or bunching up of the team’s work-in-progress. If the delivery team lacks a well groomed and ready backlog, you should alter your planning cadence, decoupling it from your delivery cadence.

The primary factor in rescuing a distressed project and team is the motivation of management. If you believe in your people and give them the tools, processes and coaching they need to achieve great things, you can turn around a troubled project and team.

Priority Queue with Heap Algorithms in C#

Continuing with my C# Algorithms series, I’ve just completed a rather lengthy effort to implement and test min and max heap algorithms, including heap sort, along with a priority queue, something not provided by the BCL. While heap sort is not always the most efficient sort, the algorithms required to accomplish it, specifically the heap data structure functions, supply the requisite functionality to make a priority-based queue quite easy to implement.

This work is part of my continuing effort to work through all of the common algorithms found in Introduction to Algorithms, 3rd Edition. I highly recommend this book to anyone who wishes to study classic computer science algorithms. For me, the exercise of implementing them in C# is a great learning experience.

First, let’s look at how it works. Suppose you have a job scheduler that needs to execute jobs in order of priority: jobs are enqueued with varying priorities but must be dequeued according to those priorities. Here’s the Job class. Note the MinValue and MaxValue properties, which supply the sentinel values the heap-based insert needs when a job is enqueued.

public class Job : IComparable
{
   public string JobId { get; set; }
   public double Priority { get; set; }

   public int CompareTo(object obj)
   {
      var other = obj as Job;
      if (null == other) return 1; //null is always less
      return this.Priority.CompareTo(other.Priority);
   }

   private static Job _min;
   private static Job _max;

   public static Job MinValue
   {
      get
      {
         if (_min == null)
         {
            _min = new Job { JobId = null, Priority = double.MinValue };
         }
         return _min;
      }
   }

   public static Job MaxValue
   {
      get
      {
         if (_max == null)
         {
            _max = new Job { JobId = null, Priority = double.MaxValue };
         }
         return _max;
      }
   }
}

And now here’s the test code that demonstrates the use of PriorityQueue<T> with the Job class.

[TestMethod]
public void JobTest()
{
   IList<Job> jobs = new List<Job> 
   {
      new Job { JobId = "test1", Priority = 45.0 },
      new Job { JobId = "test2", Priority = 25.0 },
      new Job { JobId = "test3", Priority = 4.0 },
      new Job { JobId = "test4", Priority = 88.0 },
      new Job { JobId = "test5", Priority = 96.0 },
      new Job { JobId = "test6", Priority = 18.0 },
      new Job { JobId = "test7", Priority = 101.0 },
      new Job { JobId = "test8", Priority = 7.0 }
   };
   var jobQueue = new PriorityQueue<Job>(jobs, PriorityOrder.Max);
   jobQueue.Enqueue(new Job 
                   { 
                      JobId = "test8", 
                      Priority = 232.0 
                   },
                   // min and max needed for MaxInsert or MinInsert
                   Job.MinValue, Job.MaxValue);
   Assert.IsTrue(jobQueue.Count == 9);
   var val = jobQueue.Dequeue();
   Assert.IsTrue(val.Priority == 232.0);
   Assert.IsTrue(jobQueue.Count == 8);
   Assert.IsTrue(jobQueue.Size == 9); //heapSize not same
   val = jobQueue.Peek();
   Assert.IsTrue(val.Priority == 101.0);
   jobQueue.TrimExcess();
   Assert.IsTrue(jobQueue.Count == jobQueue.Size);
}

I’m posting the PriorityQueue<T> class here along with the static Heap class that provides the underlying heap structure algorithms. I hope you get some use out of them.

public class PriorityQueue<T> : IEnumerable<T>, 
   ICollection, IEnumerable where T : IComparable
{
   private readonly PriorityOrder _order;
   private readonly IList<T> _data;
   private int _heapSize = 0;

   public PriorityQueue(PriorityOrder order)
   {
      _data = new List<T>();
      _order = order;
   }

   public PriorityQueue(IEnumerable<T> data, PriorityOrder order)
   {
      _data = data as IList<T>;
      if (_data == null) _data = new List<T>(data);
      _order = order;
      _heapSize = _data.Count;
      if (_order == PriorityOrder.Max)
         Heap.BuildMaxHeap(_data);
      else
         Heap.BuildMinHeap(_data);
   }

   public PriorityQueue(int initialCapacity, PriorityOrder order)
   {
      _data = new List<T>(initialCapacity);
      _order = order;
   }

   public void Clear()
   {
      _data.Clear();
   }

   public bool Contains(T item)
   {
      if (_order == PriorityOrder.Max)
         return Heap.MaxContains(_data, item, 0, _heapSize);
      else
         return Heap.MinContains(_data, item, 0, _heapSize);
   }

   public T Dequeue()
   {
      if (_heapSize == 0) throw new InvalidOperationException();
      if (_order == PriorityOrder.Max)
         return Heap.ExtractMax(_data, _heapSize--);
      else
         return Heap.ExtractMin(_data, _heapSize--);
   }

   public void Enqueue(T item, T minItem, T maxItem)
   {
      if (_order == PriorityOrder.Max)
         Heap.MaxInsert(_data, item, maxItem, _heapSize++);
      else
         Heap.MinInsert(_data, item, minItem, _heapSize++);
   }

   public T Peek()
   {
      if (_heapSize == 0) throw new InvalidOperationException();
      return _data[0];
   }

   public void TrimExcess()
   {
      // remove items in _data beyond _heapSize
      while (_heapSize < _data.Count)
      {
         _data.RemoveAt(_data.Count - 1);
      }
   }

   public T[] ToArray()
   {
      return _data.ToArray();
   }

   public IEnumerator<T> GetEnumerator()
   {
      return _data.GetEnumerator();
   }

   IEnumerator IEnumerable.GetEnumerator()
   {
      return _data.GetEnumerator();
   }

   public void CopyTo(Array array, int index)
   {
      _data.CopyTo((T[])array, index);
   }

   public int Count
   {
      get { return _heapSize; }
   }

   public int Size
   {
      get { return _data.Count; }
   }

   public bool IsSynchronized
   {
      get { return false; }
   }

   public object SyncRoot
   {
      get { return _data; }
   }
}
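
Before moving on to the heap functions, here is a quick usage sketch with plain integers (my own illustration, not from the original post). int already implements IComparable, and int.MinValue and int.MaxValue serve as the sentinels that Enqueue expects.

var queue = new PriorityQueue<int>(new List<int> { 3, 9, 1, 7 }, PriorityOrder.Min);
queue.Enqueue(5, int.MinValue, int.MaxValue); // sentinels used by MinInsert/MaxInsert
while (queue.Count > 0)
{
   Console.Write(queue.Dequeue() + " "); // prints: 1 3 5 7 9
}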

And here’s the Heap code. Not light reading. For help in walking through the code, review the unit tests on GitHub.

public static class Heap
{
   /* What a max heap looks like from 45, 25, 4, 88, 96, 18, 101, 7:
    * 
    *                    0
    *                  /   \
    *                 1     2                       
    *                / \   / \
    *               3   4 5   6
    *              /
    *             7
    *                   101
    *            96             45
    *       88        25    18       4
    *    7
    */

   /// <summary>
   /// Convert IList to a max-heap from bottom up such that each node maintains the
   /// max-heap property (data[Parent[index]] >= data[index] where Parent = (index - 1) / 2).
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   public static void BuildMaxHeap<T>(IList<T> data) where T : IComparable
   {
      var heapSize = data.Count;
      for (int index = (heapSize / 2) - 1; index > -1; index--)
      {
         MaxHeapify(data, index, heapSize);
      }
   }

   /* What a min heap looks like from 45, 25, 4, 88, 96, 18, 101, 7:
    * 
    *                 0
    *               /   \
    *              1     2                       
    *             / \   / \
    *            3   4 5   6
    *           /
    *          7
    *             
    *                 4
    *           7          18
    *       25    96    45    101
    *    88       
    */

   /// <summary>
   /// Convert IList to a min-heap from bottom up such that each node maintains the
   /// min-heap property (data[Parent[index]] <= data[index] where Parent = (index - 1) / 2).
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   public static void BuildMinHeap<T>(IList<T> data) where T : IComparable
   {
      var heapSize = data.Count;
      for (int index = (heapSize / 2) - 1; index > -1; index--)
      {
         MinHeapify(data, index, heapSize);
      }
   }

   /// <summary>
   /// Maintain max-heap property for data at index location for specified heap size
   /// such that data[Parent[index]] >= data[index] where Parent = (index - 1) / 2.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="index"></param>
   /// <param name="heapSize"></param>
   public static void MaxHeapify<T>(IList<T> data, int index, int heapSize) where T : IComparable
   {
      var largest = index;
      var left = HeapLeft(index);
      var right = HeapRight(index);
      if (left < heapSize
         && (data[left] != null
            && data[left].CompareTo(data[index]) > 0))
      {
         largest = left;
      }
      if (right < heapSize
         && (data[right] != null
            && data[right].CompareTo(data[largest]) > 0))
      {
         largest = right;
      }
      if (largest != index)
      {
         //exchange data[index] with data[largest]
         var tempRef = data[index];
         data[index] = data[largest];
         data[largest] = tempRef;
         //recurse
         MaxHeapify(data, largest, heapSize);
      }
   }

   /// <summary>
   /// Maintain min-heap property for data at index location for specified heap size
   /// such that data[Parent[index]] <= data[index]
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="index"></param>
   /// <param name="heapSize"></param>
   public static void MinHeapify<T>(IList<T> data, int index, int heapSize) where T : IComparable
   {
      var smallest = index;
      var left = HeapLeft(index);
      var right = HeapRight(index);
      if (left < heapSize
         && (data[left] == null
            || data[left].CompareTo(data[index]) < 0))
      {
         smallest = left;
      }
      if (right < heapSize
         && (data[right] == null
            || data[right].CompareTo(data[smallest]) < 0))
      {
         smallest = right;
      }
      if (smallest != index)
      {
         //exchange data[index] with data[smallest]
         var tempRef = data[index];
         data[index] = data[smallest];
         data[smallest] = tempRef;
         //recurse
         MinHeapify(data, smallest, heapSize);
      }
   }

   /// <summary>
   /// Extract max and re-heapify with decremented heapSize.
   /// Caller must remember to decrement local heap size.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="heapSize"></param>
   /// <returns></returns>
   public static T ExtractMax<T>(IList<T> data, int heapSize) where T : IComparable
   {
      heapSize--;
      if (heapSize < 0) throw new IndexOutOfRangeException();
      T max = data[0];
      data[0] = data[heapSize];
      if (heapSize > 0) MaxHeapify(data, 0, heapSize);
      return max;
   }

   /// <summary>
   /// Extract min and re-heapify with decremented heapSize.
   /// Caller must remember to decrement local heap size.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="heapSize"></param>
   /// <returns></returns>
   public static T ExtractMin<T>(IList<T> data, int heapSize) where T : IComparable
   {
      heapSize--;
      if (heapSize < 0) throw new IndexOutOfRangeException();
      T min = data[0];
      data[0] = data[heapSize];
      if (heapSize > 0) MinHeapify(data, 0, heapSize);
      return min;
   }

   public static void MaxIncrease<T>(IList<T> data, int index, T item) where T : IComparable
   {
      // increase-key requires the new item to be no smaller than the current item
      if (null == item || item.CompareTo(data[index]) < 0)
         throw new ArgumentException("new item is smaller than the current item", "item");

      data[index] = item;
      var parent = HeapParent(index);
      while (index > 0 
         && (data[parent] == null
            || data[parent].CompareTo(data[index]) < 0))
      {
         //exchange data[index] with data[parent]
         var tempRef = data[index];
         data[index] = data[parent];
         data[parent] = tempRef;
         index = parent;
         parent = HeapParent(index);
      }
   }

   public static void MinDecrease<T>(IList<T> data, int index, T item) where T : IComparable
   {
      // decrease-key requires the new item to be no greater than the current item
      if (null == item || item.CompareTo(data[index]) > 0)
         throw new ArgumentException("new item is greater than the current item", "item");

      data[index] = item;
      var parent = HeapParent(index);
      while (index > 0
         && (data[index] == null
            || data[index].CompareTo(data[parent]) < 0))
      {
         //exchange data[index] with data[parent]
         var tempRef = data[index];
         data[index] = data[parent];
         data[parent] = tempRef;
         index = parent;
         parent = HeapParent(index);
      }
   }

   /// <summary>
   /// Insert item into max heap. Caller must remember to increment heapSize locally.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="item"></param>
   /// <param name="minOfT"></param>
   /// <param name="heapSize"></param>
   public static void MaxInsert<T>(IList<T> data, T item, T minOfT, int heapSize) 
      where T : IComparable
   {
      heapSize++;
      // the new element belongs at index heapSize - 1; reuse a slot beyond the old
      // heap boundary if one exists, otherwise grow the list
      if (heapSize <= data.Count)
         data[heapSize - 1] = minOfT;
      else
         data.Add(minOfT);
      MaxIncrease(data, heapSize - 1, item);
   }

   /// <summary>
   /// Insert item into min heap. Caller must remember to increment heapSize locally.
   /// </summary>
   /// <typeparam name="T"></typeparam>
   /// <param name="data"></param>
   /// <param name="item"></param>
   /// <param name="maxOfT"></param>
   /// <param name="heapSize"></param>
   public static void MinInsert<T>(IList<T> data, T item, T maxOfT, int heapSize) 
      where T : IComparable
   {
      heapSize++;
      // the new element belongs at index heapSize - 1; reuse a slot beyond the old
      // heap boundary if one exists, otherwise grow the list
      if (heapSize <= data.Count)
         data[heapSize - 1] = maxOfT;
      else
         data.Add(maxOfT);
      MinDecrease(data, heapSize - 1, item);
   }

   public static bool MaxContains<T>(IList<T> data, T item, int index, int heapSize) 
      where T : IComparable
   {
      if (index >= heapSize) return false;
      if (index == 0)
      {
         if (data[index] == null)
         {
            if (item == null) return true;
         }
         else
         {
            var rootComp = data[index].CompareTo(item);
            if (rootComp == 0) return true;
            if (rootComp < 0) return false;
         }
      }
      var left = HeapLeft(index);
      var leftComp = 0;
      if (left < heapSize)
      {
         if (data[left] == null)
         {
            if (item == null) return true;
         }
         else
         {
            leftComp = data[left].CompareTo(item);
            if (leftComp == 0) return true;
         }
      }

      var right = HeapRight(index);
      var rightComp = 0;
      if (right < heapSize)
      {
         if (data[right] == null)
         {
            if (item == null) return true;
         }
         else
         {
            rightComp = data[right].CompareTo(item);
            if (rightComp == 0) return true;
         }
      }

      if (leftComp < 0 && rightComp < 0) return false;

      var leftResult = false;
      if (leftComp > 0)
      {
         leftResult = MaxContains(data, item, left, heapSize);
      }
      if (leftResult) return true;

      var rightResult = false;
      if (rightComp > 0)
      {
         rightResult = MaxContains(data, item, right, heapSize);
      }
      return rightResult;
   }


   public static bool MinContains<T>(IList<T> data, T item, int index, int heapSize)
      where T : IComparable
   {
      if (index >= heapSize) return false;
      if (index == 0)
      {
         if (data[index] == null)
         {
            if (item == null) return true;
         }
         else
         {
            var rootComp = data[index].CompareTo(item);
            if (rootComp == 0) return true;
            if (rootComp > 0) return false;
         }
      }
      var left = HeapLeft(index);
      var leftComp = 0;
      if (left < heapSize)
      {
         if (data[left] == null)
         {
            if (item == null) return true;
         }
         else
         {
            leftComp = data[left].CompareTo(item);
            if (leftComp == 0) return true;
         }
      }

      var right = HeapRight(index);
      var rightComp = 0;
      if (right < heapSize)
      {
         if (data[right] == null)
         {
            if (item == null) return true;
         }
         else
         {
            rightComp = data[right].CompareTo(item);
            if (rightComp == 0) return true;
         }
      }

      if (leftComp > 0 && rightComp > 0) return false;

      var leftResult = false;
      if (leftComp < 0)
      {
         leftResult = MinContains(data, item, left, heapSize);
      }
      if (leftResult) return true;

      var rightResult = false;
      if (rightComp < 0)
      {
         rightResult = MinContains(data, item, right, heapSize);
      }
      return rightResult;
   }


   private static int HeapParent(int i)
   {
      return (i - 1) >> 1; // (i - 1) / 2 for a zero-based array
   }

   private static int HeapLeft(int i)
   {
      return (i << 1) + 1; //i * 2 + 1
   }

   private static int HeapRight(int i)
   {
      return (i << 1) + 2; //i * 2 + 2
   }
}
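
The post mentions heap sort, which does not appear in the listing above. As a rough sketch of how it can be layered on the functions shown here (my own illustration, not the library’s published implementation), an ascending in-place sort looks like this:

public static class HeapSortSketch // illustrative only
{
   public static void Sort<T>(IList<T> data) where T : IComparable
   {
      // arrange the whole list as a max-heap
      Heap.BuildMaxHeap(data);
      for (int last = data.Count - 1; last > 0; last--)
      {
         // move the current maximum to the end of the unsorted region
         var temp = data[0];
         data[0] = data[last];
         data[last] = temp;
         // restore the max-heap property over the remaining elements
         Heap.MaxHeapify(data, 0, last);
      }
   }
}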

If you find any flaws, please do let me know. I will be working on improving this library over time, so look for updates and check GitHub for latest code. Enjoy.

Run Single Instance of Process Using Mutex

Recently I needed to assure that a process could not be started for a second time on the same machine. I had done this before with a mutex but rather than rummaging through old code on my own personal machine, I did the expedient thing and found this question on Stack Overflow.

I liked the answer as many others did and after a few minutes of tweaking came up with this useful adaptation. I share it here in part to help others but mostly to create a permanent “note to self” as this will surely come up again in the future.

private static void Main(string[] args)
{
  // assure only one instance running
  RunExclusively(() =>
  {
    // do your exclusive stuff
  });
}

static void RunExclusively(Action action)
{
  // global mutex to prevent multiple instances from being started
  // get application GUID as defined in AssemblyInfo.cs
  string appGuid = ((GuidAttribute)
           Assembly.GetExecutingAssembly()
           .GetCustomAttributes(typeof(GuidAttribute), false)
           .GetValue(0))
           .Value;

  // unique id for global mutex - Global prefix means it is global to the machine
  string mutexId = string.Format("Global\\{{{0}}}", appGuid);

  using (var mutex = new Mutex(false, mutexId))
  {
    // set security settings for mutex - allow everyone
    var allowEveryoneRule = new MutexAccessRule(
      new SecurityIdentifier(WellKnownSidType.WorldSid, null),
      MutexRights.FullControl, AccessControlType.Allow);
    var securitySettings = new MutexSecurity();
    securitySettings.AddAccessRule(allowEveryoneRule);
    mutex.SetAccessControl(securitySettings);

    var hasHandle = false;
    try
    {
      try
      {
        // wait to acquire for up to five seconds
        hasHandle = mutex.WaitOne(5000, exitContext: false);
        if (!hasHandle)
        {
          throw new TimeoutException("Timeout waiting for exclusive access");
        }
      }
      catch (AbandonedMutexException)
      {
        // mutex abandoned in another process
        // it will still get acquired
        hasHandle = true;
      }
      action(); //execute work
    }
    finally
    {
      if (hasHandle) mutex.ReleaseMutex();
    }
  }
}

If you know of a better way or see any flaws in this one, please do share.

The 4+1 Architectural View Model v2

You can find the well-written brief on Philippe Kruchten’s 4+1 architectural view model on Wikipedia along with a link to the original white paper published nearly 20 years ago. Kruchten worked for Rational, as I understand it, and so this is often referred to as the RUP 4+1 model.

[Figure: The 4+1 Architectural View Model, courtesy of Wikipedia]

Although I have used this model, or some variant of it, at times, I had quite forgotten its name and origin until someone reminded me of it the other day. Of course, we often forget that UML also came out of Rational, from the famous trio of Grady Booch, Ivar Jacobson, and James Rumbaugh.

And yet today I find it difficult to find software developers who know much of anything about the 4+1 model; only a few regularly use UML, and even then only for simple class diagrams. So why the disconnect?

Perhaps a Version 2 of these approaches is required. What would you get if you mixed the 4+1 model with more clearly defined non-functional requirement scenarios, as I’ve previously discussed, and the C4 model presented by Simon Brown, along with just a bit more UML than most developers are currently accustomed to using? I’m thinking this through and will post more thoughts on it soon.

Venn-like View of Software Quality Attributes

Two days ago I posted Non-Functional Requirements for the Software Architect, in which I suggested that quality attributes or non-functional requirements could be categorized into a two-by-two grid: operational and non-operational by internal and external.

Thinking further along those lines led me to picture the illustration below and hop out of bed to put it to virtual paper.

[Figure: Venn-like view of software quality attributes]

Most quality attributes will fit into one or two of the big four: security, performance, usability and modifiability. And each of these largely corresponds to its depicted quadrant.

Security is largely an internal concern. Often external stakeholders will profess their concern with this attribute, hence the Venn-like spillover into the external quadrant. Generally, though, the concern for security and the assurance of it fall within the domain of internal stakeholders and the implementation team.

External stakeholders are mostly concerned with performance and usability. There are a number of other quality attributes such as availability that fall within or at least are closely related to one or both of those.

Modifiability is almost exclusively a concern of internal stakeholders and implementation teams. Interoperability and testability are largely related to modifiability while interoperability nearly always shares at least some security concerns.

You may mix these differently to match your circumstances and priorities. The illustration covers the majority of software development efforts, but of course it does not spell out the details; for guidance on that, I recommend my previous post. If you are concerned with software architecture, perhaps this will help you visualize how these quality attributes or non-functional requirements compete with and complement one another.

To paraphrase Bass in Software Architecture in Practice, an architecture will either inhibit or enable the achievement of a system’s desired quality attributes. Understanding your desired quality attributes well will drive the critical decisions in your architecture. And your architecture will then provide the requisite containers into which you will place your functionality. To quote, “Functionality is not so much a driver for the architecture as it is a consequence of it.”

While I am not completely certain that is always the case, it is a principle well worth considering when creating and refactoring the architectural elements of your system. And it is my hope that seeing this illustrated will get you thinking about how your own desired quality attributes interact with internal and external teams, which are operational and non-operational, and how they may compete with or complement one another.

How Agile Software Development is Like Farming

I grew up on a small farm in eastern Utah a few miles west of Roosevelt. My dad is still farming but on a smaller scale these days. He and my mom dropped by for a visit the other day. They asked about work and as I described to them what I do, it struck me that there are many similarities between agile software development and farming.

Here are a few that came to mind:

Strategy
  • Farming: The strategic decisions for the next growing season are often made in advance of the current season’s harvest. Will a new well be required to water the south forty?
  • Agile Software Development: Strategies to improve existing systems, and even to plow them under and rebuild them, are often hatched before the existing system goes live. Will we move this system to the cloud for improved availability?

Non-Functional Requirements
  • Farming: Before building a new barn or corral, thought must be given to the usability of the gate, the reliability of the roof, and the performance of the cattle chute.
  • Agile Software Development: Before building that new system or drastically modifying an existing one, much of the same thought process needs to occur.

Budget
  • Farming: While we could plow more acres per day with a new tractor, can we get what we need out of the tractor we have, even if we have to work forty more hours? What opportunity will those forty additional hours afford us?
  • Agile Software Development: Should we buy new big iron or enter into that hosting contract for improved performance? Or should we invest in improved engineering practices to improve code performance and storage requirements?

Milestones
  • Farming: On a hay farm, progress through the summer is measured in hay crops. For me that was three crops before going back to school. All work revolved around watering the hay, cutting the hay, baling the hay, and hauling the hay. Then repeat.
  • Agile Software Development: Each milestone or set of sprints often takes on a repetitive structure, allowing the team to achieve a rhythm or cadence that moves the software toward completion.

Sprints
  • Farming: Six days (Monday through Saturday). Only after I started my professional career did I realize most people don’t work on Saturday. I’ve never been able to completely accept that pattern in my own life.
  • Agile Software Development: Two weeks (most often); sometimes a different interval works better. It probably depends on what you’re growing.

Sprint Planning
  • Farming: One day (Sunday).
  • Agile Software Development: One day (spread across the ten working days of the two weeks).

Stand Up
  • Farming: Every day at 6 a.m., lasting less than 15 minutes. The day’s work is identified and tasks are begun, since work from the previous day was most often discussed at dinner after dark. This includes identifying what equipment broke down the day before and who will fix it and when, but rarely includes much discussion of how. The how is known.
  • Agile Software Development: Usually at a more reasonable time. Since we don’t eat dinner together, we review quickly what happened and discuss what will happen today. We also identify problems that occurred, but this works best if the resolution of such problems is taken offline.

Retrospectives
  • Farming: At the end of the week, or when a crop was put up, we took a little time out for a picnic or a day trip to the city. We reflected on how we could work together better, or how my brother and I could stop fighting long enough to get some work done.
  • Agile Software Development: When done properly in the agile/Scrum process, the retrospective has the power to improve a team’s ability to identify and keep what worked and improve what didn’t.

Tactical Action
  • Farming: When the baler breaks down in the wee hours of the morning while you’re baling hay that needs to be baled now, you get off the tractor and you fix it right then if at all possible.
  • Agile Software Development: Sometimes, when it’s broken, you need to get up in the middle of the night and fix it. And then you figure out how to avoid that scenario in the future. Bugs are a part of software life, but if you’re killing the same bug over and over again, it may be time to get a new baler.

Preventive Maintenance
  • Farming: If you don’t grease the baler before you start chewing up forty acres of hay that needs to be harvested, you will find yourself fixing the baler at the most inopportune time. Far better to replace that worn-out knotter and grease up the plunger before you put it to work.
  • Agile Software Development: Systems deployed and left to their own devices have a tendency to break when you can least afford it. But systems and software that receive regular attention and care will provide longer service and fewer headaches. That extra tube of grease can be invaluable.

There is one very big difference between software and farming. The former pays better and is easier.

Non-Functional Requirements for the Software Architect

Countless failed software development projects have been kicked off with non-functional requirements delivered to the implementation team with little to no detail. Here are a few of the worst but most common examples:

  • Security – The software must be secure.
  • Performance – The software must be fast.
  • Usability – The software must be easy to use.

Most software professionals have been taught that non-functional requirements are important, but many projects skip over them in order to get to functional use cases and writing code. The result can be profound, leaving the implementation team without sufficient input to make critical design decisions that will be very costly to change when the non-functional requirement is later clarified.

What Every Non-Functional Requirement Needs

For every non-functional requirement, the software architect should assure that the following questions have been adequately answered.

  • To whom is this quality important?
    • Users and integrators
    • Management team
    • Implementation team
    • Operations team
  • Who will assure this quality is met?
    • Implementation team
    • Operations team
    • Management team
  • How will this quality be met?
    • Cross cutting constraints in software
    • System and network constraints
    • Log analysis and oversight
  • How will we know this quality is met?
    • Scenarios with measures
    • Monitoring and review
    • Acceptable tolerance percentiles

The items below each question are not meant to be an exhaustive list but simply to give you an idea of what may be involved in answering those questions.

Classification of Non-Functional Software Quality Requirements

Clarifying and prioritizing non-functional software quality requirements may be easier when you classify them into one of four groups by answering two questions: operational or non-operational, and internal or external. The following list is anything but exhaustive, but it will give you the general idea.

Operational
  • Internal: Latency, Capacity, Fault tolerance
  • External: Performance, Security, Availability

Non-Operational
  • Internal: Maintainability, Portability, Testability
  • External: Correctness, Usability, Accessibility

Business stakeholders are generally more interested in, and will support efforts to meet, external qualities. Implementation and IT teams sometimes have to work a little harder to garner support for the time, effort, and expense that internal qualities require.

It is often easier to build into an implementation the cross-cutting concerns needed to measure operational qualities. Collecting performance, reliability, and security metrics from executing code is readily achievable when those constraints are planned early in the development effort; if these qualities are defined later, the refactoring can be challenging.
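
As one small illustration of such a cross-cutting constraint (the names and the usage line below are hypothetical, not a prescribed approach), execution time can be captured around any operation and handed to whatever metrics sink the team already uses:

// hypothetical sketch of a cross-cutting timing measurement
public static class OperationMetrics
{
   // runs an operation and reports its elapsed milliseconds to a caller-supplied sink
   public static T Measure<T>(string operationName, Func<T> operation, Action<string, long> record)
   {
      var stopwatch = System.Diagnostics.Stopwatch.StartNew();
      try
      {
         return operation();
      }
      finally
      {
         stopwatch.Stop();
         record(operationName, stopwatch.ElapsedMilliseconds);
      }
   }
}

// usage (hypothetical):
// var report = OperationMetrics.Measure("DailyReport", () => reportService.Generate(),
//    (name, ms) => Console.WriteLine(name + ": " + ms + " ms"));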

For non-operational qualities, other systems such as those used to manage support issues and ongoing development efforts are often helpful in measuring the cost of change to the system or whether usability goals are being met. Sometimes time series log analysis can be utilized to extract measures for non-operational qualities, especially those most important to external parties.

Use an Agile Approach to Non-Functional Requirements

However you choose to collect and document non-functional software quality requirements, you should continue to improve and tweak them throughout the development process just as you would with functional requirements, grooming your backlog and prioritizing based on ongoing feedback from stakeholders, users and developers.