Non-Functional Requirements for the Software Architect

Countless failed software development projects have been kicked off with non-functional requirements delivered to the implementation team with little to no detail. Here are a few of the worst but most common examples:

  • Security – The software must be secure.
  • Performance – The software must be fast.
  • Usability – The software must be easy to use.

Most software professionals have been taught that non-functional requirements are important, but many projects skip over them in order to get to functional use cases and writing code. The consequences can be profound, leaving the implementation team without sufficient input to make critical design decisions that will be very costly to change when the non-functional requirement is later clarified.

What Every Non-Functional Requirement Needs

For every non-functional requirement, the software architect should ensure that the following questions have been adequately answered.

  • To whom is this quality important?
    • Users and integrators
    • Management team
    • Implementation team
    • Operations team
  • Who will ensure this quality is met?
    • Implementation team
    • Operations team
    • Management team
  • How will this quality be met?
    • Cross cutting constraints in software
    • System and network constraints
    • Log analysis and oversight
  • How will we know this quality is met?
    • Scenarios with measures
    • Monitoring and review
    • Acceptable tolerance percentiles

The items below each question are not meant to be an exhaustive list; they simply give you an idea of what may be involved in answering those questions. For example, rather than “the software must be fast,” a performance requirement might read: checkout responses complete within 500 milliseconds at the 95th percentile under peak load, verified by the operations team through production monitoring.

Classification of Non-Functional Software Quality Requirements

Clarifying and prioritizing non-functional software quality requirements may be easier when you classify them into one of four groups by answering two questions: is the quality operational or non-operational, and is it internal or external? The following table is anything but exhaustive, but it will give you the general idea.

Quality Classification | Internal                                   | External
Operational            | Latency, Capacity, Fault tolerance         | Performance, Security, Availability
Non-Operational        | Maintainability, Portability, Testability  | Correctness, Usability, Accessibility

Business stakeholders are generally more interested in external qualities and will support efforts to meet them. Implementation and IT teams sometimes have to work a little harder to garner support for the time, effort, and expense that internal qualities require.

It is often easier to build the cross-cutting concerns that measure operational qualities directly into an implementation. Collecting performance, reliability, and security metrics from executing code is straightforward when such constraints are planned early in the development effort. If these qualities are defined later, retrofitting them can require challenging refactoring.
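
To make that concrete, here is a minimal sketch of one such cross-cutting constraint: a single timing wrapper that every measured operation is funneled through. The class name and the console sink are my own illustration, not a prescribed implementation.

using System;
using System.Diagnostics;

public static class Measured
{
  public static T Run<T>(string operationName, Func<T> operation)
  {
    var stopwatch = Stopwatch.StartNew();
    try
    {
      return operation();
    }
    finally
    {
      stopwatch.Stop();
      // Swap the console for your metrics or log sink of choice.
      Console.WriteLine("{0} took {1} ms", operationName, stopwatch.ElapsedMilliseconds);
    }
  }
}

Because every call site goes through the same wrapper, for example Measured.Run("PlaceOrder", () => service.PlaceOrder(cart)), latency data is collected consistently without touching the business logic.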

For non-operational qualities, other systems, such as those used to manage support issues and ongoing development efforts, are often helpful in measuring the cost of change to the system or whether usability goals are being met. Sometimes time series log analysis can be used to extract measures for non-operational qualities, especially those most important to external parties.

Use an Agile Approach to Non-Functional Requirements

However you choose to collect and document non-functional software quality requirements, you should continue to improve and tweak them throughout the development process just as you would with functional requirements, grooming your backlog and prioritizing based on ongoing feedback from stakeholders, users and developers.

Software Architecture for Developers

I have been enjoying an e-book called Software Architecture for Developers by Simon Brown that is well worth the price. I also just finished watching the author’s presentation at the 2014 GOTO Conference, a very thought-provoking talk.

Here are a few things that I like very much from the book and presentation.

  • “The code is the single point of truth. It is the embodiment of the architecture.”
  • TDD does not replace architecture. Do TDD inside a set of boundaries and frameworks provided by the architecture. (See Why Most Unit Testing is Waste by James O. Coplien.)
  • If the diagrams don't reflect the code, the diagrams are basically pointless. We're just deceiving ourselves.
  • Component testing is preferable to unit testing, but unit tests and mocks are still useful for testing against async systems.
  • Layered architecture can lead to a big ball of mud because too much functionality is exposed for public use by other layers.
  • Component organization is preferred over layered. Components have limited public interfaces (aka ports) and may have layers within them that are not publicly accessible, so they cannot be used improperly (see the sketch after this list).
  • If a system has a very large number of unit tests, we may have an out of control layered architecture.
  • “If your software system is hard to work with, change it. This is entirely within your hands.”
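
To illustrate the component organization point above, here is a minimal C# sketch of my own (the Orders names are hypothetical, not from the book): only the port is public, while the layers inside the component stay internal to the assembly.

namespace Orders
{
  // The component's port: the only publicly visible surface.
  public interface IOrderService
  {
    void PlaceOrder(int productId, int quantity);
  }

  // Internal layers cannot be referenced by other components,
  // which keeps improper coupling (and the big ball of mud) at bay.
  internal sealed class OrderService : IOrderService
  {
    private readonly OrderRepository _repository = new OrderRepository();

    public void PlaceOrder(int productId, int quantity)
    {
      _repository.Save(productId, quantity);
    }
  }

  internal sealed class OrderRepository
  {
    public void Save(int productId, int quantity)
    {
      // Persistence details stay hidden inside the component.
    }
  }
}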

C4: Context, Containers, Components, Classes

I especially like Brown’s use of what he calls C4. In essence, every software system can be broken down into a simple hierarchy:

  • Contexts (systems) made up of
    • Containers (web server, app server, database, browser, file system, etc.) which host
      • Components which expose one or more interfaces (sometimes referred to by others as ports) and contain
        • Classes which implement those interfaces and the layers behind them.

By creating architecture diagrams that follow this hierarchy, it is possible to create code that matches it. It is an architecture that developers can use directly. (Side note: I have often used a Visual Studio solution to lay out a project in similar terms directly in code. I have also gone down the layered road only to regret it later, ending up pulling those layers apart and encapsulating them in a component-like approach to ensure that the responsibilities and behaviors exposed to the outside world are used properly and that the system remains well ordered.)
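
As an example of that solution layout idea, here is a hypothetical structure (the OnlineStore names are invented for illustration) in which the solution, its projects, and their folders mirror the C4 hierarchy directly:

OnlineStore.sln (context: the system as a whole)
  OnlineStore.Web (container: the web application)
    Checkout (component: public port plus internal classes)
  OnlineStore.Services (container: the application server)
    Orders (component)
    Inventory (component)
  OnlineStore.Database (container: schema and deployment scripts)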

Here is my simplification of what the author already makes rather simple with respect to these constructs. I recommend you buy the book and get full details and example diagrams, but the ideas are what is most important.

  • Context Diagram - A context diagram answers three questions: what are we building, who is using it, and how does it fit into our IT environment and business? It is important to note at a high level how users will interact with the system.
  • Container Diagram - In the container diagram, you define the shape of the software, high-level technology decisions, how responsibilities are grouped and separated, how each container communicates with other containers, and where the code lives for each container.
  • Component Diagram - The component diagram answers what coarse-grained building blocks of functionality are required, what their responsibilities are, and how they will interact with other components (interfaces and communication mechanisms such as sync, async, batched, etc.).
  • Class Diagram – Brown does not explicitly discuss this level, as far as I’ve read. I assume this is because it is rather intuitive to most architects, and in part because design at the class level is often better left to the implementation team. From my own perspective, however, the architect should consider defining the public interfaces and primary classes in order to name and separate key responsibilities into distinct code structures that the implementation team can intuitively complete. Naming conventions that mark off what may be thought of as traditional layers, and that the implementation team will understand, are crucial here.

Some of what Brown proposes breaks with what others may consider traditional software architecture. I am excited about Brown’s challenging of the status quo, questioning our assumptions. I believe his ideas will help us narrow the model-code gap (see Just Enough Software Architecture by George Fairbanks—one I’ve just added to my Kindle collection for some fun future reading).

Measurable Non-Functional Quality Scenarios

Brown covers the importance of quality scenarios, but on this topic I prefer the more pedantic but measurable approach of Len Bass in Software Architecture in Practice. I believe his emphasis on defining metrics driven quality scenarios is well worth pursuing, especially to the point of implementing logging and monitoring systems that allow you to constantly measure non-functional quality and improve against those measures. Here’s the structure that Bass recommends:

  • Source – where does the input come from?
  • Stimulus – what is the input?
  • Artifact – what container and component are affected?
  • Environment – in what context does this occur?
  • Response – how and with what did the software respond?
  • Measure – how much time did it take, how many clicks, how many errors, were proper notifications sent?
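
One way to make such scenarios constantly measurable is to emit each of Bass’s six parts as structured log properties. Below is a minimal sketch of my own; it assumes the Serilog structured-logging library has already been configured, and every name and value in it is hypothetical.

using System;
using System.Diagnostics;
using Serilog;

public static class CheckoutScenario
{
  public static void MeasureCheckout(Action placeOrder)
  {
    var stopwatch = Stopwatch.StartNew();
    placeOrder();
    stopwatch.Stop();

    // Each of Bass's six parts becomes a queryable log property.
    Log.Information(
      "QualityScenario {Source} {Stimulus} {Artifact} {Environment} {Response} {MeasureMs}",
      "web user",                     // Source: where the input comes from
      "submit order",                 // Stimulus: the input
      "OrderService",                 // Artifact: the component affected
      "normal operation",             // Environment: the context
      "order confirmed",              // Response: how the software responded
      stopwatch.ElapsedMilliseconds); // Measure: how much time it took
  }
}

With entries like these flowing into log analysis, the percentile tolerances discussed earlier can be computed and reviewed continuously.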

Creating quality scenarios that developers understand and can incorporate into the software they are building is critical to the long-term success of the software. To paraphrase the quality gurus: if you can measure it, you can understand and improve it.

Most important, if your implementation team and your stakeholders understand your architecture diagrams and documents, you are more likely to succeed. And if your code does not mirror your diagrams, no amount of code review will tell you whether your architecture has been implemented. That, in my opinion, is Brown’s greatest point.

.NET Software Architect

Platform or technology stack specific modifiers on the title of Software Architect are common. Most software architects know that the platform, like a framework, is an interchangeable implementation choice and not really part of the software architecture (see my post entitled Practical Agile Software Architecture).

Why the distinction then? Two reasons:

  • Architect as Implementer - Many software architects are involved in guiding and contributing to platform-specific implementation, which requires specialization that is not specific to the architecture itself. I prefer this combination.
  • Platform Specific Language - The platform-specific software architect may be most comfortable, or even required, to produce artifacts that adopt the vernacular of that platform in order to make those artifacts more easily consumed by the implementation team.

The most common platform and implementation specific language elements in architecture artifacts replace the more generic terms of module, component, and software. Here’s an overly simple conversion table that I have found helpful for Java and .NET.

Architecture Generic           | Java                                                                      | .NET
Module                         | package with public classes defining the "port," sometimes an entire jar  | namespace with public classes defining the "port," sometimes a whole assembly
Component                      | sometimes a jar only, but usually a whole container or daemon             | sometimes an independent assembly, but usually a single process or web app
Software (physical allocation) | servlet engine for a container, or an independent daemon                  | IIS web app for one or more components, or an independent Windows service

Connectors are generally common across these stacks, with a few exceptions such as WCF and JAX. More often, connectors are literally specified as SOAP, REST, or even a custom TCP-based protocol. Message-based connectors are very often technology specific, indicating the specific message queue, but be careful to avoid limiting your architecture by specifying implementation choices. Those choices should be left, as much as possible, to the design and implementation team.

The majority of my implementation experience has been in the .NET stack, but software architecture should be the same across implementation stacks. Of course, some things are easier said than done in a specific platform and technology stack, and that may have an impact on your software architecture choices, but only minimally so.

Origins of My Inner Geek

In elementary school, I loved being an AV (audio-visual) library assistant and running the mimeograph machine. I knew all the tricks to getting that film strip projector to work. I was an expert overhead projector operator. And I could thread a 16mm projector faster than anyone.

I was the master of my domain. I was a geek before the pocket protector became the de facto standard geek identification badge.

Fast forward to a time when I had suppressed the geek within to become a lawyer. I even took an English undergrad degree. I was married when I received my Bachelor of Arts, so I’m not sure it counted. But they gave it to me anyway. Then having had a chance to work for a lawyer for a while, I realized I could never be a lawyer—I hated the work too much to study for the LSAT. And so I became a tech writer. What else.

And a few years later, while I furiously scribbled notes on my legal pad, the ancient primitive predecessor to the iPad, I overheard a software engineer say, “It’s not supposed to do that,” while looking at the screen of a computerized simulation going very wrong. At that moment, my mind darted back to my junior high and high school days of banging out BASIC on a Commodore PET, translating the Atari BASIC from Creative Computing magazine so that my friends and I could play Adventure.

You are in a deep dark cave. There is a lamp here. What do you want to do?

The microsecond burst of nostalgia passed, and I knew then that if I had written the code for that software, it would be doing exactly what I told that computer to do. It took a few years to make the transition, but I let the inner geek out and consumed every computer programming book I could get my hands on. Finally I landed my first professional programming job, and I have been doing that work for nearly fourteen years now.

And just today, in stand up, I overheard a team member say those immortal words, “It’s not supposed to do that.” My brain seized on the phrase and compelled me to write this post before I could sleep again.

Where did your geek come from?

Merge Algorithm for Multiple Sorted IEnumerable<T> Sources

This evening I was asked to write a merge algorithm to efficiently merge multiple iterator sources, yielding a merged iterator that would not require the algorithm to read all of the data into memory should the sources be very large. I had never written such an algorithm, nor can I recall seeing one, so I didn’t have a very good answer. Of course, that left a simmering thread of thought on the back burner of my brain.

After letting it rattle around a bit, and without resorting to old-fashioned Googling, I sat down and banged out the following code. It was fun to write and it works, but it took me much too long to write from scratch, about 90 minutes. It may be time to refresh and reload, perhaps by writing a series of posts that implement C# versions of selected algorithms found in a book I recently purchased but have since spent no time reading: Introduction to Algorithms, 3rd Edition.

Updated Code (9/6/2014)

The original code gets a big performance boost with this refactoring:

public static IEnumerable<T> SortedMerge<T>
  (params IEnumerable<T>[] sortedSources)
  where T : IComparable
{
  if (sortedSources == null)
    throw new ArgumentNullException("sortedSources");
  if (sortedSources.Length == 0)
    throw new ArgumentException("sortedSources must not be empty", "sortedSources");

  //1. fetch enumerators for each source
  var enums = (from n in sortedSources
         select n.GetEnumerator()).ToArray();

  //2. create index list indicating what MoveNext returned for each enumerator
  var enumHasValue = new List<bool>(enums.Length);
  // MoveNext on all and initialize enumHasValue
  for (int i = 0; i < enums.Length; i++)
  {
    enumHasValue.Add(enums[i].MoveNext());
  }

  // if all false, nothing to iterate over
  if (enumHasValue.All(x => !x)) yield break;

  //3. loop through
  while (true)
  {
    //find index with lowest value
    var lowIdx = -1;
    T lowVal = default(T);
    for (int i = 0; i < enums.Length; i++)
    {
      if (enumHasValue[i])
      {
        // must get first before doing any compares
        if (lowIdx < 0
            || null == enums[i].Current // null sorts lowest
            || enums[i].Current.CompareTo(lowVal) < 0)
        {
          lowIdx = i;
          lowVal = enums[i].Current;
        }
      }
    }

    //if none found, we're done
    if (lowIdx < 0) break;

    //get next value for enumerator chosen
    enumHasValue[lowIdx] = enums[lowIdx].MoveNext();

    //yield up the lowest value
    yield return lowVal;
  }
}
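
For completeness, calling the refactored method looks like this (sample data only, assuming the method is in scope):

int[] a = { 1, 3, 6, 102, 105, 230 };
int[] b = { 101, 103, 112, 155, 231 };

foreach (var val in SortedMerge(a, b))
{
  Console.WriteLine(val); // prints the two arrays merged in sorted order
}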

Here’s the original code. I hope you enjoy it. And if you see ways to improve on it, please let me know.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Merger
{
  class Program
  {
    static void Main(string[] args)
    {
      int[] a = { 1, 3, 6, 102, 105, 230 };
      int[] b = { 101, 103, 112, 155, 231 };

      var mm = new MergeMania();

      foreach(var val in mm.Merge<int>(a, b))
      {
        Console.WriteLine(val);
      }
      Console.ReadLine();
    }
  }

  public class MergeMania
  {
    public IEnumerable<T> Merge<T>(params IEnumerable<T>[] sortedSources) 
      where T : IComparable
    {
      if (sortedSources == null)
        throw new ArgumentNullException("sortedSources");
      if (sortedSources.Length == 0)
        throw new ArgumentException("sortedSources must not be empty", "sortedSources");
      
      //1. fetch enumerators for each source
      var enums = (from n in sortedSources 
             select n.GetEnumerator()).ToArray();
      
      //2. fetch enumerators that have at least one value
      var enumsWithValues = (from n in enums 
                   where n.MoveNext() 
                   select n).ToArray();
      if (enumsWithValues.Length == 0) yield break; //nothing to iterate over
       
      //3. sort by current value in List<IEnumerator<T>>
      var enumsByCurrent = (from n in enumsWithValues 
                  orderby n.Current 
                  select n).ToList();
      //4. loop through
      while (true)
      {
        //yield up the lowest value
        yield return enumsByCurrent[0].Current;

        //move the pointer on the enumerator with that lowest value
        if (!enumsByCurrent[0].MoveNext())
        {
          //remove the first item in the list
          enumsByCurrent.RemoveAt(0);

          //check for empty
          if (enumsByCurrent.Count == 0) break; //we're done
        }
        enumsByCurrent = enumsByCurrent.OrderBy(x => x.Current).ToList();
      }
    }
  }
}

And if this answers any questions for you, please do drop me a line to let me know.

TechEd: ASP.NET and C# Bonanza

Sadly, I was unable to attend, but watching a few videos over the weekend was enough to get my juices flowing for what is coming down the pike for .NET, ASP.NET, and C#. Here are a few notes and links:

Amazingly cool stuff coming up for .NET:

Next Generation of .NET for Building Applications

  • .NET Native: (22:40) will eventually be available beyond Win 8 RT apps
  • SIMD: (36:00) up to 8x performance improvement in parallel operations on multiple data streams

Future of Visual Basic and C#

  • Roslyn
    • super fast in-memory next generation .NET compiler
    • inline renaming - wow!
    • open source - on codeplex
  • C#
    • primary ctor: public class Point(int h, int w)
    • getter only auto property: public int Height { get; } = h;
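
Putting those two C# notes together, the previewed syntax looked roughly like this (note that primary constructors in this form were later cut before C# 6 shipped):

public class Point(int h, int w)    // primary constructor parameters
{
  public int Height { get; } = h;   // getter-only auto-property, initialized inline
  public int Width { get; } = w;
}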

Even more amazingly cool stuff coming for ASP.NET:

The Future of .NET on the Server (intro)

The Future of .NET on the Server (deep dive)

ASP.NET vNext

Oh so much to learn and play with. It’s a good time to be alive!

Aspect Oriented Programming Not Worth It

I have said before that I like Uncle Bob's way with words. This includes his Clean Code Discussion:

"When aspects first came out, I was intrigued by the idea. But the more I learned, the less I liked it. The problem is that the pointcuts are tightly coupled to the naming and structure of the code. Simple name changes can break the pointcuts.  This, of course, leads to rigidity and fragility. So, nowadays, I don't pay a lot of attention to AOP. 

"For my money, the OO techniques that I demonstrated for logging can be pretty easily used for many other cross-cutting concerns. So the tight name-coupling of AOP isn't really necessary."

I too once thought that AOP was a great idea. A carefully crafted AOP solution can even overcome some of the coupling issues that Uncle Bob mentions. But even with nice, clean injection techniques (pick your poison), the dependency entanglements eventually leave you with a hodge-podge of rigidity and fragility. Eventually you find yourself writing code to serve your AOP solution rather than your use case.
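
Uncle Bob’s logging code is not reproduced here, but the OO technique he alludes to is essentially a plain decorator. A minimal sketch of my own, with hypothetical names:

using System;

public interface IOrderProcessor
{
  void Process(int orderId);
}

// The cross-cutting concern lives in a decorator that wraps the real
// implementation: no pointcuts, no name coupling, just an interface.
public sealed class LoggingOrderProcessor : IOrderProcessor
{
  private readonly IOrderProcessor _inner;

  public LoggingOrderProcessor(IOrderProcessor inner)
  {
    _inner = inner;
  }

  public void Process(int orderId)
  {
    Console.WriteLine("Processing order {0}", orderId);
    _inner.Process(orderId);
    Console.WriteLine("Processed order {0}", orderId);
  }
}

Rename Process or move it to another class and the decorator fails to compile, which is exactly the early feedback that fragile pointcut expressions do not give you.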

My money is with Bob.

.NET Goes Native

I am very excited about the .NET Native project. And while this is just for WinRT app store apps right now, I was encouraged by the commentary in the Channel 9 presentation. I am eager to learn more about it.

For a long time, the speed of desktop processors has spoiled us, allowing us to depend on heavier and heavier JIT workloads and dynamic use of multiple core libraries. With the challenges of mobile coming along, we now get the benefit of heavy-duty platform-specific optimization in .NET Native.

Because I often write server applications in C# that could really benefit from that extra platform optimization, I am hoping and looking forward to being able to utilize .NET Native in my work as well.