Third Party UI Libraries Can Be Overkill

The question of the value of third party UI libraries has often come up in my career. At one time I was an eager advocate of using a comprehensive UI library. Now my attitude is a bit more pragmatic.

Third party UI libraries are sometimes a boon and sometimes a bane, even for well-seasoned UI frameworks. UI libraries for web applications tend to be particularly troublesome: they so often have a clumsy, heavyweight configuration and API that ends up requiring more time to use than it saves, while the vendors race to win the feature count contest.

For some years, the trend seemed to lean toward the use of UI libraries because building a UI by hand was hard, regardless of whether you were working with spaghetti script or with Win32 and MFC. Now Microsoft's toolset supports a much richer set of UI paradigms, making the delivery of a great UI much easier right out of the box. The trend seems to be gravitating away from reliance on third party controls and toward a simpler user interface style built on a more elegant framework, one relying more on convention than on feature-bloated configuration and a black box API.

One area where UI libraries seem to continue to do well, in my view of course, is with Silverlight and WPF (XAML) applications. These UI frameworks and their supporting third party UI control sets seem to have been made for each other. Both rely heavily on the powerful UI declarative language XAML. The rich user interfaces that can be spun up using Silverlight or WPF can be intoxicating.
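To illustrate that declarative style, here is a minimal XAML sketch (my own, not from any shipped control set; the bound names CustomerName and SaveCommand are invented view model members assumed for the example). Layout and data binding are expressed entirely in markup, with no code-behind:

```xml
<!-- Minimal WPF/Silverlight sketch: a bound text block and a command-bound
     button. CustomerName and SaveCommand are hypothetical members of a
     view model assigned to the DataContext. -->
<StackPanel xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
    <TextBlock Text="{Binding CustomerName}" FontSize="16" />
    <Button Content="Save" Command="{Binding SaveCommand}" />
</StackPanel>
```

When the DataContext supplies those members, the UI updates through binding rather than imperative code, which is a large part of what makes XAML and these UI frameworks feel made for each other.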

Ultimately, we need to use UI libraries judiciously, being careful not to reach for them where the requirements do not warrant the effort needed to learn and configure them. One could argue that the limited way in which many third party controls are used across a variety of applications provides us with a bountiful set of examples of the gratuitous overuse of these libraries. Sometimes all you need is a good closet, and the kitchen sink that comes with that room-builder toolkit is just plain overkill.

iPad is Ender’s Desk or Its Precursor

I love Orson Scott Card’s books. My first and still all time favorite is Ender’s Game.

If you’re not a fan, go read the book anyway. If you are, you already know what I’m talking about. The iPad is Ender’s desk, or perhaps its precursor.

Here’s a composite I just made while thinking about this concept. One image is from Apple, and the other is from a very innovative early childhood education software company called Imagine Learning.

[image: endersdesk composite]

I’ve never wanted to own an Apple produce product (slip of the finger, but an apple is a fruit after all) or develop software for it until now.

When I think of the amazing ways that educational content and interactivity could be delivered to students using this device, I get very excited. And I imagine a very smart kid hacking the device and sending rude messages to another student. That makes me more excited for the future of technology than anything I’ve seen lately.

Silverlight 4 Enterprise Stack: WCF RIA Services, MVVM and MEF

The .NET technology map keeps expanding, but I have my eye on one particular continent, a little piece of the .NET 4 world I’m calling the Silverlight 4 Enterprise Stack. Focus seems to be coalescing on this important piece of technology real estate.

The Patterns & Practices team blog has a great post looking into the enterprise crystal ball. Be sure to check out their Prism (Composite Application Guidance) on CodePlex.

The primary pieces of the Silverlight 4 Enterprise Stack are:

  • WCF RIA Services
  • MVVM (the Model-View-ViewModel pattern)
  • MEF (the Managed Extensibility Framework)

Other supporting players in the stack are:

With the imminent release of these technologies on April 12, we in the enterprise software rank and file have much to look forward to in terms of new toys to play with while delivering some amazing new user experiences in the enterprise world.

If you want to keep tabs on the Silverlight 4 Enterprise Stack, be sure to set your RSS reader to tap into these key bloggers:

For us enterprise software geeks, the year 2010 is shaping up to be a very good year!

User Experience Principles for Developers

Let’s face it. Most of us code slingers have no innate understanding of what makes a great user experience (UX) design. We spend so many hours in front of multiple user interfaces that navigating and using software becomes virtually intuitive, instinctual. But we are not “normal” users.

I’ve just finished reading, and mean to re-read, Whitney Hess’s “25 Guiding Principles for UX Designers” on Inside Tech. It’s a great piece with some very good references. I recommend you read the entire article multiple times. Here are some of my favorites (my numbering differs from Whitney’s):

1. Understand the underlying problem before attempting to solve it.
How often do we just begin coding and throwing a user interface together without having spent much, if any, time with the target audience to understand their needs, challenges, skill level, and current approach to solving the problem?

2. Make things simple.
How often do we try to pack every feature we can into a single view, thinking it makes the software powerful and reduces round trips to the server or some other resource?

3. Provide context.
How often do we place controls or information in a user interface that we consider convenient but that are, in reality, out of context and will confuse a user who does not understand what is going on “under the covers,” so to speak?

4. Be consistent.
How often do we design one page in a certain way only to bounce the user to another page in a web application that uses a different design paradigm, making the user spend some time just to figure out where the OK or Cancel button or link is now?

You can read about these and the other 21 principles in Whitney’s article (see link above). I also recommend the following resources, a selection from those Whitney recommends:

UX Principles and Design Guides

If you have some favorite principles of your own, please share them here.

Why The Software (or Any) Team Fails

I’ve been giving the question of why software teams fail some considerable thought in the past few days. Reading Brad Abrams’ post Don’t Waste Keystrokes and his statement that “By far the biggest problem I see on teams of > 1 is communication” led me to compile the following list. Here are some of the reasons, in addition to the most important one that Brad pointed out already, that a software team, or any team really, fails:

  1. The team does not practice regularly, no coordinated learning.
  2. The coach does not know the strengths and weaknesses of the players.
  3. The players do not know their role, their zone or the plays.
  4. The players do not get along; they are not one in purpose.
  5. The players do not trust or respect the coaching staff.
  6. The coaching staff puts players with no skill in the starting lineup for unknown reasons, causing resentment among the other players and guaranteeing a loss at game time.
  7. The players do not believe the coaching staff understand the game.
  8. The players are more focused on individual agendas; they do not work together to win.
  9. The rules of the game are not well understood and change during the game.
  10. The coaching staff and team captains disagree on how the game should be played.
  11. The coaching staff recruits new players looking for players who will agree with their ideas rather than seeking out players who can actually play.
  12. The players fail to improve their skills on their own time.
  13. The players lack motivation, fail to come to practice, and give only a half-hearted effort in the game.
  14. The team captain spends more time arguing with the coaching staff than he does leading and motivating the players.
  15. Winning becomes secondary to just finishing the season.

If you can think of any others, please let me know. And if you have ideas for how to fix these situations, I would love to hear from you as well.

Behavior Driven Design Documentation in Software Development

Documentation neglect is a chronic problem in most enterprise application development efforts. This problem is unrelated to the selected development methodology, but some assume that agile methods eliminate documentation despite the Agile Manifesto's declaration that we value "working software over comprehensive documentation." The manifesto does not claim that documentation is not needed.

In my own work, I've found that comprehensive documentation authored by well-meaning individuals (or worse, a committee) who possess very little technical expertise is often written in a confusing, lengthy narrative style that requires a linguistic anthropologist to decipher. Too often, development teams spend days pulling the real features, requirements, and expected behavior out of such documents, generally ending up with many unanswered questions whose answers simply cannot be found in the comprehensive documentation.

Recently we have found greater value in documentation with a relatively strict structure that guides authors to produce specific, detailed, and comprehensible documentation. The format is simple and adapted from the increasingly popular behavior driven design (BDD) approach.

The goal of this approach is to define desired behavior and clear acceptance criteria for specific requirements without mashing them all together in a narrative style that requires time consuming decomposition. Here's the format:

+++++++++
P[n]: Process Title
Two line description of a process (akin to epic user story)

F[n]: Function Title (a feature or step in the workflow)
As a role
I want action/process description
so that benefit/value of feature.

AC[n]: Acceptance Criteria or Scenario Title (p1)
Given some initial context (the givens),
when an event occurs,
then ensure some outcomes.

+++++++++
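To make the template concrete, here is a filled-in instance. The password-reset feature is hypothetical, invented purely for illustration:

+++++++++
P1: Account Self-Service
Allow users to manage their own credentials
without contacting the help desk.

F1: Password Reset
As a registered user
I want to reset my forgotten password via an emailed link
so that I can regain access to my account without calling support.

AC1: Expired Reset Link (p1)
Given a reset link issued more than 24 hours ago,
when the user follows the link,
then ensure the reset is refused and a fresh link is offered.

+++++++++

Each unit stays small and testable, and the acceptance criteria translate almost directly into automated test cases.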

It's really no different from BDD, with one subtle exception that I've found can make all the difference in the world: the terms used (process, function, and acceptance criteria) are far less scary to the business than terms like epic, user story, and scenario.

Of course, this won't work for every situation, but where you can break documentation down into logical units that focus on one feature at a time, you can eliminate the time-consuming deconstruction of comprehensive documentation that can only be processed by the most advanced computer in the world: the human brain.

Computer Models – Magical Scripture of the Climate Change Disciples

I’ve not used this blog for politics in the past, but I’m going to start making some exceptions where I think there is a technology tie-in. George Will’s piece on the Copenhagen summit was very interesting. I recommend that you read it. I am not a climatologist but I am a skeptic of all science based on computer models, especially models that cannot accurately predict the present observable state of a system.

The recently hacked emails of the Climate Research Unit (CRU) in Britain reveal a pattern of behavior that would be more consistent with the corrupt leaders of a cult whose proclaimed tomes of divinely inspired scripture cannot withstand scrutiny should certain facts be revealed. In the minds of the true believing disciples or the corrupt leadership of the cult, the ends justify the means. And truth is not a consideration.  

The software models and data upon which all climate change disciples rely are written by flawed human beings. Whether a software engineer expertly writes the software to implement his best understanding of the requirements of the scientist or the real scientist writes the software with a less than perfect knowledge of software engineering and design, the outcome is the same. (Hey, not even a PhD can know everything.) All software is flawed. It is the nature of our art.

Can computer models be a good thing? Sure. Especially when they work. Can they be a bad thing? Well, consider that a climate model must model the entire earth and its atmosphere. That’s a few million data points (colossal understatement). These models must have historical data. And there’s the rub. It’s not there. Not really. So we extrapolate the data using tree cores and ice cores and, wait for it, more computer models.

Any software engineer knows that such a model will be inherently complex, that complex systems are inherently flawed, and that very complex systems are inherently very flawed. No software engineer would declare faith in such a model or its output; more importantly, none would bet a week’s salary on its accuracy without full testing and confirmation against known, observable data and repeatable tests. Yet we are preparing to bet trillions of taxpayer dollars on these flawed models. “Hey, Sam, keep your hands out of my pocket!”

The problem we have is that scientists have put their faith in software models and data produced by software models as the magical source of all truth and knowledge. They are either the corrupt leaders of a cult (see the CRU emails) or its blind disciples insisting on the truth of their models even when observable facts contradict and invalidate those assertions.

The climate change models and extrapolated data have become scripture. The scientists who preach daily from the pages of that holy writ are held in prophetic awe and reverence by the ignorant masses of well intentioned politicians and citizens of the earth. Except for software engineers and the “deniers” of course.

So back to the question. Can computer models be a bad thing? Yes, when the ignorant or the corrupt use them as an unquestionable, magical affirmation of their own political agenda or emotional response to the idea that man is killing the planet and that unless we do something about it, we will all die. Well nobody wants that.

Oddly, we ridicule and persecute religious nuts who do the same thing. I guess they just weren’t smart enough to get a PhD and call themselves scientists rather than prophets. Stupid nuts.

Silverlight WCF RIA Services Beta Released

The Silverlight WCF RIA Services beta was released recently, replacing the preview bits I’ve been playing with. You can pick up the new beta here. I'm still using VS 2008 SP1, but I am on Windows 7, so I downloaded directly from here.

WARNING! If you’re not on Windows 7 or Server 2008 R2, you’ll need the hotfix mentioned. If you're still on XP or Vista, let this be the final reason to upgrade and do it. You won't regret it.

I first learned about the beta release from Brad Abrams’ blog post. Some coolness he mentions:

  • DataSources window. Drag and drop "tables" exposed by Domain Service onto the form.
  • Simplified error handling on client and server.
  • Data model inheritance flows through the Domain Service.
  • Presentation model hides DAL model with CRUD support.
  • Optimized binary channel by default.
  • Integrated into Silverlight 4 installer.
  • Handling of compositional hierarchy in data models.
  • GAC and bin deployment, with bin taking precedence.
  • Globalization support, user state and persisted sign in with updated Business Application Template.
  • Go-Live bits for .NET 3.5 SP1 and Silverlight 3.

Another item of note is the name change with the WCF moniker. RIA Services is now part of the WCF services family along with ADO.NET Data Services. This seems like a convergence of technologies in an enterprise ready set of tools and services that will bring Silverlight into the forefront of enterprise application development and delivery.

I'll be working on pulling these new bits together and getting my "Aventure" blog sample back on track with the new Azure SDK bits and these new WCF RIA Services bits. Given the plethora of changes, I'll likely start over with fresh new project templates and pull in what little customized code might be needed from my previous blog post on the topic.