System.Data.SQLite Write Comparison of SQLite 3 vs Berkeley DB Engines

In my last post, I covered getting the Berkeley DB SQL API running under the covers of the System.Data.SQLite library for .NET. Another weekend has come and I’ve had some time to run some tests. The whitepaper Oracle Berkeley DB SQL API vs. SQLite API indicates that the Berkeley engine should have roughly the same read speed as SQLite but better write speed, thanks to its page-level locking versus SQLite’s file-level locking.

Berkeley Read Bug
Originally, my code wrote a given number of rows in a single transaction on one or more threads and then read all rows back on one or more threads. I had to give up on the read portion, though, because the Berkeley implementation of System.Data.SQLite threw a "malloc() failed out of memory" exception whenever I ran the read test with more than one thread. You can read the code below and decide for yourself whether I made a mistake, but the same test ran fine with the SQLite assembly from the sqlite.org site.

The Tests
There were four tests, each inserting similar but different data into a single table with a column for each of the main data types that might be used in a typical application. The first two tests were single threaded, the first inserting 6,000 rows in a single transaction and the second 30,000 rows. The third and fourth tests were multithreaded: 6 threads inserting 1,000 rows each and then 10 threads inserting 500 rows each.

Here’s the SQL for the table:

CREATE TABLE [TT] 
(
  [src] TEXT, 
  [td] DATETIME, 
  [tn] NUMERIC, 
  [ti] INTEGER, 
  [tr] REAL, 
  [tb] BLOB
);

The Results
The results do not show the vast difference in insert speed for Berkeley that I might have expected. The tests were run on an AMD 6-core machine with 8GB of RAM, writing to an SSD drive, which may or may not affect the results as well.

[Results chart: SQLite vs. Berkeley DB insert performance]

You may choose to interpret the results differently, but I don’t see enough difference to make me abandon the more stable SQLite 3 engine at this time.

The Code
I invite you to review the code. If I’ve missed something, please let me know. It’s pretty straightforward. With each test, I wipe out the database environment entirely, starting with a fresh database file. I use the System.Threading.Tasks library.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SQLite;
using System.IO;
using System.Threading.Tasks;

class Program
{
  static void Main(string[] args)
  {
    Console.WriteLine("SQLite Tests");
    var tester = new Tester();
    tester.DoTest("1 thread with 6000 rows", 1, 6000);
    tester.DoTest("6 threads with 1000 rows each", 6, 1000);

    tester.DoTest("1 thread with 30000 rows", 1, 30000);
    tester.DoTest("10 threads with 500 rows each", 10, 500);

    Console.WriteLine("tests complete");
    Console.ReadLine();
  }
}

class Tester
{
  public void DoTest(string testName, int threadCount, int rowCount)
  {
    DataMan.Create();
    Console.WriteLine("");
    Console.WriteLine(testName + " started:");

    List<Task> insertTasks = new List<Task>();
    DateTime beginInsert = DateTime.Now;
    for (int i = 0; i < threadCount; i++)
    {
      int threadId = i; // copy the loop variable so each task captures its own value
      insertTasks.Add(Task.Factory.StartNew(() => InsertTest(threadId, rowCount), TaskCreationOptions.None));
    }

    Task.WaitAll(insertTasks.ToArray());
    DateTime endInsert = DateTime.Now;
    TimeSpan tsInsert = endInsert - beginInsert;

    //List<Task> readTasks = new List<Task>();
    //DateTime beginRead = DateTime.Now;
    //for (int i = 0; i < threadCount; i++)
    //{
    //   readTasks.Add(Task.Factory.StartNew(() => ReadTest(i), TaskCreationOptions.None));
    //}

    //Task.WaitAll(readTasks.ToArray());
    //DateTime endRead = DateTime.Now;
    //TimeSpan tsRead = endRead - beginRead;

    double rowsPerSecondInsert = (threadCount * rowCount) / tsInsert.TotalSeconds;
    //double rowsPerSecondRead = (threadCount * rowCount) / tsRead.TotalSeconds;
    Console.WriteLine("{0} threads each insert {1} rows in {0} ms", threadCount, rowCount, tsInsert.TotalMilliseconds);
    //Console.WriteLine("{0} threads each read all rows in {1} ms", threadCount, tsRead.TotalMilliseconds);
    Console.WriteLine("{0} rows inserted per second", rowsPerSecondInsert);
    //Console.WriteLine("{0} rows read per second", rowsPerSecondRead);
  }

  void InsertTest(int threadId, int rowCount)
  {
    DateTime begin = DateTime.Now;
    DataMan.InsertRows(rowCount);
    DateTime end = DateTime.Now;
    TimeSpan ts = end - begin;
    //Console.WriteLine("{0} lines inserted in {1} ms on thread {2}", rowCount, ts.TotalMilliseconds, threadId);
  }

  void ReadTest(int threadId)
  {
    DateTime begin = DateTime.Now;
    int count = DataMan.ReadAllRows();
    DateTime end = DateTime.Now;
    TimeSpan ts = end - begin;
    //Console.WriteLine("{0} rows read in {1} ms on thread {2}", count, ts.TotalMilliseconds, threadId);
  }
}

internal static class DataMan
{
  private const string DbFileName = @"C:\temp\bdbvsqlite.db";
  private const string DbJournalDir = @"C:\temp\bdbvsqlite.db-journal";

  public static void Create()
  {
    if (File.Exists(DbFileName)) File.Delete(DbFileName);
    if (Directory.Exists(DbJournalDir)) Directory.Delete(DbJournalDir, true);

    //Console.WriteLine(SQLiteConnection.SQLiteVersion);
    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandText = "DROP TABLE IF EXISTS [TT]; CREATE TABLE [TT] ([src] TEXT, [td] DATETIME, [tn] NUMERIC, [ti] INTEGER, [tr] REAL, [tb] BLOB);";
        cmd.CommandType = System.Data.CommandType.Text;
        int result = cmd.ExecuteNonQuery();
      }
      conn.Close();
    }
  }

  public static void InsertRows(int rowCount)
  {
    SQLiteParameter srParam = new SQLiteParameter();
    SQLiteParameter tdParam = new SQLiteParameter();
    SQLiteParameter tnParam = new SQLiteParameter();
    SQLiteParameter tiParam = new SQLiteParameter();
    SQLiteParameter trParam = new SQLiteParameter();
    SQLiteParameter tbParam = new SQLiteParameter();
    Random rnd = new Random(DateTime.Now.Millisecond);

    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteTransaction trans = conn.BeginTransaction())
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = "INSERT INTO [TT] ([src],[td],[tn],[ti],[tr],[tb]) VALUES (?,?,?,?,?,?);";
        cmd.Parameters.Add(srParam);
        cmd.Parameters.Add(tdParam);
        cmd.Parameters.Add(tnParam);
        cmd.Parameters.Add(tiParam);
        cmd.Parameters.Add(trParam);
        cmd.Parameters.Add(tbParam);

        int count = 0;
        while (count < rowCount)
        {
          var data = MakeRow(rnd);
          srParam.Value = data.src;
          tdParam.Value = data.td;
          tnParam.Value = data.tn;
          tiParam.Value = data.ti;
          trParam.Value = data.tr;
          tbParam.Value = data.tb;
          int result = cmd.ExecuteNonQuery();
          count++;
        }
        trans.Commit();
      }
      conn.Close();
    }
  }

  public static int ReadAllRows()
  {
    List<DRow> list = new List<DRow>();
    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandText = "SELECT [rowid],[src],[td],[tn],[ti],[tr],[tb] FROM [TT];";
        using (SQLiteDataReader reader = cmd.ExecuteReader())
        {
          while (reader.Read())
          {
            var row = new DRow
            {
              rowid = reader.GetInt64(0),
              src = reader.GetString(1),
              td = reader.GetDateTime(2),
              tn = reader.GetDecimal(3),
              ti = reader.GetInt64(4),
              tr = reader.GetDouble(5)
              //tb = (byte[])reader.GetValue(6)
            };
            list.Add(row);
          }
          reader.Close();
        }
      }
      conn.Close();
    }
    return list.Count;
  }

  private static DRow MakeRow(Random rnd)
  {
    int start = rnd.Next(0, 250);
    byte[] b = new byte[512];
    rnd.NextBytes(b);
    return new DRow
    {
      src = spear.Substring(start, 250),
      td = DateTime.Now,
      tn = rnd.Next(2999499,987654321),
      ti = rnd.Next(3939329,982537553),
      tr = rnd.NextDouble(),
      tb = b
    };
  }

  private const string spear = @"FROM fairest creatures we desire increase,That thereby beauty's rose might never die,But as the riper should by time decease,His tender heir might bear his memory:But thou, contracted to thine own bright eyes,Feed'st thy light'st flame with self-substantial fuel,Making a famine where abundance lies,Thyself thy foe, to thy sweet self too cruel.Thou that art now the world's fresh ornamentAnd only herald to the gaudy spring,Within thine own bud buriest thy contentAnd, tender churl, makest waste in niggarding.Pity the world, or else this glutton be,To eat the world's due, by the grave and thee.";
}

internal class DRow
{
  public long rowid { get; set; }
  public string src { get; set; }
  public DateTime td { get; set; }
  public decimal tn { get; set; }
  public long ti { get; set; }
  public double tr { get; set; }
  public byte[] tb { get; set; }
}

How to Get Berkeley DB SQL API into the .NET System.Data.SQLite Provider

My last post covered getting Berkeley DB up and running with .NET. Now it’s time to take it one step further and build the open source System.Data.SQLite ADO.NET library, replacing the SQLite 3 engine with Oracle’s version of the SQLite.Interop C library that gets embedded into the .NET System.Data.SQLite assembly.

In other words, when you complete the steps below, you’ll have a System.Data.SQLite library that can, supposedly, drop into .NET projects that currently use the ADO.NET library found on the sqlite.org site. The real difference is that instead of SQLite’s single-threaded writes (file-level locking for writes, to be more precise), you will be using the latest Berkeley DB storage engine, which supports page-level locking and so allows multiple concurrent writers, assuming the writers are writing to different pages.

For more information on Oracle’s implementation of the SQLite API which they call the Berkeley DB SQL API and to learn where they differ, you should read the whitepaper: Oracle Berkeley DB SQL API vs. SQLite API.

  1. Download the Berkeley DB dbsql-adodotnet-5.2.28.zip source.
  2. Unblock the zip file (right-click it, select Properties, and click the Unblock button).
  3. Extract the contents to an empty directory of your choice.
  4. Open the SQLite.NET.2010.sln solution file.
  5. Choose Release and then build.
  6. Optionally, show all files in the System.Data.SQLite.2010 project, open the Assembly.cs file, and change the AssemblyProduct string to something like “System.Data.SQLite-BDB” so that it shows up in the file’s Properties Details tab (see the snippet after this list). This is the only way you will know that this assembly has the Berkeley DB interop engine built into it rather than the SQLite engine.
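
For step 6, the change is a single assembly attribute. Here’s a minimal sketch of what the edited line looks like; the attribute shown is the standard AssemblyProduct attribute, and the rest of the file stays as the project generated it:

using System.Reflection;

// Makes the Berkeley-backed build identifiable in the DLL's Properties > Details tab.
[assembly: AssemblyProduct("System.Data.SQLite-BDB")]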

If you’re curious, do a diff between the SQLite.Interop project files in the Oracle version and the original sqlite.org version. Clearly very different animals. Now the only thing that is left is to write up a nice little test in C# to compare the two libraries. The subject of a future post.

UPDATE #1 (9/5/2011): Tests today show that this “new” System.Data.SQLite library DOES NOT create a Berkeley DB database. At least as far as I can tell. So while there is a libdb_sql50.dll in the main Berkeley DB build that implements the sqlite3.dll API, there is no way that I can find so far to use that library from C#. More experimentation to come.

UPDATE #2 (9/5/2011): Modify UnsafeNativeMethods.cs, changing line 34 to

private const string SQLITE_DLL = "libdb_sql52.dll";

and then change the conditional compilation symbols of the solution’s projects to “SQLITE_STANDARD” and build. This prevents the embedded SQLite interop DLL from being used. Now, when you run your test app, you’ll need to copy the newly built DLLs (see previous post) into your test app’s bin directory (a quick verification sketch follows the list below):

libdb_sql52.dll
libdb_stl52.dll
libdb52.dll
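
With those native libraries sitting next to your executable, a quick way to confirm that the managed assembly actually loaded the Berkeley-backed engine is to print the engine version from a tiny console app. This is just a sketch under the setup above; the database file name is a placeholder:

using System;
using System.Data.SQLite;

class EngineCheck
{
  static void Main()
  {
    // With the SQLITE_STANDARD build, System.Data.SQLite P/Invokes the DLL named by
    // SQLITE_DLL (libdb_sql52.dll here) instead of the embedded SQLite interop DLL.
    Console.WriteLine("Engine version: " + SQLiteConnection.SQLiteVersion);

    using (var conn = new SQLiteConnection("Data Source=enginecheck.db;")) // placeholder file
    {
      conn.Open();
      Console.WriteLine("Database opened successfully.");
    }
  }
}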

I’ll post more on my tests later. So far, I’ve got the basics working but when I push the limits, I get some nasty crashes, so I’m skeptical of this scheme to use the SQLite API over the Berkeley DB engine.

How to Get Berkeley DB 5.2.28 and .NET Examples Working in Visual Studio 2010 SP1

Over the years, I’ve looked into using Berkeley DB for my own C# applications because it has a solid reputation as a very fast, reliable key/value database. Every time I’ve walked away disappointed with the .NET API wrapper—until now. My interest was renewed a few days ago when I noticed an item on StackOverflow comparing SQLite performance to Berkeley DB.  Since acquiring Berkeley DB, Oracle has been busy making it better, including adding better support for the .NET development community.

WARNING: read and understand the license terms and conditions for Berkeley DB before you choose to use it.

I began this most recent review by downloading the MSI Windows installer. DO NOT DO THIS! When I tried to compile the VS2010 solutions (see below), I got all kinds of errors. Next I tried downloading the 45MB zip file. This worked like a charm, except the .NET examples projects had a broken reference which was easily fixed. Follow these steps and you’ll be up and running your .NET app using Berkeley DB in no time.

  1. Download db-5.2.28.zip file.
  2. Right-click the zip file, select Properties, and click the "Unblock" button. This unblocks the files in the zip so they are usable on your local machine. DO NOT SKIP this step.
  3. Extract the contents of the zip file to a directory, e.g. C:\Code so that you will have a C:\Code\db-5.2.28 folder.
  4. In that db-5.2.28 folder, you will find a build_windows folder. This will be your home for the next steps. Be sure to follow them in order.
  5. Open Berkeley_DB_vs2010.sln
    1. build debug win32 and x64
    2. build release win32 and x64
  6. Open Berkeley_DB_examples_vs2010.sln
    1. build debug win32 and x64
    2. build release win32 and x64
  7. Open BDB_dotNet_vs2010.sln (allow default conversion)
    1. build debug win32 and x64
    2. build release win32 and x64
  8. Open BDB_dotNet_examples_vs2010.sln
    1. In each project, delete the missing reference to "db_dotnet" and add a reference to ..\AnyCPU\Release\libdb_dotnet52.dll
    2. build debug win32 and x64
    3. build release win32 and x64

That’s it. Now you can run the example projects and you have a .NET library and x86 (Win32) and x64 binary engines for that library to use. Enjoy!
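
If you want a quick sanity check that the managed wrapper and the native engines are wired up, here’s a minimal key/value sketch against the BerkeleyDB .NET API (class and member names are drawn from the bundled examples as I recall them, so treat this as illustrative rather than definitive):

using System;
using System.Text;
using BerkeleyDB; // reference ..\AnyCPU\Release\libdb_dotnet52.dll; keep libdb52.dll next to your exe

class BdbSmokeTest
{
  static void Main()
  {
    // Create (or open) a simple B-tree database file in the working directory.
    var cfg = new BTreeDatabaseConfig { Creation = CreatePolicy.IF_NEEDED };
    BTreeDatabase db = BTreeDatabase.Open("smoketest.db", cfg);

    // Write one key/value pair and read it back.
    var key = new DatabaseEntry(Encoding.UTF8.GetBytes("greeting"));
    var val = new DatabaseEntry(Encoding.UTF8.GetBytes("hello from Berkeley DB"));
    db.Put(key, val);

    var pair = db.Get(key);
    Console.WriteLine(Encoding.UTF8.GetString(pair.Value.Data));

    db.Close();
  }
}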

Why I Would Work for Eric Sink

As I sit here nursing a head plagued with a migraine, I read with great interest Eric Sink’s latest in my RSS reader about his experiences learning Scrum, a paper submitted for the proceedings of Agile 2011. Here’s my favorite part:

“I have come to think of our daily standup as being similar to a security guard at a bank. Most security guards stand around for their entire career without ever firing their weapon. It's probably a boring job. But the consistent presence of that security guard probably prevents some big problems from ever happening. Our daily standup is the same way.  Nothing exciting ever really happens. But we can confidently assume that many big problems have been avoided because we regularly take the time to get synced up.

“The culture of Scrum teams seems to be built on working together in shared spaces. In contrast, our company has always placed a high value on each person having a private office.

“We are aware that there are tradeoffs here. A private office gives each person a quiet place to work, but it also creates the opportunity for people to get isolated. So even as we provide private offices, we create ways to drag people out of them, including soda in the kitchen, lunch together on Wednesdays, a pool table, and a video game room.”

For this I would consider relocation to Illinois. And I always tell recruiters I’m not interested in relocating.

The Two Keys to Programming Productivity

Occasionally other developers with whom I work will comment on my productivity. For example a couple of weeks ago, after working hard one day and delivering an urgently needed service to the team the next morning, one developer said in standup, "You're an animal. You wrote that in one day and it would have taken me two weeks to do that." I’m a little embarrassed by such comments and have often thought about what makes me or any programmer productive, so today I enjoyed reading parts of Neal Ford's book The Productive Programmer and wanted to share some thoughts on programmer productivity.

I loved the foreword by David Bock of CodeSherpas. He writes, "The individual productivity of programmers varies widely in our industry. What most of us might be able to get done in a week, some are able to get done in a day. Why is that? The short answer concerns mastery of the tools developers have at their disposal. The long answer is about the real awareness of the tools' capabilities and mastery of the thought process for using them. The truth lies somewhere between a methodology and a philosophy, and that is what Neal captures in this book."

Ford suggests there are four productivity principles for programmers.

  1. Acceleration: speeding things up. Keyboard shortcuts, searching and navigation.
  2. Focus: eliminating distractions and clutter, both physical and mental.
  3. Automation: getting your computer to do more. Put your computer to work.
  4. Canonicality: doing things once in one place. Also known as DRY or Don't Repeat Yourself.

These are all good and important. For me the most important item in Ford's list is number two: focus. And I would add one more: conditioning. Focus and conditioning are the two most important keys to successfully improving your programming productivity.

Programming Productivity Key #1 – Focus

Start by eliminating mental and physical distractions. Remove the clutter from your mind and your desk, but most importantly, eliminate the distractions caused by environmental disruption. Find or create a quiet place where you can focus on the task at hand, where you can put your entire mental energy into what you are doing. Distractions are a HUGE productivity destroyer.

Cubicles are of the Devil. They may be great for a sales team or reporters who thrive on eavesdropping, but chit chat does not get code written. Cubicles foster incomplete and sporadic communication that becomes a crutch for broken requirements and shoddy analysis resulting in an unsearchable and unsellable body of knowledge persisted only in the fragile collective of a distributed and disconnected human neural network that often cannot survive the loss of one or two key nodes.

Ford suggests instituting "quiet time" where, for example, there are no meetings or email or talk during two two-hour periods each day. He claims to have tried this in one of the consulting offices where he worked and the organization found that they got more done in those four hours than was getting done in the entire day before implementing the "quiet time." This is not surprising to me at all. Ford writes, "It is tragic that office environments so hamper productivity that employees have to game the system to get work done."

While I have the luxury of working from a home office at the moment, I have worked in a number of cubicle environments in the past and probably will again in the future. The most common technique I see other developers using to create their own "quiet time" is a very expensive pair of noise-canceling headphones piping in whatever tunes or white noise that developer finds most conducive to focusing on the task at hand. Do whatever you have to do to achieve focus.
 

Programming Productivity Key #2 – Conditioning

To play at the top of your game, you have to condition. You have to practice. You have to study. You have to prepare your body but most importantly your mind to execute. And you have to condition your attitude. You have to be excited to write code. You have to get a thrill out of making it work and work well. You have to condition your mind to expect excellence and then work to achieve that.

In conditioning, there are mechanics you must learn. Spend time studying and using your IDE's keyboard shortcuts. Regularly study and practice using the base class libraries of your platform so you don't end up wasting time writing code that a solid community or well-heeled development team has already written and tested heavily for you.

More importantly, spend time reading about and learning new techniques and technologies from open source projects to gather at least a passing understanding of the problem they solve and how you might use them even if you choose not to do so. Pay attention to the patterns, the naming and organizational patterns, the logical patterns and the problem solving patterns that you find in these projects. Even if you do not use them, you will be storing up mental muscle memory that will serve you well when you need a way to solve a new problem in your own projects.

Most importantly, learn from your own work. Repeat your successes, taking patterns from your past and improving on them. Avoid your failures, being honest with yourself about what did not work in your last project and finding ways to avoid or even invert the mistakes of the past, turning them into strong successes.

Conditioning is not about solving a problem for a specific project in which you're working now. It's about preparing your mind to be at its most productive when faced with programming problems you've never seen before. It's about creating mental muscle memory that will kick into overdrive as you solve the problems you have already faced, the code spilling out of your brain through your fingers and into the keyboard.

Conclusion
No matter how fast or slow you are as a programmer, you can improve. If you improve your focus and put your body and mind through regular conditioning, you will improve. And as you improve, you'll get noticed. And as you get noticed, you'll get rewarded.

Upgrade to System.Data.SQLite 1.0.74.0 in Visual Studio 2010

I wrote about getting SQLite up and working in Visual Studio 2010 and your .NET 4.0 projects in June. Then I got distracted with work and didn’t have a chance to come back to my exploration of SQLite until today, so I decided to see whether there is a new version. Yes, the new System.Data.SQLite installer for 1.0.74.0 (SQLite version 3.7.7.1, which includes a few bug fixes) is ready to download and install.

Unfortunately, the designers are still not available, so if you want any designer, however buggy, in Visual Studio, you’re still stuck with my original path in my previous post. (See the readme once you finish the 1.0.74.0 install.)

After a variety of experiments with the new installer, I ended up taking the following steps that led to successfully running my budding test/prototype application. Follow these steps and you won’t go too wrong:

  1. Uninstall all SQLite-related items from Control Panel | Programs and Features
  2. Download and install my favorite free SQLite admin tool from http://osenxpsuite.net/?xp=3.
  3. Download the latest from http://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki. In my case, it was the sqlite-netFx40-setup-bundle-x64-2010-1.0.74.0.exe.
  4. The installer will have removed some GAC items and will NOT have put 1.0.74.0 into the GAC. At least on my machine, I could not get the assemblies into the GAC to save my life.
  5. Make sure your project references are pointed to the correct version and that you have Copy Local set to true.
  6. Change your config settings to remove useLegacyV2RuntimeActivationPolicy=true from the startup node like this:
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
      </startup>
  7. Change your config settings in DbProviderFactories to remove the strong-name reference, like this (see the smoke-test sketch after this list):
      <system.data>
        <DbProviderFactories>
          <remove invariant="System.Data.SQLite" />
          <add name="SQLite Data Provider"
              invariant="System.Data.SQLite"
              description=".Net Framework Data Provider for SQLite"
              type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" />
        </DbProviderFactories>
      </system.data>
  8. Test to taste.
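
As a final check that the factory registration and Copy Local references are doing what you expect, here’s a minimal sketch that resolves the provider through the DbProviderFactories entry from step 7 and reports where the assembly was loaded from (the database file name is just a placeholder):

using System;
using System.Data.Common;

class ProviderCheck
{
  static void Main()
  {
    // Resolves System.Data.SQLite through the DbProviderFactories entry in app.config.
    DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SQLite");
    using (DbConnection conn = factory.CreateConnection())
    {
      conn.ConnectionString = "Data Source=smoketest.db;"; // placeholder path
      conn.Open();
      Console.WriteLine("Loaded provider from: " + conn.GetType().Assembly.Location);
    }
  }
}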

Visual Studio LightSwitch 2011 May Bridge the Gap

Microsoft announced recently that Visual Studio LightSwitch 2011 will be released on July 26. I’ve been watching the development of this product with keen interest for the last year or so. I’m looking forward to evaluating it more in-depth soon.

Building common line-of-business (LOB) applications on today’s enterprise development stacks can be too complex and too costly for today’s tight budgets. For this reason alone there are “literally millions” (exaggeration license: 02389-872.159-034) of LOB applications being created by non-programmer Office users. These “applications” most often get pushed around in Excel via Exchange. Many live in a hastily created Access database and are shared clumsily and insecurely on a workgroup file server.

Still others are somewhat more sophisticated and are hosted on products such as QuickBase from Intuit. This latter category resolves many of the problems inherent in the emailed Excel spreadsheet and the Access fileshare such as security, reliability, common user interface, ease of use, etc. There are many more specialized web-based SaaS offerings that solve specific business problems. One notable vendor who led the way in this area is 37 signals.

But these solutions are not always enough to meet the needs of the business. There is a gap between these “entry-level” (ELOB) applications and what I will call primary line-of-business (PLOB) applications. The PLOB is a custom developed, sophisticated enterprise application with complex business rules, sometimes even more complex user interfaces, and far more complex data and integration requirements, upon which the business relies for its core service offering or mission critical systems.

Somewhere between ELOB and PLOB there has to be a middle ground. Let’s call it the mid-line-of-business (MLOB) application (FLA creator license: FOLK-LANG-MAKE-UPPR-WXYZ). There are plenty of “app generator” and scaffolding tools that claim to live in the MLOB space. I’ve never been too impressed with them. They always seem to lose their way in convention, diverging from business requirements too greatly to meet business needs.

So I am hopeful that LightSwitch will fill the MLOB gap. We’ll see. Time will tell. In any case, it should at least be fun to find out.

Winning Teams Have Winning Coaches

How many games will an NFL team win if the coaching staff and the owners remain in the locker room during practices and games, coming out only at half time and between quarters to ask the team members what they can do to win the game?

ESPN’s Worst NFL teams of all time include:

#1. 1976 Buccaneers (0-14)
It was their season debut. They were shut out five times and averaged fewer than nine points per game. Defensive lineman Pat Toomay said, “The coach stopped talking to us after the third game. During the week, he wanted nothing to do with us.”

… (You can read more on the page linked above. I’ve included snippets here of the two that I felt illustrated my point.) …

#9. 2001 Carolina Panthers (1-15)
”The energy has been sucked out of our organization and our fan base,” said owner Jerry Richardson, after firing head coach George Seifert at the end of the year.

Great players cannot win consistently without great coaches. The same is true of software development teams, or any other type of team for that matter. If the coaches remain in the locker room, the team, being paid professionals, will still play, and they might even score, and with ideal conditions, they might even deliver a win or two, but a losing season can be guaranteed when the coaches and owners can’t be bothered to be a part of the game.

On the other hand, we have great examples such as Vince Lombardi, who went to work for the Packers and turned a team that had gone 1-10-1 in 1958 into one of the greatest teams in the game, winning five NFL championships before he left nine seasons later. He was in the game. He was a motivational leader. He was a great coach.

Or how about Tom Landry and his goofy hat, who coached the Cowboys for 29 seasons and put together a streak of 20 consecutive winning seasons. He was a great coach.

This list of winning coaches is long. Only the losing players on a losing team remember their losing coaches beyond the losing season. Winning coaches are remembered and revered long after they’ve left the field.

And how many games do you think those winning coaches missed?

Who was your greatest coach? And why? I want to hear from you.