Bad Code is Platform Independent

I want a T-shirt with these words on it:

Bad Code is Platform Independent

All the political and religious debates about platform superiority come to an end when you run bad code. It amazes me how much bad code is out there (some of mine included). And yet we so often jump to blame the platform, runtime, operating system, tools, or some other outside element.

Where have all the good coders gone? More to the point, were there ever any?

Bad coders never die, they just pick a new platform.

Take a POST for REST without a Form in ASP.NET

Some time ago, and again today, I had occasion to write an ASP.NET page that had no form in the .ASPX file but would still accept and handle POSTed data. This was an effort to support a REST-like interface for non-ASP.NET developers. Here's the way it turned out.

The .ASPX page looks something like this:

<%@ Page Language="C#"
    AutoEventWireup="true"
    CodeBehind="extract.aspx.cs"
    Inherits="KeyExtractWeb.extract" %>

There is nothing else in the file. Now the code behind looks like this:

using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.IO;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

namespace KeyExtractWeb
{
    public partial class extract : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Read the raw request body; there is no server-side form to bind to.
            string alldata = string.Empty;
            using (StreamReader sr = new StreamReader(this.Request.InputStream))
            {
                alldata = sr.ReadToEnd();
            }

            // Parse the body into name/value pairs - assumes URL-encoded form data
            string[] pairs = alldata.Split('&');
            NameValueCollection form = new NameValueCollection(pairs.Length);
            foreach (string pair in pairs)
            {
                // Split on the first '=' only, so values that contain '=' survive intact
                string[] keyvalue = pair.Split(new char[] { '=' }, 2);
                if (keyvalue.Length == 2)
                {
                    form.Add(HttpUtility.UrlDecode(keyvalue[0]), HttpUtility.UrlDecode(keyvalue[1]));
                }
            }

            if (alldata.Length > 0 && this.Request.HttpMethod.ToUpper() == "POST")
            {
                if (form["text"] != null)
                {
                    //TODO - do something with the data here
                }
                else
                    Response.Write("*** 501 Invalid data ***");
            }
            else
                Response.Write("*** 599 GET method not supported. ***");

            Response.End();
        }
    }
}

Well, there you have it. There are probably better ways to do this, but I didn't find one.
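For comparison, here's a minimal sketch of what a .NET client calling this page might look like. The URL is made up for illustration; the only field the page above actually looks for is one named "text", so that's all we send.

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class ExtractClient
{
    static void Main()
    {
        // Hypothetical URL - point this at wherever extract.aspx is actually hosted.
        string url = "http://localhost/KeyExtractWeb/extract.aspx";

        using (WebClient client = new WebClient())
        {
            // Build the name/value pairs the page expects.
            NameValueCollection data = new NameValueCollection();
            data.Add("text", "some sample text to post");

            // UploadValues issues an HTTP POST with an
            // application/x-www-form-urlencoded body.
            byte[] response = client.UploadValues(url, "POST", data);
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}

Of course, the whole point of the exercise was to make the page callable from outside the .NET world, so the same POST can be made from anything that can speak HTTP.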

Captain Zilog - Redux

Nothing like a great laugh to perk up your day. I stumbled onto this Captain Zilog comic on computerhistory.org today. Take a moment and read it. What happened to those good old days? Did the doldrums of reality take over?

"Systems designer Nick Stacey works late into the night, unknown to him, a small eerie comet passes overhead..."

"I am known to men as...opportunity! I give you the key to man's destiny in a brave new world!"

"It is the beginning of a new freedom for man's imagination! It is a microprocessor! I bestow upon you all of the knowledge that goes with it, but use it wisely! Now, go!"

Corny? Yes. Prescient? Definitely. It was 1979. I was in junior high school. It was the battle of the little, inventive, hungry geek vs. the titans of business with deeper pockets than I could imagine. It was an epic battle that went to the best and the brightest, not to the most powerful. Or so it seemed. But as a kid, I was only barely aware of the war that raged in the world of technology in those days. To me, it was just an exciting time of change.

Now, change is more terrifying because I have responsibilities. I have four kids, a mortgage, and car payments. Just like everybody else I know. And a lot has changed in the last couple of months. And it's been terrifying. And exciting. Two days ago, I blogged about the ethics of meta-searching. It came as a shock to others involved in the project because I had not discussed it with them. I blindsided them. That was fundamentally unfair. And yet, even had I wrung my hands over the issue and discussed it with them, it would probably not have changed the end result. Things changed. And it scared the heck out of me. Some of them are probably still angry with me. I don't blame them. I would be too.

Looking back to the days of Captain Zilog made me laugh. It also made me think. Zilog is not a player in the huge PC market. But it's still alive and, from all appearances, doing well. They innovated. They struggled. They stayed alive and ultimately found a niche market that has served them well. Are they comparable to giants like Intel and Microsoft? No way. But did they survive? Did they make money? I'm guessing they must have, given the fact that they're still around and still selling the Z8 line.

So what is our challenge? We must find a way to survive. Find a way to innovate something truly useful. Believe in that thing. Work hard to make that thing succeed, even if it's in a market you had not originally foreseen. In other words, we must adapt without losing a sense of who we are or what we've created. Time will tell if we, as technologists and entrepreneurs, will do just that.

 

Respecting Terms of Use -- The Ethics of Meta-Searching

(Modified slightly on Monday, August 21, 2006, after considering issues of fairness and further introspection.)

For the first seven months of this year I worked on a project which, in part, was a meta-search tool designed to bypass the router pacing algorithms used by sites such as Google, Yahoo and MSN. I have come to believe that even if this were not a violation of their terms of use, it would be fundamentally unethical. I cannot claim the high road in having come to this conclusion. I did not arrive at this conclusion until some time after becoming unemployed and then re-employed.

Yesterday I began to question myself more seriously about the ethics of meta-searching than I had done before. Without doing any online research, a rarity for me, I just wrote down in longhand some basic questions and let one lead to another. My conclusion was that I could not ethically or morally justify the acquisition of data, and its resale in some form or another, using meta-search techniques in violation of the source's terms of use.

On July 21, 2006, I was suddenly unemployed along with the rest of the development team at Provo Labs LLC, a Paul Allen (the lesser) venture. I didn't abandon the project even then. I looked for ways to keep the project alive. After all, I had spent months, including nearly every Saturday and Sunday, working on the code for this project. It was my baby. I was the only developer on it. A week or so after that fateful day in July, reality set in. I had no income and four children to feed. I had to find a job. And I did. A great job! The timing could not have been better.

In my first three days on the new job, I was impressed by the effort and expense the company is willing to expend to be sure that copyrighted material used in their product is properly licensed. This reminded me of a conversation I had had with management at Provo Labs earlier in the year. I had raised the question of the ethics of meta-searching and collecting data using automation from public search engines and other resources whose terms of use statements clearly prohibit such behavior. The discussion was brief and the subject was quickly swept aside. It boiled down to "everybody does it, including the search engines, so that makes it okay". I filed that rationalization away and kept going.

The intellectual property transfer from Provo Labs LLC to the new company that Phil Burns is starting had not yet happened. I had even contemplated using my own company, NetBrick Inc., an S corporation of which I am the sole shareholder, as a holding company for this new venture. But I had become impatient and, as Phil put it, "emotional and panicky".

I had my doubts about the whole deal, so today I pulled out of it entirely: in part because I had lost faith that we would successfully negotiate the intellectual property rights to this product, in part because I did not believe I would have time to continue working on the project, but mostly because I had come to believe that it would simply be the wrong thing to do.

This process of introspection has been painful. I had to admit to myself that for the last seven months of my life, I have been building, enthusiastically, a product that was in large measure designed to violate the terms of use and possibly violate the law in the acquisition of meta-data from search engines and other sites for the express purpose of reselling that data in the form of market research and other such reports. I had rationalized this by thinking that we would not sell the data but only the conclusions we reached from the data. Splitting hairs like this was just another way to sweep the ethical inconsistency under the rug.

Today I informed Phil and Paul that I will no longer be involved with the project as it stands and that I will deliver the code in its existing form. I did not share with them my reasoning behind my decision because I really did not want to engage them in a debate on the merits of my decision. We had already been down that road.

After I informed Phil and Paul by email, I did some online research--something I really should have done, and unbelievably did not ever do, prior to starting the project. From any of the big three engines (Google, Yahoo, and MSN), you can click one or two links to get to the following terms of service information.

Google
http://www.google.com/intl/en/terms_of_service.html
"The Google Services are made available for your personal, non-commercial use only. You may not use the Google Services to sell a product or service, or to increase traffic to your Web site for commercial reasons... You may not take the results from a Google search and reformat and display them... You may not "meta-search" Google... You may not send automated queries of any sort to Google's system without express permission in advance from Google. Note that "sending automated queries" includes, among other things: using any software which sends queries to Google to determine how a website or webpage "ranks" on Google for various queries;
"meta-searching" Google; and performing "offline" searches on Google.

MSN
http://tou.live.com/en-us/
"In using the service, you may not:...use any automated process or service to access and/or use the service (such as a BOT, a spider, periodic caching of information stored by Microsoft, or “meta-searching”);"

Yahoo & Overture
http://docs.yahoo.com/info/terms/
"Except as expressly authorized by Yahoo! or advertisers, you agree not to modify, rent, lease, loan, sell, distribute or create derivative works based on the Service or the Software, in whole or in part."

Clearly, these search engines do not want you to use automated search software to mine the meta-data presented in their search results and other search-related queries. Their intent is to allow only individual users, through a normal web browser, to access and use this information. Yahoo's language is vaguer than the other two's, but the intent is still there.

So is the search engines' own behavior of crawling and indexing the content of other web sites unethical or immoral? Does it violate the terms of use posted by many other sites? Will the search engines remove your content from their index if you request it? I do not believe that it is unethical or immoral to drive traffic to a web site because its content contains what a search engine user probably wants to find. The search engine is not repackaging and reselling the data it finds on the crawled sites. Yet they do profit in some measure from mining that content, for without the content, they would have no users. It seems to be a trade that most web site owners are willing to make.

I want to make it clear here and now that I believe that if I had made my concerns known to Provo Labs management more forcefully in the early days of this project, they would not have required me to work on it. They would have, I think, found something else for me to do. I hope this illustrates a flaw in my own character, one I hope to remedy, and does not lead the reader of this post to believe that Provo Labs LLC acted in an unethical manner.

The code is powerful and capable of being extended and used in a variety of ways. A friend of mine pointed out that not everything it does violates a terms of use document. In fact, much of what it is designed to do goes no further than a typical web crawler in terms of gathering data. Perhaps a means can be found to make use of what it can do without violating terms of use policies. Perhaps the power of the code can be leveraged within the framework of licensed APIs. That is something that will have to be determined.

Until that time, I'll continue my work at my new job and focus my personal coding efforts on my Forseti Project to keep my coding skills as sharp as I can. And I will take away an important lesson from this whole roller coaster ride: always examine and question the ethics of a project and then listen to your instincts.

If you'd like to comment and berate me here, go right ahead. I deserve it. If you're particularly vicious, I reserve the right to edit or remove the comment. If you've had similar experiences and stood up more valiantly, I'd like to hear about it and how it all turned out for you.

Starting Something New -- The Forseti Project

I've recently taken a position with a company as Principal Technologist, which means I'll be doing more writing and sitting in more meetings than I have in the last few years. It also means I'll be writing less code during the work day. The great part about that is that I'll be able to focus more of my own time on writing code that I want to write.

So what is it that I'm writing? Well, I can't tell you that right now, but I'll tell you the name of the project, the name of this code: The Forseti Project.

No, I don't own the domain name. The project is specifically designed to take advantage of some of the things I've learned in the last couple of years and to explore things I've not had time to work on because of a very hectic coding schedule. Now that the coding schedule seems to be lightening up, I'll be exploring some things I've wanted to delve into.

And that's all I have to say about it for the moment. As I move along, I'm sure I'll be sharing some of the things that I learn and discover as I work on this exciting new project.

I'll keep you posted.

Revive an Old Turbo Flame?

Just found this referenced on an FTPOnline story: http://www.turboexplorer.com/

As an old Delphi aficionado (version 5 was my last), I can't wait to download the Explorer versions to see what they've done with the place.

I've always felt that a Borland tool was a better place for a beginner to start. And then you take a corporate job and everyone is drinking the blue Kool-Aid. Don't get me wrong. I like the Kool-Aid too. Visual Studio 2005 is hands down the best IDE I've worked with. And no, for you Eclipse fans, I've not tried that highly vaunted IDE. I do know people who have used both, and they invariably have good things to say about both.

Borland is spinning off the tools, or so they say. So where will they be spun, and how do these new dolled-up Turbo versions fit into the equation? And so I don't have to wait so long, is there anyone at Borland who can get me a sneak-peek copy?

I promise to run it through its paces and report back here. I'm especially eager to try the C++ flavor. Could the good old days of Turbo be back? Let's see....

SOAP vs REST -- Clean vs Comfortable

SOAP vs REST
In my work I've had occasion to use both SOAP and REST, on both the client and the server. SOAP is easy if you have good tools. Hand-wiring a WSDL is not my thing. At the risk of committing a pun foul, I'd rather eat a bar of soap than hand-code good WSDL. Fortunately, .NET makes WSDL for simple web services easy, on both the server and client end of things. And WSCF makes more complex web services easy in the .NET world.

At the same time, REST is more comfortable, especially for those without nice support tools for consuming SOAP on a plate of WSDL. A plain HTTP POST. A simple agreement between friends to pass X, Y, and Z along as name/value pairs.

Trust or Verify
I guess in some ways it comes down to trust. Do you trust the client to submit clean data? Can you trust your server application to parse through and make safe any data that is not clean? Or would you rather automate some of that validation via schema and the rigidity of SOAP? For me, it all depends on the circumstances.
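To make the hand-rolled side of that trade-off concrete, here's a rough sketch of the sort of check a REST endpoint like the extract.aspx page above ends up doing for itself. The length cap and the character rules are arbitrary example rules, not anything prescribed.

// A rough sketch of "verify" for a hand-parsed REST field.
// The 4000-character cap and the control-character rule are arbitrary examples.
static bool IsCleanTextField(string value)
{
    if (string.IsNullOrEmpty(value))
        return false;

    if (value.Length > 4000)            // guard against oversized payloads
        return false;

    foreach (char c in value)           // reject stray control characters
    {
        if (char.IsControl(c) && c != '\r' && c != '\n' && c != '\t')
            return false;
    }

    return true;
}

The Page_Load in the earlier example could call something like this on form["text"] before doing any real work with it.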

The Illusion of SOAP and Schema
How tight are your contracts? A good lawyer will take a two-page agreement and expand it to ninety pages, not only because she wants to bill you more but because she needs to cover all the bases. Are your web service contract bases covered? Are the schema and secondary validation sufficient?

Can REST Be Secure?
This line of thought brings me to the question: can we trust REST? Well, the short answer is no. But the longer answer is yes, just as much as we trust SOAP. The brilliance of SOAP is that the contract is carried with the data, or at least that the data is transported in a container against which the contract may be validated. So is that really better? Well, the underlying truth is that someone else wrote a bunch of helper code to help us perform the first level of validation on the message--its form. But what about content? Yes, schema validation can do some content validation as well, especially type validation. Beyond that, it's pretty much up to you.
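To show what that first level buys you, here's a rough sketch, in .NET terms, of validating an incoming XML payload against an XSD before trusting it. The file names are placeholders for whatever schema actually describes your message format.

using System;
using System.Xml;
using System.Xml.Schema;

class SchemaValidationSketch
{
    // Returns true only if the document conforms to the schema.
    static bool IsValid(string xmlPath, string schemaPath)
    {
        bool valid = true;

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, schemaPath);
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            // Any structural or type violation lands here.
            Console.WriteLine("Validation problem: " + e.Message);
            valid = false;
        };

        using (XmlReader reader = XmlReader.Create(xmlPath, settings))
        {
            while (reader.Read()) { }   // reading the document drives the validation
        }

        return valid;
    }

    static void Main()
    {
        Console.WriteLine(IsValid("request.xml", "request.xsd") ? "clean" : "suspect");
    }
}

Structure and simple types get checked for free; anything beyond that--ranges that depend on other fields, business rules--is still your code's problem, which is the point of the next section.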

Validation Bottom Line
Ultimately, the value and robustness of a web service, whether you use REST for its simplicity or SOAP for the niceties of automated tools, will be determined by the code you write to validate the input, execute the request, and reply with an appropriately formed response.

Consider Your Audience
Back in my poor old days as a technical writer, I always had to keep in mind and understand my audience. It really does matter. For example, my mother would not understand a single word of this post. If you are publishing web services, you must consider who will consume them. Will they be a hodgepodge of PHP, JSP, ASP, and many other forms of "server pages" technology? Or will they be hard-driving Visual Studio SOAP users who would rather have the tool do the heavy lifting and eliminate the need to parse?

Give your users what they want. And to do that, you may have to give them both SOAP and REST. I guess that won't hurt us too much. After all, a hot shower is always a good combination with a good night's rest.

Frameworks in Their Place

A friend shared this Joel on Software post with me today. An absolutely hilarious read.

Excerpt:

"So you don't have any hammers? None at all?"

"No. If you really want a high-quality, industrially engineered spice rack, you desperately need something more advanced than a simple hammer from a rinky-dink hardware store."

"And this is the way everyone is doing it now? Everyone is using a general-purpose tool-building factory factory factory now, whenever they need a hammer?"

"Yes."

Frameworks have their place but time seems to be unkind to them. I'm not a Java guy but even the Java zealots I knew "back in the day" are now less bullish on the frameworks mess that exists on the Java stack. And no, for you remaining zealots, I don't want to fight about the point.

The fact is, the .NET Framework is only a few years behind. Will it bloat too? Has it already started with 2.0 and all the changes in ASP.NET and so forth and so on? Will 3.0 see bloat, or a cohesive, more conservative growth pattern?

If we're all doomed to framework elephantiasis, what is the solution? My hope is that Microsoft will learn from the failures of its biggest competitor and work as hard to keep the framework tight as it works to sell its operating systems. That same aggressive and successful behavior is perhaps the only thing that can save us all from the doom suffered by our Java brethren, who are now escaping in droves to PHP.

Okay, I made that last part up. But it could be true. ;-)