While I Compile

… I compile my thoughts about programming

Let’s turn off all features to improve UX

Here’s a thought: what if all the features in a new software upgrade were turned off by default?

What if, when you bought a software upgrade, all the new features were disabled and the software worked exactly as it did before you installed it? What if new software had all but the most structurally necessary features disabled?

Yes, I’m serious.

But why?

For a maximum user experience of course …

WTF? … how can you have a maximum user experience with everything turned off?

Because the biggest issue with user experience is surprises.

Well duh! But now we’re talking bugs, and everybody knows bugs suck.

Yeah, bugs suck, but that’s not what I’m talking about.

I’m talking about features that work to spec, but unfortunately the user doesn’t even know exist! Notice that I’m not talking about features that users don’t know how to use. I’m talking specifically about features, often intrusive features, which the user doesn’t know about.

Language Bar

Like what? … oh, I don’t know … like having my keyboard cycle between US, Canadian French, and Canadian Multilingual in an apparently random pattern last week. This happened because the default hot keys in the Language Bar software were set to the same keys as are standard for highlighting text for cut & pasting. This took me days to figure out, for a variety of reasons, the biggest being that the change is not obvious until I type a letter that has a different mapping, turning my single quotes ‘ into “ or è! I was completely oblivious to the fact that the utility even had keyboard shortcuts, as I’m sure the person who selected those keys was oblivious to the fact that they are standard shortcuts. This was extremely frustrating, and notice it was not a bug.

The problem is that things did not work as expected.

Unfortunately, when the user doesn’t know ‘about’, let alone ‘how to use’, all the funky bells & whistles we added to improve their lives, their experience more closely resembles a series of surprises. And surprises suck … oh yes they do suck. I’m not talking “Ed McMahon showing up on your doorstep” surprises, I’m talking “your meeting starts in 5 minutes, and the printer doesn’t work, and when you reboot, a large Windows Update starts installing” surprises.

Surprises are frustrating and unproductive.

So why the surprises? Because users are diving into the software to get stuff done, and don’t have the time or are just unwilling to put in the effort to learn all the bells and whistles. And who can blame them? Who wants to read hundreds of pages of documentation just to use the one new piece of functionality they bought the upgrade for?

Maybe if we just gave them what they wanted, they might invest the 5 minutes to learn what they need, and we’d have a more educated user base. Maybe if we gave them functionality and an education in bite-sized chunks, they’d be more inclined to learn more, because the commitment is a 2-5 minute video, not a huge, 1000 page, $75 book, or worse, a help file which provides absolutely no indication of scope.

So what’s my solution?

I think software should install with all features disabled.

Then when the user wants to use the awesome feature they were sold on, they just kick up some kind of feature start-up dialog, which prompts the user to watch a minimal-length tutorial for the feature and make the decision to enable the feature right in the tutorial. Yes, there would need to be an option to bypass the tutorial, of course, for those who already know it or just want to fumble around in it, but just knowing the feature exists will make their fumbling more likely to succeed. Maybe functionality could even be presented (marketed) to the user in order of popularity, helping users decide what features to look into if they don’t already know them. It might also help software companies know what functionality they should not be spending time on.

Yes, I realize there will be some things that must be enabled due to structural integrity issues, where enabling / disabling a feature is just too much work. Or business has decided to require it regardless, to force-feed some type of functionality to the user (think Apple iTunes). Or it’s just a design decision. But maybe those auto-enabled features could have their (again, bite-sized) tutorials queued according to the most invasive functionality, so the user could still skip them, but again, at least they’d know what changed.

I think the biggest argument against this idea would come from the business end of software product companies, who feel that it’s got to be set up immediately based on the most common user’s assumed workflow, and I’m sure there will be many who believe it must be enabled because users just won’t enable it themselves … yeah … just think about that statement for a second please. I know that would be an argument from somebody.

So, in summary: turn off as many features as possible, preferably all of them, and push the user through a bite-sized tutorial before allowing them to enable each one.


July 23, 2010 Posted by | Uncategorized | 3 Comments

New Job

I’m starting at a new job this morning. 

I’ve been looking since April. Well, sort of; it took me a long time to accept the fact that full time might be the best choice for me at this point in my life. But once I accepted that, I went straight to Infusion Development and got into their interview process.

I’ve known about Infusion since about 2006 via their ads on DotNetRocks. As I understood it, they’ve been growing super fast, and have a reputation for bright developers and a bleeding-edge development strategy.  For example, Infusion is a leading developer of Microsoft Surface apps … If you’re reading this, I’m sure you know about Surface already, but if you don’t, go to Google Video and do a search on it.  It’s an awesome product.

I won’t be working on the Surface though. I really want to work in ASP.NET MVC / jQuery, and that’s what I’ll be doing. 

I’ve wanted to work at Infusion for a long time, but didn’t act on it since I had a pretty good thing going as an independent consultant. But when my contract ended in April, a bit of soul searching led me to the conclusion that I needed to push my career to another level, and this might be the best strategy.

As I understand it, Infusion has over 100 developers in their Toronto offices. If you’ve ever looked at my Twitter feed, you know I like talking to smart developers. So, yeah, I’ll be in heaven.  🙂

One thing I’ve noticed since I started looking for a job is that I seem to have missed an important exit in my career. Nobody looks for developers with more than 10 years’ experience. It seems that after 10 years most devs have moved to management or architecture, which makes sense, but as a ‘do it all’ independent consultant, I never made that distinct leap. I made more of an indistinct transition.

Based on conversations during interviews, I got the impression they would like to see me pursue a move to architect and will help me do it. So that’s nice … Assuming I did interpret that correctly. 

Infusion also has a wickedly awesome training program, where you train intensively, 8 hours a day, in house, until you learn the new technologies.  I believe they even have their own training team. Freakin awesome!

The HR lady was happy when I expressed excitement about the training. She said a lot of people don’t like it. I’m like, WHAT!?!? In all seriousness, I expect it won’t be too long before my manager gives me an ass kicking for signing up for too many training courses.

You know, I’m really too old to be naively enthusiastic about a new job, and I’m sure there will be things that suck, there always are. But right now, I don’t think there is a job I would like more, that is better for my career, or that could make better use of my talents.

I am very excited about starting at Infusion today, helping them kick ass and take names. 😉

Wish me luck.

July 12, 2010 Posted by | Uncategorized | 3 Comments

Visual Studio Bug – ‘if’ followed by a try / catch causes debugger stepping error

Yesterday I was debugging and stepped into a method. I wanted to get past my parameter validation checks and into the meat of the method, so I quickly F10’d my way down the method, but I noticed a line of code was stepped on which should not have been touched.

The code was a simple parameter validation like:

if (enumerableObj == null)
    throw new ArgumentNullException("enumerableObj");

with several similar parameter validation lines above it and a try/catch block containing the meat of the method below it.

The odd thing was, I thought I saw the debugger step on the throw statement even though the enumerableObj should have had a value.

I assumed I had somehow passed in a null value to the enumerableObj parameter and had nearly missed the problem in my haste. I had been moving quickly, so quickly in fact that I had stepped about 3 more lines into the method before I even stopped. To be honest, at this point I wasn’t even sure if I had seen it step into the ‘if’ block, so I repositioned my debug cursor back to the ‘if’ condition, and stepped again. Sure enough, it stepped into the ‘if’ block.

I assumed I had passed in a null parameter, but when I evaluated enumerableObj, it was set; what’s more, evaluating the entire enumerableObj == null expression resulted in false, as expected. But why the heck was I being stepped into the ‘if’ block when the ‘if’ condition was false?

I tried it again, just in case the enumerableObj had somehow been set as a result of a side effect somewhere, but even then, it still stepped into the ‘if’ block.

So, I did the standard stuff: cleaned my solution, deleted my bin and obj directories, reopened the solution, restarted Visual Studio, & rebooted, all the while rebuilding and retesting the project with each change. Nothing seemed to work. I even cut & pasted my code into Notepad, then cut & pasted from Notepad back into Visual Studio to ensure there were no hidden characters in my files.*

None of this worked, so I started commenting out code in the method, and eventually was able to isolate it to the above code failing if, and only if, it was followed by a try / catch block. Seriously! If the try / catch block was there, it would step onto the throw statement even though it should not have, but when you removed the try / catch block, everything worked just fine.

To isolate the problem for debugging purposes, and to have something I could ask for help with, I reduced it to a simple command line app.

using System;
using System.Collections;            // for the non-generic IEnumerable
using System.Collections.Generic;

namespace IEnumerableBug
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                List<object> lst = new List<object>(); // the element type was lost in the original post's formatting
                IsIEnumerableNull(lst);
                Console.WriteLine("Post IsIEnumerableNull() call");
            }
            catch (Exception ex)
            {
                Console.WriteLine("-- Error --");
            }

            Console.WriteLine("\n\nPress any key to exit...");
            Console.ReadKey();
        }

        public static void IsIEnumerableNull(IEnumerable enumerableObj)
        {
            if (enumerableObj == null)
                // this should never be stepped on, but is (at least for me);
                // however, execution moves to the next line once this is thrown instead of interrupting execution.
                // Also, a breakpoint on this line is never hit, as you would expect.
                throw new ArgumentNullException("enumerableObj");

            // commenting out this line of code will cause the debugger to step through
            // the above control statement properly
            try {} catch { }
        }
    }
}

Once I got it isolated, I also realized that even though it was stepping onto the throw statement, it didn’t actually jump to the appropriate catch block; it just continued on its way to the next line of code. This is why I was able to F10 3 lines past the throw the first time I stepped on it.

Anyway, I threw the code up on Pastebin.com and asked my Twitter followers to see if the problem was repeatable for them. After the standard Twitter 140-character-limit miscommunication confusion, everybody came back with a negative result. “It works fine on my machine.” … typical 😉

Stepping through the debugger with the throw statement highlighted (close-up; click for a screenshot of the entire IDE)

At this point I was wondering if it was some wonky bug only on my machine, or if I hadn’t gotten enough sleep and was doing something so incredibly moronic that I was about to publicly brand myself as an idiot.

Fortunately, 2 things happened at that point: 1) I remembered one of my old laptops had VS installed on it, and I was able to repeat the problem on that computer and post a screenshot, so people wouldn’t think I had completely lost it.

And 2) @james_a_hart came back telling me that he could repeat it, and that it was a problem with the way Visual Studio was stepping through the debug symbols. He also reduced the problem a lot further than I had. I had been so caught up in the problem, I hadn’t even realized how much more the test command line app could be simplified.

using System;

namespace IEnumerableBug2
{
    class Program
    {
        static void Main(string[] args)
        {
            if (new object() == null)
                throw new Exception();

            try { } catch { }
        }
    }
}

I don’t know who James works for, but I think he has a pretty sophisticated VS virtual machine setup, because he was able to confirm the problem only happened on 64-bit Visual Studio 2008 and 2010.

Oh, and for the record, in my testing this seemed to happen only with a throw statement. Replacing the throw statement with a Console.WriteLine() worked as expected.

using System;

namespace IEnumerableBug2
{
    class Program
    {
        static void Main(string[] args)
        {
            if (new object() == null)
                Console.WriteLine("Equals NULL");

            try { } catch { }
        }
    }
}

It was a heck of a day, and a huge waste of time, but I’m just glad I now know what the problem is.

I want to thank everybody who helped me by brainstorming with me on this, or running my code to check if they could repeat the problem; @james_a_hart, @JMBucknall, @dullroar, @SteveSyfuhs, @InfinitiesLoop, @klmr, @SyntaxC4, @CarolFil, and @brianrcline.

* As I understand it, cs files are plain text, so this was unlikely to be the problem, but I felt it was an assumption that I should test.

July 2, 2010 Posted by | C#, Code, Programming | , , , | 1 Comment

An Abstract Data Model

This is post 3 from a 7 part series entitled Technical Achievements in my Last Project.

Usual N-Tier Application Model

Overview
Normally, when I build a new system, I design the new data model based on the requirements, and build my business objects and data access based primarily on that data model*. The remainder of the application is built on the components beneath it, so when you change something at the bottom, like the data model, changes ripple throughout the application. The data model serves as the foundation of my application.

Now as far as this project goes, one of the important requirements was to deliver the new system incrementally, while leaving the older system to run in parallel until completely replaced.

Parallel Data Models
This presented a bit of a dilemma for me, since the current database was … well … lacking, and I was planning to refactor it enough to make it a very unstable foundation for the old system. I wanted to refactor it for a number of reasons, including: missing primary keys, no foreign keys, no constraints, data fields which were required but not there, data fields which were there but not used, data fields containing 2 or more pieces of information, and tables which should have been multiple tables. Not to mention the desire to achieve a consistent naming convention without the insane column names using characters like ‘/’ and ‘?’ … seriously.

However, the parallel systems requirement made this difficult. I mean, how do you manage parallel systems when one needs a stable foundation and the other is so temperamental that you don’t want to touch it?

Parallel databases

My options as I saw them were something like:

  1. Scrap the data model refactoring.
    This really didn’t get much thought. Well it did, but the thought was, is this the best route for the client? And if so, should I offer to help them find my replacement or just leave? I definitely wasn’t up for replacing one unmaintainable piece of junk for another.
  2. New data model and re-factor the existing app.
    The existing application was a total nightmare built in classic Access spaghetti code fashion. Just touching that looked like going down a rabbit hole of certain doom.
  3. New application on the old data model and refactor the data model later.
    This would have caused a real disconnect between the data model and the application. I’m not sure if the data model and application ever would have lined up properly. Not to mention the client’s probable later decision not to complete that part of the project since everything ‘worked’. This seemed like a very bad idea.
  4. Parallel databases with synchronization

  5. Build a parallel data model for the new system, while leaving the old system as is.
    From a development point of view, this seemed like the best alternative, but keeping an active database in sync presented a serious, possibly unconquerable, challenge.

The final option of refactoring the data model immediately and basing all construction on a solid foundation was definitely the most appealing. But how do we keep it in synch? I’m sure there are tools out there for that, but with a possibly dramatically different data model? With active live data? Even if there are tools, I doubt the price would have been within the project’s budget**. And if such a tool did exist, how would we bring concurrency issues back to the users who caused the conflict?

Abstract Data Model
That’s when I had the idea: why not just build an abstraction layer on the database? Why not manage the data all in one database while abstracting out the other data model? Why not build a simulated data model? Why not just redirect all my views and procs to the other database?

This was so bloody simple. Why hadn’t I ever heard of anybody else doing this before?

Parallel databases with abstraction layer

So the plan was to refactor the data model and build a concrete database, but instead of having its stored procedures and views point to its own tables as intended, they would point to the tables in the other database. All changes would proceed as usual; for example, if the client had a change request which required a new field in a table, it would be added to the physical table, views and stored procedures would be updated, and the applications would change to accommodate. And when the old system was completely replaced, all that would need to be done is to rewrite the DML to point to the current system. Even the data transition would be easier, since we’d already have views aggregating data in the expected format!

I was pretty excited about this when I designed it and told a few developer friends, who thought it was either a stupid idea, problem-ridden, or pointless at best. Now I do have a lot of stupid and pointless ideas, but I didn’t feel this was one of them.

Implementation Challenges
So how did I implement it?

Well once the new data model was finished, I wrote the views and stored procedures, as you might expect, but at this point you run into the following challenges:

  1. Required data missing from the existing database
    For example: a create date for products, so business knows when a product was added to the system.
  2. Existing data in the old system requires new values.
    For example: an order has a boolean status field for ‘pending’ & ‘completed’, but business requires statuses to be changed to ‘pending’, ‘ordered’, and ‘shipped’.
  3. Non-existing data tables need to be simulated.
    For example: let’s say business wants the user to be able to request product literature on an order along with regular products; you’ll need to simulate orders for product literature ordered via the old system.

The non-existing data tables were easily simulated with a view. However, these often came with a performance penalty. This is one of the few cases where the new application needed minor modifications to work around it. Basically, different views were created for different situations, and the data access component would select the most appropriate view based on the circumstance.

Extension Tables
The missing required data and data changes (like status codes) were handled with extension tables.

So if I had a table named ‘order’, for example, I would create a new table called ‘order_x’, with a matching primary key column, plus columns for data that was required but missing and data which required changing. Then insert, update, and delete triggers would be added to the ‘order’ table so changes from the old system would keep the extension table up to date. And procs and views on the new system would join the 2 tables to present them as a cohesive unit.

If the current fields required value changes and/or new values, the new values would be stored in a field in the extension tables, and the update trigger on the main table would update the status when it changed from the old system. In situations where the data did not synch up 1 to 1, certain column mapping rules would be used. To extend on the order status example; ‘pending’ in the old system is the same as ‘pending’ in the new system, but what about ‘completed’? Is that ‘ordered’ or ‘shipped’? It might be mapped so if the old system updates the order to ‘completed’, it would change the extension table to ‘ordered’, and if the new system updated the status to either ‘ordered’ or ‘shipped’, the ‘order’ table status would be updated to ‘completed’.

The Dirty Data Problem
But the biggest problem was dirty data. This was a killer! This is the one challenge which plagued us throughout the entire project and knocked us off our schedule continuously. Because the old system, which placed absolutely no restrictions on the users, was still being used, we were getting situations which never could have been predicted. This was causing the application to act in unexpected ways, and even after making changes to accommodate the dirty data, we received endless support inquiries on unexpected behavior caused by null data and unexpected values.

There were changes to the application based on this as well. We actually had to change our business objects to set default enum values and make most properties nullable types, even though in the new data model they were not nullable. This doesn’t affect input, but anywhere that data was being read from the database, we had to accommodate it. These nullable types will not require changing when the old system is completely replaced, but they do represent a smell which I hope somebody will eventually eliminate.
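
As a rough illustration only (this is my own sketch, not the project’s actual code, and the names are hypothetical), the accommodation looked something along these lines: properties typed as nullable, with a default applied wherever legacy rows come back dirty.

using System;

// Hypothetical sketch: nullable properties to tolerate dirty legacy data.
public enum OrderStatus { Pending, Ordered, Shipped }

public class Order
{
    // Nullable even though the new data model forbids nulls,
    // because rows written by the old system may still contain them.
    public OrderStatus? Status { get; set; }
    public DateTime? CreatedOn { get; set; }
}

// When reading from the database, fall back to a default for dirty legacy data,
// e.g.  order.Status = statusFromDb ?? OrderStatus.Pending;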

Overall though, I’d say this strategy was an overwhelming success. Other than the dirty data issue, which still rears its head every now and again, there have been no problems since it was first deployed.

If you can get away from a parallel deployment, I would recommend doing so, but if you can’t, I really think this strategy is a good one.

EDIT: After I posted this, it occurred to me that this strategy really cost almost nothing, since the biggest costs were in setting up the views to extract the data out of the system in the expected format, which would have needed to be done when the data was moved to the new system anyway. The only real extra work was the extension tables and abstract procs, neither of which were very difficult once the mapping was established in the views. My colleague Ben Alabaster also pointed out that even if we had bought an overpriced synch tool, configuring it would have taken longer than setting up my solution.

This is post 3 from a 7 part series entitled Technical Achievements in my Last Project.

Credit-Thank you Ben Alabaster for the illustrations.

* I need a pretty good reason to build a data model and object model that are different. I have done it, but it’s rare to have a compelling enough reason.

** At the time I wasn’t aware of any tools to do this. Karen Lopez was kind enough to let me know that TIBCO & Informatica may have done the job, but are expensive. From what I can tell, these tools would have been more expensive than the strategy I implemented. Thanks Karen.

Copyright © John MacIntyre 2010, All rights reserved

April 1, 2010 Posted by | Programming, SQL, SQL Server | , , , | Leave a comment

Hey #region haters; Why all the fuss?

I hear a lot of programmers saying #regions are a code smell, from programmers I don’t know to celebrity programmers, while the development community appears to be passionately split.*

The haters definitely have a point about regions being used to hide ugly code, but when I open a class and see this, it just looks elegant to me.

Elegant Regions

Now, none of these regions are hiding thousands of lines of ugly code; actually, most of these regions contain only 3 properties and/or methods, and the last curly brace is on line 299. So the whole thing, with 17 properties and methods including comments and whitespace, is only 300 LOC. … really, how much of a mess could I possibly be hiding?

To me, the only question is whether I should have this functionality in the ContainerPageBase or the MasterPageBase**.

You may also notice the regions I have are not of the Fields / Constructors / Events / Properties / Methods variety. It has taken some time for me to accept that all data members (aka fields) do not need to be at the top of the class, as I was classically trained to do, and that perhaps grouping them by functionality is a better idea. This philosophy only makes regions that much more valuable.
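
For illustration, here’s a rough sketch of the idea (the member names are invented for this post, not the actual class; the screenshot above is the real example):

// Hypothetical sketch: regions grouped by functionality rather than by member kind.
public class ContainerPageBase
{
    #region Current User
    public string CurrentUserName { get; protected set; }
    public bool IsAuthenticated { get { return !string.IsNullOrEmpty(CurrentUserName); } }
    #endregion

    #region Database Connection
    public string ConnectionString { get; protected set; }
    protected bool HasConnection { get { return !string.IsNullOrEmpty(ConnectionString); } }
    #endregion
}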

… is anybody still here? …. have I converted anyone? 😉

* These posts are fairly old, but in my experience in the developer community; the consensus hasn’t changed.
** The Database Connection & Current User regions may have some scratching their heads. There are valid reasons for them, however the Data Connection region will never be included at this level again. More on that in a future post.

Copyright © John MacIntyre 2010, All rights reserved

March 29, 2010 Posted by | C#, Programming | , | 6 Comments

Large Application Estimation in 2 Weeks

This is post 2 from a 7 part series entitled Technical Achievements in my Last Project.

My role in this project started with being asked to assess the existing project and provide insight into options to move it forward, with one of those options being a rewrite*. An estimate was needed for the rewrite option, so I was given 2 weeks to do it.

This post explains how I was able to pull off this massive estimation undertaking in a mere 2 weeks.

Ideally, the project documentation from the existing system could be used to give an excellent estimate, but this is a blog post, not a fairy tale. Or a thorough specification could have been derived from an in-depth analysis of the existing application, which business could have adjusted as needed, and used to conclude a reasonable estimate. But this is the real world, and this is a real business; and I was given a real (short) deadline.

Now, I should also mention this wasn’t a 20 KLOC project; it was a fairly complex piece of software with over 500 KLOC** and almost 1800 database objects, along with satellite applications. Everybody understood how this short timeframe severely limited the accuracy of anything I would be able to provide, but I was determined to do the best job possible.

So my next goal was to figure out how to do a somewhat accurate estimate, given the constraints, where I wouldn’t be setting myself up for a lynching at the end of it. I explored many different ways to get a rough idea about the entire project’s scope.

This is what I finally settled on:

  1. Dumped all Microsoft Access Objects
    First I modified an Access VBA script I found for exporting objects to text files and exported everything.
  2. Dumped all database DDL
    I wrote a little command line utility to loop through a SQL Server database, pull the DDL for each object using the sp_helptext stored procedure, and write it out to text files (a rough sketch of this kind of utility follows this list).
  3. Created an analysis database
    Created an analysis database primarily comprised of three tables: one for all the entities the application is made up of, a second linking which entity called which, and a third linking menu items to all dependent forms.
  4. Collected the names of all objects into the database
    I wrote another little command line utility to read each code file dumped out in steps 1 & 2, and add the object’s name and a few other statistics.***
  5. Determined all entity relationships
    I wrote another command line utility which traversed each code file, reading in the code, and determining which of the known entities it was dependent upon. This information was stored in the second linking table in the analysis database.
  6. Determined dependencies of each menu item
    Wrote yet another command line application to traverse the dependencies to determine which menu item could eventually load which forms. Certain forms were ignored in the calculation, including: a) previously calculated forms (obviously), b) menu item starting point forms, and c) specific forms which could load almost every other form in the application.
  7. Ballpark estimated each GUI component
    Loaded each of the nearly 400 forms and 200 reports, and did ballpark estimations on each one, deriving what business logic I could glean from the UI. I used the CRUDLAFS estimation technique to ensure I didn’t miss any basic functionality.
    Other than trying to figure out how I would do the estimation, this was the most time consuming task. Just think: even at a mere 3 minutes per form, we are still talking 30 hours of tedious effort.
  8. Totaled the estimates
  9. Menu estimate breakdown
    In order to determine the time to replace one complete menu item with all functionality from that starting point, I needed to sum the estimates from all dependencies from that form onward. So I queried the times for each menu item starting point, summing all dependency estimates and added it to my report.

Now, there are some serious issues with this strategy, like the high probability of missed complexity, missed functionality, and just overall inaccuracy, but these issues were known and pointed out at the time, along with the estimate.

Was the estimate a success? Was it accurate? Honestly, I’d say it was a success, but it didn’t turn out to be accurate.

…. Wait! What?

How could an estimate be a success if it wasn’t accurate?

Well, let me revise that by saying some of the core underlying assumptions were changed dramatically 5 months into the project, which increased complexity far beyond the simplistic web design the estimate was based on.

The big lesson learned from this task wasn’t so much about estimation as it was about managing requirements and sign off. …. But I digress. 😉

Anyway, I think the estimation I performed was well grounded in something, even if that something was not as thoroughly researched as would be ideal. I believe the executed strategy had a good return on investment.

This is post 2 from a 7 part series entitled Technical Achievements in my Last Project.

* For the record, I already had more consulting work than I could handle at the time, so while a rewrite was interesting, steering the client into an unwarranted rewrite was not beneficial for anybody.

** LOC sizes include comments, white space, and database object DML.

*** The other statistics (LOC, etc.) were actually one of my false starts in how to do this analysis.

Copyright © John MacIntyre 2010, All rights reserved

March 25, 2010 Posted by | Consulting, Process, Programming | , , , | Leave a comment

Technical Achievements in my Last Project

I’ve wanted to write this series for a long time, but hadn’t gotten around to it. Now, however, with my contract ending soon, I feel if I don’t write some of this down, I will never find the time, and it will be lost forever, which would be disappointing since I feel there are some really interesting things I did on this project which could benefit others.

This isn’t about the kludges needed to make Microsoft’s dysfunctional WebForms architecture work the way I need it to. It’s not about how to fire a server side event from a client side created button, or how to write JavaScript for an ASCX used multiple times on the same page when you don’t know what the rendered ID will be, and it’s not about overcoming resistance created by bizarre vendor API paradigms or outright bugs. It’s about overcoming the big requirements challenges placed before me.

The project basically revolved around a significantly large and complex Microsoft Access application which had many issues. It was determined the best course of action was to rebuild it as an ASP.NET web application. However, two important constraints placed upon me were: a) development could not be done in parallel with everybody switching over all at once upon completion, and b) the new web application had to run from within the existing MS Access application and interact seamlessly until the MS Access app was completely replaced. The first constraint wasn’t a big deal until you consider the fact that the existing database was a total mess, needing refactoring, and the Access app was a spaghetti-code nightmare where changes could potentially drag into eternity. It was wisely decided very early on that we would not do anything to upset the stability of the existing application.

The series will cover some of the more interesting things done on this project and will be over 7 parts:

  1. Introduction and Overview
    Introduction to the series, brief run down of the general requirements and intention of the project.
  2. Large Application Estimation in 2 Weeks
    How I assessed the condition of a very large and complex MS Access application with 540 KLOC and almost 1,800 database objects, and how I was able to provide a very rough estimate to its reconstruction with a 2 week deadline. I expect to be able to piece together how I did this from memory and rough notes I have, but if I’m unable to come up with something meaningful, this may get nixed.
  3. An Abstract Data Model
    Because I did not want to unsettle the existing software and needed to keep the data in synch, while simultaneously refactoring the database and providing a good data model to serve as the foundation of the new application, the new data model was simulated. This was an interesting approach which I’ve never seen anybody else do.
  4. A User Configurable List Mini-Framework
    Unnecessary and/or missing list columns came up repeatedly in conversations with users, so I designed a configurable, flexible, and extendable mini framework for quickly building data lists which allowed for user selected and positioned columns, advanced filtering, sorting, and exporting.
  5. Embedded Web App to MS Access App Communications
    Communicating from client side JavaScript to the container Access application the webpage is being run within, was one of the more innovative solutions on this project. But what’s even more interesting was that I was able to un-intrusively inject the functionality with a simple 70 LOC JavaScript file which can be switched out to remove the functionality.
  6. The Plug In Architecture
    Intrusive integration is a major problem, tying companies to specific vendors and creating unnecessary dependencies. I designed a simple plug in architecture to allow a developer to implement an interface, make one configuration change, and run without any changes to the underlying application.
  7. HTML Table Column Sizing similar to Excel
    Users didn’t like the standard HTML tables and wanted Excel like column sizing functionality. Finding information on how to do this proved impossible, so I sat on it for a while and eventually created a small JavaScript function which adds column sizing to any HTML table without messing up existing cell editing script.

I hope you find this series beneficial. I expect to complete the next 6 posts over the next 2 weeks, but make no promises. I am currently looking for a new contract after all.

Copyright © John MacIntyre 2010, All rights reserved

March 22, 2010 Posted by | Career, Consulting, Programming | , | Leave a comment

The CRUDLAFS Technique for Software Estimation

I have always, like so many other programmers, had a problem with software estimation and costing out a software project. Most of my career was plagued by software estimates so bad that I made significantly less money on my independent projects than I did at my low-paying day jobs. This was not the reason I was working through my vacations!

The turning point came when I realized the CRUD acronym encompassed most of the functionality of the line of business applications I was writing and could be used as a checklist to ensure I wasn’t missing functionality. Later, this checklist was refined to CRUDLAFS.

Wait! What? CRUDLAFS? … did I just coin a new acronym? …. Hmmm, looks like I did!</egosmirk>

While I’m sure you are already familiar with the CRUD acronym, here is CRUDLAFS:

Create: All functionality around validating and creating an entity
Read: All functionality around reading and displaying an entity
Update: All functionality around validating and updating an entity
Delete: All functionality around deleting an entity
List: All functionality around querying and displaying a list of entities
Additional: All additional functionality related to an entity
Filter: All functionality around filtering a list of entities
Sort: All functionality around sorting a list of entities

I set up an Excel spreadsheet with CRUDLAFS as column headings, all my known or probable entities down the left, and probable hours to accomplish each in the cells, like so:

An example of a CRUDLAFS estimate (numbers are totally contrived)

So in my example, the estimated time to create a list of orders is 5 hours.

And when I say known or probable entities, I mean: on smaller projects, I usually have a pretty good idea what all the entities will be, but on big projects, I charge hourly until the requirements, data model, GUI design / functional specifications, and initial risk assessment are complete. Most larger projects just have too much back and forth communication, and the customer usually doesn’t have more than a vague idea of what they want. This was a very important distinction which has kept me out of bankruptcy.

I get the feeling this may not work well with agile approaches, but for Big Design Up Front projects, it has been a major leap for me.

Copyright © John MacIntyre 2010, All rights reserved

March 19, 2010 Posted by | Consulting | , | Leave a comment

7 Features I Wish C# Had

A while ago I saw the StackOverflow question What enhancements do you want for your programming language?. To my surprise, I was actually able to come up with something, but to be honest I don’t really think very much about what should be changed in my current programming language. Don’t get me wrong, I bitched and moaned about Visual Basic for about a decade, but it wasn’t that I wanted VB ‘changed’; I just didn’t want to work in it at all. But I digress; actually, you may want to avoid me if I ever get on the topic.

This question about what I would like to see in a programming language is something I’ve thought about since, but the only real features I can think of revolve around building a domain-specific language for a specific industry and methodology. I think it would be cool to build a language around a technical analysis technique called Elliott Wave, for example.

More recently, I’ve heard Jeff Atwood assert, on the StackOverflow podcast, that any good programmer can whip off 10 things they hate about their favorite programming language in a flash.*

I couldn’t come up with 10 things I hate, so I’m going to settle for 7 features I’d like to see. There may be good reasons why we don’t have some of them, but here’s my list anyway:

1. Parameter constraints

One of the first things I learned to do when I started programming is to validate the parameters. This has saved me untold hours of debugging, and I currently start every method with something like:

void DoSomething(float percentage, string userName)
{
	if (0 > percentage || 100 < percentage) 
		throw new ArgumentOutOfRangeException();
	if (string.IsNullOrEmpty(userName)) 
		throw new ArgumentNullException();
	if (20 < userName.Length) 
		throw new ArgumentException();
	if (userName.Equals("Guest")) 
		throw new ArgumentException("User must log in.");

	// .... do stuff
}

This is in almost every method. So why can’t the language do this for me automatically?

Yes, I know about declarative programming, but isn’t that just pushing it into attributes, where a 3rd party tool will process it?

What I’m suggesting is something similar to the generics keyword ‘where’. Why not have something like this:

void DoSomething(float percentage where 0 <= percentage && 100 >= percentage, 
		string userName where !string.IsNullOrEmpty(userName) && 20 >= userName.Length && !"Guest".Equals(userName))
{
	// .... do stuff
}

Really complex validation rules could be separated into their own function, or you could always fall back to validation in the body of the method the way we do it now.

2. An ‘IN’ operator

I find the following code a bit redundant:

if (x == 1 || x == 2 || x == 3 || x == 5 || x == 8 || x == 13)
	// .... do stuff

I’d prefer to borrow the SQL keyword and just write

if (x in (1, 2, 3, 5, 8, 13))
	// .... do stuff

EDIT: Apparently SecretGeek already created a class for SQL Style Extensions for C# which does exactly what I wanted (for strings anyway). Thanks Darrel Miller for the link.
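
For what it’s worth, here is a minimal sketch of how such an extension method could look (my own version, not SecretGeek’s; the name In is just illustrative):

using System.Linq;

public static class InExtensions
{
    // Returns true if value appears in the list of candidates.
    public static bool In<T>(this T value, params T[] candidates)
    {
        return candidates.Contains(value);
    }
}

// Usage:
//   if (x.In(1, 2, 3, 5, 8, 13))
//       // .... do stuff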

3. A 3 way between operator

I’ve always wished we could replace

if (fiscalYearStartDate <= currentDate && currentDate <= fiscalYearEndDate)
	// ... do stuff

with

if (fiscalYearStartDate <= currentDate <= fiscalYearEndDate)
	// ... do stuff

4. Only allow var on true anonymous types

var adds awesome functionality to C#, like the ability to create anonymous types from an expression. However, when I start seeing code where all the variables are typed ‘var‘ I get a little worried.

Now I know about type inference and that var is the same at run time as explicitly stating the type, but when you want to look up what the heck variable x is, it’d be nice if the variable definition would tell me

ObjectX v = new ObjectX();

instead of

var v = new ObjectX();

I also realize I can figure out the type by looking at the right side of the assignment operator, and in the example above it’s pretty darn clear exactly what type v is. But what if the expression is a method?

var v = DoSomething();

Now I’ve got to look up the definition of DoSomething().

Not a big deal, but you’re already 2 definitions away from your core task. This is needless resistance as far as I’m concerned.

Well, you say, I could just use IntelliSense to hover over the method to get the method’s return type. That is true, because, thank god, a method’s return type (along with its parameters) must be explicitly specified, so I can get the method’s type from IntelliSense with only one source code definition jump. However, this logic is flawed, because if the type had been used instead of var, IntelliSense would have told me what the type was without any source code definition jumps, rather than the useless “(local variable) object v” tip I receive with it.

You know, I am pretty anal about this type of thing, and NO, I haven’t had a real world problem yet, but it just feels wrong.

5. Constant methods and properties

One of the features I always used when I coded in C++ was constant functions. The great thing about constant functions is they prevent side effects. So if someone, like yourself, alters functionality which is not expected to change anything so that it now has a side effect, it won’t compile and will have to be dealt with.

This isn’t the most popular functionality in C++, and while this concept helped me write bulletproof code on my own, when I started working on a team, they weren’t very thrilled with the keyword and quickly removed it. But hey, that’s kind of a tell, isn’t it?

To be honest, I don’t usually have problems like this with projects I initiate. This may be irrelevant with good design, but I would like the peace of mind in having it anyway.

6. JIT properties without a local variable to cache the value

Remember how you’d need to create a backing field for every property you create? Like:

private int _id = 0;
public int Id
{
	get { return _id; }
	set { _id = value; }
}

Which became

public int Id { get; set;}

But what about properties with JIT functionality or a side effect? Something like this seems awfully unfair:

private int _nextId = 0;
public int NextId
{
	get { return _nextId++; }
	set { _nextId = value; }
}

Why not have something like a thisProperty keyword? Something like:

public int NextId
{
	get { return thisProperty++; }
}

7. Warn command

I’m not talking about the preprocessor #warning. I want something like throw, but without interrupting program flow.

Why? Let’s say your data access layer pulls a null or otherwise unexpected value out of a database for a field which should have a very specific set of values.**

This is recoverable and no big deal (you use a default value), but you might like to somehow warn the user this was done. So: you can’t throw an exception, since that would interrupt your program flow; I can’t think of any framework component you could add***; you could add some external dependency, I suppose, or build your own external dependency which all future apps will require; you cannot use one of the GUI-level features like Session, since it wouldn’t be available from the DAL DLL level; and you definitely don’t want to add parameters to every method so you can pass some warning collection up & down the stack!

But something like a warn command looks awfully elegant!

warn new UnexpectedDataWarning("Unexpected status code. Set to 'Open'.");

Then at the GUI level, you can traverse the warnings collection and display the warnings to the user.
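
For illustration, the “build your own” workaround mentioned above might look something like this minimal sketch (entirely my own assumption; WarningCollector and its members are hypothetical names):

using System.Collections.Generic;

// A simple ambient warnings collection: lower layers record warnings without
// interrupting program flow, and the GUI layer reads and clears them later.
public static class WarningCollector
{
    [System.ThreadStatic]
    private static List<string> _warnings;

    private static List<string> Warnings
    {
        get
        {
            if (_warnings == null)
                _warnings = new List<string>();
            return _warnings;
        }
    }

    // Record a warning without throwing.
    public static void Warn(string message)
    {
        Warnings.Add(message);
    }

    // Read and clear all pending warnings (called from the GUI level).
    public static IList<string> Drain()
    {
        var copy = new List<string>(Warnings);
        Warnings.Clear();
        return copy;
    }
}

// In the DAL:  WarningCollector.Warn("Unexpected status code. Set to 'Open'.");
// In the GUI:  foreach (var warning in WarningCollector.Drain()) { /* display it */ }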

* I can’t figure out which episode it was, so no link. Please comment if you know. Thanks.
** Yes, the database should have constraints, but sometimes things are not under your control or perhaps you have a logical reason to not create constraints … but that’s another post … probably early next week.
*** At least I don’t know of anything. Please let me know if there is something built into core framework which would allow this.

Copyright © John MacIntyre 2010, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

March 17, 2010 Posted by | C#, Programming | , , , | 14 Comments

When to start looking for another job

Last week I listened to Seth Godin’s audiobook Linchpin. It really resonated with me and I had a few significant insights.

One of the interesting ideas he promoted was the idea of ‘gifts’. A gift, as he explains it, is any additional work above and beyond what is required as part of the ‘transaction’. The transaction is fulfilling your end of the bargain for their end of the bargain. Seth Godin explains a transaction as

If I sell you something, we exchange items of value. You give me money, I give you stuff, or a service. The deal is done. We’re even. Even steven, in fact.

And he explains a gift as

If I give you something, or way more than you paid for, an imbalance is created.

Let’s say a client is having an issue, and after some digging, you have an insight where a slight change not only resolves the current problem, but prevents a similar problem from occurring throughout the entire application. The client did not offer to pay for it, and you can’t charge them; as a matter of fact, if you are a consultant, it will reduce future billable work from the client, since that future problem will never need fixing. The insight and change is a gift.*

Seth continues with regards to the gift:

That imbalance must be resolved.

So how is this imbalance resolved? … Appreciation

Yep … that’s it.

If you have a particularly astute client/employer, you may receive additional work and referrals as a consultant, or a bonus, raise, and/or promotion as an employee, but these are peripheral. Appreciation is the critical element for the recipient to experience**. If your gifts are not appreciated, the client/employer does not value what you have to offer and both of you should seek out more compatible relationships.

So why give a gift? Personally, I give gifts because I want to create the best software I am capable of creating. Creating beautiful software, not just meeting requirements, is the very nature of craftsmanship. Beautiful software is a gift. I can’t write software without gifts. I am actually repulsed by the thought of merely delivering what was asked for, since the requirements are always missing something. Yes, I’m repulsed. It reminds me of those Mad magazine comics: ‘If kids designed their own Xmas toys’. With few exceptions, the results would be horrendous!

Unfortunately, I’ve noticed a downward cycle with the reception of my software gifts. I’ve noticed when starting a new position, gifts are recognized and appreciated. You are ‘the man’ (or woman) and everybody is ecstatic with every gift. However the appreciation always seems to dissipate. Maybe it’s a ‘familiarity breeds contempt’ type of thing, or perhaps it’s just me (but I don’t think it is).

I believe there are 5 phases of perception regarding the receipt of your gifts:

  1. Worshiped – You are new with the client/employer and your gifts are truly unexpected, recognized as such, and are appreciated. You get thanked a lot by every recipient.
  2. Valued – You’ve been here for a while, and although these above and beyond tasks are appreciated, they’re not exactly unexpected anymore. You don’t get thanked much, but they realize you are a valued service provider.
  3. Unappreciated – Your gifts are expected and/or unrecognized, but unappreciated any way you slice it.
  4. Tolerated – Your gifts are viewed as a time consuming waste of effort, but tolerated. You continue to provide them out of a desire to do a good job.
  5. Rejected – Your gifts are rejected and no longer tolerated. Every suggestion of doing something with additional benefit is rejected.

In my opinion, the transition from Worshiped to Valued is normal, expected, and even desired. People can’t run around thanking you for the rest of your life, nor would you want them to. It might get a little creepy. 😉

The transition from Valued to Unappreciated is your cue to leave. This is a downward slide, and it’s unlikely that things are ever going to move back up to Valued***. You are in a great position to find other work, you have plenty of time to find the ideal next job or project, and you are leaving on a high note with a favorable memory still in their minds. However, you do need to be objective in your observation: your gifts may still be recognized and appreciated, and the feedback you are receiving may be based on another pressing issue at the company and therefore misleading.

If you’ve moved to Tolerated or Rejected; you have completely missed your cue to leave and there is an obviously serious disconnect between what you are offering as a gift and what management perceives as valued. Regardless of the reason, both you and the client/employer might need to seek more compatible relationships***.

So what if the problem is not in the ‘perception’, but you have actually become complacent and are no longer delivering the gifts? If this is the case, you had better get back on track, because these gifts are your value-added proposition and the only thing separating you from the lowest-cost outsourcing alternative.

EDIT (03/08/2010): Ben Alabaster added a great comment about how an IT department being viewed as a Cost Center or Profit Center will impact on how your gifts are perceived.

*This is not a gift if the change took a significantly larger amount of time which the client did not agree to.
** The appreciation does not have to be communicated, but it must be felt by the recipient.
*** It has occurred to me that gifts could be ‘adjusted’ to more closely align with what management values. However, I think this idea is flawed, since your ‘gift’ is your best idea for improving the software, whereas management is mostly interested in features, which is completely different.

Copyright © John MacIntyre 2010, All rights reserved

March 8, 2010 Posted by | Career, Consulting | , | 7 Comments
