While I Compile

… I compile my thoughts about programming

Finding my voice

I’ve written before (non-programming pieces) and although I’m not a writer, I feel I wrote fairly well. It was time consuming, and I probably made grammar errors I’m not even aware of … but all in all, I feel I did a good job.

However, since starting ‘While I Compile’, I must admit I haven’t felt the same kind of flow. Each post has felt forced, and in rereading them, I’m unsatisfied. For example, the post on ‘Knowledge Holes & The Brilliant New Programmer’ describes an insight which colleagues and clients found perceptive when I shared it verbally, but when I reread that post, it doesn’t seem to pack the same punch.

I do enjoy blogging, and plan to continue. I’ve got a ton of great ideas and thoughts I want to share. But basically, I appear to be in the ‘Finding Your Voice’ phase. Over the next couple of months, I may try a few different styles and see which one is the most effective. By effective I mean: first and foremost delivering real value, then maximizing my efficiency, and maximizing my own insights, innovations, and learning.

I hope you’ll stick with me during this phase, that you won’t judge this blog on any one post, and that you’ll possibly even subscribe to the feed. Like I said, I do have some really good ideas that I plan to share.

Please leave a comment below on anything you feel I’m doing right, or anything you feel I could improve.

Thanks,
John MacIntyre

EDIT [02/24/09] – I’ve since yanked the ‘Knowledge Holes’ post. I’m just going to sit on it for a while, and perhaps it will be reborn in a new post in the future.

February 20, 2009 | Non-Programming, This Blog

How To Get The Most Frequently Used Column Values

Whenever I import external data, integrate with another database, or am new to a project, I need to get familiar with the database. The table schemas, relational integrity, and constraints are the first things I look at, and they take me a long way, but soon I need to know what the data looks like.

In an ideal world, relational integrity and database constraints would define and control this, and all I’d really need to do is look at those. But the reality is, in 15 years of working in this industry, most of the databases I’ve worked on that I didn’t design have barely used constraints, and some haven’t even used relational integrity fully!

The need to get a good feel for the data is even greater when working with dirty data, or when refactoring a poorly written application, to ensure my changes don’t introduce other issues. I usually wind up writing the following query repeatedly:

Select	column_name, count(*)
From	table_name
Group by column_name
Order by count(*) desc, column_name

This little query often reveals inconsistencies between the data and the application. For example, an application may set column X to possible values of ‘A’, ‘B’, ‘C’, ‘D’, or ‘E’, but in reality there may be zero ‘C’ and ‘E’ values in that column, while there are 6 ‘X’s, 1 ‘Q’, and a ‘?’. Or I may find that only 6 rows out of almost 3 million have data in that column, indicating the column, and the application feature behind it, is unused.
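For instance, in the column X scenario, the query’s output might look something like this (the ‘A’, ‘B’, and ‘D’ counts are made-up numbers, purely for illustration):

column_name
----------- -----------
A           1742
B           903
D           421
X           6
Q           1
?           1

Notice that ‘C’ and ‘E’ never show up at all; that’s exactly the kind of mismatch between the application and the data this query exposes.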

Anyway, yesterday I finally wrote a little stored procedure which will print out the most frequent N values in each column for a specified table.

/*
Purpose : Retrieves the top N most frequent values in 
        each column of the specified data table.
Copyright © Jaten Systems Inc. 2009, All rights reserved
http://www.jatensystems.com/
https://whileicompile.wordpress.com/
http://www.johnmacintyre.ca/
*/
create proc jaten_diag_GetTableContents
(@tblName nvarchar(200),
@rowCount int)
as
begin
	declare @colId int;
	declare @colName nvarchar(100);
	declare @sql nvarchar(2048);
	declare @tblRowCount int;

	-- cursor to get columns
	DECLARE curColumns CURSOR FAST_FORWARD 
			FOR	SELECT	colid, [name] 
					FROM	syscolumns
					where	id=object_id(@tblName)
					order by colid
	open curColumns;

	-- get table row count
	set	@sql = 'select @retCount=count(*) from ' + @tblName;
	exec sp_executeSQL @sql, N'@retCount int OUTPUT', @retCount = @tblRowCount OUTPUT;

	-- print table header
	print '';
	print '---------------------------------';
	print '--- ' + @tblName;
	print '--- Row count : ' + cast(@tblRowCount as nvarchar)
	print '---------------------------------';

	-- get info for each column
	fetch next from curColumns into @colId, @colName;
	while 0 = @@fetch_status
	begin
			-- print column header
			print '';
			print '---------------------------------';
			print '--- Column [' + cast(@colId as nvarchar) + '] - ' + @colName + ' ---';

			-- compile & execute grouping sql
			select	@sql = 'select	top ' + cast(@rowCount as nvarchar) 
							+ '		count(*) as [count], '
							+ '		cast(((count(*) * 100)/nullif(' + cast( @tblRowCount as nvarchar) + ',0)) as nvarchar) + ''%'' as [Percentage], ' -- nullif guards against a zero row count
							+ '		[' + @colName + '] as [col_value] ' 
							+ 'from	' + @tblName + ' ' 
							+ 'group by [' + @colName + '] ' 
							+ 'order by count(*) desc, [' + @colName + ']';
			exec sp_executeSQL @sql;
			--print @sql;

			-- next
			fetch next from curColumns into @colId, @colName;
	end

	-- clean up
	close curColumns;
	deallocate curColumns;
end

Please note two things:
1. You need to run it with the ‘Results to Text’ or ‘Results to File’ setting.
2. The table name parameter will need square brackets if the table name uses unconventional characters (see the example below).
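For example, for a hypothetical table whose name contains a space, the call would look like this:

exec jaten_diag_GetTableContents '[dbo].[Order Details]', 5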

If you create it and run it in AdventureWorks on the ‘Production.Product’ table

exec jaten_diag_GetTableContents 'Production.Product', 5

… you will get these results:

---------------------------------
--- Production.Product
--- Row count : 504
---------------------------------

---------------------------------
--- Column [1] - ProductID ---
count       Percentage                      col_value
----------- ------------------------------- -----------
1           0%                              1
1           0%                              2
1           0%                              3
1           0%                              4
1           0%                              316

(5 row(s) affected)

....

---------------------------------
--- Column [6] - Color ---
count       Percentage                      col_value
----------- ------------------------------- ---------------
248         49%                             NULL
93          18%                             Black
43          8%                              Silver
38          7%                              Red
36          7%                              Yellow

(5 row(s) affected)

....

Notice how the Color column reveals that almost half of the products do not have a color set? This could mean the column simply isn’t relevant for those products, or it could mean there’s a problem maintaining this data. But also, notice how unique columns, like ProductID, will obviously produce meaningless results.

AdventureWorks is a very clean database, so this example is a bit contrived, but in the real world there are plenty of databases where this little procedure will give you some real insight into the data.

How do you get familiar with new data?

Copyright © John MacIntyre 2009, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

February 8, 2009 | Code, SQL Server

How To Guarantee Dependent JavaScript Files are Included

I’m very much a statically typed kind of programmer, and knowing that missing code will not be found until the app executes, and then only when it hits that missing code … well, let’s just say it makes me … uneasy … actually, I’m nervous as heck every time I run it! I feel like my app is held together with duct tape! … a house of cards waiting to collapse in the next gentle breeze.

When I started programming in JavaScript, this really bothered me. In addition to my ‘uneasy’ feeling, there was the constant aggravation of missing dependencies. This was more than uneasiness; this was an irritating thorn in my side. It was compounded by the fact that if there were 5 missing dependencies, I would only find them one at a time, and only if I happened to be lucky enough to cover each reference in my GUI testing!

Wouldn’t it be nice if my code would tell me, on the first run, every dependent file that was missing?

I thought so, so I came up with this little trick to test whether a JavaScript file has already been included. Basically, you create a variable at the top of each JavaScript file which others depend on. I try to keep the name obvious, yet weird enough to avoid collisions.

var include_ajax_utility = true;

Then, in all the files which depend on this script (the ajax utility in this case), I add the following code to check for it.

//----------------------------------
// Check for dependencies
//----------------------------------
try
{
	var fileName = "mypage.events.js";
	if( "undefined" == typeof(include_ajax_utility))
		alert( "JavaScript file '" + fileName 
			+ "' is missing dependent file"
			+ " : ajax.utility.js");
}
catch(e)
{
	alert( "Programmer Alert : At least one dependent”
		+ “ file has not been included.");
}

Notice the code checks to confirm the variable is set (in other words, that it exists), and if it isn’t, an alert is displayed indicating exactly which dependent file is missing. [As a side note, the try-catch block doesn’t appear to be necessary, but still … I like to cover all my bases.]

There are a couple of drawbacks to doing this, however: a) you may get multiple messages saying practically the same thing, and b) the JavaScript script tags need to be in dependency order, as shown below! Yes, the ordering of tags can be frustrating; inserting a new dependency into existing code, only to have this break, is enough to make most people quickly abandon the idea. But for me, it’s a small price to pay for a giant step forward in having a sense of stability.
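For reference, here’s the script tag ordering this approach requires, using the two file names from the example above (the paths are assumptions):

<!-- ajax.utility.js must be included first, since mypage.events.js checks for it -->
<script type="text/javascript" src="ajax.utility.js"></script>
<script type="text/javascript" src="mypage.events.js"></script>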

What do you think? A little too anal? 😉

Copyright © John MacIntyre 2009, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

February 6, 2009 | Code, JavaScript

How To Write Dynamic SQL AND Prevent SQL Injection Attacks

One of my pet peeves is when general rules are taken as gospel, and declared as the only acceptable practice regardless of the circumstance.

One of the big ones is the rule against Dynamic SQL. There’s a heck of a good reason for it, and it’s called an SQL Injection Attack; if you are not familiar with it, I would strongly urge you to leave this post right now and read up on it.

Anyway, Dynamic SQL is not inherently evil; it’s the appending of user-entered text that is evil. Appending user-entered text is just lazy, and can easily be avoided with parameterization.

The trick is to create dynamic SQL with parameters.

Interestingly, I’ve never seen anybody else do this. I am constantly hearing people recommend stored procedures … even when they are clearly not flexible enough to meet the required functionality. Don’t get me wrong, stored procedures have a lot of benefits, but flexibility isn’t one they’re popular for.

And now for some code …

I created a console app which queries the SQL Server sample database AdventureWorks, and added the following static method to the Program class.

public static int GetOrderCount(string productPrefix, SqlConnection cn)
{
	// initialize SQL
	string starterSql = "SELECT count(*) FROM Production.Product";
	StringBuilder sbSql = new StringBuilder(starterSql);

	// add parameters
	if( !String.IsNullOrEmpty(productPrefix))
		sbSql.Append( " where [name] like @namePrefix");

	// initialize the command
	SqlCommand cmd = new SqlCommand(sbSql.ToString(), cn);
	if (cmd.CommandText.Contains("@namePrefix"))
		cmd.Parameters.AddWithValue("@namePrefix", productPrefix + "%");

	// get count
	return Convert.ToInt32( cmd.ExecuteScalar());
}

Basically, the function queries the number of products whose name starts with a certain prefix.

The strength of doing this via dynamic SQL is that we only need to filter on the product name when a valid prefix parameter is passed in. So if the optional parameter (productPrefix) exists and is valid, the filter condition is added to the SQL, and the parameter is added to the SqlCommand object.

In this overly simplified example, we could manage the same thing by just setting the productPrefix variable to the ‘%’ wildcard, but then we’d be filtering for nothing. Not to mention things might be a little more difficult if the operator were ‘equals’ instead of ‘like’, or if there were multiple optional parameters. Creating the SQL dynamically means we don’t need to write some funky kludge, and our SQL is always nice, simple, and doing minimal work. A sketch of how the same pattern scales to multiple optional parameters follows below.
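Here’s a rough sketch of that multiple-optional-parameter case. The GetProductCount name, the nullable subcategory parameter, and the ProductSubcategoryID filter are my own assumptions for illustration, not code from a real app:

public static int GetProductCount(string namePrefix, int? subcategoryId, SqlConnection cn)
{
	// initialize SQL
	StringBuilder sbSql = new StringBuilder("SELECT count(*) FROM Production.Product");
	SqlCommand cmd = new SqlCommand();
	cmd.Connection = cn;

	// collect a condition, and its parameter, for each filter actually supplied
	List<string> conditions = new List<string>();
	if (!String.IsNullOrEmpty(namePrefix))
	{
		conditions.Add("[name] like @namePrefix");
		cmd.Parameters.AddWithValue("@namePrefix", namePrefix + "%");
	}
	if (subcategoryId.HasValue)
	{
		conditions.Add("ProductSubcategoryID = @subcategoryId");
		cmd.Parameters.AddWithValue("@subcategoryId", subcategoryId.Value);
	}

	// no filters supplied means no where clause at all
	if (conditions.Count > 0)
		sbSql.Append(" where ").Append(String.Join(" and ", conditions.ToArray()));

	// get count
	cmd.CommandText = sbSql.ToString();
	return Convert.ToInt32(cmd.ExecuteScalar());
}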

To execute my function, I added the following code to the Main(…) method.

// get total count
Console.WriteLine( "There are {0} products in total.", 
			Program.GetOrderCount( null, cn));

// get totals for various prefixes
string[] prefixes = new string[6] { "a", "b", "c", 
						"'; drop table Production.Product;--", 
						"d", "e" };
foreach(string prefix in prefixes)
{
	Console.WriteLine("There are {0} products"
			+ " prefixed with '{1}'.",
			Program.GetOrderCount(prefix, cn), prefix);
}

First we call GetOrderCount(…) without a name prefix, to test it without the parameter; then we traverse the array of possible prefixes (this would be the user-entered data in a real app). Notice the fourth item? Pretty menacing, eh? Don’t worry, it’s safe.

Here are the results:

There are 504 products in total.

There are 3 products prefixed with 'a'.
There are 4 products prefixed with 'b'.
There are 12 products prefixed with 'c'.
There are 0 products prefixed with ''; drop table Production.Product;--'.
There are 3 products prefixed with 'd'.
There are 9 products prefixed with 'e'.


Notice the ‘d’ and ‘e’ prefixes were searched, and items found, proving the ‘drop table’ statement was not injected into the command.

You’d be surprised how much I use this. Many of my objects have a static GetList(…) method, and this method usually has multiple overloads. Keeping with the DRY principle, I prefer to keep all my logic in one place, so there will usually be one overload with every possible filter parameter, and all the other overloads will just call that one, as sketched below. Surprisingly, the overload holding all the logic is not overly complex; it’s actually pretty simple.
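As a rough illustration of that delegation, reusing the hypothetical GetProductCount sketch from above as the ‘master’ overload:

// each thin overload just fills in nulls and delegates to the one
// overload that holds all the filtering logic
public static int GetProductCount(SqlConnection cn)
{
	return GetProductCount(null, null, cn);
}

public static int GetProductCount(string namePrefix, SqlConnection cn)
{
	return GetProductCount(namePrefix, null, cn);
}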

What do you think? Will you use parameterized dynamic SQL in the future?

Copyright © John MacIntyre 2009, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

February 5, 2009 | C#, Code, Security, SQL