While I Compile

… I compile my thoughts about programming

10 things to review after finishing your data model

When I finish modeling my database, I like to let it sit for a couple of days, then spend some time reviewing it to check for inconsistencies. The kind of inconsistencies you never notice when you’re up to your eyeballs in details, but that drive you up the wall after 5 years of maintenance.

When I finished my database design a few days ago, I jotted a few things down to remember to do. But when I transcribed them into my bug tracker, where I manage all my tasks, 3 things turned into 5, then 7, and I realized that if I could just add a few more, I’d have an infamous ‘Top 10’ list.

Anyway, here it is. It’s not exactly comprehensive, but it’s a start.

10 things to review after finishing your data model:

  1. Naming consistencies
  2. Column default consistencies
  3. Identity / auto-number technique exists
  4. Constraint consistencies
  5. Foreign Key relationships exist
  6. Indexes on Foreign Key columns (see the sketch after this list)
  7. Abbreviations are consistent
  8. Abbreviations are documented
  9. Data type and size consistencies (for example, TableA.UserName is nvarchar(N), so TableB.UserName should be nvarchar(N) as well, not nvarchar(N±X))
  10. Review all requirements again to reconfirm everything was covered
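
For item 6, here is a minimal sketch of the kind of query you could run on SQL Server 2005+ to spot foreign key columns with no index on them. The catalog views are standard, but treat this as a starting point rather than an exhaustive audit.

-- Rough check for foreign key columns that are not the leading column of any index.
-- (SQL Server 2005+ catalog views; a starting point, not a complete review.)
select fk.[name]                                            as fk_name,
       object_name(fkc.parent_object_id)                    as table_name,
       col_name(fkc.parent_object_id, fkc.parent_column_id) as column_name
from sys.foreign_keys as fk
   inner join sys.foreign_key_columns as fkc
    on fkc.constraint_object_id = fk.[object_id]
where not exists ( select 1
                   from sys.index_columns as ic
                   where ic.[object_id] = fkc.parent_object_id
                     and ic.column_id   = fkc.parent_column_id
                     and ic.key_ordinal = 1 )
order by table_name, column_name;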

November 29, 2009 | Posted in Programming, SQL Server

How to enforce a foreign key constraint against multiple tables

I am building a web app with Ben Alabaster, and one of the requirements is for the user to be able to flag items for moderators. So the user can flag entity A, entity B, entity C, etc…

So I created a single flag table.

[Image: Flag table]

Which I then tried to tie to the entity tables, hoping for something like

[Image: Ideal foreign key relationships]

Where all the foreign key relationships run from [flag].[entity_id] to [EntityX].[id].
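
Since the diagrams don’t reproduce here, a rough sketch of what that schema amounts to follows. Only the column and constraint names come from the post; the data types, sizes, and the flag table’s surrogate key are my assumptions.

-- Rough reconstruction of the tables in the diagrams (types and sizes are assumptions).
create table EntityA ( id int primary key, [name] nvarchar(50) );
create table EntityB ( id int primary key, [name] nvarchar(50) );
create table EntityC ( id int primary key, [name] nvarchar(50) );
create table EntityD ( id int primary key, [name] nvarchar(50) );

create table flag
( flag_id     int identity(1,1) primary key,  -- assumed surrogate key
  entity_id   int not null,
  flag_reason nvarchar(255) not null );

-- the "ideal" relationships: every foreign key hangs off the same column
alter table flag add constraint FK_flag_EntityA foreign key (entity_id) references EntityA(id);
alter table flag add constraint FK_flag_EntityB foreign key (entity_id) references EntityB(id);
alter table flag add constraint FK_flag_EntityC foreign key (entity_id) references EntityC(id);
alter table flag add constraint FK_flag_EntityD foreign key (entity_id) references EntityD(id);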

Then when I wanted the top 10 flags from a particular entity (B in this case), I could run a query like

select top 10 e.[name], count(*) as [count]
from entityB as e
   left join flag as f
    on f.entity_id = e.id
where f.entity_type='B'
group by e.[name]
order by count(*) desc

Unfortunately, if you were to create the above table relationship, and run the following inserts

insert into EntityA( id, name) values (1, 'EntityA');
insert into EntityB( id, name) values (2, 'EntityB');
insert into EntityC( id, name) values (3, 'EntityC');
insert into EntityD( id, name) values (4, 'EntityD');

The following statement

insert into flag(entity_id, flag_reason) values(5, 'Testing without a valid FK value.');

would fail, as expected, with the following error: “The INSERT statement conflicted with the FOREIGN KEY constraint “FK_flag_EntityA”. The conflict occurred in database “test”, table “dbo.EntityA”, column ‘id’.”

But

insert into flag(entity_id, flag_reason) values(1, 'Testing the FK to entity A.');

would also fail, which was undesired, with the following error: “The INSERT statement conflicted with the FOREIGN KEY constraint “FK_flag_EntityB”. The conflict occurred in database “test”, table “dbo.EntityB”, column ‘id’.” +

So, my options with regard to referential integrity are:

  1. Ditch the referential integrity, which I am vehemently opposed to. ++
  2. Create multiple flag tables, each with the exact same schema but a different foreign key relationship, which just seems wrong.
  3. Manage referential integrity via triggers.

While I’m not a big fan of triggers, the ‘manage referential integrity via triggers’ option seems like the only tolerable one. So I added the [entity_type] column to my flag table.

[Image: Flag table with entity_type column]

Removed the relationships

[Image: No relationships]

And wrote the following trigger to manage the foreign key relationship.

-- =============================================
-- Description: maintain referential integrity on
-- a column which is a FK for different tables
-- =============================================
CREATE TRIGGER flag_entity_id_fk
ON flag
AFTER INSERT,UPDATE
AS
BEGIN
declare @entity_type char(1);
declare @entity_id int;
declare @cnt int;

 -- SET NOCOUNT ON added to prevent extra result sets from
 -- interfering with SELECT statements.
 SET NOCOUNT ON;

 -- get info
 -- (note: this reads only a single row from [inserted]; a multi-row insert/update is not fully validated)
 select @entity_type=entity_type,
   @entity_id=entity_id,
   @cnt=0
 from inserted;
 
 -- check if records exist
 if 'A' = @entity_type
 begin
  select @cnt=count(*)
  from entityA
  where id=@entity_id;
 end
 else if 'B' = @entity_type
 begin
  select @cnt=count(*)
  from entityB
  where id=@entity_id;
 end
 else if 'C' = @entity_type
 begin
  select @cnt=count(*)
  from entityC
  where id=@entity_id;
 end
 else if 'D' = @entity_type
 begin
  select @cnt=count(*)
  from entityD
  where id=@entity_id;
 end

 -- records exist? exit
 if 0 < @cnt
 begin
  return;
 end

 -- no? error
 raiserror( 'Unable to find foreign key match on entity type ''%s'', id ''%d''.', 16, 1, @entity_type, @entity_id);
 rollback transaction;
END

Now, when you run

insert into flag(entity_type, entity_id, flag_reason) values('B', 5, 'Testing without a valid FK value.');

The trigger doesn’t find a match in the appropriate table, rolls back the insert, and gives you a descriptive error message.

Unable to find foreign key match on entity type ‘B’, id ‘5’.

However, a good value is accepted.

insert into flag(entity_type, entity_id, flag_reason) values('B', 2, 'Testing with a valid FK value.');

I’m still not happy with this approach, but it does seem to be the lesser of all the evils. Please let me know with a comment if there is another option I’ve overlooked. Thanks.

* Frankly I was surprised it even compiled.
+ Unless of course you were unfortunate enough to test this in a coincidental situation where all tables happened to contain the id of every test you ran.
++ Yes ‘vehemently’

EDIT (11/10/2009): It just occurred to me that this article does not take into account what would happen if a row that this table points to were deleted from one of the entity tables. When I designed my tables this was taken into account, but since we are not planning to allow actual deletions, it was left out. However, if you were to implement this strategy where entities could be deleted, a delete trigger would need to be created for each entity table.
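
For what it’s worth, here is a minimal sketch of one such delete trigger, written against the EntityA table from the examples above (the trigger name is made up):

-- Blocks deletes from EntityA while any flag row still points at the deleted rows.
-- (Sketch only; one of these would be needed per entity table.)
CREATE TRIGGER entityA_delete_flag_fk
ON EntityA
AFTER DELETE
AS
BEGIN
 SET NOCOUNT ON;

 if exists ( select 1
             from flag as f
                inner join deleted as d
                 on f.entity_id = d.id
             where f.entity_type = 'A' )
 begin
  raiserror( 'Unable to delete from EntityA; flag rows still reference it.', 16, 1);
  rollback transaction;
 end
END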

November 9, 2009 | Posted in Programming, SQL Server

6 simple steps to a stress free database change deployment

A few years ago, I was contracted to work on an internal web application in a large Fortune 500 company. Our app was constantly in use, had thousands of users, and had a rather large database (one table had over 300 million records).

I was fortunate enough to work beside a DBA who had, among other tasks, the responsibility of running deployment scripts against production. We talked occasionally, and he’d tell me about some of the deployment scripts he ran and the programmers he ran them for. There were programmers whose scripts never failed or reported errors, and who would actually request the output text to confirm the DBA had done their job. Then there were the programmers who would work against development, and when it was time to deploy, they’d attempt to remember all the subtle database changes made along the way and whack together a SQL script to repeat them. My friend was often rejecting scripts which had obviously never been run before, and sometimes he’d have to run expedited follow-up scripts because the first one was missing a necessary column or two.

Deployments can be a real headache at the best of times, but especially when schema updates to a production database are involved. Don’t get me wrong, you usually have a backup to fall back on, but how long will that take to restore? … Really, you don’t want to resort to the restore, have the database offline for that long, or have your name associated with it.

So gradually I evolved a process which has kept me sane and confident when deploying schema changes to production servers, even on large, sensitive, and active databases.

Here’s my simple 6 step process for deployment sanity. (FWIW-I’ve never called it my ‘6 step process…’ before writing this post. ;-)

Step 1 – Create a central shared directory for all scripts

The first thing you do is store all database changes as scripts in a central shared directory.

I usually store the scripts in a directory structure like
\projects\ProjX\DeploymentScripts\dbname.production\
\projects\ProjX\DeploymentScripts\dbname.archive\deployed.YYMMDD\

Where the ‘dbname.production’ folder stores currently pending scripts, and ‘dbname.archive’ stores previously deployed scripts, each deployment in its own dated subdirectory.

Step 2 – Create scripts for all changes

Any and all changes are scripted and stored. If you use a tool to adjust your database schema, then export the generated SQL and run that exported SQL instead of running the tool directly against your database.

Keep scripts in order and don’t change old scripts. If … ok, when you make a mistake, don’t try to go back to an older script and change it; just write a new one to undo the mistake.

Each script is saved in the ‘dbname.production’ folder with the following naming convention:

dbname.##.user.descr.sql

where:

dbname : the database to deploy it to (mostly so you can see it in your Query Analyzer title bar)
## : the sequence number to deploy it in (just increment the highest number)
user : initials of the programmer who wrote the script
descr : a brief description of what the script does

Here are some examples
HR.01.jrm.AdjustedTblEmployees_AddedStartDateColumn.sql
HR.02.jrm.AdjustedProc_sp_GetEmployees.sql
HR.03.jrm.AdjustedProc_sp_ActiveEmployees.sql
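
To give a sense of scale, a script like the first one above might contain nothing more than the scripted change itself. The table and column names here are made up to match the file name:

-- HR.01.jrm.AdjustedTblEmployees_AddedStartDateColumn.sql
-- (hypothetical contents matching the file name above)
ALTER TABLE dbo.Employees
 ADD StartDate datetime NULL;
GO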

Step 3 – Number your scripts

As you may have noticed in step 2, scripts are numbered in the order they are created, so that object dependencies exist when they’re run. Do you like dependent objects to exist? Yeah? … me too.

In case you’re wondering, single digits are prefixed with ‘0’ so they sort properly in Windows Explorer. And there is only room for 2 digits, since you will rarely, if ever, create more than 99 scripts before deploying.

Step 4 – Keep track of what has been deployed to where

Ok, so you’ve got your ‘dbname.production’ directory with 10 SQL files in there, you’ve got 3 developers each with their own copy of the database, a testing and/or business validation environment, and production.

Sooo … what’s been deployed to where? It’s pretty easy to lose track of what got deployed to validation last week. … Sorry … did we deploy 5? Or did we stop at 4?

I toyed with a few different ways of doing this, but finally settled on a very obvious and simple solution: keep an empty text file for each database, and update the number in its name each time scripts are run against it. Since I want them to sort intermingled with the script files, I use a similar naming convention:
dbname.##.zzz--DEPLOYED-TEST

Here are some examples mixed in with the previous examples:
HR.00.zzz--DEPLOYED-DevBox2
HR.00.zzz--DEPLOYED-DevBox3
HR.01.jrm.AdjustedTblEmployees_AddedStartDateColumn.sql
HR.02.jrm.AdjustedProc_sp_GetEmployees.sql
HR.02.zzz--DEPLOYED-TEST
HR.03.jrm.AdjustedProc_sp_ActiveEmployees.sql
HR.03.zzz--DEPLOYED-DevBox1

Notice how they’re sorted amongst the scripts and it’s obvious what got deployed? I could probably delete the entire text of this post, other than that sample, and you could figure out my deployment process. … But since I enjoy typing … I’ll continue. ;-)

When you finish executing your scripts, increment the number on the marker file to reflect the last script run on that database.

Step 5 – Execute scripts in order

Obviously.

Step 6 – Post deployment clean up

When the scripts are deployed to production, create a new deployed.YYMMDD subdirectory (with today’s date) in the dbname.archive directory, and move all the SQL files from the ‘dbname.production’ directory into it. Then renumber the deployment marker files back to 00.

This can be incorporated into almost any database schema change deployment process, and isn’t specific to SQL Server. After all, I started evolving this process while working on an Oracle database.

It has occurred to me that this may be an old school idea, and modern database schema diff tools may provide a way to avoid this kind of preparation. But unless you are working as a sole developer, I can’t imagine a diff tool making your life easier than what I’ve outlined above.

I hope your next deployment is less of a nail biter.

Copyright © John MacIntyre 2009, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

May 1, 2009 | Posted in Programming

How To Get The Most Frequently Used Column Values

Whenever I import external data, integrate with another database, or am new to a project, I need to get familiar with the database. The table schemas, relational integrity, and constraints are the first things I look at, and they take me a long way, but soon I need to know what the data looks like.

In an ideal world, relational integrity and database constraints would define and control this, and all I’d really need to do is look at them. But the reality is, in 15 years of working in this industry, most of the databases I’ve worked on that I didn’t design have barely used constraints, and some haven’t even used relational integrity fully!

The need to get a good feel for the data is even more pressing when working with dirty data, or when refactoring poorly written applications to ensure the refactoring doesn’t introduce other issues. I usually wind up writing the following query repeatedly:

Select	column_name, count(*)
From	table_name
Group by column_name
Order by count(*) desc, column_name

This little query often reveals inconsistencies between the data and the application. For example, where an application sets column X to the possible values ‘A’, ‘B’, ‘C’, ‘D’, or ‘E’, in reality there may be zero ‘C’ and ‘E’ values in that column, but six ‘X’s, one ‘Q’, and a ‘?’. Or I may find that only 6 rows out of almost 3 million have data in that column, indicating the column / application feature is unused.

Anyway, yesterday I finally wrote a little stored procedure which will print out the N most frequent values in each column of a specified table.

/*
Purpose : Retrieves the top N most frequent values in
          each column of the specified data table.
Copyright © Jaten Systems Inc. 2009, All rights reserved
http://www.jatensystems.com/
http://whileicompile.wordpress.com/
http://www.johnmacintyre.ca/
*/
create proc jaten_diag_GetTableContents
(@tblName nvarchar(200),
@rowCount int)
as
begin
	declare @colId int;
	declare @colName nvarchar(100);
	declare @sql nvarchar(2048);
	declare @tblRowCount int;

	-- cursor to get columns
	DECLARE curColumns CURSOR FAST_FORWARD 
			FOR	SELECT	colid, [name] 
					FROM	syscolumns
					where	id=object_id(@tblName)
					order by colid
	open curColumns;

	-- get table row count
	set	@sql = 'select @retCount=count(*) from ' + @tblName;
	exec sp_executeSQL @sql, N'@retCount int OUTPUT', @retCount = @tblRowCount OUTPUT;

	-- print table header
	print '';
	print '---------------------------------';
	print '--- ' + @tblName;
	print '--- Row count : ' + cast(@tblRowCount as nvarchar)
	print '---------------------------------';

	-- get info for each column
	fetch next from curColumns into @colId, @colName;
	while 0 = @@fetch_status
	begin
			-- print column header
			print '';
			print '---------------------------------';
			print '--- Column [' + cast(@colId as nvarchar) + '] - ' + @colName + ' ---';

			-- compile & execute grouping sql
			select	@sql = 'select	top ' + cast(@rowCount as nvarchar) 
							+ '		count(*) as [count], '
							+ '		cast(((count(*) * 100)/' + cast( @tblRowCount as nvarchar) + ') as nvarchar) + ''%'' as [Percentage], ' 
							+ '		[' + @colName + '] as [col_value] ' 
							+ 'from	' + @tblName + ' ' 
							+ 'group by [' + @colName + '] ' 
							+ 'order by count(*) desc, [' + @colName + ']';
			exec sp_executeSQL @sql;
			--print @sql;

			-- next
			fetch next from curColumns into @colId, @colName;
	end

	-- clean up
	close curColumns;
	deallocate curColumns;
end

Please note 2 things:
1. You need to run it with the ‘Results to Text’ or ‘Results to File’ setting.
2. The table name parameter will need square brackets if the table name uses unconventional characters.

If you create it and run it in AdventureWorks on the ‘Production.Product’ table

exec jaten_diag_GetTableContents 'Production.Product', 5

… you will get these results

---------------------------------
--- Production.Product
--- Row count : 504
---------------------------------

---------------------------------
--- Column [1] - ProductID ---
count       Percentage  col_value
----------- ----------- -----------
1           0%          1
1           0%          2
1           0%          3
1           0%          4
1           0%          316

(5 row(s) affected)

....

---------------------------------
--- Column [6] - Color ---
count       Percentage  col_value
----------- ----------- ---------------
248         49%         NULL
93          18%         Black
43          8%          Silver
38          7%          Red
36          7%          Yellow

(5 row(s) affected)

….

Notice how the Color column reveals that almost half of the products do not have a color set? This could mean the column isn’t very relevant, or that the data has a problem being maintained. Also notice how unique columns, like ProductID, will obviously produce meaningless results.

The AdventureWorks database is a very clean database, so this example is a bit contrived, but in the real world, there are plenty of databases where this little procedure will allow you to get some insight into the data.

How do you get familiar with new data?

Copyright © John MacIntyre 2009, All rights reserved

WARNING – All source code is written to demonstrate the current concept. It may be unsafe and not exactly optimal.

February 8, 2009 | Posted in Code, SQL Server

   
