Tuesday, December 23, 2008

Simple, but effective code example

The other day I answered this question at SQLServerCentral, "Need a date function to return every Wednesday of the year", and got a couple of kudos, one from Jonathan Kehayias and one from Chris Morris (frequent contributor to the SSC forums), so I thought I'd post the code here and then explain it a bit. So here's the code:

SELECT TOP 366
    IDENTITY(INT, 1, 1) AS n
INTO
    #nums
FROM
    sys.all_objects

SELECT
    DATEADD(DAY, n, '1/1/2008')
FROM
    #nums
WHERE
    DATEPART(weekday, DATEADD(DAY, n, '1/1/2008')) =
        CASE @@DateFirst
            WHEN 7 THEN 4
            WHEN 6 THEN 5
            WHEN 5 THEN 6
            WHEN 4 THEN 7
            WHEN 3 THEN 1
            WHEN 2 THEN 2
            ELSE 3
        END

DROP TABLE #nums


The first section just creates a temporary numbers table; there are several ways to do that, and I opted for a simple one. If you don't have a permanent numbers table already, you should create one, as there are many good uses for it.

Then I just run a simple select against the temporary numbers table, adding each number to the first day of the year. The WHERE clause then compares the value returned by the DATEPART function (weekday) to the value for Wednesday based on the @@DATEFIRST setting. According to Books Online, @@DATEFIRST:

SET DATEFIRST indicates the specified first day of each week. The U.S. English default is 7, Sunday.

Language settings affect date information.


This can also be changed with SET DATEFIRST if you want a value different from the default for the language. The CASE statement is important if you expect the code to work on any database in any SQL Server installation.
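As a variation of my own (not part of the original answer), you can sidestep @@DATEFIRST entirely by anchoring on a date with a known weekday. Date 0 in SQL Server is 1900-01-01, a Monday, so DATEDIFF(DAY, 0, d) % 7 is 0 for Monday and 2 for Wednesday no matter what the DATEFIRST or language setting is:

```sql
-- DATEFIRST-independent version: day 0 (1900-01-01) is a Monday,
-- so DATEDIFF(DAY, 0, d) % 7 = 2 exactly when d is a Wednesday.
SELECT TOP 366
    IDENTITY(INT, 1, 1) AS n
INTO
    #nums2
FROM
    sys.all_objects

SELECT
    DATEADD(DAY, n, '20080101')
FROM
    #nums2
WHERE
    DATEDIFF(DAY, 0, DATEADD(DAY, n, '20080101')) % 7 = 2

DROP TABLE #nums2
```

The '20080101' literal is also unambiguous regardless of language settings, unlike '1/1/2008'.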

Ideally, to solve a problem like this, you would have a dates table, as Jonathan recommends in the thread.

For more information on a numbers table check out these articles:

http://www.sqlservercentral.com/articles/TSQL/62867/

http://www.sqlservercentral.com/articles/Advanced+Querying/2547/

Wednesday, December 17, 2008

Improving the blog

Well, this blog has been up for about 9 months now and I have over 50 posts. I don't know how many "regular" readers I may have, but if you are reading my blog, I'm looking for comments and criticisms (constructive) about it. I believe that I can do a better job with the blog, and I think the best way is to get others to critique what you are doing. So how's my grammar? What can I do to make the posts easier to read? Are the posts too long or too short? Here's the big one, do I stay on topic?

I've read a few "how to blog" blogs and tried to pick out and apply the good stuff.

I really am interested in hearing what you have to say. I know Andy Warren has asked a similar thing in his blog, of course I'm not offering anything in return. You can post a comment on the blog or if you don't want to criticize me publicly you can email me at corbett.jack@gmail.com.

Thanks in advance, and don't worry you won't hurt my feelings.

Friday, December 12, 2008

SQL Quiz #2 - Challenges

Chris Shaw posted a new SQL Quiz and I have been tagged by Kendal Van Dyke, so I need to take up the challenge. The question is: “What are the largest challenges that you have faced in your career and how did you overcome those?”

Starting out

I'd say my first challenge was entering the Information Technology field in the first place. I should explain that I had little interest in computers in high school and college. I was more interested in sports than technology, and my original goal was to become a basketball coach, so I went to college to study Physical Education. Well, along the way I learned that full-time coaching gigs are hard to find, and I don't really like working with children, so teaching was out. A good friend of mine is a developer (a true geek genius) and he started mentoring me. When he was the Software Development manager at the local pulp and paper mill he had two openings he was struggling to fill, and he managed to convince me to apply, as he was convinced I could do the job. Well, I finally gave in and was hired (his boss had to approve). I found out I loved doing the work, but it has always been a struggle since I don't have all the theory behind me, especially at the start, when I had to learn VB, SQL Server, accounting, and pulp and paper making all at once! I managed to do it and have been doing it for almost ten years.

The big project

The second challenge would be the first big project where I was the IT lead. We were purchasing a soup-to-nuts Manufacturing Execution System that required me to write the data transfer process, interface with the vendor's development team for custom modifications, prove bugs were bugs, and basically question everything. The biggest issue I had to solve was a deadlock problem in the database caused by the RF units on our forklifts in the warehouse. Basically, the way it was coded, once you had 3 or more units working at once the units totally locked up because of deadlocks in the database. I don't recall the details of the code, but I do know it did a lot of looping (RBAR). I used Profiler (this was SQL 7) to find the offending SQL code. Then I had to go into the vendor's proprietary scripting language and change the code that was causing the problem. Miraculously, the units stopped locking up. Much to the vendor's consternation I proved the bug, but to my knowledge they never fixed the code, so they shall remain nameless.

I think almost everyone I know who blogs has been tagged for this already, but I have not seen R. Barry Young's name go past, so I'll tag him.

Thursday, December 11, 2008

Kerberos Authentication and SQL Server

Great article by K. Brian Kelley on SQLServerCentral today on Configuring Kerberos Authentication. If, like me, Kerberos is something you never really understood, this is a very good explanation, easier to follow than most of the other resources I have found.

As always, it is a good idea to check out the discussion as well, as any questions you have are probably already asked and answered in there.

Wednesday, December 10, 2008

Measuring Developer Performance

I just read this post on Joel Spolsky's blog (Joel on Software) where he says, about using timesheets to measure performance:
We have a theory, here, that this is a bad idea. Using timesheets as a performance metric can lead to only one thing: bad data in timesheets.
You really need to read the whole post to understand where he is coming from, but I agree with his point. As soon as someone knows how you are measuring their performance, many people will find a way to "game" the system. What Joel doesn't do, likely because this was not the real point of his post, is mention how he measures developer performance. So the question is:
How do you measure performance for developers?
I wish I had a good answer for this. I have always thought performance should be measured by what you accomplish, not how long it takes to accomplish it. If the task is to create three reports for the accounting department by the end of the week and one developer gets it done by Tuesday and surfs the net for the rest of the week and another developer gets it done on Friday, which one performed better? Both met the expectation, working under the assumption that the reports work the same, so how do you evaluate performance? Granted the one who finished sooner could be assigned more work so is more productive, but both met the requirement.

I had a short contract position writing SSRS reports for a company. The first day I showed up, they had a certain number of reports spec'd out for me to work on and expected it to take me several days to get them done. Well, by the middle of the second day I had completed them all, and the Business Analyst I was working with didn't have anything else ready for me, yet I was expected to be there from 8 to 4:30. I couldn't leave, but I wasn't being productive. Still, my employer was happy with my performance, and my contract, which was originally for four weeks, was extended; I ended up there for six months.

The point of the story is that I had days where I was paid for 8 hours and worked 3-4 because I was waiting on someone else. So was my performance good? They thought so. Also, my timesheet showed 8 hours worked, which really isn't accurate.

This also relates to the editorial at SQLServerCentral. Be sure to read the discussion as well.

Thursday, December 4, 2008

RedGate SQL Refactor

I was fortunate enough to receive a copy of RedGate's SQL Toolbelt, which includes SQL Prompt, SQL Compare, SQL Data Compare, SQL Refactor, SQL Packager, SQL Data Generator, SQL Doc, SQL Backup, SQL Response, SQL Multi Script, and the SQL Comparison SDK. I have used SQL Prompt (intellisense for SQL), SQL Compare, and SQL Data Compare in the past and have always liked RedGate's products.

Today I had the opportunity to use SQL Refactor to rework several tables in a database. My co-worker had asked whether a table in a development database could be renamed. As you probably know, there is no such thing as a simple table rename, especially when it is one of the core tables in the design. In the past I would have had to go through every relationship, trigger, view, and stored procedure and make the changes, but today, with SQL Refactor, I was able to do it much faster using the Smart Rename function. SQL Refactor plugs into SSMS, so if you right-click on an object you get a Smart Rename option. Once selected, you are prompted for the new name, and Refactor finds references to the table in foreign keys, triggers, views, and stored procedures and generates a script for all the necessary changes. It does the same for column renames as well. Even with the tool I spent my entire afternoon making changes, mainly because this one table was referenced by several other tables, so every place there was a table_id column I needed to make the change in that table too, as Refactor does not rename columns in other tables. Still, I would have been looking at probably a full day plus of work without the tool.
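For comparison, here is a minimal sketch of the manual route (my own illustration with hypothetical object names, not a RedGate script): sp_rename changes the name but does not touch any code that references it, so you have to hunt down the references yourself, for example by searching the object definitions:

```sql
-- Rename the table itself (built-in; does NOT fix referencing code).
EXEC sp_rename 'dbo.old_table', 'new_table';

-- Find triggers, views, and stored procedures that still mention the
-- old name so they can be altered by hand (SQL Server 2005 catalog view).
SELECT
    OBJECT_NAME(m.object_id) AS referencing_object
FROM
    sys.sql_modules m
WHERE
    m.definition LIKE '%old_table%';
```

Foreign keys and check constraints still have to be found through the constraint catalog views on top of this, which is exactly the tedium Smart Rename scripts away.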

The only change I would, and will, suggest to RedGate is to allow renaming multiple columns in a table at one time, so that one script is generated instead of a separate refactoring script for each column.

The next tool I will be trying is SQL Response. I'll probably let you know how it goes.

Tuesday, December 2, 2008

Script published on SQLServerCentral

I'm a day late with this, but the script, Get SQL Server 2005: Details about running or defined traces, that I submitted to SQLServerCentral was published in the newsletter yesterday (12/1/2008). I put this script together after answering a couple of questions on the forums about Profiler and server-side tracing. I've decided that for any answer I find myself giving repeatedly, I am going to try to create either a script or an article to "head off" the next post. If that doesn't work, then at least I'll have a link to give them.

Tuesday, November 25, 2008

Count Me In!

I'm joining the Battle Against Lawless Database Design. Too often the database is seen as "just" a place to put data. This is wrong, and an application built on that attitude will not perform well. The database is like the foundation of a house: if it isn't solid, you will be dealing with the resulting problems for a long time.

Yes, SQL Server is easier to manage and maintain than many other databases, but it is an enterprise ready product that needs to be dealt with as such.

Tuesday, November 18, 2008

Stored Procedure, Trigger, and UDF Formatting Preferences

This is part 2 of my personal standards and, when I have the power, the standards at my office. Part 1 covered Naming Conventions; you may want to take a look at that before reading this post.

I have to admit to being very finicky about formatting not only in SQL, but any programming language. I am a huge proponent of white space and consistency. I also always wish I commented more thoroughly.

Some basic rules:
  1. Set NoCount On at the top
  2. Declare any variables at the top of the procedure
  3. Create any temporary tables immediately after the variable declarations
  4. Set any variable defaults after temp table creation.
  5. Capitalize the first letter of keywords. I know a lot of people like all caps for keywords, but I am spoiled by the tools that colorize them.
  6. Always use Begin and End in control of flow blocks (If-Else, While, etc...)
  7. Whitespace between statements.
  8. Use "As" when aliasing column names. first_name + ' ' + last_name As full_name.
  9. When using "Between" I do not capitalize the "and": amount Between 10 and 20. I do this to differentiate the "Between ... and" from the "And" used to add criteria: amount Between 10 and 20 And product_id = 10.
  10. Table aliases should be meaningful and in CAPS.
  11. Whitespace between operands. amount = 10.
  12. Indenting
    • Column lists - between Select and From, in Group By, in Order By, and Having
    • Tables in From
    • Criteria in On
    • Criteria in Where
    • Begin and End inside control-of-flow statements, and the code inside the Begin and End
  13. Always end Procedure or Trigger with a Return.
Here is a basic example:

CREATE/ALTER PROCEDURE schema.proc_name
(
    -- required parameters (no defaults) - first
    @parameter datatype [= default],
    -- output parameters always last in the list
    @output_parameter datatype OUTPUT
)
AS

/* Author Name
**
** Purpose - this procedure is used to show formatting
**
** Example
** ----------
** Declare @output_parameters datatypes, @retval Int -- if needed
**
** Exec @retval = schema.proc_name parameter list
**
** Select @retval As return_value, @output_parameters As parameter_names
**
** History
** ----------
** mm-dd-yyyy - Created
** mm-dd-yyyy - Initials - change made and why
**
*/

SET NOCOUNT ON

DECLARE
    @variable datatype,
    @variable2 datatype

CREATE TABLE #temp
(
    column1 datatype,
    column2 datatype
)

SET @variable = value

IF @parameter = value
    BEGIN
        SET @variable2 = value
    END
ELSE
    BEGIN
        SET @variable2 = value2
    END

INSERT INTO
    #temp
    (
    column1,
    column2
    )
SELECT
    column1,
    column2
FROM
    table1 T1 JOIN
    table2 T2 ON
        T1.primary_key = T2.foreign_key
WHERE
    T1.column1 = @variable AND
    T2.column2 = @variable2

SELECT
    column1,
    column2
FROM
    #temp

RETURN
For an example of my rules and recommendations for triggers, please see my article, Introduction to DML Triggers, on SQLServerCentral.

Wednesday, November 12, 2008

Two Mistakes

I was challenged by Andy Warren to write a post about two mistakes I have made in my career, since he was sure I had made many. I'm sure I have made many, but only one SQL Server mistake sticks out in my mind, although I compounded it while trying to fix it. My second mistake does not relate to SQL Server, but it did have a negative impact on the business I was working for.
  1. In the first year of my career, my colleagues and I had written an order entry system for the paper mill we worked at, using SQL Server and classic ASP. Being an agile shop, even before agile was really popular, we rolled it out and made upgrades/fixes on about a weekly basis. One evening I was tasked with rolling out an update to the database. We had our development/QA database up to date with the latest data, and I had a DTS package that I had developed to migrate the changes. Well, I started the package, realized I had made a mistake, and stopped the package. THAT was my REAL mistake, as the first step of the package DROPPED the existing tables, and I stopped the package after the tables had been dropped but BEFORE they had been recreated. I had not put the package in a transaction (another mistake), so there was no rollback. Ouch! Unfortunately, I was young and not paranoid like any good DBA would be, so I had NOT taken a full database backup before starting the process. I did have the previous night's backup and all the transaction log backups from the day, so I was able to restore the database and then re-do the upgrade. Of course, it took me "forever" to do the restore because I forgot to add WITH NORECOVERY to the first restore, so I had to do the full restore twice! What should have taken me an hour took 5 hours. What did I learn? Always, always, always take a full backup BEFORE doing any maintenance or upgrades on a database or server.
  2. My second memorable mistake (I am sure co-workers remember others; I have blocked the rest) does not have anything to do with SQL Server, but it was a pretty big one. Our mill ran on an AS400-based system that tracked, labeled, wrapped, and shipped the product. When a roll of paper (not like a toilet paper roll, but a large roll of 500+ lbs) was weighed, labeled, and wrapped, the system compared the actual weight with an estimated weight to make sure the roll was the one that was expected. Humans would put a barcode on the end of the roll that was scanned as it was weighed and, believe it or not, occasionally they would put the wrong barcode tag on. Obviously, this process would take the scanned ID and read the DB2 database to get the estimated weight. Well, a younger co-worker was on call, had a problem that he could not solve, and I was the only person he could get in touch with to help. I was not and am not an AS400 or DB2 expert, but I "knew" where to look for this problem. We checked for locks on the "rolls" table and, sure enough, a process had a lock on a roll that it had not released. The prescribed treatment for this diagnosis is to "kill" the process holding the lock. This was done fairly regularly, and all that usually happened was that the user just needed to re-enter the program they were in, no big deal. Of course, I did not notice that the process I was killing was a system process, not a user process. So I killed the wrapper program, and what was already the bottleneck in our production line was SHUT DOWN! Needless to say, the production folks were NOT happy, and neither were the other IT guys who had to be called at 2 AM to first figure out what I had done and then fix it! I did learn how to determine what is a system process on an AS400 from that mistake.
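For anyone who hasn't been bitten by the NORECOVERY gotcha from mistake #1, here is a minimal sketch (hypothetical database and file names) of restoring a full backup plus transaction log backups. Every restore except the last must use WITH NORECOVERY, or the database is brought online early and the remaining log restores are refused:

```sql
-- Restore the full backup, leaving the database in the restoring
-- state so that log backups can still be applied.
RESTORE DATABASE OrderEntry
    FROM DISK = 'D:\Backups\OrderEntry_full.bak'
    WITH NORECOVERY;

-- Apply each transaction log backup in sequence, still WITH NORECOVERY.
RESTORE LOG OrderEntry
    FROM DISK = 'D:\Backups\OrderEntry_log1.trn'
    WITH NORECOVERY;

-- Only the final restore uses WITH RECOVERY to bring the database online.
RESTORE LOG OrderEntry
    FROM DISK = 'D:\Backups\OrderEntry_log2.trn'
    WITH RECOVERY;
```

Forgetting NORECOVERY on the first statement means starting the whole sequence over, which is exactly why my one-hour restore took five.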
I'll pass along the challenge to a couple of SQLServerCentral friends, Grant Fritchey and R. Barry Young.

Thursday, November 6, 2008

Naming Conventions

The other day I read Aaron Bertrand's blog post about his stored procedure development best practices which got me thinking about the standards I have developed over the years so I decided to do a short series on my personal standards.

Let's start with naming conventions.
  1. Object names in all lower case with an underscore, "_", between words. With this standard I am not affected by the case-sensitivity of the collation.
  2. Table names are plural because they represent sets of an entity, companies not company.
  3. Common column names like id, name, or description should be prefaced with the singular of the table name, so our companies table would have company_id and company_name as columns. This way they do not need to be aliased in queries when there are multiple columns with the same name. For example, if I have a query that joins products, orders, and companies, I would do P.product_name, C.company_name instead of P.name as product_name, C.name as company_name.
  4. Use clear and descriptive names for objects. For example, companies instead of cos or last_name instead of lname.
  5. Settle on common abbreviations. I like "desc" for description in columns like product_desc. I also prefer "no" to "num" for number. I'm not really stuck on either, but I believe you need to set the standard and be consistent.
  6. For date columns decide on either date_column or column_date. I prefer to use birth_date or start_date over date_birth or date_start. Again, I can live with either, just pick one.
  7. I like to prefix my views with "vw_". I understand the arguments against this practice, but when I am in code I want to know when I am referencing a view and not a table, since they can be used interchangeably and can change performance.
  8. Stored procedures are entity_action, where action is ins, upd, del, get, list, or find. I don't see the need for a prefix, as the usage tells you it is a stored procedure.
  9. Function names are fn_entity_action to distinguish them from stored procedures.
  10. Cursors, in rare use, are c_descriptive_name.
  11. CTEs in 2005 and later are cte_descriptive_name.
  12. Triggers are trg_table_reason_on_action(s). For example, for auditing purposes I would have a trigger named trg_persons_audit_ins_upd_del. I have typically only used AFTER triggers, so I have not developed a standard for defining the type of trigger. I would probably start with something like trg_persons_io_ins_upd_del for an INSTEAD OF trigger. I'm not sure I like that one, so I'll take suggestions.
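A small sketch (hypothetical names of my own, just for illustration) showing how several of these conventions fit together:

```sql
-- Plural table name; common columns prefaced with the singular entity name.
CREATE TABLE companies
(
    company_id   Int IDENTITY(1, 1) PRIMARY KEY,
    company_name VarChar(100) NOT NULL,
    company_desc VarChar(500) NULL,
    start_date   DateTime NOT NULL
);
GO

-- View prefixed with vw_ so code readers know it is not a table.
CREATE VIEW vw_company_list
AS
SELECT
    company_id,
    company_name
FROM
    companies;
GO

-- Stored procedure named entity_action, no prefix.
CREATE PROCEDURE company_get
(
    @company_id Int
)
AS
SET NOCOUNT ON
SELECT
    company_id,
    company_name,
    company_desc
FROM
    companies
WHERE
    company_id = @company_id
RETURN
GO
```

All lower case with underscores, so the same code works under case-sensitive and case-insensitive collations alike.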
Next up will be a post on stored procedure and trigger formatting.

Saturday, November 1, 2008

Props for my daughter

I know that this is normally, and is intended to be, a technical/professional blog, but I have to brag about my 9-year-old daughter, Danielle, today. She joined the Orlando Devil Dogs Young Marine unit in September. There are 6 Saturdays of recruit training where the new recruits learn Marine and Young Marine history, customs and courtesies, physical fitness training, and drill. When you finish and pass (there are tests), each recruit becomes a Young Marine private, except for the Honor Recruit, who is immediately promoted to Private First Class. Well, Danielle was the youngest and smallest recruit in her class of 7 recruits, and she was the Honor Recruit! This means she had the highest test scores and showed leadership and a desire to learn. I have to admit that I was not sure about her joining the program, but she LOVES it and is doing well at it. If she sticks with it and continues to do well, she will have the opportunity to travel and also earn scholarships. Anyway, I know this is not a technical or professional post, but since it's my blog I get to be the proud dad today.

Sunday, October 26, 2008

SQLSaturday #8 - Orlando Recap

I attended/spoke at/volunteered at SQLSaturday #8 - Orlando today and had a very good time. There were close to 300 attendees and things went well. The sponsors did a great job and provided some great prizes (3 Xboxes, an iHome, an iPod Touch, and a full class at End2End Training). As usual, Andy Warren and crew (I had a small part) did a good job organizing and keeping everything moving. I don't envy Andy, as being the lead for this event is a full-time job for at least 2 weeks. I did not have the opportunity to attend any sessions, as I was busy helping out where needed, prepping for my Profiler session, and meeting people. It was great to meet in person some folks I'd met in forums, particularly on SQLServerCentral. There was Steve Jones, Brandie Tarvin, Brian Kelley, and Fraggle (Nathan, last name not remembered). I also met Rachel from RedGate, who doesn't think Hershey's chocolate is any good and prefers Cadbury. There were many others as well, but since I'm horrible at remembering names, especially when I spoke with at least 50 people, I'll stop naming new names. Then there are the old friends: Andy Warren, Chris Rock, Rodd Harris (also from NTM, where I work), David (whose last name I can never remember), and Dolores Toefel from the OPASS group.

My Profiler session went well and seemed to be well-received. There were no empty seats, nor was there anyone standing. I forgot to look at the session evaluations as I was quickly moving on to let the next speaker get setup and to get to my next task. I think I could have done a little more prep, especially for the trace I used for my replay definition. The next step would be to do an advanced presentation that spends more time on server-side tracing and querying and interpreting trace information.

I also went to the after party which was a lot of fun. It's great to sit and relax with other geeks, talk shop, sports, politics, and just goof around a bit. It looked like about 30-35 people went to the after party which was a decent turnout.

The next SQLSaturday is SQLSaturday #10 - Tampa 2009 January 24, 2009. I won't be able to make it as I'll be freezing in New Hampshire for a few months, but I bet it will be a good event to attend.

Wednesday, October 15, 2008

Featured Video on JumpstartTV

Cool, one of my videos was the featured video on JumpstartTV today. Here's the link to the video:
http://www.jumpstarttv.com/integrating-performance-monitors-with-sql-server-profiler_451.aspx

Always cool when you see your name out there.

JumpstartTV is putting out a ton of material right now, particularly for SQL Server, and, not just because I am on there, I recommend checking it out for some quick how-to videos. Most of the videos are under 5 minutes, so it's not a huge time sink to try some out. Be sure to rate and comment on any videos you watch, and if you want to send any comments to me I'll be sure to pass them along to the owners/developers.

Tuesday, October 14, 2008

Introduction to DML Triggers published on SQLServerCentral

My third article, Introduction to DML Triggers, was published on SQLServerCentral today. The article was born out of forum questions about triggers and the errors and/or bad practices I have seen in the posted triggers. The goal of the article is to help people understand how DML triggers work in SQL Server so they can avoid the common mistakes.

You can also follow any discussion of the article here.

Monday, October 13, 2008

JumpstartTV Videos Published

A couple of months ago I spent a day with the guys at JumpstartTV creating some videos on using SQL Server 2005 Profiler. Those videos have now been published. Here's a link to the first one:
Creating New Trace With Sql Server 2005 Profiler. Please comment and let me know what you think.

JumpstartTV, in this incarnation, is being filled with 1-5 minute "how to" videos on a variety of topics, with a current focus on technology, specifically SQL Server and .NET. I think the nicest feature is the ability to suggest video topics. So if you don't know how to use configuration files in SSIS and there is not a video there already, you can ask for that topic, and authors like myself can "take ownership" of it and create the video.

The question that still needs to be answered is, "Is this format better than a step-by-step article with screen shots?" I happen to think it will be better for some people, as everyone learns differently. I definitely like the short format. I certainly hope that it takes off, as I consider the guys behind it friends and want to see them succeed. Also, in a more selfish manner, I want my videos to do well so they'll ask me for more.

Tuesday, September 30, 2008

What happened to my profile?

When I arrived at work on Monday morning, I decided I should reboot my computer. I like to reboot regularly since that is what I always tell people to do when they call me with a computer problem. Well, this time I made a mistake. When I logged in and attempted to work, I found that my rights had been severely restricted. I could no longer access my C drive and did not have Run available. This was a little before 8:00. When the network admin came in, I put him to work on the problem. First he checked all the Group Policies applied to my account and, of course, there was nothing out of the ordinary. Then he tried forcing a Group Policy refresh, and once again, no joy. Finally, he logged into my PC, renamed my local ntuser.dat file, and with a reboot I was logged back in, albeit without any of my personal settings, as I had a new profile. It was about 11:00 by then.

All this because I decided to reboot.

Friday, September 26, 2008

SQLSaturday #8 - Orlando

Just a quick note to remind you that SQLSaturday #8 is coming up in Orlando on October 25th.
Register for the event at http://www.sqlsaturday.com/register.aspx.

Some reasons to attend:
  1. I'll be there! Seriously if we haven't met we'll have the chance and I'll also be presenting one session, Getting Started with SQL Server Profiler 2005.
  2. It's free! You can't beat that price. Even if you can only make a few sessions the price is right.
  3. The speakers know their stuff! This is not a place where you only get theory. You'll see real-world examples and methods that work.
  4. You get to meet many experienced professionals who are willing to help you out and that you can help out.
  5. You are supporting local people. Most of the speakers are local professionals who are donating their time and expertise to help YOU! So help them out by attending and letting them know what you think.
  6. Networking, networking, networking. It's not what you know, it's who you know.
  7. Free stuff. Raffles for some good prizes and lots of books given out. I don't know what the prizes are, but there is always something like an XBOX given away.
  8. You'll learn something you can use. Last year I attended and learned how to use DDL triggers and more about performance tuning. I have used both in my work.
Hope to see you there.

Tuesday, September 23, 2008

Favorite blogs and why I read them

Over the last several months I have added to the blogs I read and I thought I'd share my favorites and the ones I find the most useful. I'll also share the newsletters and web sites I like.

Blogs:

It Depends - Andy Warren. This is not always a deep technical resource but there is helpful information on Professional Development, Speaking, and SQL Server. I like the fact that he posts often and that his posts are usually fairly short and to the point.

SQL Musings - Steve Jones. Another well-rounded blog with some personal, general professional development, and SQL Server content. Again, regular posts that are not long.

Rock's Thoughts - Chris Rock. More of a .NET blog. I like Chris's writing style, which is informal and usually entertaining as well as informative. He shares real-world problems and solutions. He hasn't posted as regularly as he used to, but when he does post it is worth reading.

Scary DBA - Grant Fritchey. Grant is a guy I have encountered in the forums on SQLServerCentral, and based on the quality of his answers, I started reading his blog.

Bart Duncan's SQL Weblog - Bart Duncan. A good MSDN blog for performance tips.

In Recovery - Paul Randal. If you want to know about SQL Server Storage Engine and recovery techniques, this is the place to go.

SSIS Junkie - Jamie Thomson. A great blog on SSIS.

What do I look for in a blog? Regular posts, real-world problems and solutions, and a variety of topics. I'm not interested in BOL regurgitated and these blogs regularly provide information that I can use in my job.

Monday, September 15, 2008

Scripts on SQLServerCentral

I recently submitted a couple of scripts to SQLServerCentral and they have been published:
  • Who has accessed my 2005 Server - which queries the default trace running in SQL Server 2005 and system views to show what logins have been used to access your SQL Server.
  • Find Foreign Keys  - which queries the INFORMATION_SCHEMA views to return information about the foreign key relationships in your database.
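As a rough sketch of the idea behind the first script (simplified; the published script does more), SQL Server 2005's default trace can be read back with fn_trace_gettable once you find its file path in sys.traces:

```sql
-- Locate the default trace file (is_default = 1 in SQL Server 2005).
DECLARE @trace_path NVarChar(260)

SELECT
    @trace_path = path
FROM
    sys.traces
WHERE
    is_default = 1

-- Read the trace file back as a table; LoginName and HostName show
-- who has been connecting to the server.
SELECT DISTINCT
    LoginName,
    HostName
FROM
    fn_trace_gettable(@trace_path, DEFAULT)
```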
I have to admit that I have not used the script library available through SSC, and I probably should. I am sure I have struggled to write a few scripts that are already available there.

Thursday, September 11, 2008

OPASS Presentation Follow up

Well, I got through my first technical presentation and overall I thought it went pretty well.
I have to admit I wasn't feeling great about it going in, as I had practiced at home the night before and made some serious mistakes, but as my wife said, "Bad dress rehearsals usually mean good performances." I did not make any major mistakes and managed to keep the OPASS group interested. I presented on SQL Server Profiler, all my demos worked, and I didn't pass out. Andy Warren, the OPASS president, is always trying to drum up and pump up local speakers, so after the presentation he gave everybody the chance to offer comments and criticisms. Fortunately, the worst comment was that I had mentioned "I just learned this" a couple of times, and as the "expert" I probably shouldn't have done that; oh, and, of course, I used "um" a few times. The best comment was that the presentation wasn't dry and included a good amount of practical information and experience along with theory.

I'm always nervous before I speak, but once I get started I relax pretty quickly and try to enjoy myself. Hey, the worst that can happen is that I look foolish. It's not like it's life or death.

Speaking, teaching, and writing are all great ways to learn. Of the three I enjoy speaking the most, just not the preparation part. Writing can be a chore for me, but since I started blogging last spring it has become easier, and now I am also trying to submit an article at least quarterly to SQLServerCentral, my professional web site of choice. At the very least that helps me clarify my own thoughts on things and hopefully helps someone else as well. Something I have learned recently is that what is mundane and simple to me can be new and exciting to a lot of other people. So what keeps you from contributing your knowledge and experience to the community?

Friday, September 5, 2008

Presenting at September Orlando PASS Meeting

I will be presenting on Profiler Basics on Tuesday, September 9th, at the Orlando PASS meeting.  The meeting starts at 6 pm with pizza and networking and is located at the End to End Training offices.  Directions are available on the web page.

I'm pretty excited about speaking, since I learned a lot preparing and think the attendees will as well.  This is my first technical presentation, although not my first time speaking in public.

Hope to see you and meet you there.

Wednesday, September 3, 2008

Thankfully the backups were good

I got a call from our System Administrator this morning telling me that the main database for a financial system was suspect. In 9 years of working with SQL Server I'd only dealt with this once, and that was last fall with a suspect msdb. That time I had no backups of msdb, so I had to rebuild it and then rebuild all the jobs. As a side note, I was not made aware of this key departmental SQL Server until after the problem. Needless to say, I started getting backups of the system databases. After this occurred we upgraded from SQL Server 2000 to 2005 and moved the databases to a true server in our climate-controlled server room with UPS and generator power backups. It was a brand new server purchased in the first quarter of this year.

This time I had backups and had the database restored in under 15 minutes, to a point within 3 minutes of the failure. Unfortunately there were no errors in either the SQL Server error logs or the Windows event logs, so we'll have to watch the server for errors as we go forward. This was also the first time I had run a point-in-time restore using the STOPAT parameter of the RESTORE command. I'm glad I was able to get everything up and running quickly after pushing to have the databases moved to a central SQL Server that I could manage and monitor.
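For anyone who has not done a point-in-time restore, the basic sequence looks like this (a sketch with hypothetical database, file, and time values, not the actual restore I ran): restore the full backup WITH NORECOVERY, then apply the log backup with STOPAT.

```sql
-- Hypothetical names and times for illustration only
RESTORE DATABASE Financials
    FROM DISK = 'D:\Backups\Financials_Full.bak'
    WITH NORECOVERY;

-- Apply the log backup, stopping just before the failure
RESTORE LOG Financials
    FROM DISK = 'D:\Backups\Financials_Log.trn'
    WITH STOPAT = '2008-09-03 09:42:00', RECOVERY;
```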

After restoring I ran DBCC CHECKDB, which found consistency errors in an index on one table. So, before releasing the database to the users, I dropped and recreated the index to fix that error as well. That was also the first time I have had a consistency error to fix.
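The check-and-repair sequence was essentially this (hypothetical table and index names; since the errors were confined to a nonclustered index, rebuilding it from the clean base table is enough):

```sql
-- Check the restored database for corruption
DBCC CHECKDB ('Financials') WITH NO_INFOMSGS;

-- The errors were in a nonclustered index, so drop and
-- recreate it; the index is rebuilt from the base table
DROP INDEX IX_Ledger_PostDate ON dbo.Ledger;
CREATE INDEX IX_Ledger_PostDate ON dbo.Ledger (PostDate);
```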

As you can see, even though I've been working with SQL Server for many years, there are still many issues I have yet to deal with, and, to be honest, I hope I never have to deal with anything more complex than I did today.

Linked Server Test

I recently found out that Microsoft added a much needed component for linked servers in SQL Server 2005, sp_testlinkedserver. sp_testlinkedserver requires the @servername parameter and returns 0 if the linked server is available (success) and 1 if it is not (failure). I have seen several forum questions about how to check whether a linked server is available, and prior to 2005 you had to either check for an error or check a row count. This is a welcome improvement. Here's a link to the documentation of it: sp_testlinkedserver BOL.
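Usage is straightforward; here is a sketch with a hypothetical linked server name. Since an unreachable server can raise an error rather than quietly returning 1, wrapping the call in TRY...CATCH is a safe pattern:

```sql
-- Test a (hypothetical) linked server named ACCOUNTING_SRV
DECLARE @retval INT;
BEGIN TRY
    EXEC @retval = sys.sp_testlinkedserver N'ACCOUNTING_SRV';
END TRY
BEGIN CATCH
    SET @retval = 1;  -- treat a raised error as "not available"
END CATCH;

IF @retval = 0
    PRINT 'Linked server is available';
ELSE
    PRINT 'Linked server is NOT available';
```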

Friday, August 29, 2008

Resource database

When SQL Server 2005 was released there were a number of changes to how you access system tables, such as sysobjects. The main one is that you are no longer actually accessing tables, but views, and that you should change from using sysobjects to sys.objects. A big part of that change is that many of the tables actually live in the hidden Resource database. If the Resource database is missing, corrupt, or unavailable, SQL Server will not start. Now you would think that with the importance of this database you'd want to back it up, right? Well, you can't, at least not using T-SQL. I had not read anything about backing up the Resource database until today, in this article, Importance of the Resource Database. So you need to do a file copy if you want this important database backed up. You'd think that Microsoft would make this better known, given how important it is.

I'm not backing it up, are you?

Thursday, August 28, 2008

Triggers in SQL Server

As I was browsing the active threads on SQLServerCentral today I happened on this post about a problem with a trigger. I did not post on the thread because someone had already made the point I would have made. The problem was that the poster had created a trigger that would not work on a batch insert or update. Something like this:

CREATE TRIGGER test_ins_upd ON dbo.test
AFTER INSERT, UPDATE
AS

DECLARE
    @test_id INT,
    @test_name VARCHAR(25)

SELECT
    @test_id = test_id,
    @test_name = test_name
FROM
    inserted

IF @test_name LIKE 'a%'
BEGIN
    --Do something
END
ELSE
BEGIN
    --Do something else
END

I seem to see this mistake in the majority of forum posts about triggers. SQL is a set-based language, and triggers deal with sets as well. I have to admit that when I was new to SQL Server I made the same mistake, but only once. I had thought that since we only allowed access to the database through stored procedures, and those procedures only acted on one row at a time, I would be safe. Well, I quickly learned I was wrong. What about changes I made manually? Or another developer's multi-row update?

The first thing a person new to SQL needs to learn and really understand is that their code needs to work with sets. This will help newbies write better-performing code and help them avoid errors like this one.
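A set-based rewrite of the trigger above handles every row in the inserted table at once. Something along these lines (a sketch only; the flagged column and the two UPDATEs stand in for whatever the original "Do something" branches were meant to do):

```sql
CREATE TRIGGER test_ins_upd ON dbo.test
AFTER INSERT, UPDATE
AS
BEGIN
    -- "Do something" for ALL affected rows whose name starts with 'a'
    UPDATE t
    SET    t.flagged = 1   -- hypothetical column for illustration
    FROM   dbo.test t
           JOIN inserted i ON t.test_id = i.test_id
    WHERE  i.test_name LIKE 'a%';

    -- "Do something else" for all the rest
    UPDATE t
    SET    t.flagged = 0
    FROM   dbo.test t
           JOIN inserted i ON t.test_id = i.test_id
    WHERE  i.test_name NOT LIKE 'a%';
END
```

Whether one row or ten thousand rows are inserted or updated, the joins against inserted process the whole set in one pass.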

Here are a couple of good blog posts about triggers:

The Trouble with Triggers by Conor Cunningham
Triggers...Evil? by Louis Davidson

Tuesday, August 19, 2008

SSIS Error Logging Custom Component

Well, I finally got around to writing my first custom component for SSIS. It is a synchronous transform component for Error Logging in a DataFlow. I was using a script component, which I wrote about for SQLServerCentral, that I was copying and pasting all over the place and I finally had had enough. Much of the code in the component is based on Josh Twist's XMLify Custom Component which I found through Jamie Thompson's excellent SSIS Junkie Blog. I also used the examples in Professional SQL Server 2005 Integration Services, to get through creating the component.

The code itself is pretty easy to follow, if you know anything about SSIS programming ;). It just accepts the input columns (ErrorCode is required, all others optional) and creates the new columns I want to pass out which are:
  1. ErrorDesc - retrieved using the ErrorCode passed in.
  2. ErrorTask - the Data Flow which hosts the component.
  3. ErrorStep - the name of the Error Details component. If anyone knows how to get the name of the previous component let me know as that is really what I'd like to have here.
  4. PackageTime - the start time of the containing package.
  5. ErrorDetails - an XML document consisting of name-value pairs based on the input columns.
Probably one of the neatest things I learned was how to debug the component. I had never used the Debug -> Attach to Process feature of Visual Studio before. It was really cool to start an SSIS Package with a breakpoint in it (never done that before either), then in my Custom Component solution attach to the SSIS process (DtsDebugHost.exe), continue the SSIS package, and then hit breakpoints in my component code. Man, do I LOVE learning new things!

Feel free to use the component and improve on it. I make no warranties on it.

I have posted the code and component in zip format on my new SkyDrive account (link at the bottom of the post). I had to create the SkyDrive account because I can't put attachments on the blog. The code is also attached to the discussion of the article on SQLServerCentral, here.

Friday, August 15, 2008

SQL Server Database Management

Today's editorial on SQLServerCentral was about what your employer should provide for training and professional development, and that got me thinking about what I have been doing. Over the past year I have really made an effort to catch up with SQL Server 2005 and start managing my professional development. I relocated from northern New Hampshire to central Florida, right outside of Orlando, and that has given me the opportunity to take advantage of local user groups (OPASS and ONETUG), Code Camps, SQLSaturday, and other technical events that were not easily available to me in New Hampshire. Through these resources I have probably learned as much about performance and administration as I had learned in the previous 5 years. I have picked up information about backups, profiling, performance tuning and monitoring, security, and reliability best practices. All without paying for a class or book. I've also taken the time to write a couple of articles for SQLServerCentral, and I am preparing to speak at the next OPASS meeting. These are great ways to work on your professional development, as one of the best ways to learn something is to have to teach it to someone else.

I also take advantage of online resources like SQLServerCentral, SSWUG, SQLServerPerformance.com, and many blogs. Most MVPs and Microsoft development teams have blogs, so there are many good ones to choose from. So while I believe your employer should provide some time and money, I also think you need to take advantage of the same resources I have. Granted, I take time at work to utilize many of these on-line resources, but since I'm not asking my employer to spend money on me, I feel justified.

What do you do to "keep up"?

Tuesday, August 12, 2008

Page Restores

This is just a short post about something I just learned. You can restore a single page of your database. I learned about this in this post, Search Engine Q&A #22: Can all page types be single-page restored?, by Paul Randal on his In Recovery blog. The basic syntax is:

RESTORE DATABASE Test PAGE = '2:10' FROM DISK = 'C:\Backups\Test.bak'

I'm not sure how often I would use this in practice, since a full database restore will obviously restore any corrupted pages as well, although on a very large database restoring a single page would clearly be much faster than restoring everything. You need to search a little to find good documentation in BOL, so here is a link to the documentation, Performing Page Restores.

If you have ever used page restore let me know why and how it worked for you.

Sunday, August 10, 2008

Profiler Bug?

I was working with SQL Server 2005 Profiler to prepare for a talk I will be giving at the September 9th OPASS meeting (come on out and see how I do) and I saved my trace definition as a template. I then re-opened the template and received an error, "Invalid Event ID". I could not figure out what had happened, as 5 minutes earlier I had this same trace running, and all I had done was stop the trace and save it as a template. Intrigued, I tried to determine what the invalid event was, but how could I do that? I used Profiler. I started a new trace, filtering out the SPID for the failing trace and removing the default filter that excludes Profiler, and then opened the custom template that was failing. In my new trace I saw all the trace procedures firing, and the last sp_trace_setevent call was for an event id of 64000-something (I did not write down the actual event id). I then queried the sys.trace_events table and saw that the highest event_id is 202. Where did the 64-thousand-something number come from? Any ideas? Is it a bug, or some kind of weird combination of events? I wish I could re-create it, but I have not been able to. Any helpers out there?
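The sanity check against the catalog of known trace events is a one-liner; on my SQL Server 2005 instance the highest id it returns is 202, nowhere near 64000:

```sql
-- Highest trace event id the server actually knows about
SELECT MAX(trace_event_id) AS max_event_id
FROM sys.trace_events;
```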

Wednesday, August 6, 2008

Disk Alignment?

I attended the Orlando PASS meeting last night, where we had two speakers. Dolores Tofel, a local and regular attendee, did a short presentation on using Microsoft Access as a UI for SQL Server data. Even though I have developed a distaste for Access, she did a good job showing how it can be used as a RAD tool, particularly for reporting. Andy Warren, president of OPASS, is trying to give local speakers a boost by giving them an opportunity to do a mini-presentation without being the main draw for the meeting.

The main speaker for the evening was Amy Styers from EMC. She does performance consulting for EMC customers running Microsoft products (SQL Server, Exchange, Sharepoint). Her topic was the top ten ways to improve your SQL Server performance, and I have to admit that most of the list were practices I was already familiar with. But her #10 was new to me, and I think to the majority of the ~20 folks in attendance: get your disks aligned. She showed us that because the Master Boot Record takes up the first 63 sectors, the first partition starts at a 31.5 KB offset while stripe units are typically 64 KB. Basically this means you will regularly incur 2 I/Os per read or write because the request has to cross a boundary. Apparently you can see up to a 20% I/O performance gain by aligning your partitions. Of course, doing this on a production system is a BIG deal, as you have to make sure you have good backups and totally wipe your disks. Then you can use DISKPAR or DISKPART (Windows Server 2003 SP1) to align the partitions and then reformat. Apparently Windows Server 2008 no longer has this issue. Linchi Shea has done some testing of this and posted the results on his blog, here. There is another article here that deals with the same issue for Exchange, with reference to an HP whitepaper. I found this subject very interesting, as I had never heard of it before and neither had the server admin at my workplace. We don't have any overburdened servers at this time, so we won't be making the change, but it is still good to know about in case I meet someone who may be able to do something with it.

This is one reason I go to the user groups meetings even when the featured presentation may not be something I think I'll be using. Last meeting was about SQL Server Mobile and I am using it now for a personal project when I never thought I would before the meeting. I find that I learn something new every meeting and I have a good time meeting other SQL Server specialists.

The next OPASS meeting is scheduled for September 9th, 6:00pm at End to End Training, just outside Orlando. If you are going to be in the area come join us. The featured speaker is going to be great. How do I know that? It's ME! I'll be presenting on SQL Profiler Basics and I think you'll get something from it. I know I have in preparing for it. If you do come, plan on joining us for the after-event as well for some informal geek talk.

Wednesday, July 30, 2008

Aggregation based on Date

Imagine you have a Sales database (we'll use AdventureWorks) and you need to return the number of sales by product in the last 30, 90, and 180 days. How would you do this? In the past I might have used temp tables, table variables, or a union in order to get the numbers for each date range. This would not be, in my opinion, the optimal way to retrieve this data. You could just return all the data with order dates and use Excel, SSRS, or another reporting tool to pivot the data, but once again this is probably not the optimal way to return the data. So, what do I think is the best way to do this? I do it in a single select, filtering on the largest date range and using CASE to return the data for the smaller date ranges. Here is the code, based on AdventureWorks (I use 7/31/2004 because that is the most recent order date in my copy of AdventureWorks):

SELECT
    D.ProductId,
    COUNT(H.SalesOrderID) AS sales_for_180,
    SUM(D.OrderQty) AS no_products_sold_180,
    SUM(CASE
            WHEN H.OrderDate >= DATEADD(DAY, -90, '7/31/2004') THEN 1
            ELSE 0
        END) AS sales_for_90,
    SUM(CASE
            WHEN H.OrderDate >= DATEADD(DAY, -90, '7/31/2004') THEN D.OrderQty
            ELSE 0
        END) AS no_products_sold_for_90,
    SUM(CASE
            WHEN H.OrderDate >= DATEADD(DAY, -30, '7/31/2004') THEN 1
            ELSE 0
        END) AS sales_for_30,
    SUM(CASE
            WHEN H.OrderDate >= DATEADD(DAY, -30, '7/31/2004') THEN D.OrderQty
            ELSE 0
        END) AS no_products_sold_for_30
FROM
    Sales.SalesOrderHeader H JOIN
    Sales.SalesOrderDetail D ON
        H.SalesOrderID = D.SalesOrderID
WHERE
    H.OrderDate < '7/31/2004' AND
    H.OrderDate >= DATEADD(DAY, -180, '7/31/2004')
GROUP BY
    D.ProductId

This will return the data you asked for in a single pass and will do an index seek if you have an index on OrderDate. All in all an efficient way to return the desired data. You may have learned this a long time ago, but it is something I recently picked up when answering a question on SQLServerCentral, so I know not everyone knows this. This technique works whenever you need aggregates based on some sort of partition. It could be a breakdown of products within a product category as well.

Tuesday, July 22, 2008

Ghost of SQL Server 2005 Database Mail Reply To - Fix

If you have been following my posts at all, you know I have had some issues with Database Mail (1, 2, 3, 4). Well, the first one has been fixed by Microsoft in Cumulative Update Package 8 for Service Pack 2 (I have not tested the fix yet). The article for that is here: http://support.microsoft.com/kb/953841/en-us. Microsoft never updated my Connect post, where they said it would not be fixed unless a QFE was requested, but I'm not complaining.

Now I need to try it out on my personal SQL Server.

Friday, July 18, 2008

SSIS Webinars, tablediff, and Free Zoom Tool

I had the opportunity to attend 3 of 5 free webinars on SSIS presented by Brian Knight of Pragmatic Works. The 2 I chose not to participate in were introductory in nature, and I have some experience with SSIS. I attended Administering and Deploying SSIS Packages, Performance Tuning in SSIS, and SSIS for the DBA. I enjoyed each of the webinars and picked up some good tips. I enjoy taking any opportunity, particularly a free one, to expand my toolkit and knowledge base. I appreciate Brian and his co-workers taking the time to present the webinars, as I know a lot of time and effort had to go into doing them. I hope it pays off for them in their business as well; I believe the goodwill it generates will. Check out the Pragmatic Works web site, as they offer some very good custom SSIS tasks for free! They also offer some commercial products as well.

I actually used something I learned in one of the webinars, SSIS for the DBA, today. I used an Execute SQL Task to populate an object variable with a list of schemas and tables, then used a For Each Loop Task to loop through the result set and an Execute Process Task to run the tablediff utility Microsoft has included with SQL Server 2005 (I read about it in Benjamin Wright-Jones' blog earlier today). It is a command-line utility that compares one set of tables (source and destination), and since I wanted to compare all the tables in my database to my development database, I needed the loop. Tablediff compares the schemas first, and if they are different it returns an error, "cannot compare the table x to table y because the schemas are different". If the schemas are the same it will, depending on the parameters you supply, show any differences in the data and prepare a T-SQL script to synchronize the data. I was actually only interested in comparing the schemas and, not having read all the documentation, was hoping it would report those differences as well. Maybe in a later version of SQL Server a schema comparison tool will be included (sorry Red Gate). It was still a good exercise, and knowing about the tablediff utility will probably come in handy later.

One last thing, Brian used a nice free tool to zoom and markup during his presentation. It is called ZoomIt and is by Sysinternals (now part of Microsoft). I would recommend it to anyone who give presentations as it really helped us see what Brian was doing.

Wednesday, June 25, 2008

Powerless

My family and I just returned to Florida from a trip to northern New England to visit churches and individuals that financially support our ministry with New Tribes Mission. I think this needs a little explanation for those who are not familiar with the workings of independent, non-denominational missions. New Tribes does not pay any salaries; each individual who becomes a member of the mission needs to visit churches and individuals of like faith in order to raise the funds necessary to live. Then, after you are working for the mission, you return on a regular basis to those churches and individuals to report on what you have been doing. I liken it to the interview and performance review process you go through when landing and keeping a job. The first time you visit a church to present the ministry you will be doing, you are interviewing with them. When they choose to financially support you, they have "hired" you. Then, when you return to tell them about the work you have done, you are having your performance review. Have you really done what you told them you were going to do? Hopefully the answer is "Yes", and they will continue to financially support you and maybe give you a raise.

All that to actually get to the real story. We flew from Orlando to Manchester, NH, on Thursday night and had our first speaking engagement at Calvary Baptist Church in Newport, ME, on Sunday morning. Thursday and Friday night we stayed at my brother's house in Derry, NH, and I set up my laptop there to use while we were visiting. On Saturday we packed up and headed for Newport. I went to the church Saturday night to set up and did not have a power strip, so I got everything set up and tested (projector and laptop) using my laptop battery. On Sunday morning I arrived early to get the power strip I was borrowing set up so I could plug in my laptop. I went to my laptop bag to get my power cord and, OH NO, it was not there! I had left it at my brother's, 3 1/2 hours and 140 miles away! I currently use a 5-year-old Dell Latitude D600, so the battery life is not long, and I knew I could be in trouble. I use the PowerPoint slides as my notes and expand on them extemporaneously, and I had not printed out the presentation, so I had no notes. Having attended a Successful Technical Speaking class presented by Andy Warren at End to End Training this spring, I knew better, but hey, I'd never not had power before, so why be prepared? Needless to say, about halfway through the presentation the battery died and I had a blank screen. Since I had recently put together the presentation, most of it was fresh in my mind, so I was able to complete it without leaving out any major points, but this is not a situation I am interested in repeating. In the long run it worked out, as I learned that several slides incorporated more detail than necessary, and I was able to remove them to improve the flow of the presentation. Thankfully the people were very gracious and understanding, so my faux pas was forgiven.

The moral of the story, make sure you pack everything and print out those slides!

Tuesday, June 10, 2008

Audit Trails and Logging

This week at SQLServerCentral there have been a couple of articles on Audit Trails and Logging (Part One and Part Two) that did a good job giving the basics of logging changes made to the data. In the discussion for part two someone asked about logging select statements and the returned results. While logging the selects is reasonable, logging the results of each query is, in my mind, a little out of whack. I understand why you may want to log the results as they may change over time. For example, Mary Smith married Joe Johnson and takes his last name, or hyphenates so she is now Mary Johnson or Mary Smith-Johnson. A query for LastName = 'Smith' the day before the change returns Mary, but the day after it does not. So in the event that data was compromised it will be more difficult to determine whose information was compromised if only the query is logged and not the data. There are several questions that come up now:
  • How much space do you have for logging and is the expense associated with it justifiable?
  • How far back do you keep the logged data?
  • Do you log queries of the logged data?
  • Who has access to the logged data?
These are the questions that come up right off the top of my head, and they do not address the mechanics of the logging. There are no "select" triggers in SQL Server, so that eliminates triggers, which also eliminates being able to log ad-hoc queries run from Access, SSMS, Query Analyzer, or any OLE DB or ODBC connection (Excel). One solution suggested in the discussion was a server-side trace, which works for logging the query and who ran it, but will not log the results. As I am a firm believer in not granting direct table access, I would start by putting the logging in the stored procedures used for data access. Of course, that can lead to issues with developers who would prefer to use an ORM tool, like LINQ to SQL or SubSonic, for the data access layer. Granted, they are still able to use stored procedures through these tools, but one of the selling points is that you "never have to write another stored procedure". So now they not only have data access code in the DAL, but auditing code as well. If the audit needs to include the data, can I do a batch insert with the ORM tool using the dataset, list, or collection I already have, or do I have to iterate through the results and insert them one at a time? Or do I make a separate call to the database that runs the select again, like:

INSERT INTO audit
SELECT
    query,
    user,
    data columns....
FROM
    table(s)
WHERE
    criteria

This means now I am storing the user and query with each row returned and I am doubling the reads on my database.
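Putting the logging inside the data-access procedure at least avoids the second read, by recording one audit row per call rather than per result row. A sketch, with hypothetical table, procedure, and column names:

```sql
CREATE PROCEDURE dbo.GetCustomersByLastName
    @LastName VARCHAR(50)
AS
BEGIN
    -- One audit row per call: who ran the query and with what criteria
    INSERT INTO dbo.SelectAudit (LoginName, ProcName, Criteria, RunAt)
    VALUES (SUSER_SNAME(), 'GetCustomersByLastName', @LastName, GETDATE());

    -- Return the results; note the returned rows themselves are NOT logged
    SELECT CustomerID, FirstName, LastName
    FROM dbo.Customers
    WHERE LastName = @LastName;
END
```

The trade-off is exactly the one discussed above: this captures the query and the caller cheaply, but not the data that was returned at the time.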

I certainly don't have the solution to this problem. I know that for Inserts/Updates/Deletes, I like triggers because they put all the code in one place and will fire on all changes unless someone explicitly disables triggers before making a change and if this happens you have other problems.

I'd be interested in hearing what other people think.

Tuesday, June 3, 2008

June oPASS Meeting

I attended the June oPASS meeting and post meeting gathering at Bennigan's this evening. The speakers were Ginny Caughey and Steve Lasker, Program Manager for SQL Server Compact Edition, and, guess what, they spoke on SQL Server Compact Edition. It was a very informative meeting as I have had no experience or exposure to SQL Server CE and the presentation certainly gave me something to think about. I actually have a personal project that I am currently using SQL Server Express with, but I am now considering moving to SQL Server CE. Why? SQL Server CE is lightweight, portable, and appears to meet the application requirements. It would also give me some experience with SQL Server CE which I can use to evaluate it for use in future projects. The database can be packaged with the application and if I want to move it to another PC I can simply copy and paste. After the session 7 of us, including the speakers, adjourned to Bennigan's for some more discussion. I always find it enlightening to listen to Andy Warren voice his opinions. He is very much an idea guy, and has usually thought through the subject. We discussed the direction of SQL Server CE, Linq to SQL, and SQL Server Data Services. It's nice to hear about development directions from someone at Microsoft and to see the interest in "user" opinions and ideas.

Once again I found it to be a productive use of my time, even though the topic was not one that seemed to apply to my interests and work. As always, I encourage you to take the time to find and attend user groups in your area.

Tuesday, May 20, 2008

ANSI_PADDING and Variable Length Character Columns in SQL Server

I have been involved in a very interesting thread on SQLServerCentral.com about trailing spaces in strings. It has changed the way I will be looking at strings going forward. I had always assumed that when a character column was defined as variable length (varchar, nvarchar), any trailing spaces inserted into the column would be trimmed. Well, this WAS how SQL Server handled it, because SQL Server defaulted the ANSI_PADDING setting to OFF. Now the connection settings in SSMS (SQL Server Management Studio) set ANSI_PADDING to ON. This means that any table created using SSMS will use the ANSI_PADDING ON setting, regardless of the database default. ANSI_PADDING ON means that 'a ' stores the 'a' and the space, while ANSI_PADDING OFF stores only the 'a'. This becomes an issue when upgrading from an earlier version of SQL Server or, as in the thread, when you are not aware of this change and are not coding appropriately for it.

Why is this a problem? Well, here is a table showing how SQL Server acts with the two ANSI_PADDING options (courtesy of Matt Miller in the thread mentioned above and some testing):

SQL Function               | Result (ANSI_PADDING ON)           | Result (ANSI_PADDING OFF)
Primary Key                | Trim it. Duplicate key error       | Trim it
Unique Index               | Trim it. Duplicate key error       | Trim it
Comparison ("=", "!=")     | Trim it. 'a' = 'a '                | Trim it
Comparison (LIKE using _)  | Don't trim it                      | Trim it
Concatenation              | Don't trim it. 'a ' + 'a' = 'a a'  | Trim it. 'a ' + 'a' = 'aa'
Storage                    | Don't trim it                      | Trim it
Group By                   | Trim it                            | Trim it
Order By                   | Trim it                            | Trim it

The LIKE '_' comparison, Concatenation, and Storage rows are where the behavior differs. Here is what Microsoft has to say in SQL Server 2005 BOL:

Important:

In a future version of SQL Server ANSI_PADDING will always be ON and any applications that explicitly set the option to OFF will produce an error. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.


Note:

We recommend that ANSI_PADDING always be set to ON


Here is some code that proves the behavior:

SET ANSI_PADDING OFF
GO

-- The script uses a schema named test; create it first if it does not exist
IF SCHEMA_ID('test') IS NULL EXEC('CREATE SCHEMA test')
GO

CREATE TABLE
test.categories
(
CategoryID INT PRIMARY KEY CLUSTERED IDENTITY(1,1),
CategoryName VARCHAR(15),
Name2 VARCHAR(15),
NCategoryName NVARCHAR(15),
NName2 NVARCHAR(15)
)
--Create Unique Index UX_varchar_test On test.categories(CategoryName)
--Create Unique Index UX_nvarchar_test On test.categories(NCategoryName)

INSERT INTO test.categories
(
CategoryName,
Name2,
NCategoryName,
NName2
)
SELECT 'a', 'b', 'a', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b', 'a ', 'b' UNION ALL
SELECT 'a ', 'b', 'a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b'

-- length, storage, concatenation
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
CategoryID,
CategoryName,
CategoryName + Name2 AS concat_names,
LEN(CategoryName) AS len_name,
DATALENGTH(CategoryName) AS Datalength_Name,
NCategoryName,
NCategoryName + NName2 AS concat_unicode_names,
LEN(NCategoryName) AS len__unicode_name,
DATALENGTH(NCategoryName) AS Datalength_Unicode_Name
FROM
test.categories

-- varchar group by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'group by' AS test_function,
COUNT(categoryid) AS num_recs,
categoryName
FROM
test.categories
GROUP BY
categoryName

-- nvarchar group by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'group by' AS test_function,
COUNT(categoryid) AS num_recs,
NcategoryName
FROM
test.categories
GROUP BY
NcategoryName

-- varchar order by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'order by' AS test_function,
categoryid,
categoryName
FROM
test.categories
ORDER BY
categoryName DESC

-- nvarchar order by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'order by' AS test_function,
categoryid,
ncategoryName
FROM
test.categories
ORDER BY
ncategoryName DESC

-- varchar =
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'=' AS test_function,
categoryid,
categoryName
FROM
test.categories
WHERE
categoryName = 'a'

-- nvarchar =
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'=' AS test_function,
categoryid,
ncategoryName
FROM
test.categories
WHERE
NcategoryName = 'a'

-- varchar like %
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'Like %' AS test_function,
categoryid,
categoryName
FROM
test.categories
WHERE
categoryName LIKE 'a%'

-- nvarchar like %
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'Like %' AS test_function,
categoryid,
ncategoryName
FROM
test.categories
WHERE
ncategoryName LIKE 'a%'

-- varchar like _
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'Like _' AS test_function,
categoryid,
categoryName
FROM
test.categories
WHERE
categoryName LIKE 'a_'

-- nvarchar like _
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'Like _' AS test_function,
categoryid,
ncategoryName
FROM
test.categories
WHERE
ncategoryName LIKE 'a_'
GO

SET Ansi_Padding ON

CREATE TABLE
test.categories2
(
CategoryID INT PRIMARY KEY CLUSTERED IDENTITY(1,1),
CategoryName VARCHAR(15),
Name2 VARCHAR(15),
NCategoryName NVARCHAR(15),
NName2 NVARCHAR(15)
)
--Create Unique Index UX_varchar_test On test.categories2(CategoryName)
--Create Unique Index UX_nvarchar_test On test.categories2(NCategoryName)

INSERT INTO test.categories2
(
CategoryName,
Name2,
NCategoryName,
NName2
)
SELECT 'a', 'b', 'a', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b', 'a ', 'b' UNION ALL
SELECT 'a ', 'b', 'a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b' UNION ALL
SELECT 'a ', 'b','a ', 'b'

-- length, storage, concatenation
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
CategoryID,
CategoryName,
CategoryName + Name2 AS concat_names,
LEN(CategoryName) AS len_name,
DATALENGTH(CategoryName) AS Datalength_Name,
NCategoryName,
NCategoryName + NName2 AS concat_unicode_names,
LEN(NCategoryName) AS len__unicode_name,
DATALENGTH(NCategoryName) AS Datalength_Unicode_Name
FROM
test.categories2

-- varchar group by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'group by' AS test_function,
COUNT(categoryid) AS num_recs,
categoryName
FROM
test.categories2
GROUP BY
categoryName

-- nvarchar group by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'group by' AS test_function,
COUNT(categoryid) AS num_recs,
NcategoryName
FROM
test.categories2
GROUP BY
NcategoryName

-- varchar order by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'order by' AS test_function,
categoryid,
categoryName
FROM
test.categories2
ORDER BY
categoryName DESC

-- nvarchar order by
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'order by' AS test_function,
categoryid,
ncategoryName
FROM
test.categories2
ORDER BY
ncategoryName DESC

-- varchar =
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'=' AS test_function,
categoryid,
categoryName
FROM
test.categories2
WHERE
categoryName = 'a'

-- nvarchar =
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'=' AS test_function,
categoryid,
ncategoryName
FROM
test.categories2
WHERE
NcategoryName = 'a'

-- varchar like %
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'Like %' AS test_function,
categoryid,
categoryName
FROM
test.categories2
WHERE
categoryName LIKE 'a%'

-- nvarchar like %
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'Like %' AS test_function,
categoryid,
ncategoryName
FROM
test.categories2
WHERE
ncategoryName LIKE 'a%'

-- varchar like _
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'varchar' AS data_type,
'Like _' AS test_function,
categoryid,
categoryName
FROM
test.categories2
WHERE
categoryName LIKE 'a_'

-- nvarchar like _
SELECT
CASE
WHEN SESSIONPROPERTY('ANSI_PADDING') = 0 THEN 'Off'
ELSE 'ON'
END AS ansi_padding_setting,
'nvarchar' AS data_type,
'Like _' AS test_function,
categoryid,
ncategoryName
FROM
test.categories2
WHERE
ncategoryName LIKE 'a_'


DROP TABLE test.categories2
DROP TABLE test.categories
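
Two of the built-in functions used above are worth calling out because they report different things for the same value: LEN ignores trailing blanks, while DATALENGTH counts the bytes actually stored, and the = comparison pads the shorter operand under the ANSI comparison rules. Here is a minimal, self-contained sketch using a variable (which keeps its trailing blank regardless of any column-level ANSI_PADDING setting):

```sql
DECLARE @padded VARCHAR(15)
SET @padded = 'a '  -- one trailing blank

SELECT
    LEN(@padded)        AS len_val,   -- 1: LEN excludes trailing blanks
    DATALENGTH(@padded) AS byte_len,  -- 2: storage includes the blank
    CASE WHEN @padded = 'a'
         THEN 'equal' ELSE 'not equal'
    END                 AS equality_test -- 'equal': = pads for comparison
```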


The key, as always, is to understand the system you are working with, and to design and code against it appropriately. As the original poster in the SSC thread noted, "Trim your strings before inserting".
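
That advice can be sketched as a simple RTRIM at insert time. The temporary table and variable below are hypothetical, purely for illustration; the point is that trimming on the way in makes the stored value consistent no matter which ANSI_PADDING setting the column was created under:

```sql
DECLARE @input_name VARCHAR(15)
SET @input_name = 'a '  -- arrives with a trailing blank

-- hypothetical target table
CREATE TABLE #trim_demo (name VARCHAR(15))

INSERT INTO #trim_demo (name)
VALUES (RTRIM(@input_name))  -- stores 'a', not 'a '

SELECT DATALENGTH(name) AS byte_len  -- 1: the blank never made it in
FROM #trim_demo

DROP TABLE #trim_demo
```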