Monday, February 22, 2010

What’s up with the reads?

So the other day I read Andrew Kelly’s article, Do You Have Hidden Cache Problems, in the latest electronic edition of SQL Server Magazine, and decided to run the query he listed in the article to see if my server was suffering from this problem:

SELECT SUM(single_pages_kb + multi_pages_kb) AS "CurrentSizeOfTokenCache(kb)"
FROM sys.dm_os_memory_clerks
WHERE name = 'TokenAndPermUserStore'

My result was 189344 kb.  In the article Andrew states that:

…if it’s under 10MB, you should be fine.  I’d start getting concerned between 10MB and 50MB.  If the amount of consumed memory is over 50MB, you’re probably affected by this problem.

Well, my value converts to nearly 185MB!!!  I guess I must have this problem.  The article says this about the cause:

On a system that reuses query plans effectively, this cache would be as large as only a few megabytes… But on a 64-bit system with lots of ad hoc or dynamic SQL queries, this cache can grow to hundreds of megabytes in size.

So apparently my lightly loaded server (one main application on it right now, with a max of about 50 user connections) has a lot of ad hoc or dynamic SQL hitting it.  Since the application is a third-party application, I decided to run this query to see what I had in the procedure cache for ad hoc queries:

SELECT DEST.text, DEQS.execution_count
FROM sys.dm_exec_query_stats AS DEQS JOIN
sys.dm_exec_cached_plans AS DECP
ON DEQS.plan_handle = DECP.plan_handle CROSS APPLY
sys.dm_exec_sql_text(DEQS.sql_handle) AS DEST
WHERE DECP.objtype = 'ADHOC'
ORDER BY DEQS.execution_count DESC

Hmmm, 4800+ rows returned.  As I looked a little deeper, I noticed that rows 10-12 based on execution count were the same query with different WHERE clauses.  The queries looked like this:

SELECT Column1
FROM Table WITH (NOLOCK)
WHERE ((Column2 = 'Column Default Value' Or
Column2 = 'Another Column Value') AND
Column3 = 'Column Default Value') AND
Column4 = 'Value' AND
Column5 = 'Value'

Interesting.  Then I looked a little further down the list and found some more of the same query.  So I reran my query, adding WHERE text LIKE '%KeyWord1%KeyWord2%KeyWord3%', and found over 600 rows for this “single” query in the cache.  Now, those of you who read my blog know that I am a believer in stored procedures for data access, and if you aren’t using stored procedures, then use parameterized SQL.  This is a great example of a query that needs to be encapsulated in a stored procedure or parameterized.
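For example, the vendor could submit the statement through sp_executesql so that one plan gets cached and reused no matter what values come in.  Here's a sketch using my anonymized table and column names from above (the parameter names and types are my own placeholders, not the vendor's):

```sql
-- One parameterized plan instead of 600+ ad hoc plans
-- (table/column/parameter names are placeholders from my example above)
EXEC sp_executesql
    N'SELECT Column1
      FROM Table
      WHERE ((Column2 = @p1 OR Column2 = @p2) AND
             Column3 = @p3) AND
            Column4 = @p4 AND
            Column5 = @p5',
    N'@p1 varchar(50), @p2 varchar(50), @p3 varchar(50),
      @p4 varchar(50), @p5 varchar(50)',
    @p1 = 'Column Default Value', @p2 = 'Another Column Value',
    @p3 = 'Column Default Value', @p4 = 'Value', @p5 = 'Value';
```

Every call with different values matches the same parameterized plan in the cache, which also keeps the security token cache from bloating the way mine did.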

It gets even better.  I took one of the queries and ran it to see what it returned.  It returned nothing!  So I ran SELECT * FROM Table to see what was in the table: nothing, at least at that time!  The fun part is that the application query does a Clustered Index Seek but takes 2 scans to do it according to STATISTICS IO, while the SELECT * does a Clustered Index Scan but only scans the index once according to STATISTICS IO.  So I looked at the clustered index on the table, and it is (based on my query):

    Column2, Column3, Column4, Column5, Column6

So the only column in the table not included in the clustered index is Column1 (the value being returned).  As I looked at the query, it hit me: there is an OR in the WHERE clause, so the query plan has 2 seek predicates and thus 2 scans of the index.  If you remove the OR, you get one seek predicate and 1 scan.

So what does all this mean?  Well, for me it solidifies 2 things:

  1. Use stored procedures or at the least parameterized SQL.

  2. ORs are bad and should be avoided as much as possible.
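You can see the second point for yourself with STATISTICS IO.  Here's a small repro sketch against a made-up table of my own (not the vendor's schema), mirroring the index shape above:

```sql
-- A made-up table whose clustered index leads on the filtered columns
CREATE TABLE dbo.SeekDemo
(
    Column1 int,
    Column2 varchar(50),
    Column3 varchar(50)
);
CREATE CLUSTERED INDEX CIX_SeekDemo ON dbo.SeekDemo (Column2, Column3);

SET STATISTICS IO ON;

-- OR on the leading key column: the plan gets 2 seek predicates,
-- and STATISTICS IO reports Scan count 2
SELECT Column1
FROM dbo.SeekDemo
WHERE (Column2 = 'A' OR Column2 = 'B')
  AND Column3 = 'X';

-- Single seek predicate: Scan count 1
SELECT Column1
FROM dbo.SeekDemo
WHERE Column2 = 'A'
  AND Column3 = 'X';
```

As Gail points out in the comments, a scan count of 2 doesn't mean the whole index was read twice, but it does reflect the two separate seek operations the OR forces.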

So what am I going to do?  I honestly don’t know for sure.  Because there are no complaints about performance, I don’t feel the need to address the TokenAndPermUserStore issue, and I’m not sure what I’d do anyway besides possibly using Forced Parameterization.  I think I will contact the vendor (they’ve been good to work with so far) to find out what the query really does and why it isn’t parameterized or in a stored procedure, since most of their other stuff is.  I’m open to suggestions.
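If I did go the Forced Parameterization route, it's a single database-level setting, though it changes plan-caching behavior for every query in the database, so it would need testing first (the database name below is a placeholder, not my actual server):

```sql
-- Have SQL Server parameterize ad hoc statements automatically
-- (database-wide setting; MyAppDB is a placeholder name -- test first)
ALTER DATABASE MyAppDB SET PARAMETERIZATION FORCED;

-- To revert to the default behavior:
ALTER DATABASE MyAppDB SET PARAMETERIZATION SIMPLE;
```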


  1. If you're interested, I have a couple of posts from years back when I dealt with high token store problems.

    185 MB is nothing. I had one that could and did hit upwards of 8GB within a day or two. I wrote a job that cleared it once it hit 750MB
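    A job like that can be sketched roughly as follows; this is my reconstruction of the idea, not the commenter's actual code:

```sql
-- Clear the token store once it passes ~750MB (the commenter's threshold)
DECLARE @size_kb bigint;

SELECT @size_kb = SUM(single_pages_kb + multi_pages_kb)
FROM sys.dm_os_memory_clerks
WHERE name = 'TokenAndPermUserStore';

IF @size_kb > 750 * 1024
    DBCC FREESYSTEMCACHE('TokenAndPermUserStore');
```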

  2. p.s. A scan count of 2 from Statistics IO does not mean that the entire index was scanned twice.

  3. Thanks for the clarification, Gail, but it does mean that SQL Server accessed the index twice, right?

  4. Maybe. :-)

    Think about it: two seeks require a minimum of two index accesses. It doesn't matter if you do it via OR or if you UNION two resultsets; if SQL has to check two indexes, it has to access each of them.

    Also worth noting is that the scan count is often 0 for non-zero reads....

  5. Jack, how would I identify which database these results relate to? ADHOC plans have a NULL dbid value ...

  6. Jonathan,

    That's a great question and one I can't answer. I knew because of the table names, etc. It's definitely a deficiency in my opinion; you'd think that the execution context of the ad hoc SQL would be important enough to collect.

