
Performance analysis methodology September 25, 2009

Posted by msrviking in Performance tuning.

Hello Guys,

A few months ago I worked on a performance analysis of a database that needed to be upgraded from SQL Server 2000 to SQL Server 2008. Before doing the analysis and providing recommendations, I thought I would document the methodology and share it with the project team. The result of that thought is the methodology below.

Start of the document:

Performance Issues – Database upgrade from SQL Server 2000 (32 bit) to SQL Server 2008 (64 bit)

Overall databases performance

Since the source databases are being moved from the existing system (SQL Server 2000, 32-bit) to the destination system (SQL Server 2008, 64-bit), overall database performance should improve: SQL Server 2008 has an improved database engine, and the instance will be hosted in a 64-bit environment.

However, there could be stored procedures, user-defined functions, triggers, and other database objects whose performance degrades after the upgrade, for the following reasons.

– The query engine in SQL Server 2008 interprets and builds execution plans differently from SQL Server 2000.

– Statistics on columns and indexes may no longer be effective for the query optimizer to prepare optimal plans.

– Row counts or page counts may be inconsistent and/or incorrect.

To ensure that there are no performance issues due to the upgrade, a performance baseline will be established for stored procedures and user-defined functions, and post-upgrade activities will be carried out after the production deployment.

Poorly performing existing stored procedures

The identified poorly performing stored procedures could see performance improvements because of the improved database engine of SQL Server 2008 and the higher processing power of the 64-bit environment on which SQL Server will be hosted.

But performance could still fall short of the existing environment for the following reasons:

  • Poorly written queries.
  • Highly complex T-SQL logic.


Overall databases performance

The current performance will be measured to obtain a performance baseline. This baseline will provide statistics on the current usage pattern of the stored procedures and user-defined functions, and will also serve as a reference point for future measurements.

  1. The performance baseline information will be collected by running traces. The traces will have to be created in the identified baseline environment, along with the physical tables and stored procedures the traces depend on. In short, these are the components needed for the traces to run:
  • Stored procedures that create, start, stop, and clear traces on the SQL Server instance.
  • Physical tables holding trace information columns such as events and data columns.
  2. The traces will be run on SQL Server 2000 in all databases before migration, and on SQL Server 2008 after migration. There are a few options for where the traces could be run:
  • In the production environment. This is the optimal choice, because the traces will return actual figures for what is happening in production against the existing stored procedures, functions, and any other DML operations.
  • In pre-production, UAT, or whichever environment is closest to production. "Closest to production" means the boxes should be similar to the production boxes in hardware configuration. Although there will not be enough load on the server to see how the sp's behave under load, there will at least be good readings (duration, CPU, RAM usage) that may be close to the production box.

To simulate production load, a few users could access and test the application across multiple flows and scenarios. As an alternative, a single flow/scenario could be executed to gather enough information for the trace. To compare performance measurements after migration, the same flows/scenarios should be repeated.
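As a rough sketch of the trace-control stored procedures mentioned above, a server-side trace could be set up with the built-in `sp_trace_*` procedures. The file path is illustrative, and the column IDs shown are the standard SQL Trace column IDs (on SQL Server 2000, DatabaseName may not be available and DatabaseID, column 3, can be captured instead):

```sql
-- Minimal server-side trace setup (path F:\Traces\ is an assumption).
DECLARE @TraceID int, @maxfilesize bigint;
SET @maxfilesize = 100;  -- max MB per trace file

-- Create the trace; option 2 enables file rollover when the size limit is hit.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'F:\Traces\baseline', @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (12) and SQL:StmtCompleted (41) with the key columns:
-- 1 = TextData, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU, 35 = DatabaseName
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 16, @on;
EXEC sp_trace_setevent @TraceID, 12, 17, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;
EXEC sp_trace_setevent @TraceID, 12, 35, @on;
EXEC sp_trace_setevent @TraceID, 41, 1,  @on;
EXEC sp_trace_setevent @TraceID, 41, 13, @on;
EXEC sp_trace_setevent @TraceID, 41, 16, @on;
EXEC sp_trace_setevent @TraceID, 41, 17, @on;
EXEC sp_trace_setevent @TraceID, 41, 18, @on;
EXEC sp_trace_setevent @TraceID, 41, 35, @on;

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete definition).
EXEC sp_trace_setstatus @TraceID, 1;
```

Wrapping this in "create", "start", "stop", and "clear" stored procedures gives the trace-control components described in step 1.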

To baseline pre-migration and post-migration performance, the key parameters captured for the events (SP start and end, SQL batch/statement start and end) would be:

  • Database Name
  • Object Name
  • Duration (ms)
  • CPU (ms)
  • Logical Reads
  • Physical Reads

This information will be gathered in trace file format (*.trc) and saved into folders. After the trace files are captured, they will be loaded into physical tables, and scripts will be run against those trace tables to retrieve consolidated numbers on how many sp's, functions, and triggers are called through the application, along with the performance indicators (the parameters mentioned above).
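Loading a captured trace file into a physical table can be done with the built-in `fn_trace_gettable` function (on SQL Server 2000 the older `::fn_trace_gettable` syntax is required); the path and table name here are illustrative:

```sql
-- Load a captured trace file into a physical table for analysis.
SELECT *
INTO   dbo.BaselineTrace
FROM   fn_trace_gettable(N'F:\Traces\baseline.trc', DEFAULT);
```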

Poorly performing stored procedures

To identify the pain areas and improve performance in these stored procedures, the following information is required:

– The functionality of the stored procedures and how they are used in the application.

–       The current issues that are causing performance bottlenecks.

–       SQL Server 2008 new features or the best practices that could be considered to fit into these sp’s.

To gather the above information, the following activities will be carried out.

–          Understand the functionality of the stored procedures.

–          Analyze and evaluate the current performance of the sp’s in SQL Server 2000 environment.

–          Identify the pain areas in the stored procedures.

–          Fix the pain areas with new features of SQL Server 2008 and with best practices.

Analysis and Findings

Overall databases performance

The steps below will be executed on the trace results gathered from the production environment to arrive at a performance baseline.

–       Load the trace files data of each database into physical tables.

– Extract relevant SQL statements from the "textdata" column of the trace table based on event classes 12 and 41. These are the T-SQL event classes SQL:BatchCompleted (12) and SQL:StmtCompleted (41).

–       Extract the object names from the exec statements (extracted from the above step).

– For each object name, the trace table will be queried on the "textdata" column with the pattern "exec <objectname>" for event classes 12 and 41, to find where the sp has been called.

– Event classes 12 and 41 carry very useful information, such as duration (ms), logical reads, physical writes, and CPU (ms), for each exec statement.

The trace/profiler results may have captured only the sp's that were called by the application during the tracing period, which may not be all n (total) sp's in the database.
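The consolidation step above could be sketched as a query against the loaded trace table. Table, column, and procedure names are illustrative; note that in the SQL Server 2005/2008 trace table the Duration column is recorded in microseconds, while Profiler displays milliseconds:

```sql
-- Consolidated numbers for one extracted object name (usp_GetOrders is hypothetical).
SELECT DatabaseName,
       COUNT(*)               AS Calls,
       AVG(Duration) / 1000   AS AvgDurationMs,   -- trace stores microseconds in 2005+
       AVG(CPU)               AS AvgCpuMs,
       AVG(Reads)             AS AvgLogicalReads,
       AVG(Writes)            AS AvgPhysicalWrites
FROM   dbo.BaselineTrace
WHERE  EventClass IN (12, 41)                     -- SQL:BatchCompleted, SQL:StmtCompleted
  AND  TextData LIKE '%exec usp_GetOrders%'       -- pattern per extracted object name
GROUP  BY DatabaseName;
```

Running this per object name, before and after migration, yields the per-sp baseline comparison described later in the document.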

Poorly performing stored procedures

After understanding the functionality and analyzing and evaluating the stored procedures, a list of best practices and new features will be arrived at, which could be implemented in the stored procedures for performance gains.

The below list of best practices and new features could be used for implementation.

Best Practices:

– Qualify objects referenced in the sp with their owner (schema), e.g. dbo.MyProc.

–          Temp tables to be replaced with table variables wherever applicable.

–          Use sp_executesql, and parameterized queries to avoid excessive recompilation.

– Replace cursors with SET-based operations.

– Avoid SELECT INTO statements; use INSERT INTO instead.
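The sp_executesql recommendation could look like the following sketch, where the table and parameter names are illustrative. Because the parameter value is passed separately from the query text, one cached plan is reused across values instead of compiling a new ad hoc plan each time:

```sql
-- Parameterized dynamic SQL: the plan for this statement is cached once and reused.
DECLARE @sql nvarchar(200);
SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustID';
EXEC sp_executesql @sql, N'@CustID int', @CustID = 42;
```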

New Features of SQL Server 2008:

The list below describes the new features of SQL Server 2008 and where they could be used.

–          Common Table Expressions (CTE):

  1. As a replacement for derived tables.
  2. As a replacement for table variables/temp tables.
  3. To avoid repeating code/the same query.
  4. For hierarchical queries (recursive queries: a query that references itself).
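A sketch of the recursive (hierarchical) case, assuming a hypothetical dbo.Employees table where each row points at its manager:

```sql
-- Recursive CTE walking an employee hierarchy (table and columns are illustrative).
WITH OrgChart (EmployeeID, ManagerID, Lvl) AS
(
    SELECT EmployeeID, ManagerID, 0
    FROM   dbo.Employees
    WHERE  ManagerID IS NULL                      -- anchor: the top of the tree
    UNION ALL
    SELECT e.EmployeeID, e.ManagerID, oc.Lvl + 1  -- recursive member joins back to the CTE
    FROM   dbo.Employees AS e
    JOIN   OrgChart AS oc ON e.ManagerID = oc.EmployeeID
)
SELECT EmployeeID, ManagerID, Lvl
FROM   OrgChart;
```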

–          Table Valued Parameters (TVP):

With a TVP, a set of rows can be passed around as a single variable. This means that if multiple records need to be saved, it can be done very easily in one procedure call.

This can also be done using an XML column or a delimited list, but a Table-Valued Parameter is a typed row set that can be used directly in an INSERT, SELECT, or UPDATE statement, or in any other statement that accepts a row set. That means no shredding of XML and no function to split a delimited string. This reduces code and gives the flexibility of typed data, which in turn means fewer calls to the database from the DAL and better performance. However, while performance improvements are expected, TVPs may not be feasible to implement here, because they require changes both to the stored procedures and to the code calling them (T-SQL or C#).
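A minimal sketch of the pattern, with hypothetical type, table, and procedure names; note that TVP arguments must be declared READONLY:

```sql
-- Table-valued parameter: pass many rows to a procedure in a single call.
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductID int NOT NULL,
    Qty       int NOT NULL
);
GO
CREATE PROCEDURE dbo.usp_SaveOrderLines
    @Lines dbo.OrderLineType READONLY      -- TVPs must be READONLY
AS
    INSERT INTO dbo.OrderLines (ProductID, Qty)
    SELECT ProductID, Qty FROM @Lines;     -- the TVP is used like any row set
GO
-- Caller fills a variable of the table type and makes one procedure call.
DECLARE @l dbo.OrderLineType;
INSERT INTO @l VALUES (1, 2);
INSERT INTO @l VALUES (7, 5);
EXEC dbo.usp_SaveOrderLines @Lines = @l;
```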

–          CONVERT function:

The CONVERT function now supports converting binary data directly to its hexadecimal string representation (and back).
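For example, using the binary styles added in SQL Server 2008 (style 1 keeps the 0x prefix, style 2 drops it):

```sql
SELECT CONVERT(varchar(20), 0x4D5352, 1);  -- '0x4D5352'
SELECT CONVERT(varchar(20), 0x4D5352, 2);  -- '4D5352'
```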

–           GROUPING SETS:

  1. The GROUPING SETS feature is really helpful when you want to generate a set of aggregate results while grouping by varying columns.
  2. It is much easier to maintain and performs better than running different queries against the same data and then performing a UNION ALL to get the desired results.
  3. It provides better performance because it executes once against the data source.
  4. It is much easier to program with GROUPING SETS than to write multiple SELECT statements.
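A sketch against a hypothetical sales table; one pass over the data replaces three separate GROUP BY queries combined with UNION ALL:

```sql
-- Per-region totals, per-product totals, and the grand total in a single statement.
SELECT   Region, Product, SUM(SalesAmount) AS Total
FROM     dbo.Sales
GROUP BY GROUPING SETS ((Region), (Product), ());  -- () is the grand-total group
```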

–          Transact-SQL row constructors:

This feature doesn't give a performance improvement by itself, but it helps reduce code size, which essentially means a slimmer batch that is easier for the query engine to interpret.
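For example (table name illustrative), one INSERT with a row constructor replaces three separate INSERT statements:

```sql
INSERT INTO dbo.StatusCodes (Code, Description)
VALUES (1, 'Open'),
       (2, 'Closed'),
       (3, 'Pending');
```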

–          Compound Assignment Operator:

This feature essentially reduces the code content in a batch of T-SQL statements.
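For example:

```sql
DECLARE @total int = 10;   -- declare-and-initialize is also new in SQL Server 2008
SET @total += 5;           -- equivalent to SET @total = @total + 5
SET @total *= 2;           -- @total is now 30
```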

–           Sparse columns:

A sparse column is an ordinary column like any other, but it reduces the storage requirement for NULL values. A nullable column can be made sparse by adding the SPARSE keyword when the table is created or altered. Once a column is SPARSE, SQL Server will not allocate space for its NULL values. Note that this feature adds overhead to retrieving non-NULL values, so it must be applied carefully, by calculating the space that can actually be saved. It is recommended to make a column SPARSE only if the space saved would be at least 20 to 40 percent.
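A sketch with a hypothetical contacts table, marking the rarely populated columns as SPARSE:

```sql
-- Columns expected to be mostly NULL are declared SPARSE to save storage.
CREATE TABLE dbo.Contacts
(
    ContactID int IDENTITY PRIMARY KEY,
    Name      nvarchar(100) NOT NULL,
    Fax       varchar(20) SPARSE NULL,   -- rarely populated
    Pager     varchar(20) SPARSE NULL    -- rarely populated
);
```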

–          Filtered indexes:

A filtered index is an optimized form of nonclustered index. It is basically useful for covering queries that return a small percentage of rows from a well-defined subset of data within a table.

Filtered indexes are one of the great performance improvements introduced in SQL Server 2008. A filtered index allows us to index a subset of rows within a table; i.e., a nonclustered index can be created with a WHERE clause.

A well-designed filtered index can improve query performance on very large tables; it will also generate a better execution plan, because it is much smaller than a full-table nonclustered index. It is more selective than a full-table nonclustered index because it covers only the rows matching its WHERE clause.

A filtered index also helps reduce index maintenance costs, because it is smaller and is maintained only when Data Manipulation Language (DML) statements affect the data in the index. Multiple filtered indexes can be a good choice, especially when the data each one covers is known to change infrequently. Similarly, if a filtered index contains only the frequently affected data, its smaller size reduces the cost of updating the statistics.

Another major advantage of a filtered index is that it reduces the disk storage space for nonclustered indexes when a full-table index is not required. A full-table nonclustered index can be replaced with multiple filtered indexes without significantly increasing the disk storage space for the indexes.

But to implement all of this, one needs to understand the functionality of the columns and indexes, along with the execution plan.
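A minimal sketch, assuming a hypothetical orders table where only a small fraction of rows are open at any time:

```sql
-- Index only the small, frequently queried subset of rows.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = 'Open';   -- the filter predicate defines the indexed subset
```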

–          MERGE statement:

The MERGE statement performs individual insert, update, and delete operations within a single statement. The SOURCE and TARGET, each a table or query, are joined together; within the MERGE statement you specify what data modification to perform when records between the source and target are matched, and what actions to perform when they are not matched. With MERGE, the complex T-SQL previously used to check for the existence or non-existence of data (in a data warehouse, for example) can be replaced with a single statement. MERGE can also improve query performance, because it runs through the data only once.
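A typical upsert sketch, with hypothetical dimension and staging tables:

```sql
-- Insert new customers and update existing ones in one pass over the data.
MERGE dbo.DimCustomer AS target
USING dbo.StagingCustomer AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name) VALUES (source.CustomerID, source.Name);
```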

–          Composable DML:

The OUTPUT clause works much like a "local trigger" on the current statement. Its drawback is that there is no way to filter the returned result set directly; the OUTPUT data has to be inserted into a staging table and worked on from there.

With composable DML, an UPDATE, DELETE, or even MERGE statement can be used as a data source for a query (in SQL Server 2008, specifically as the source of an INSERT ... SELECT). This doesn't give any performance improvement by itself, but it will be useful for sp's that use the OUTPUT clause; it is an advanced feature.
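A sketch reusing the hypothetical customer tables from the MERGE example, filtering the OUTPUT rows without a staging table:

```sql
-- Keep only the UPDATE actions from the MERGE's OUTPUT, in one statement.
INSERT INTO dbo.CustomerAudit (CustomerID, MergeAction)
SELECT CustomerID, MergeAction
FROM (
    MERGE dbo.DimCustomer AS t
    USING dbo.StagingCustomer AS s
        ON t.CustomerID = s.CustomerID
    WHEN MATCHED THEN
        UPDATE SET t.Name = s.Name
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name) VALUES (s.CustomerID, s.Name)
    OUTPUT s.CustomerID, $action AS MergeAction   -- $action is 'INSERT' or 'UPDATE'
) AS changes
WHERE MergeAction = 'UPDATE';                     -- the filter OUTPUT alone cannot do
```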

–          Table hints:

The FORCESEEK table hint forces the query optimizer to use only an index seek operation as the access path to the data in the table or view referenced in the query. You can use this table hint to override the default plan chosen by the query optimizer to avoid performance issues caused by an inefficient query plan. For example, if a plan contains table or index scan operators, and the corresponding tables cause a high number of reads during the execution of the query, as observed in the STATISTICS IO output, forcing an index seek operation may yield better query performance. This is especially true when inaccurate cardinality or cost estimations cause the optimizer to favor scan operations at plan compilation time.

Since the SQL Server query optimizer typically selects the best execution plan for a query, hints are recommended only as a last resort, and only for experienced developers and database administrators. Moreover, analyzing an sp for statement-level performance and judging whether the FORCESEEK hint helps it will take time and effort.
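The hint is applied per table reference (table and predicate here are illustrative):

```sql
-- Force a seek on dbo.Orders; apply only after verifying the plan and STATISTICS IO.
SELECT OrderID, OrderDate
FROM   dbo.Orders WITH (FORCESEEK)
WHERE  CustomerID = 42;
```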

–          FOR XML PATH:

This feature doesn't give a significant performance gain, but it reduces the complexity of code that uses FOR XML EXPLICIT to return XML data.
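A small sketch with a hypothetical customers table; PATH lets column aliases shape the XML (a leading @ makes an attribute):

```sql
SELECT CustomerID AS '@id',
       Name       AS 'Name'
FROM   dbo.Customers
FOR XML PATH('Customer'), ROOT('Customers');
```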

Fixing and Re-evaluation

Overall databases performance

The information above tells us how much time each sp takes, how many CPU cycles it consumes, and what its logical reads and physical writes are. With this data as a per-sp baseline for each database, the same sp will be evaluated by executing the same exec statement (as captured from production) against each database on the 64-bit SQL Server 2008 Enterprise testing environment.

Any differences in the duration/logical-read parameters will lead to an evaluation of why the sp(s) in question perform differently, and changes will be proposed to bring performance in line with, or better than, the existing production system. After the first iteration of proposed changes is implemented, the stored procedures will be evaluated again for performance.

Poorly performing stored procedures

After the best practices and new features are implemented in the stored procedures, the sp's will be evaluated for performance on the 64-bit SQL Server 2008 Enterprise testing environment. If any sp's degrade because of the changes, they will be analyzed, changed, and re-evaluated until performance reaches acceptable levels.

End of the document:

Let me know what you guys think about all these!



SQL University September 25, 2009

Posted by msrviking in General.

I was going through a blog roll and realized I should share this information. A great effort by Jorge Segarra! Visit this link if you really want to get started on SQL Server from the basics.

I just hope that the series will continue.