SQL Stmt Performance better than Stored Proc - Why?

Last Post 30 Oct 2007 09:50 PM by SQLUSA. 8 Replies.
sqldbasql
15 May 2007 01:53 PM
I am extracting data from two large tables, each approx 50M rows. The distribution of data on the pages is less than satisfactory.

While we would like to perform maintenance on the tables, indexes, and data distribution, we have deferred it to maximize system uptime.

I am now facing a problem with a proc that is performing poorly. Its duration went from 20-30 seconds to over 15 minutes.

I executed the SQL statements from the proc directly and got the results in 15-30 seconds.
But when I execute the proc, it takes 15 minutes.

I also recompiled and recreated the proc to get a fresh, optimized query plan, but it still runs for over 15 minutes. The statistics have been updated as well.
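For reference, the recompile and statistics refresh were along these lines (a sketch; the table names are the ones from the proc code below, and WITH FULLSCAN is an assumption about how the stats were redone):

-- Drop cached plans so the proc recompiles on its next execution
EXEC sp_recompile N'dbo.Get_User_History'

-- Rebuild distribution statistics on the two large tables
UPDATE STATISTICS dbo.DailyTrn WITH FULLSCAN
UPDATE STATISTICS dbo.DailyTrnArchived WITH FULLSCAN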

The proc also uses UNION ALL (rather than UNION) in the logic, since the chances of getting duplicate rows are slim to none.

What could be missing? Any thoughts?

Thanks in advance,

RS


Check out the forums at:
http://forums.sqlmag.com/web/forum/...forumid=10

sqldbasql
15 May 2007 02:32 PM
Thanks, russellb, for the quick response.

I did review the execution plan. No table scans; it does index scans and has all the needed indexes.
The distribution is not the best and needs maintenance. No DDL, and no dynamic SQL or variables that would otherwise cause an adverse impact.

Here is the relevant part of the code.


CREATE PROCEDURE [dbo].[Get_User_History]
(
    @UserNumber varchar(15),
    @EarliestDate datetime = NULL,
    @LatestDate datetime = NULL
)
AS
BEGIN

    -- Default the window to the last 30 days; the DATEADD/DATEDIFF idiom truncates to midnight
    SET @EarliestDate = ISNULL(@EarliestDate, DATEADD(dd, DATEDIFF(dd, 30, GETDATE()), 0))
    SET @LatestDate = ISNULL(@LatestDate, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))

    IF @EarliestDate >= DATEADD(mm, -3, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))
    BEGIN
        -- Date is within the last 3 months (the span covered by the DailyTrn table)
        SELECT *
        FROM DailyTrn
        WHERE BeginDate BETWEEN @EarliestDate AND @LatestDate
          AND UserNo = @UserNumber
        ORDER BY BeginDate DESC
    END
    ELSE
    BEGIN
        -- Date is beyond the last 3 months: combine the current and archived transaction tables
        SELECT *
        FROM DailyTrn
        WHERE BeginDate BETWEEN @EarliestDate AND @LatestDate
          AND UserNo = @UserNumber

        UNION ALL

        SELECT *
        FROM DailyTrnArchived
        WHERE BeginDate BETWEEN @EarliestDate AND @LatestDate
          AND UserNo = @UserNumber
        -- The ORDER BY applies to the combined result of the UNION ALL
        ORDER BY BeginDate DESC
    END

END

What is baffling here, though, is why the SQL statements from the proc, executed as standalone statements, perform better than the proc itself.

I do understand that the needed maintenance will help resolve the issues, but unfortunately we have had to defer it in the past. We will be able to address it soon now. If any other possible cause appears obvious, please feel free to let me know.

Thanks again.
tom27
12 Oct 2007 04:34 AM
Make sure that you have indexes on the columns your filters are applied to. Check the execution plan; a seek mostly performs better than a scan.

Make sure the data types of the procedure parameters are the same as those of the filtered columns in the tables. If not, make them the same; a mismatch forces an implicit conversion that can prevent an index seek, as in the sketch below.
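A minimal sketch of the mismatch (the table and values are made up for illustration):

CREATE TABLE dbo.TypeDemo (UserNo varchar(15) PRIMARY KEY)

DECLARE @p nvarchar(15)
SET @p = N'12345'
-- nvarchar has higher type precedence, so the UserNo column is implicitly
-- converted to nvarchar and the index seek degrades to a scan
SELECT * FROM dbo.TypeDemo WHERE UserNo = @p

DECLARE @q varchar(15)
SET @q = '12345'
-- Types match: the optimizer can do an index seek on UserNo
SELECT * FROM dbo.TypeDemo WHERE UserNo = @q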

If you are still getting the problem, try declaring local variables for each parameter, with the same data types as the columns in the table.

Assign the parameters to the corresponding local variables.

Use the local variables instead of the parameters in your WHERE clause; see the sketch after these steps.
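Applied to the proc above, the first branch would look something like this (a sketch of the technique; the optimizer cannot sniff local variable values, so it compiles for average selectivity instead of the first caller's values):

CREATE PROCEDURE [dbo].[Get_User_History]
(
    @UserNumber varchar(15),
    @EarliestDate datetime = NULL,
    @LatestDate datetime = NULL
)
AS
BEGIN
    -- Copy the parameters into local variables; plans are built
    -- for average selectivity because locals cannot be sniffed
    DECLARE @UserNumberLocal varchar(15)
    DECLARE @EarliestDateLocal datetime
    DECLARE @LatestDateLocal datetime

    SET @UserNumberLocal = @UserNumber
    SET @EarliestDateLocal = ISNULL(@EarliestDate, DATEADD(dd, DATEDIFF(dd, 30, GETDATE()), 0))
    SET @LatestDateLocal = ISNULL(@LatestDate, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))

    SELECT *
    FROM DailyTrn
    WHERE BeginDate BETWEEN @EarliestDateLocal AND @LatestDateLocal
      AND UserNo = @UserNumberLocal
    ORDER BY BeginDate DESC
END

This also sidesteps a subtlety in the original proc: it reassigns its parameters via ISNULL, yet the plan is compiled for the values that were passed in (possibly NULL), not the reassigned defaults.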



SQLUSA
28 Oct 2007 10:56 PM
SET NOCOUNT ON is the latest overrated snake oil for performance issues.

Your manual query execution is faster because your parameters are hard-wired as literals.

During sproc plan preparation, the optimizer builds a single plan that must serve all kinds of parameter values.

In SQL Server 2005 you can specify a hint telling the optimizer what kind of parameter values to optimize for.
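Inside the sproc it looks like this (a sketch against the poster's proc; '12345' stands in for a representative user number you would choose from your own data):

SELECT *
FROM DailyTrn
WHERE BeginDate BETWEEN @EarliestDate AND @LatestDate
  AND UserNo = @UserNumber
ORDER BY BeginDate DESC
OPTION (OPTIMIZE FOR (@UserNumber = '12345'))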

Kalman Toth, Database Architect
SQL Server Training - http://www.sqlusa.com/order2005grandslam

SQLUSA
29 Oct 2007 02:41 AM
Here is an example of a SQL Server 2005 plan guide.

The @hints line optimizes the compiled plan for 'US'.

EXEC sp_create_plan_guide
    @name = N'Guide1',
    @stmt = N'SELECT *
FROM Sales.SalesOrderHeader h,
    Sales.Customer c,
    Sales.SalesTerritory t
WHERE h.CustomerID = c.CustomerID
    AND c.TerritoryID = t.TerritoryID
    AND CountryRegionCode = @Country',
    @type = N'OBJECT',
    @module_or_batch = N'Sales.GetSalesOrderByCountry',
    @params = NULL,
    @hints = N'OPTION (OPTIMIZE FOR (@Country = N''US''))'
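One caveat: for an OBJECT plan guide, @stmt must match a statement inside Sales.GetSalesOrderByCountry essentially verbatim, or the guide is silently ignored, and @params must be NULL for this type.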
SQLUSA
29 Oct 2007 07:35 AM
Adding SET NOCOUNT ON to the beginning of a sproc is a good idea for multiple reasons.

Apparently, in some cases it even helps with performance. Don't expect it to fix serious logic/index problems, though.

> I have personally seen a situation in which adding this to the top of the code took the execution time from over 20 minutes to less than 30 seconds.

DETAILS please, I am really curious about this one... maybe a WHILE loop inside a sproc?

Thanks,

Kalman Toth
tom27
30 Oct 2007 05:21 AM
First of all, I don't see a place where we would require a loop inside a procedure, except for handling complex business logic, and that is where the middle tier comes into the picture. I think we should change our attitude towards procedures. Gunney, what do you suggest?
tom27
30 Oct 2007 08:54 AM
I am not saying that we should not use SET NOCOUNT ON. What I was trying to say is that if we get a performance improvement of, say, 10 minutes just by adding SET NOCOUNT ON, it's better to rewrite the procedure.
SQLUSA
30 Oct 2007 09:50 PM
The bottom line is:

1. It is generally a good idea to add SET NOCOUNT ON at the beginning of a sproc. At worst, it is harmless.

2. If you are using set-based T-SQL programming, don't expect miracles. Still, the cumulative effect over many thousands of sproc calls will be beneficial for general performance.

3. If you have large WHILE loops, it is really beneficial, as the sketch below shows.
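A minimal sketch of the WHILE-loop case (dbo.LoopDemo and dbo.SomeTable are made-up names; the point is the per-statement "rows affected" messages that SET NOCOUNT ON suppresses):

CREATE PROCEDURE dbo.LoopDemo
AS
BEGIN
    -- Without this, every statement in the loop sends an
    -- "n rows affected" message back to the client
    SET NOCOUNT ON

    DECLARE @i int
    SET @i = 0
    WHILE @i < 10000
    BEGIN
        UPDATE dbo.SomeTable
        SET SomeCol = SomeCol + 1
        WHERE SomeKey = @i
        SET @i = @i + 1
    END
END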

Kalman Toth, Database Architect
SQL Server 2005 Training - http://www.sqlusa.com/businessintelligence
