DBCC SHOWCONTIG

Last Post 19 Feb 2007 04:12 AM by TRACEYSQL. 8 Replies.
TRACEYSQL
New Member

--
14 Feb 2007 05:03 AM
Hi, I have run DBCC SHOWCONTIG.

                                                  Index ID 2 (nonclustered)   Index ID 1 (clustered)
Rows                                              10000                       10000
Pages Scanned                                     23                          10000
Size                                              80656                       80656
Extents Scanned (Pages Scanned / 8)               3                           1254
Extent Switches (one less than Extents Scanned)   2                           1253
Pages per Extent (8 KB pages)                     7                           8
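
For reference, a command along these lines produces output for both indexes (the WITH ALL_INDEXES option is an assumption about the exact options used):

DBCC SHOWCONTIG ('testb') WITH ALL_INDEXES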

I was wondering how the 23 is calculated.

The nonclustered index is on an int column (4 bytes), and I have 10,000 rows.

Thank you
TRACEYSQL
New Member

--
16 Feb 2007 06:52 AM
Here are the full results.

I was wondering how the 23 is calculated, so that if I am going to add indexes to my table I can have some indication of how many bytes they will take up.

Thanks

DBCC SHOWCONTIG scanning 'testb' table...
Table: 'testb' (373576369); index ID: 1, database ID: 39
TABLE level scan performed.
- Pages Scanned................................: 10000
- Extents Scanned..............................: 1254
- Extent Switches..............................: 1253
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.68% [1250:1254]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.16%
- Avg. Bytes Free per Page.....................: 40.0
- Avg. Page Density (full).....................: 99.51%
DBCC SHOWCONTIG scanning 'testb' table...
Table: 'testb' (373576369); index ID: 2, database ID: 39
LEAF level scan performed.
- Pages Scanned................................: 23
- Extents Scanned..............................: 3
- Extent Switches..............................: 2
- Avg. Pages per Extent........................: 7.7
- Scan Density [Best Count:Actual Count].......: 100.00% [3:3]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 270.3
- Avg. Page Density (full).....................: 96.66%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
JHunter
New Member

--
16 Feb 2007 08:55 AM
Two key bits of information are missing - how many rows your index will contain, and the data types of all the columns included in the index.

But basically, an index stores (in its leaf level) the indexed columns plus a pointer to the data (either the clustered index key, or a bookmark page ID into a heap). The total size of these represents one index entry, so you can work out how many index entries fit on a page, and hence how many pages you require (but you will still have to estimate the number of rows in your index).
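
For your example, a rough sketch lands close to the 23 pages you saw. This assumes the clustered key is also a 4-byte int and that per-row overhead is roughly 8 bytes, which is only an approximation:

-- Rough estimate of nonclustered index leaf pages (a sketch only; the 8 bytes of
-- per-row overhead and the 4-byte clustered key locator are assumptions)
DECLARE @rows int, @leaf_row_bytes int, @rows_per_page int
SET @rows = 10000
SET @leaf_row_bytes = 4 + 4 + 8                     -- index key + clustered key locator + overhead
SET @rows_per_page = 8096 / (@leaf_row_bytes + 2)   -- ~8096 usable bytes per page, 2-byte slot entry per row
SELECT CEILING(1.0 * @rows / @rows_per_page) AS estimated_leaf_pages   -- roughly 23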

Additionally, you need to factor in the fill factor used when you create the index (it controls how full each page is when the index is built, so 100 minus the fill factor is the percentage left free). And there are overheads associated with the tree structure and the padding used there too!

A simpler way to track how many pages are used is to look up the dpages column for the index's row in sysindexes. You can see the number of key columns in each index from the keycnt column.
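
Something like this (a sketch against the 'testb' table from your output; sysindexes is the SQL 2000 system table, exposed as a compatibility view on 2005):

SELECT name, indid, dpages, keycnt
FROM sysindexes
WHERE id = OBJECT_ID('testb')
  AND indid BETWEEN 1 AND 250   -- skip the heap row (indid 0) and the text/image row (indid 255)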

Personally, I'd just create the index in your test environment and take the figures from the system tables - I've never had to sit down and calculate an index size before creating one!

Hope that helps,
Jamie
TRACEYSQL
New Member

--
16 Feb 2007 09:07 AM
True, maybe I should not worry about it, just accept that adding an index improves performance, and simply record how many reads are saved by adding the index.

Thank you guys
TRACEYSQL
New Member

--
17 Feb 2007 03:02 AM
Benchmarks are what I'm trying to put together. I'm looking at the execution plans and seeing lower reads and CPU.

What are the best benchmarks you use?
SQLUSA
New Member

--
17 Feb 2007 05:23 AM
For OLTP queries you should not have more than 200 (ballpark) reads.

If you do, it is probably a missing index or an index that has gone bad.

For batch/report queries, you should try to bring it down below 20K reads (ballpark) if the query runs several times an hour. If you see something like 300K reads, it needs index optimization and/or algorithm optimization.
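
One way to see the read counts per query (a sketch; the table and filter column here are just placeholders):

SET STATISTICS IO ON
SELECT *
FROM testb
WHERE somecolumn = 42   -- 'somecolumn' is a hypothetical indexed column
SET STATISTICS IO OFF
-- The Messages output then shows logical reads per table;
-- compare the numbers before and after adding the index.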

Kalman Toth
SQLUSA: http://www.sqlusa.com
TRACEYSQL
New Member

--
17 Feb 2007 01:19 PM
Thanks, I'll look into this. I can get this information from Profiler, and I will check whether any queries go over 200 reads.
SQLUSA
New Member

--
18 Feb 2007 11:13 AM
>Should test some inserts, deletes and updates to see if the new index slows down those processes.

Insert: you can keep it under control with the FILL FACTOR (experiment between 50 and 80; see the sketch after this list).

Delete: try to avoid. This really messes up the index tree. Use an IsActive bit flag instead.

Update: may or may not be an issue. If the update is in place or not on an indexed column, it is fairly fast.
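
As mentioned for inserts, a minimal sketch of creating the index with an explicit fill factor (the index and column names are placeholders):

CREATE NONCLUSTERED INDEX IX_testb_somecolumn
ON testb (somecolumn)    -- 'somecolumn' is hypothetical
WITH FILLFACTOR = 70     -- leaves roughly 30% of each leaf page free for new rows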

Kalman Toth
SQLUSA: http://www.sqlusa.com

TRACEYSQL
New Member

--
19 Feb 2007 04:12 AM
Thanks. I'm doing the index rebuild every night, and I am gathering statistics to see how often page splits occur so I can adjust the fill factor accordingly.
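
For reference, one way to check the cumulative page-split counter (a sketch; assumes SQL Server 2005 or later, where sys.dm_os_performance_counters exists):

SELECT cntr_value AS page_splits_cumulative
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page Splits/sec%'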

I'm now looking at disk activity: 820 reads a second, so I'm checking this.
(I wanted to make sure all the tables I have had the necessary indexes.)

Like finding a needle in a haystack...