How many data files for best performance

Last Post 24 May 2009 06:19 AM by MattiasLind. 4 Replies.
Author Messages
jelliott
New Member

--
22 May 2009 09:36 AM
If a server has two volumes in a RAID 10 configuration with volume 1 using 10 disks and volume 2 using 6 disks, how many data files for a user database should be created on each volume to achieve the best performance and availability?

Should there be one data file per volume OR should there be 10 data files on the first volume and 6 data files on the second volume?

Thanks
MattiasLind
New Member

--
24 May 2009 06:19 AM
There isn't a short answer to this one!
My first thought was: why two R10s and not eight R1s? Or four R10s
with four disks each? Or an R5 of six disks and five R1s? Or...
There are many possible combinations. And don't forget to align the
partition offset for MBR, e.g. with the diskpart tool:
SELECT DISK {physical disk number}
CREATE PARTITION PRIMARY ALIGN=64
This aligns the physical disk I/O with the requests from SQL Server.

First I would think of recoverability!
The number one rule is to separate the data files from the log files.
This minimizes the amount of data loss if/when the disk subsystem fails.
Combine this with the Full database recovery model and transaction log
backups. And use multiple filegroups per database: keep the PRIMARY filegroup
at its minimum, containing just the database catalog, i.e. the system tables.
Then add at least two filegroups, with one data file per CPU core (not per socket!),
preferably separated onto different physical disk subsystems (filegroups
primarily, data files if possible). Split tables, i.e. heaps and clustered
indexes, from nonclustered indexes, and separate large tables that are queried
simultaneously. If possible, i.e. with Enterprise Edition, partition
large tables with the most frequently used data on fast, dedicated disk
subsystems (R10) and less frequently used data on larger, cheaper disk subsystems (R5).
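As a minimal sketch of such a layout (all database, file, filegroup names, paths, and sizes here are hypothetical; adjust them to your own volumes and core count):

```sql
-- Hypothetical layout: minimal PRIMARY, two user filegroups spread
-- over two volumes, and the log on its own volume.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_sys,   FILENAME = 'E:\Data\Sales_sys.mdf',   SIZE = 100MB),
FILEGROUP DataFG
    (NAME = Sales_data1, FILENAME = 'E:\Data\Sales_data1.ndf', SIZE = 10GB),
    (NAME = Sales_data2, FILENAME = 'E:\Data\Sales_data2.ndf', SIZE = 10GB),
FILEGROUP IndexFG
    (NAME = Sales_ix1,   FILENAME = 'F:\Data\Sales_ix1.ndf',   SIZE = 5GB),
    (NAME = Sales_ix2,   FILENAME = 'F:\Data\Sales_ix2.ndf',   SIZE = 5GB)
LOG ON
    (NAME = Sales_log,   FILENAME = 'G:\Log\Sales_log.ldf',    SIZE = 5GB);

-- Make a user filegroup the default so new objects stay off PRIMARY,
-- and use the Full recovery model so log backups are possible.
ALTER DATABASE Sales MODIFY FILEGROUP DataFG DEFAULT;
ALTER DATABASE Sales SET RECOVERY FULL;
```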

My second approach would be performance!
Put frequently used data on fast disk subsystems (R10) and infrequently used/archived
data on cheap disk subsystems (R5). The log file needs fast disks as well: R10.
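One way to get that split with Enterprise Edition partitioning, sketched with hypothetical names (pfOrderYear, psOrderYear, ArchiveFG_R5, CurrentFG_R10) and boundary dates:

```sql
-- Hypothetical: route recent rows to a filegroup on the fast R10
-- volume and older rows to a filegroup on the cheaper R5 volume.
CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

-- Two boundaries give three partitions, mapped oldest-to-newest.
CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear TO (ArchiveFG_R5, ArchiveFG_R5, CurrentFG_R10);

CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY NOT NULL,
    OrderDate datetime     NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON psOrderYear (OrderDate);
```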

I can recommend InstantDoc ID 101621 from the May 2009 issue. Andrew has
put together a great article for us all to read.
MattiasLind
New Member

--
24 May 2009 06:27 AM
@rm: That's not really true. You can gain better performance by adding extra data files on the same physical disk subsystem. A rule of thumb is one file per CPU core (not per socket), exactly as in the recommendation for TempDB. This is because of the pages in the first extent of every data file
(the file header page, PFS (Page Free Space), GAM (Global Allocation Map), SGAM (Shared Global Allocation Map), and so forth) and how the server locates and allocates available space.

Combine this with a reasonable amount of file growth on dedicated volumes and you get less external fragmentation.
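A sketch of that rule of thumb, assuming a hypothetical Sales database with an existing DataFG filegroup (names, paths, and sizes are placeholders):

```sql
-- Hypothetical: add more data files to an existing filegroup on the
-- same volume. Equal sizes let the proportional-fill algorithm spread
-- allocations evenly; fixed-MB growth (rather than a percentage)
-- helps limit external fragmentation.
ALTER DATABASE Sales
    ADD FILE (NAME = Sales_data3, FILENAME = 'E:\Data\Sales_data3.ndf',
              SIZE = 10GB, FILEGROWTH = 512MB)
    TO FILEGROUP DataFG;
ALTER DATABASE Sales
    ADD FILE (NAME = Sales_data4, FILENAME = 'E:\Data\Sales_data4.ndf',
              SIZE = 10GB, FILEGROWTH = 512MB)
    TO FILEGROUP DataFG;
```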
MattiasLind
New Member

--
24 May 2009 06:31 AM
And I almost forgot: calculate the FILLFACTOR per index based on how the data is added and changed in each table. Don't forget the clustered index and PAD_INDEX.
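For example (the index and table names are hypothetical, and 90 is just an illustrative value; pick it from the table's actual insert/update pattern):

```sql
-- Hypothetical: leave 10% free space per leaf page on an index whose
-- keys arrive out of order, and pad the intermediate levels too,
-- to reduce page splits between rebuilds.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90, PAD_INDEX = ON);
```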

Ooops, now I'm getting off-topic... *smiling
MattiasLind
New Member

--
24 May 2009 11:05 AM
quote:

Originally posted by: rm
Not much since sql2k5.


I agree with that; still, you can gain some increase in I/O performance...

One of the biggest reasons why more data files go together with filegroups is
recoverability: you can build more fine-grained disaster recovery solutions with more files and filegroups.
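As a rough sketch of what that buys you (database, filegroup, and backup paths are hypothetical; online file restore requires Enterprise Edition and an unbroken log backup chain):

```sql
-- Hypothetical: restore only the damaged filegroup from its backup,
-- then roll it forward with log backups, while the rest of the
-- database stays online.
RESTORE DATABASE Sales FILEGROUP = 'DataFG'
    FROM DISK = 'H:\Backup\Sales_full.bak'
    WITH NORECOVERY;
RESTORE LOG Sales
    FROM DISK = 'H:\Backup\Sales_log.trn'
    WITH RECOVERY;
```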

