I recently changed Max Degree of Parallelism and Cost Threshold for Parallelism on a server and am now trying to gauge whether the results of that change are positive or negative.
To evaluate the change I have mainly been using sys.dm_os_wait_stats, looking at CXPACKET waits. I am not sure whether that is the best way to monitor a change to Max Degree of Parallelism and Cost Threshold for Parallelism, but it is what I have focused on.
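For reference, SignalWaitsPct can be derived from sys.dm_os_wait_stats as the ratio of signal wait time (time spent waiting for a scheduler after the resource became available) to total wait time. This is my assumption about how the figure below was computed; a sketch:

```sql
-- Signal wait percentage for CXPACKET waits.
-- NULLIF guards against division by zero if no waits have been recorded.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       CAST(100.0 * signal_wait_time_ms
            / NULLIF(wait_time_ms, 0) AS DECIMAL(5, 2)) AS SignalWaitsPct
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';
```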
This box has 8 processors and 32 GB RAM with Max Server Memory set at 25 GB. We are running SQL Server 2008 R2 Enterprise.
The original configuration run values were:
Max Degree of Parallelism = 2
Cost Threshold for Parallelism = 5
The new configuration run values are:
Max Degree of Parallelism = 4
Cost Threshold for Parallelism = 40
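(Both are server-wide options changed via sp_configure. A sketch of the change, assuming sysadmin rights; both options are advanced, so 'show advanced options' must be enabled first:)

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 40;
RECONFIGURE;  -- takes effect immediately, no service restart required
```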
The resulting sampling of CXPACKET wait data from sys.dm_os_wait_stats was:

Original configuration: SignalWaitsPct = 14.35%
New configuration: SignalWaitsPct = 8.7%
So, based on SignalWaitsPct, it appears the change (the new configuration) is positive, but the new configuration's waiting_tasks_count and wait_time_ms are clearly higher than under the original configuration.
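A note on methodology: sys.dm_os_wait_stats is cumulative since the last service restart unless cleared manually, so raw counts like waiting_tasks_count and wait_time_ms are only comparable if the two samples cover similar windows of activity. The counters can be reset at the start of each sampling window with:

```sql
-- Resets the cumulative wait statistics so the next sample
-- reflects only activity after this point.
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
```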
Any ideas on how to interpret these metrics, or are there other counters or metrics I should use to make a better evaluation?