Min and Max server memory

Last Post 17 Oct 2012 04:08 PM by gunneyk. 1 Replies.
Author Messages
e90fleet
New Member

--
17 Oct 2012 02:16 PM
Hi all,

I am trying to find the best way to set min and max server memory,
but so far I can't find a good source and I'm breaking my head over it.

So let's say I have a 4-node cluster running 1 SQL instance on each node.
Each SQL instance has about the same workload.
Each node has 72GB of memory. Currently the values for each instance
are set to 0 (min) and 64GB (max), leaving 8GB to the OS.

What I would like to do is set the values so that if, for example, 3 nodes fail and all 4 instances run on 1 node, the memory configuration is handled automatically.
Or sometimes just 1 instance fails over to another node, and we need to set max server memory accordingly for the 2 instances now running on the same node.
The problem is that I could split the memory (64GB) between the 4 instances and set max server memory to, say, 16GB for each instance, but that means each instance can't use more than 16GB of the 64GB even when it is running alone on its dedicated node.

The questions are:
1) What is the best option to handle this scenario?
2) Is there an advantage of modifying minimum server memory in this case from 0 to something else?

Thanks so much in advance
gunneyk
New Member

--
17 Oct 2012 04:08 PM
Well, there is no straight answer for this. As you already found out, setting a max gets complicated real fast when you have multiple instances that can end up on the same node. So basically you have to decide what the priorities are. Let's assume all memory is dynamic (no locked pages).

First off, the chances of all 4 instances running on a single node are pretty slim. If it happens, someone probably screwed up, and you can most likely correct it and get them evened out a bit better. The more likely scenario is two instances on a node when you have 4 to work with.

I would start by setting a minimum amount that you know each instance needs to work, yet still leaves enough for the other instances and the OS. Keep in mind that a minimum does not mean the instance gets that much at startup; it means that once it reaches the minimum, it won't go below it.

As for the max memory, that's not so easy. Many people put enough memory on each node so that all instances have what they need when multiple instances are running on a single node, but that can be expensive. Many opt for a startup stored procedure that checks how many instances are on a particular node and adjusts max memory accordingly. But again, it depends on priorities. Is there a particular node that always needs x amount of memory? Can they all function short term with minimal memory? Those requirements and priorities will help you determine how to divvy up the memory.

Also keep in mind that if an instance is using, say, 64GB and another instance fails over to that node, it may be a while before enough of that memory is dynamically released for the new instance to run properly. But that depends on lots of factors, so testing is the only way to know how this will all work.
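To illustrate the startup-procedure idea: a minimal sketch of such a procedure might look like the one below. The procedure name, the parameter defaults, and the fact that the colocated-instance count is passed in as a parameter are all assumptions for illustration; actually detecting how many instances are active on a node is environment-specific (registry lookups, cluster API, a monitoring table, etc.) and is not shown here.

```sql
-- Hypothetical sketch: split node memory evenly among the instances
-- currently on the node. @InstanceCount would normally be detected,
-- not passed in; it is a parameter here to keep the example simple.
CREATE PROCEDURE dbo.usp_AdjustMaxMemory
    @NodeMemoryMB  int = 73728,   -- 72GB node
    @OsReserveMB   int = 8192,    -- leave 8GB to the OS
    @InstanceCount int = 1        -- instances currently on this node
AS
BEGIN
    -- Divide whatever is left after the OS reserve evenly per instance
    DECLARE @MaxMB int = (@NodeMemoryMB - @OsReserveMB) / @InstanceCount;

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', @MaxMB;
    RECONFIGURE;
END
```

Created in master and marked with `sp_procoption 'usp_AdjustMaxMemory', 'startup', 'on'`, it would run each time the instance starts, which includes a failover, so the max gets recalculated whenever instances move between nodes.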

