How to design a database for millions of concurrent users relying on each other's data

Last Post 21 Jan 2013 06:11 AM by gunneyk. 1 Reply.
A1Friend4u
New Member

--
21 Jan 2013 04:01 AM
This is a completely new game application being developed, and I want the design to be highly scalable. As each user's score changes (increases or decreases by some amount), it is sent to the server. In turn, the server replies with a list of the 50 users whose scores are nearest to that score. This is planned to happen no more often than once per minute per user.

Thus, if the DB table is indexed on score, we can say a single write and 50 reads will happen per minute for each user. Now let's assume I have 50 "connection servers" and around 20K users connected to each "connection server". All those "connection servers" in turn query a single DB server, so effectively all of those requests (50 x 20,000 = 1 million users' worth per minute) are queued against a single DB server, which obviously cannot complete all those requests in a minute.
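For reference, this is roughly the kind of table and index I am assuming (a minimal sketch; the table and column names are hypothetical):

-- Minimal sketch of the score table (hypothetical names)
CREATE TABLE dbo.Scores
(
    UserId int NOT NULL PRIMARY KEY,
    Score  int NOT NULL
);

-- Index on Score so "nearest 50" lookups can seek by score
CREATE NONCLUSTERED INDEX IX_Scores_Score
    ON dbo.Scores (Score);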

Hence, I am asking whether it is possible to have *multiple* servers sharing a *single* database, so that I can distribute my queries across multiple database servers. If this is not possible, how else can the load be distributed?

gunneyk
New Member

--
21 Jan 2013 06:11 AM
Why would you have 50 reads for each write? You certainly don't need to do 50 reads to get the 50 closest scores to any given score; you can do that in a single read. In any case, you can use several approaches to replicate the score data to different SQL Servers to scale the reads. One way is to use the transactional replication built into SQL Server, but you can easily do your own form of replication, especially if you are only talking about a few tables.
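For illustration, a single query along these lines can return the 50 nearest scores in one read (a sketch only, assuming a Scores table with an index on Score as described earlier; the names are hypothetical):

DECLARE @MyScore int = 1234;  -- hypothetical: the calling user's current score

-- Take up to 50 scores on each side of @MyScore (two index seeks),
-- then keep the 50 closest overall.
SELECT TOP (50) c.UserId, c.Score
FROM (
    SELECT a.UserId, a.Score
    FROM (SELECT TOP (50) UserId, Score
          FROM dbo.Scores
          WHERE Score >= @MyScore
          ORDER BY Score ASC) AS a
    UNION ALL
    SELECT b.UserId, b.Score
    FROM (SELECT TOP (50) UserId, Score
          FROM dbo.Scores
          WHERE Score < @MyScore
          ORDER BY Score DESC) AS b
) AS c
ORDER BY ABS(c.Score - @MyScore);

Since the two inner seeks touch at most 100 rows via the Score index, the cost per lookup stays small no matter how many users you have, so the read side is far lighter than 50 reads per user per minute.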

