The formula for calculating the client load an NTP server can carry is:
Number of clients supported = (packets per second) × (number of seconds between client polls)
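The formula can be sketched as a one-liner (function and variable names here are just for illustration):

```python
def clients_supported(packets_per_second: int, poll_interval_s: int) -> int:
    # Each client sends one request per poll interval, so a server that can
    # answer N packets per second supports N * interval clients in steady state.
    return packets_per_second * poll_interval_s

# One of our servers, with every client at the Unix worst-case 64 s poll:
print(clients_supported(10_000, 64))  # 640000
```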
Our servers can each handle 10,000 packets per second, and our research suggests the poll intervals are as follows:
- for Unix-like systems, including OS X, the ntp.conf man page gives a default of between 64 s and 1024 s
- for Windows XP and Vista (and probably 7) the default is 900 s
- for Windows 2000 it is 28,800 s, so not worth bothering about
So if we use the worst case of 64 s for Unix, which our testing shows is the usual interval, and assume a 75%/25% Windows-to-Unix split of a 7,000 packets-per-second working budget per server (5250/1750 pps, leaving 3,000 pps of headroom), we get
Windows clients = 5250 × 900 = 4,725,000
Unix clients = 1750 × 64 = 112,000
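The split can be checked with a short sketch (the 7,000 pps working budget and the 75/25 ratio are taken from the figures above; the function name is illustrative):

```python
def split_capacity(budget_pps: int, windows_share: float,
                   win_poll_s: int = 900, unix_poll_s: int = 64):
    # Divide the packet budget between Windows and Unix clients, then
    # convert each slice back into a client count via its poll interval.
    win_pps = int(budget_pps * windows_share)   # 5250
    unix_pps = budget_pps - win_pps             # 1750
    return win_pps * win_poll_s, unix_pps * unix_poll_s

windows_clients, unix_clients = split_capacity(7_000, 0.75)
print(windows_clients, unix_clients)  # 4725000 112000
```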
which is probably enough for NZ and the Pacific islands since not everyone is going to use our service.
A different way to calculate load is to work backwards from a client count: if we assume the maximum number of Windows clients we could support is, say, 2 million, they consume 2,000,000 / 900 ≈ 2,222 packets per second, leaving 4,778 pps of the 7,000 pps working budget, which supports 4,778 × 64 = 305,792 Unix clients.
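Working backwards the same way (integer division matches the rounding used in the figures above; names are illustrative):

```python
def remaining_unix_clients(budget_pps: int, windows_clients: int,
                           win_poll_s: int = 900, unix_poll_s: int = 64) -> int:
    # Packets per second consumed by the Windows clients, rounded down.
    windows_pps = windows_clients // win_poll_s          # 2222
    # Whatever is left in the budget goes to Unix clients polling every 64 s.
    return (budget_pps - windows_pps) * unix_poll_s      # 4778 * 64

print(remaining_unix_clients(7_000, 2_000_000))  # 305792
```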
Both these calculations assume that each client talks to every server, which certainly seems to be how our clients behave. With three servers there is therefore no scaling effect, but adding further servers will scale, since most clients should not be configured with more than three servers.
Using the maximum NTP packet size of 128 bytes, 10,000 packets per second works out to 1.28 MB/s (about 10 Mb/s), which is not enough bandwidth for us to worry that much about.
If a device could handle NTP packets at wire speed, then on a 100 Mb/s link it could process 102,400 128-byte packets per second (taking 1 Mb as 2^20 bits), which equates to 6,553,600 Unix boxes or 92,160,000 Windows boxes.
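The bandwidth arithmetic, using the same convention that 1 Mb is 2**20 bits (names are illustrative):

```python
PACKET_BITS = 128 * 8  # 128-byte NTP packets on the wire

def link_bits_per_s(pps: int) -> int:
    # Bandwidth consumed by a given packet rate.
    return pps * PACKET_BITS

def wire_speed_pps(link_bps: int) -> int:
    # Packets per second a link can carry at wire speed.
    return link_bps // PACKET_BITS

print(link_bits_per_s(10_000))            # 10240000 bits/s, about 10 Mb/s
print(wire_speed_pps(100 * 2**20))        # 102400 packets per second
print(wire_speed_pps(100 * 2**20) * 64)   # 6553600 Unix boxes
print(wire_speed_pps(100 * 2**20) * 900)  # 92160000 Windows boxes
```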