Well, this topic has been cussed and discussed here and elsewhere, but I'll take another stab at it and try to be helpful. Some advice has been to avoid connection tracking altogether (such as this post), while others have indicated they just set the limit to "millions" and didn't worry about it.
As background, the key to the connection tracking limits is the hash table that is initialized when the Linux kernel module is loaded (from the source code here):
int nf_conntrack_init_start(void)
{
    unsigned long nr_pages = totalram_pages();
    int max_factor = 8;
    int ret = -ENOMEM;
    int i;

    /* struct nf_ct_ext uses u8 to store offsets/size */
    BUILD_BUG_ON(total_extension_size() > 255u);

    seqcount_init(&nf_conntrack_generation);

    for (i = 0; i < CONNTRACK_LOCKS; i++)
        spin_lock_init(&nf_conntrack_locks[i]);

    if (!nf_conntrack_htable_size) {
        /* Idea from tcp.c: use 1/16384 of memory.
         * On i386: 32MB machine has 512 buckets.
         * >= 1GB machines have 16384 buckets.
         * >= 4GB machines have 65536 buckets.
         */
        nf_conntrack_htable_size
            = (((nr_pages << PAGE_SHIFT) / 16384)
               / sizeof(struct hlist_head));
        if (nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
            nf_conntrack_htable_size = 65536;
        else if (nr_pages > (1024 * 1024 * 1024 / PAGE_SIZE))
            nf_conntrack_htable_size = 16384;
        if (nf_conntrack_htable_size < 32)
            nf_conntrack_htable_size = 32;

        /* Use a max. factor of four by default to get the same max as
         * with the old struct list_heads. When a table size is given
         * we use the old value of 8 to avoid reducing the max.
         * entries. */
        max_factor = 4;
    }

    nf_conntrack_hash = nf_ct_alloc_hashtable(&nf_conntrack_htable_size, 1);
    if (!nf_conntrack_hash)
        return -ENOMEM;

    nf_conntrack_max = max_factor * nf_conntrack_htable_size;
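To see what that math works out to on a 16GB box, here is a rough sketch of the kernel's default calculation, assuming a 64-bit kernel where sizeof(struct hlist_head) is 8 bytes (the arithmetic can be run from any shell):
# uncapped 1/16384-of-memory formula from the code above
echo $(( (16 * 1024 * 1024 * 1024 / 16384) / 8 ))    # prints 131072
# ...but the ">= 4GB" branch above then caps the table at 65536 buckets
And because the cap kicks in while the auto-calculated path also drops max_factor to 4, the out-of-the-box nf_conntrack_max works out to only 65536 * 4 = 262144 tracked connections, no matter how much RAM beyond 4GB is installed.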
Given that the kernel caps the auto-calculated size at 65536 buckets for anything over 4GB, I contend the MINIMUM conntrack hash table size for the ER Infinity with 16GB of RAM should be 262144. Anyway, it can be set manually with:
set system conntrack hash-size 262144
commit ; save
You'll be prompted to reboot because the hash size is only applied when the conntrack module is loaded.
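After the reboot, if you have shell access on the router, you can confirm the value the module actually picked up (this sysfs path is the standard one for the nf_conntrack module):
cat /sys/module/nf_conntrack/parameters/hashsize
# should print 262144 with the setting above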
Now that we've provided the hash table size, the kernel should set a maximum of 2097152 for nf_conntrack_max using the max_factor of 8 (max_factor = 4 is only used when the kernel has to calculate the hash size itself). However, it can also be set manually with:
set system conntrack table-size 2097152
commit ; save
That change will take effect immediately; no reboot is required. You may also set a lower limit, so long as the value is a power of 2, and the kernel will accept it.
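To sanity-check the live value (again assuming shell access), the standard netfilter proc entry shows the configured maximum:
cat /proc/sys/net/netfilter/nf_conntrack_max
# should print 2097152 with the setting above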
So, IMHO, that is where to begin. If that is not sufficient, you will know it by log messages such as the following:
kernel: nf_conntrack: table full, dropping packet.
If that is the case, then first check your available free memory. BGP routing tables and other things need RAM as well. If you have plenty of RAM to spare, increase the hash-size, reboot, and then tweak the table-size (connection limit) if necessary.
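A quick way to eyeball both at once from the shell (the proc paths are standard netfilter entries):
free -m
# how close you are running to the limit right now
cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max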
A word about expect-table-size. This table separately tracks connections that the kernel thinks are about to open, i.e. "expected" connections. This is used by protocols like FTP that have one channel for control and another for data. If the kernel "sees" control traffic referring to the protocol/port of the data channel, it can use that to expect and track the eventual data traffic. This requires the control traffic to be unencrypted and (most likely) a helper module or rule to identify it. IMHO the kernel conntrack helper modules are not very useful in most modern implementations and are sometimes even harmful.
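If you are curious what the expectation limit is currently set to, the kernel exposes it via a standard netfilter sysctl:
cat /proc/sys/net/netfilter/nf_conntrack_expect_max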
So, unless you KNOW you need a helper module for some specific traffic, I recommend disabling them:
set system conntrack modules ftp disable
set system conntrack modules gre disable
set system conntrack modules h323 disable
set system conntrack modules pptp disable
set system conntrack modules sip disable
set system conntrack modules tftp disable
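After the commit (and a reboot, if prompted), you can verify from the shell that the helper modules are no longer loaded; grep for the conntrack modules and check that the helpers (nf_conntrack_ftp, nf_conntrack_sip, etc.) have dropped out of the list:
lsmod | grep nf_conntrack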
Finally, the most difficult part to get "right" is tweaking connection tracking timeouts for YOUR particular use case. Therefore, that is left as an exercise for the reader.
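As a starting point for that exercise, you can at least dump the kernel's current per-protocol timeout values (these sysctls are standard; whether your EdgeOS version exposes matching "set system conntrack timeout ..." nodes is something to check with tab completion in configure mode):
sudo sysctl -a 2>/dev/null | grep 'nf_conntrack.*timeout'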
Enjoy!