The reasons are the following:
– Unoptimized Apache settings
The default settings of Apache can put a heavy load on memory when the sites serve content-rich dynamic pages (e.g., WordPress). If memory usage is not carefully calibrated and balanced between the HTTP and database servers, the server can freeze.
– Brute force attacks
The internet is infested with bots and bot-masters that try to infect websites and spread their malware everywhere. They do this by exploiting website vulnerabilities and cracking login details through brute force. When such a large-scale attack occurs, the server can freeze.
– Database server (eg. MySQL) limits
Database servers also have built-in connection limits that may be too low for the incoming traffic, causing a “too many connections” error.
1. Fix Apache configuration
Misconfigured Apache is the most common reason we’ve seen for the “too many connections” error.
We’ve seen these common configuration issues while troubleshooting customer servers:
- Unsuitable MPM for the server – Apache uses the Prefork MPM (Multi-Processing Module) by default. It consumes a lot of memory and is suited only for low-traffic, simple (mostly static) websites. Content-rich dynamic websites (e.g., WordPress) are better served by the Worker MPM (or the Event MPM for very busy sites).
- Insufficient memory allocation – Most web servers also run database, DNS, and mail services. So unless the MaxClients (MaxRequestWorkers in Apache 2.4) and StartServers settings are carefully calibrated to match the available memory, Apache will spill over into the slower swap space, and the whole server will grind to a halt.
- Too liberal KeepAlive values – Apache uses a feature called “KeepAlive” to speed up repeat connections, but malicious bots and slow devices can hang on to those connections for a long time and starve the server of worker slots.
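As an illustration, a trimmed-down Event MPM configuration addressing all three issues might look like the following. The numbers here are placeholders, not recommendations; they must be sized to each server's RAM and traffic:

```apacheconf
# Apache 2.4, Event MPM loaded in place of the default Prefork
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads          25
    MaxSpareThreads          75
    ThreadsPerChild          25
    MaxRequestWorkers        150   # called MaxClients before Apache 2.4
    MaxConnectionsPerChild   1000  # recycle workers to contain memory leaks
</IfModule>

# Keep KeepAlive short so idle clients cannot hoard worker slots
KeepAlive            On
KeepAliveTimeout     3
MaxKeepAliveRequests 100
```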
When we are called in to fix an unoptimized server, we estimate (1) the average and peak traffic on the server, (2) the memory usage per server request, and (3) the trend of resource usage across all services.
Based on this data, we tweak the server settings to keep the server load at less than 1.0.
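The calibration in step (2) boils down to simple arithmetic: the memory left over after the other services is divided by the memory one Apache worker typically uses. A back-of-the-envelope sketch (all figures below are hypothetical examples, not recommendations):

```python
# Rough sizing of Apache's MaxRequestWorkers from available memory.
# All numbers are hypothetical; measure your own server before tuning.

def max_request_workers(total_ram_mb, reserved_mb, per_process_mb):
    """RAM left after other services (database, DNS, mail), divided by
    the memory a single Apache worker process uses on this server."""
    usable = total_ram_mb - reserved_mb
    return usable // per_process_mb

# 8 GB server; ~3 GB reserved for MySQL, DNS, mail; ~40 MB per Apache process
workers = max_request_workers(8192, 3072, 40)
print(workers)  # 128
```

Setting MaxRequestWorkers near this ceiling (with some headroom) keeps Apache out of swap even at peak traffic.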
2. Block DoS and brute force attacks using firewalls
Most attacks behave in predictable ways.
Some send hundreds of connections in seconds, some establish no-response connections, and some others use abuse-listed IPs.
- Harden the network & kernel settings – Attacks such as Smurf, Slowloris, and SYN flood attacks can be blocked by hardening the basic network settings. So, that’s the first step in our network security measures.
- Set up a strong firewall – Once the kernel is secure, we configure the firewall to detect port flooding, port scanning, and other behavior indicative of abuse. We then set these IPs to be blocked automatically, so that legitimate requests are not affected.
- Enable anti-DoS modules in Apache – As a further layer of defense, we set up anti-DoS modules in Apache such as mod_evasive and mod_qos, which limit the bandwidth and connection count per visitor. This way, the server won’t be affected even if one user initiates a large number of connections (common in DoS attacks and bots).
- Set up a web application firewall – Web application firewalls such as mod_security block many common attacks (XSS, CSRF, SQLi, etc.) based on request signatures. We configure the system to detect and block attacker IPs so that attackers can’t upload malware, much less execute it.
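As an example of the anti-DoS step, a minimal mod_evasive configuration might look like this. The thresholds are illustrative starting points and should be tuned against real traffic patterns:

```apacheconf
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        5     # max requests for the same page per interval
    DOSPageInterval     1     # page interval, in seconds
    DOSSiteCount        50    # max requests for the whole site per interval
    DOSSiteInterval     1
    DOSBlockingPeriod   60    # seconds an offending IP stays blocked
</IfModule>
```

On the kernel-hardening side, enabling SYN cookies (`net.ipv4.tcp_syncookies = 1` via sysctl) is a common first defense against SYN floods.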
3. Use a reverse proxy or caching server
The most effective solution that we’ve found against Apache overload issues is to use a Reverse Proxy server in front of Apache.
The reality is, Apache is not all that great at handling a large number of simultaneous connections.
So, we put a caching service such as Nginx or Varnish in front of Apache to reduce the hits that reach the Apache service.
These services store a copy of the pages they serve in memory, and when another request comes in for the same page, they quickly send back the cached copy.
In this way, we’ve been able to cut the load on Apache down to less than 40%, while using fewer resources than the previous stand-alone Apache setup.
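A minimal sketch of this setup, assuming Nginx as the cache and Apache moved to 127.0.0.1:8080 (the paths, zone sizes, and timings are placeholders to adapt):

```nginx
# Nginx micro-cache in front of Apache
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apachecache:10m
                 max_size=512m inactive=10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass        http://127.0.0.1:8080;
        proxy_set_header  Host $host;
        proxy_set_header  X-Real-IP $remote_addr;

        proxy_cache            apachecache;
        proxy_cache_valid      200 301 5m;              # cache good responses briefly
        proxy_cache_use_stale  error timeout updating;  # serve stale if Apache is busy
    }
}
```

Even a cache lifetime of a few minutes absorbs most repeat hits, which is what lets Apache’s connection count drop so sharply.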
4. Optimize database settings
In a few rare cases, we’ve seen MySQL settings that were unsuitable for the traffic the server was receiving.
When we detect MySQL issues, we use the same approach as we did with Apache. We log the requests sent to the database and find out the actual number of connections.
Once we know the exact load on the database, we tweak the buffer sizes, cache sizes, sort buffer size, max user connections, and more to eliminate aborted connections and failed queries.
Server performance is then observed for a few days to make sure the resource allocation works well with the Apache settings and incoming traffic.
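For illustration, a `[mysqld]` excerpt touching the settings mentioned above might look like this. The values are examples only; they must be derived from the measured connection counts and available RAM, not copied as-is:

```ini
[mysqld]
max_connections         = 300
innodb_buffer_pool_size = 2G     # the biggest single lever for InnoDB workloads
sort_buffer_size        = 2M     # allocated per connection, so keep it modest
tmp_table_size          = 64M
max_heap_table_size     = 64M
wait_timeout            = 60     # drop idle connections sooner
```

Note that per-connection buffers (like sort_buffer_size) multiply by max_connections, so raising both together can exhaust memory.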
For any support, hire our experts.