When it comes to scaling, the front end is rarely the bottleneck; what matters is how well your web server can manage the traffic and load. Estimating concurrent users tells you whether your system and services can handle peak traffic on the website, so you can prepare before it is too late. This estimate can make or break your application or website. In brief, let us talk about concurrent users and how to estimate them.
What exactly is meant by the term “Concurrent Users”?
Concurrent users (or concurrent visitors) are the total number of visitors to a website or online resource who perform separate transactions during the same time period. The count is generally measured over short durations. Estimating concurrent users is a metric used for capacity management, performance optimization, resource scaling, license definitions, and load testing.
The basic formula for computing concurrent users:
Concurrent visitors = (Visits per day / Peak hours) × (Average visit duration in minutes / 60)
Let us take an example to understand it:
Suppose I have a website that receives around 200 visits per day, and a single user spends approximately 20 minutes on it. The peak hours, when I experience the highest traffic, run from 6:00 P.M. to 10:00 P.M., i.e., 4 hours. Let's calculate the concurrent users by putting these values into the formula:
Concurrent visitors = (200 / 4) × (20 / 60) = 50 × 0.33 ≈ 16.67
As a result, the approximate number of concurrent users on my website is 17.
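The calculation above can be sketched as a small helper function (the function name is illustrative, not from any library):

```python
def concurrent_visitors(daily_visits, peak_hours, avg_visit_minutes):
    """Estimate concurrent visitors during peak hours.

    Visits per peak hour, multiplied by the average visit
    length expressed in hours.
    """
    visits_per_hour = daily_visits / peak_hours
    return visits_per_hour * (avg_visit_minutes / 60)

# The example from the article: 200 daily visits, 4 peak hours,
# 20-minute average visit.
print(round(concurrent_visitors(200, 4, 20)))  # 17
```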
Why do we calculate this?
Concurrent users are a standard way of planning, measuring, and managing service capacity. To assess service capability, it is reasonable to look at the peak concurrent users over a time period. For instance, suppose my website can handle 500 concurrent users; when peak concurrency reaches 475, that indicates it is time to scale up capacity.
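A minimal sketch of such a capacity check (the 95% alert threshold is an assumed policy, chosen to match 475 out of 500 in the example):

```python
def needs_scale_up(peak_concurrent, capacity, threshold=0.95):
    """Return True once peak concurrency approaches capacity."""
    return peak_concurrent >= threshold * capacity

# 475 of 500 concurrent users: time to add capacity.
print(needs_scale_up(475, 500))  # True
print(needs_scale_up(300, 500))  # False
```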
A service license may also be based on concurrent users: many different users can use the service, but the license restricts how many can use it at the same time.
Application and server Load Testing
Application and server load-testing techniques also focus on simulating concurrent users, and load-testing tools usually report their results in terms of concurrent users. This illustrates how critical estimating these users is.
Tuning of applications for High Traffic
As we said earlier, predicting concurrent users plays a significant role in tuning the application and scaling up resources when traffic fluctuates. The estimate alerts us when it is necessary to scale up so our servers stay responsive. As a result, even under high server load we avoid the repercussions, and the website remains optimized, which is the primary goal.
Below, we specify concurrent-user limits for some of the services commonly used in web applications. These limits give a basic framework for answering the question, “How many concurrent users can my eCommerce site handle?” The answer depends on the configuration of the operating system and the services you use (Apache, MySQL, PHP, etc.).
A single CPU core can commonly handle an average of 220 to 250 concurrent connections. If, for instance, a website runs on a server with 2 CPU cores, approximately 500 visitors may access and browse the website at the same time.
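As a rough illustration (the 250-connections-per-core figure is the article's rule of thumb, not a universal benchmark):

```python
def estimated_cpu_capacity(cpu_cores, per_core=250):
    """Rough concurrent-connection capacity from core count.

    Assumes ~220-250 concurrent connections per core, per the
    rule of thumb above; real capacity depends on the workload.
    """
    return cpu_cores * per_core

print(estimated_cpu_capacity(2))  # 500
```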
MySQL Database
Each MySQL database supports roughly 75 concurrent connections per gigabyte of usable memory. Usable memory is the total memory on the database server minus the approximately 350 MB used by the operating system, rounded to the nearest gigabyte.
| Total Memory | Concurrent Connections |
| --- | --- |
| 1 GB RAM | 75 |
| 2 GB RAM | 150 |
| 4 GB RAM | 225 |
| 8 GB RAM | 525 |
| 16 GB RAM | 1,050 |
| 32 GB RAM | 2,175 |
| 64 GB RAM | 4,425 |

Concurrent user limits by gigabytes of memory
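The stated rule can be sketched as a function. Note that this is only the simple rule from the text; the published limits for larger memory sizes diverge from it, presumably because overhead grows with instance size, so only the smallest tiers are reproduced exactly:

```python
def mysql_connection_limit(total_memory_gb):
    """Estimate MySQL concurrent connections from total memory.

    Usable memory = total minus ~350 MB for the OS, rounded to the
    nearest GB, at 75 connections per usable GB. An illustrative
    sketch of the rule above, not an official sizing formula.
    """
    usable_gb = round(total_memory_gb - 350 / 1024)
    return usable_gb * 75

print(mysql_connection_limit(1))  # 75
print(mysql_connection_limit(2))  # 150
```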
By default, Apache2 is configured to allow 150 concurrent connections; requests beyond that limit have to queue and wait.
However, by using the mpm_worker or mpm_event module, we can configure Apache2 to accept more concurrent connections. This allows us to serve more concurrent connections while using less RAM than with mpm_prefork.
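As an illustrative sketch (the values below are example settings, not recommendations; tune them to your hardware), an mpm_event configuration on a Debian/Ubuntu system might look like this:

```apache
# /etc/apache2/mods-available/mpm_event.conf -- illustrative values only
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      400
    MaxConnectionsPerChild   0
</IfModule>
```

On Debian/Ubuntu you can switch MPMs with `sudo a2dismod mpm_prefork && sudo a2enmod mpm_event` and then restart Apache. Keep in mind that `MaxRequestWorkers` must not exceed `ServerLimit × ThreadsPerChild`.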
Redis 2.4 had a hard-coded limit on the maximum number of clients that could be handled concurrently. In Redis 2.6 this limit is dynamic: it defaults to 10,000 clients unless overridden by the maxclients directive in redis.conf.
If Redis is configured to handle a certain number of clients, it is a smart idea to make sure the operating system's limit on file descriptors per process is set high enough. In simple terms, ulimit is a Linux shell command that sets or reports the current user's resource limits.
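Raising the client limit is a one-line change in redis.conf (the value shown is an arbitrary example; Redis will lower it if the file-descriptor limit is too small):

```conf
# redis.conf -- raise the concurrent client limit (Redis 2.6+)
maxclients 20000
```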
To see the limits on your Linux system, run the ulimit command in your terminal. Add the “-a” flag to display a full report of the current user's resource limits.
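For example:

```shell
# Show the current user's open-file-descriptor limit
ulimit -n

# Show all resource limits for the current user
ulimit -a
```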
There is often a trade-off between performance and scalability, and preparing a website for high traffic means balancing scalability against end-user performance. To make a website more scalable, we can do one of two things: make each request consume fewer resources, or extend the server's resources. Since requests are dynamic and can run PHP, server resources matter greatly for any eCommerce site. Caching helps scaling somewhat, but server resources can still be consumed rapidly because of multiple sessions and carts. Hence, it is often wise to pair an estimate of concurrent users with leaner resource usage per request to keep the application fast.
Thanks For Reading!
We hope this works for you, and that you found something valuable in this post!