
All Things WebSphere

Concerns and issues relating to all versions of WebSphere Application Server

Tuesday, December 20, 2011

 

Defensive queuing for website surge protection


Today's blog post explains a deployment and tuning methodology to protect your application server from a traffic surge.

In the figure below, decreasing values are assigned to the tuning parameters of each successive Web site component. This deliberate reduction in the maximum number of resources available at each downstream layer is called the Funnel model methodology. The benefit of the funnel tuning model is that we want as many clients as possible to connect to our system, but without overwhelming the resources in any of the layers downstream (for example, database connections). The funnel places requests into a queue at each layer, where they wait until the next layer has the capacity to process them. In summary, the funnel model helps us handle bursts of client traffic gracefully.
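The layered queuing behavior described above can be sketched in plain Java. This is a minimal illustration, not WebSphere's actual implementation: each layer is modeled as a `Semaphore` whose permit count is that layer's maximum concurrency (the example sizes of 100/75/50 are hypothetical), so a request blocks at a layer until that layer has capacity, exactly as the funnel intends.

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch only (assumed sizes, not WebSphere internals):
// each layer caps concurrency with a Semaphore; requests queue at the
// first layer that is at capacity instead of flooding the layers below.
public class FunnelDemo {
    static final Semaphore webServer    = new Semaphore(100); // widest layer (e.g. IHS)
    static final Semaphore webContainer = new Semaphore(75);  // servlet threads
    static final Semaphore dbPool       = new Semaphore(50);  // narrowest layer (JDBC pool)

    static void handleRequest(Runnable work) throws InterruptedException {
        webServer.acquire();           // wait in the web server queue
        try {
            webContainer.acquire();    // wait for a servlet thread
            try {
                dbPool.acquire();      // wait for a database connection
                try {
                    work.run();        // do the actual request processing
                } finally {
                    dbPool.release();
                }
            } finally {
                webContainer.release();
            }
        } finally {
            webServer.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        handleRequest(() -> System.out.println("request processed"));
    }
}
```

Because each inner layer has fewer permits than the one outside it, excess requests pile up harmlessly in the outer queues rather than exhausting the scarcest resource, the database connections.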


Queue tuning points in a Java EE Web site: the Funnel model
The ratios of the various queue sizes to one another are very important in determining where queuing occurs under load. For example, if every incoming request requires a connection to a back-end database, then the datasource connection pool should be at least equal in size to the servlet thread pool.

What should the size of the WebContainer thread pool be in relation to the connection pool?
What should the IHS thread/process configuration be in relation to the WebContainer thread pool?

Answers from SWAT guru Kevin Grigorenko:
  • The ideal proportion is 1:1, with all wait time spent on the network, but that assumes all layers perform and scale identically, which is rarely (if ever) the case, so only the general recommendation of 1:<=1 is made. Also, requests are not all alike, so one request may use more memory on a thread than another.
  • In general, I recommend customers "play it safe": for example, start with the maximum database concurrency, N, then set the WAS WebContainer thread pool to 1.5N (it could go higher, since many requests are for static content only), and then set IHS to 2N.
  • But a lot of stress testing is required at the boundary of the incoming queues. The biggest problem is that customers configure the front queue (e.g. IHS) at N but never run a test that actually sends N concurrent requests. When that load does arrive (a spike in traffic, a DoS attack, etc.), their JVMs or database crash or run out of memory.
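The "play it safe" sizing rule above is simple arithmetic, and can be sketched as a small helper. The class and method names here are hypothetical, introduced only to illustrate the N / 1.5N / 2N proportions; the starting point N must still come from measuring your database's real maximum concurrency.

```java
// Hypothetical helper illustrating the N / 1.5N / 2N sizing guidance above.
// N is the measured maximum database concurrency; each outer layer of the
// funnel is sized progressively wider than the layer it feeds.
public class FunnelSizing {
    // Datasource connection pool: sized to the back end's max concurrency.
    static int dataSourcePool(int n) {
        return n;
    }

    // WebContainer thread pool: ~1.5N, since many requests are static-only
    // and never need a database connection.
    static int webContainerThreads(int n) {
        return (int) Math.ceil(1.5 * n);
    }

    // IHS (web server) concurrency: 2N, the widest layer of the funnel.
    static int ihsMaxClients(int n) {
        return 2 * n;
    }

    public static void main(String[] args) {
        int n = 50; // example: the database sustains 50 concurrent requests
        System.out.println("DataSource max connections: " + dataSourcePool(n));
        System.out.println("WebContainer threads:       " + webContainerThreads(n));
        System.out.println("IHS max concurrent clients: " + ihsMaxClients(n));
    }
}
```

With N = 50 this yields a 50 / 75 / 100 funnel. As the third bullet warns, whatever value the front layer is configured for is exactly the concurrency your stress tests must drive, or the sizing is unverified.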

